
ci: Build and upload LLVM build artifact #1

Conversation

@vadorovsky (Member) commented Aug 3, 2023:

Co-authored-by: Tamir Duberstein [email protected]

@vadorovsky force-pushed the aya/ci-build-artifacts branch 2 times, most recently from 5e06e10 to cf231e6 on August 3, 2023 12:27
```yaml
- uses: actions/checkout@v3

- id: cache-key
  run: echo "cache-key=llvm-$(git rev-parse --short HEAD)-1" >> "$GITHUB_ENV"
```
Member commented:

I think we probably want to do something smarter now. Unlike the usage in the bpf-linker repo, we should always use the cache here. The cache key should be just an encoding of the arguments we pass to cmake.

Put differently: imagine that we are applying our changes to a new LLVM branch from rustc -- we want to make use of whatever cached build products are available to us.

Then we should always run the LLVM build and upload the result.
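
A minimal sketch of that shape (not part of this PR; the CMAKE_FLAGS variable and the llvm-install path are made up for illustration):

```yaml
# Sketch only: key the cache on the cmake configuration rather than on a
# commit, so any branch built with the same flags can reuse it.
- id: cache-key
  run: echo "cache-key=llvm-$(echo "$CMAKE_FLAGS" | sha256sum | cut -c1-16)" >> "$GITHUB_ENV"

- uses: actions/cache@v3
  with:
    path: llvm-install
    key: ${{ env.cache-key }}
```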

There are also some vestiges of the reusable workflow here (see workflow_call above), which we probably no longer need.

Lastly: how are we going to consume the build product generated here from:

  • the bpf-linker automation
  • a user's local machine?

Member commented:

We could probably put a link in bpf-linker's README and tell users how to set the env var to pick it up. And we should probably also build for mac :P

Also, I don't think we need to build dynamic libs? Nothing uses them and their usage is discouraged.

Member commented:

dynamic libs are very useful when building the linker in CI because the binaries are way way smaller.

Member Author commented:

Dynamic libs are actually disabled here - dylibs are enabled by the -DBUILD_SHARED_LIBS option, which is not used here. I would prefer to stick to static ones even though they are bigger - as Alessandro said, dylibs are discouraged and I don't think we will hit any storage limits with the static ones.

About consuming the artifacts, I was thinking about having a cargo xtask build command which would by default pull the newest artifact from GitHub (we could just look for the newest artifact from the newest rustc/* branch). Of course only on the feature/fix-di branch for now. Once everything is ready and we are going to release bpf-linker with BTF support, I think the best would be releasing binaries statically linked against the newest LLVM artifact - in CI, on every tag. WDYT?
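
A rough sketch of the tag-triggered release job described above (nothing here exists in this PR; the runner label, the llvm-install path, and LLVM_SYS_170_PREFIX, the variable llvm-sys reads for LLVM 17, are assumptions):

```yaml
# Sketch only: build bpf-linker against a prebuilt LLVM on every tag.
# The steps that fetch the LLVM artifact and upload the release binary
# are omitted here.
on:
  push:
    tags: ['v*']
jobs:
  release:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - run: cargo build --release
        env:
          LLVM_SYS_170_PREFIX: ${{ github.workspace }}/llvm-install
```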

Member commented:

I must have hallucinated I swear I saw those options 😂 Yeah I don't have a strong opinion, personally I'd just download the tarball somewhere and then set LLVM_SYS_BLA to point to it, but up to you

@vadorovsky (Member Author) commented Aug 4, 2023:

> I must have hallucinated I swear I saw those options 😂

It wasn't a microdose! 😏 Seriously speaking, we have that option in bpf-linker CI, but I removed it here. I guess we could also remove it there?

> Yeah I don't have a strong opinion, personally I'd just download the tarball somewhere and then set LLVM_SYS_BLA to point to it, but up to you

Yeah, I was thinking about doing exactly that in xtask. I have a strong preference for that, because checking what the newest rustc/* branch is and what its newest artifact is isn't something I want to do manually. And I would also prefer not to do it in Bash. 😛

> And we should probably also build for mac :P

Good call, although I will have to figure out how to cross-build for M1.

Member commented:

The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.

Regarding building for Mac: no need to cross compile, just run this workflow a second time on a Mac using a matrix strategy?
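
A minimal sketch of that matrix idea (runner labels are illustrative; as the next comment points out, the hosted Mac runners at the time were Intel-only):

```yaml
# Sketch only: run the same LLVM build job once per OS via a matrix.
jobs:
  llvm:
    strategy:
      matrix:
        os: [ubuntu-22.04, macos-12]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      # ...the existing cmake build and artifact upload steps go here...
```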

@vadorovsky (Member Author) commented Aug 4, 2023:

> Regarding building for Mac: no need to cross compile, just run this workflow a second time on a Mac using a matrix strategy?

That won't work, because GitHub Actions has Intel Mac runners, not M1. So I think I will have to cross-compile. I mean, I can also do that with a matrix strategy. Maybe I could even use a Mac runner to compile the Mac version for both architectures. But again, compiling for aarch64-darwin will require cross-compilation for sure, no matter which kind of runner we do it on.

Member Author commented:

> The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.

To be precise - you were running out of space when building? Here it doesn't seem to happen. And I don't think that the size of artifacts at the end exceeds 2 GB?

Member commented:

Good point about cross compilation.

> To be precise - you were running out of space when building? Here it doesn't seem to happen.

It doesn't happen here because you're not building bpf-linker.

> And I don't think that the size of artifacts at the end exceeds 2 GB?

Best to check. Also, as I said, it is the size of the bpf-linker artifacts that is much bigger without dynamic linking.

I spent a lot of time making this work, so I strongly encourage you to test thoroughly; it's about to become much harder to iterate once these things span two repos.

@vadorovsky force-pushed the aya/ci-build-artifacts branch 4 times, most recently from a40ebe2 to c344d3d on August 10, 2023 11:55
@tamird (Member) commented Aug 10, 2023:

FYI rustc is now on LLVM 17 and we use https://github.com/aya-rs/llvm-project/tree/rustc/17.0-2023-07-29 in bpf-linker CI

@vadorovsky changed the base branch from rustc/16.0-2023-06-05 to rustc/17.0-2023-07-29 on August 10, 2023 23:00
@vadorovsky (Member Author) commented:

I'm closing this. It seems like we cannot use GitHub artifacts in the way we intended.

It's impossible to download artifacts without a PAT. When you access the zip link directly without being authenticated, whether with curl, a browser, or any other HTTP client:

https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639/zip (that link is shown in https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639)

it always responds with:

```json
{
  "message": "You must have the actions scope to download artifacts.",
  "documentation_url": "https://docs.github.com/rest/actions/artifacts#download-an-artifact"
}
```
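
For reference, the zip endpoint does work with an authenticated request; a sketch of what a consuming workflow would need (the secret name is illustrative, and the token must carry the actions scope):

```yaml
# Sketch only: download the artifact with an authenticated API request;
# the unauthenticated request gets the error shown above.
- run: |
    curl -L \
      -H "Authorization: Bearer $GH_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      -o llvm.zip \
      https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639/zip
  env:
    GH_TOKEN: ${{ secrets.ARTIFACT_DOWNLOAD_TOKEN }}
```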

And since LLVM 17 is in rustc already, I think we could just use it. We can rethink static linking with LLVM when we are about to release bpf-linker, so that we could link against official LLVM builds.

@vadorovsky closed this Aug 25, 2023
vadorovsky pushed a commit that referenced this pull request Sep 15, 2024
```
  UBSan-Standalone-sparc :: TestCases/Misc/Linux/diag-stacktrace.cpp
```
`FAIL`s on 32 and 64-bit Linux/sparc64 (and on Solaris/sparcv9, too: the
test isn't Linux-specific at all). With
`UBSAN_OPTIONS=fast_unwind_on_fatal=1`, the stack trace shows a
duplicate innermost frame:
```
compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:31: runtime error: execution reached the end of a value-returning function without returning a value
    #0 0x7003a708 in f() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:35
    #1 0x7003a708 in f() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:35
    #2 0x7003a714 in g() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:17:38
```
which isn't seen with `fast_unwind_on_fatal=0`.

This turns out to be another fallout from fixing
`__builtin_return_address`/`__builtin_extract_return_addr` on SPARC. In
`sanitizer_stacktrace_sparc.cpp` (`BufferedStackTrace::UnwindFast`) the
`pc` arg is the return address, while `pc1` from the stack frame
(`fr_savpc`) is the address of the `call` insn, leading to a double
entry for the innermost frame in `trace_buffer[]`.

This patch fixes this by moving the adjustment before all uses.

Tested on `sparc64-unknown-linux-gnu` and `sparcv9-sun-solaris2.11`
(with the `ubsan/TestCases/Misc/Linux` tests enabled).

(cherry picked from commit 3368a32)