ci: Build and upload LLVM build artifact #1
The branch was force-pushed from 5e06e10 to cf231e6.
```yaml
- uses: actions/checkout@v3

- id: cache-key
  run: echo "cache-key=llvm-$(git rev-parse --short HEAD)-1" >> "$GITHUB_ENV"
```
I think we probably want to do something smarter now. Unlike the usage in the bpf-linker repo, we should always use the cache here. The cache key should be just an encoding of the arguments we pass to cmake.
Put differently: imagine that we are applying our changes to a new LLVM branch from rustc -- we want to make use of whatever cached build products are available to us.
Then we should always run the LLVM build and upload the result.
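Something like this, perhaps (a sketch only; `CMAKE_FLAGS` and the `llvm-install` path are stand-ins for whatever this workflow ends up using):

```yaml
# Key the cache on a hash of the cmake arguments instead of the commit,
# so a new rustc/* branch with the same configuration still gets hits.
- id: cache-key
  run: echo "cache-key=llvm-$(echo "$CMAKE_FLAGS" | sha256sum | cut -c1-16)" >> "$GITHUB_ENV"

- uses: actions/cache@v3
  with:
    path: llvm-install
    key: ${{ env.cache-key }}
```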
There are also some vestiges of the reusable workflow here (see workflow_call above), which we probably no longer need.
Lastly: how are we going to consume the build product generated here from the following?
- the bpf-linker automation
- a user's local machine
We could probably put a link in bpf-linker's README and tell users how to set the env var to pick it up. And we should probably also build for mac :P

Also, I don't think we need to build dynamic libs? Nothing uses them and their usage is discouraged.
Dynamic libs are very useful when building the linker in CI, because the binaries are way, way smaller.
Dynamic libs are actually disabled here - dylibs are enabled by the `-DBUILD_SHARED_LIBS` option, which is not used here. I would prefer to stick to static ones even though they are bigger - as Alessandro said, dylibs are discouraged, and I don't think we will hit any storage limits with the static ones.
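For reference, a minimal sketch of what I mean (the flag set here is illustrative, not the exact one in this workflow):

```sh
# Static configuration: without -DBUILD_SHARED_LIBS=ON, cmake emits static
# archives only; adding that flag is what would switch the build to dylibs.
cmake -S llvm -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_TARGETS_TO_BUILD=BPF
```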
About consuming the artifacts, I was thinking about having a `cargo xtask build` command which would by default pull the newest artifact from GitHub (we could just look for the newest artifact from the newest `rustc/*` branch - see the sketch below). Of course only on the `feature/fix-di` branch for now. Once everything is ready and we are going to release bpf-linker with BTF support, I think the best would be releasing binaries with the newest LLVM artifact, statically linked - in CI, on every tag. WDYT?
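The discovery logic might look like this; sketched here with the `gh` CLI purely to show the steps (the xtask would do the same through the GitHub API in Rust; this assumes the artifacts API exposes the run's `head_branch`, which recent API versions do):

```sh
# Find the newest rustc/* branch (lexicographic sort is good enough while
# the version numbers share a width).
branch=$(gh api 'repos/aya-rs/llvm-project/branches?per_page=100' \
  --jq '[.[].name | select(startswith("rustc/"))] | sort | last')

# Then pick the newest artifact produced from that branch.
gh api 'repos/aya-rs/llvm-project/actions/artifacts?per_page=100' \
  --jq ".artifacts
        | map(select(.workflow_run.head_branch == \"$branch\"))
        | sort_by(.created_at) | last
        | .archive_download_url"
```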
I must have hallucinated - I swear I saw those options 😂 Yeah, I don't have a strong opinion; personally I'd just download the tarball somewhere and then set `LLVM_SYS_BLA` to point to it, but up to you.
> I must have hallucinated - I swear I saw those options 😂

It wasn't a microdose! 😏 Seriously speaking, we have that option in bpf-linker CI, but I removed it here. I guess we could also remove it there?
> Yeah, I don't have a strong opinion; personally I'd just download the tarball somewhere and then set `LLVM_SYS_BLA` to point to it, but up to you.

Yeah, I was thinking about doing exactly that in xtask. I have a strong preference for that, because checking which `rustc/*` branch is the newest and which artifact is the newest isn't something I want to do manually. And I would also prefer to not do it in Bash. 😛
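For a user's local machine, though, that manual flow might look like this (the tarball URL is a placeholder; `LLVM_SYS_170_PREFIX` is the llvm-sys env var for LLVM 17):

```sh
# Fetch and unpack a prebuilt LLVM tarball (URL is a placeholder).
curl -L -o llvm.tar.xz "$LLVM_TARBALL_URL"
mkdir -p "$HOME/llvm" && tar -xJf llvm.tar.xz -C "$HOME/llvm"

# llvm-sys picks the prefix up from a versioned env var.
export LLVM_SYS_170_PREFIX="$HOME/llvm"
cargo build --release
```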
> And we should probably also build for mac :P

Good call, although I will have to figure out how to cross-build for M1.
The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.
Regarding building for Mac: no need to cross-compile; just run this workflow a second time on a Mac using a matrix strategy?
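i.e. something along these lines (runner labels are illustrative):

```yaml
jobs:
  llvm:
    strategy:
      matrix:
        os: [ubuntu-22.04, macos-12]
    # Each matrix entry runs the same build on its own runner.
    runs-on: ${{ matrix.os }}
```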
> Regarding building for Mac: no need to cross-compile; just run this workflow a second time on a Mac using a matrix strategy?

That won't work, because GitHub Actions has Intel Mac runners, not M1 ones. So I think I will have to cross-compile. I mean, I can also do that with a matrix strategy. Maybe I could even use a Mac runner to compile the Mac version for both architectures. But either way, compiling for aarch64-darwin will require cross-compilation, no matter which kind of runner we do it on.
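A hedged sketch of what that cross-configure might look like on a Mac runner (`CMAKE_OSX_ARCHITECTURES`, `LLVM_HOST_TRIPLE`, and `LLVM_TABLEGEN` are standard CMake/LLVM variables, but this exact invocation is untested):

```sh
# Cross-configure an arm64 LLVM on an x86_64 macOS runner. A native
# llvm-tblgen must be built first (assumed here to live in build-native/).
cmake -S llvm -B build-arm64 -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_OSX_ARCHITECTURES=arm64 \
  -DLLVM_HOST_TRIPLE=aarch64-apple-darwin \
  -DLLVM_TABLEGEN="$PWD/build-native/bin/llvm-tblgen"
```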
> The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.

To be precise - you were running out of space when building? It doesn't seem to happen here. And I don't think that the size of the artifacts at the end exceeds 2 GB?
Good point about cross-compilation.

> To be precise - you were running out of space when building? It doesn't seem to happen here.

It doesn't happen here because you're not building bpf-linker.

> And I don't think that the size of the artifacts at the end exceeds 2 GB?

Best to check. Also, as I said, it is the size of the bpf-linker artifacts that is much bigger without dynamic linking.

I spent a lot of time making this work, so I strongly encourage you to test thoroughly; it's about to become much harder to iterate once these things span two repos.
The branch was force-pushed from a40ebe2 to c344d3d.
FYI rustc is now on LLVM 17 and we use https://github.com/aya-rs/llvm-project/tree/rustc/17.0-2023-07-29 in bpf-linker CI
The branch was force-pushed from c344d3d to fac40e1.
Co-authored-by: Tamir Duberstein <[email protected]>
The branch was force-pushed from fac40e1 to 4dd261d.
I'm closing this. It seems like we cannot use GitHub artifacts in the way we intended: it's impossible to download artifacts without a PAT. When you access the zip link directly without being authenticated, whether with curl, a browser, or any other HTTP client: https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639/zip (that link is shown in https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639), it always responds with an error demanding authentication.
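To illustrate (a sketch; the artifact ID is the one above, and the auth requirement is GitHub's, not something this workflow controls):

```sh
# Unauthenticated: the API refuses to serve the artifact, even on a public repo.
curl -sI https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639/zip

# Authenticated with a PAT: the endpoint redirects (302) to a short-lived
# download URL, which -L follows.
curl -sL -H "Authorization: Bearer $GITHUB_TOKEN" -o llvm.zip \
  https://api.github.com/repos/aya-rs/llvm-project/actions/artifacts/856190639/zip
```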
And since LLVM 17 is in rustc already, I think we could just use it. We can rethink static linking with LLVM when we are about to release bpf-linker, so that we could use official LLVM builds to link against.
`UBSan-Standalone-sparc :: TestCases/Misc/Linux/diag-stacktrace.cpp` `FAIL`s on 32 and 64-bit Linux/sparc64 (and on Solaris/sparcv9, too: the test isn't Linux-specific at all).

With `UBSAN_OPTIONS=fast_unwind_on_fatal=1`, the stack trace shows a duplicate innermost frame:

```
compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:31: runtime error: execution reached the end of a value-returning function without returning a value
    #0 0x7003a708 in f() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:35
    #1 0x7003a708 in f() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:14:35
    #2 0x7003a714 in g() compiler-rt/test/ubsan/TestCases/Misc/Linux/diag-stacktrace.cpp:17:38
```

which isn't seen with `fast_unwind_on_fatal=0`.

This turns out to be another fallout from fixing `__builtin_return_address`/`__builtin_extract_return_addr` on SPARC. In `sanitizer_stacktrace_sparc.cpp` (`BufferedStackTrace::UnwindFast`), the `pc` arg is the return address, while `pc1` from the stack frame (`fr_savpc`) is the address of the `call` insn, leading to a double entry for the innermost frame in `trace_buffer[]`. This patch fixes this by moving the adjustment before all uses.

Tested on `sparc64-unknown-linux-gnu` and `sparcv9-sun-solaris2.11` (with the `ubsan/TestCases/Misc/Linux` tests enabled).

(cherry picked from commit 3368a32)
Co-authored-by: Tamir Duberstein [email protected]