
ci: Build and upload LLVM build artifact #1

Closed
78 changes: 78 additions & 0 deletions .github/workflows/build.yml
@@ -0,0 +1,78 @@
name: Build LLVM

on:
  push:
    branches:
      - 'rustc/*'
  pull_request:
    branches:
      - 'rustc/*'

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - id: cache-key
        run: echo "cache-key=llvm-$(git rev-parse --short HEAD)-1" >> "$GITHUB_ENV"
Member

I think we probably want to do something smarter now. Unlike the usage in the bpf-linker repo, we should always use the cache here. The cache key should be just an encoding of the arguments we pass to cmake.

Put differently: imagine that we are applying our changes to a new LLVM branch from rustc -- we want to make use of whatever cached build products are available to us.

Then we should always run the LLVM build and upload the result.
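The key derivation suggested here might look like the following (a hypothetical sketch, not this PR's code; the flag list is abbreviated):

```shell
# Hypothetical sketch: derive the cache key from the cmake arguments,
# so the cache is reused across rustc/* branches and invalidated only
# when the build configuration changes. Flag list abbreviated.
CMAKE_FLAGS='-DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_TARGETS_TO_BUILD=BPF'
KEY="llvm-$(printf '%s' "$CMAKE_FLAGS" | sha256sum | cut -c1-16)"
echo "cache-key=$KEY"    # in the workflow, append to "$GITHUB_ENV" instead
```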

There are also some vestiges of the reusable workflow here (see workflow_call above), which we probably no longer need.

Lastly: how are we going to consume the build product generated here from:

  • the bpf-linker automation
  • a user's local machine?


We could probably put a link in bpf-linker's README and tell users how to set the env var to pick it up. And we should probably also build for mac :P

Also I don't think we need to build dynamic libs? Nothing uses them and their usage is discouraged

Member

dynamic libs are very useful when building the linker in CI because the binaries are way way smaller.

Member Author

Dynamic libs are actually disabled here - dylibs are enabled by the -DBUILD_SHARED_LIBS option, which is not used here. I would prefer to stick to static ones even though they are bigger - as Alessandro said, dylibs are discouraged and I don't think we will hit any storage limits with the static ones.

About consuming the artifacts, I was thinking about having a cargo xtask build command which would by default pull the newest artifact from GitHub (we could just look for the newest artifact from the newest rustc/* branch). Of course only on the feature/fix-di branch for now. Once everything is ready and we are going to release bpf-linker with BTF support, I think the best would be releasing binaries with the newest LLVM artifact, statically linked - in CI, on every tag. WDYT?
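Picking "the newest rustc/* branch" could be sketched like this (hypothetical; the branch names below are made up, and in xtask the list would come from the GitHub API rather than being hard-coded):

```shell
# Hypothetical sketch: version-sort rustc/* branch names and take the last.
# Branch names below are made up for illustration.
branches='rustc/1.70.0
rustc/1.71.0
rustc/1.69.0'
newest=$(printf '%s\n' "$branches" | sort -V | tail -n 1)
echo "$newest"    # → rustc/1.71.0
```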


I must have hallucinated I swear I saw those options 😂 Yeah I don't have a strong opinion, personally I'd just download the tarball somewhere and then set LLVM_SYS_BLA to point to it, but up to you

Member Author
@vadorovsky vadorovsky Aug 4, 2023

I must have hallucinated I swear I saw those options 😂

It wasn't a microdose! 😏 Seriously speaking, we have that option in bpf-linker CI, but I removed it here. I guess we could also remove it there?

Yeah I don't have a strong opinion, personally I'd just download the tarball somewhere and then set LLVM_SYS_BLA to point to it, but up to you

Yeah, I was thinking about doing exactly that in xtask. I have a strong preference for that, because checking what's the newest rustc/* branch and what's the newest artifact isn't something I want to do manually. And I would also prefer to not do it in Bash. 😛
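For a user's local machine, the manual approach could look like this (hypothetical sketch; the variable name is an assumption based on llvm-sys, which locates LLVM through LLVM_SYS_&lt;version&gt;_PREFIX, e.g. LLVM_SYS_160_PREFIX for LLVM 16):

```shell
# Hypothetical sketch: point llvm-sys at an unpacked LLVM artifact.
# LLVM_SYS_160_PREFIX is the llvm-sys convention for LLVM 16; adjust
# the version suffix to match the artifact.
PREFIX="${LLVM_PREFIX:-$HOME/llvm-install}"    # where the artifact was unpacked
export LLVM_SYS_160_PREFIX="$PREFIX"
if [ -x "$LLVM_SYS_160_PREFIX/bin/llvm-config" ]; then
  "$LLVM_SYS_160_PREFIX/bin/llvm-config" --version   # sanity check
fi
# then build bpf-linker against it: cargo build
```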

And we should probably also build for mac :P

Good call, although I will have to figure out how to cross-build for M1.

Member

The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.

Regarding building for Mac: no need to cross compile, just run this workflow a second time on a Mac using a matrix strategy?

Member Author
@vadorovsky vadorovsky Aug 4, 2023

Regarding building for Mac: no need to cross compile, just run this workflow a second time on a Mac using a matrix strategy?

That won't work, because GitHub Actions has Intel Mac runners, not M1. So I think I will have to cross-compile. I mean, I can also do that with a matrix strategy. Maybe I could even use a Mac runner to compile the Mac version for both architectures. But again, compiling for aarch64-darwin will require cross-compilation for sure, no matter on which kind of runner we do it.
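The matrix idea could be sketched like this (hypothetical fragment; since GitHub-hosted macOS runners were x86_64 at the time, an aarch64-darwin build would still need cross-compilation on top of it):

```yaml
# Hypothetical sketch: run the same build job on Linux and macOS runners.
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    # steps: ... (unchanged; cross-compiling for aarch64-darwin would need
    # extra cmake flags such as CMAKE_OSX_ARCHITECTURES, not shown here)
```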

Member Author

The equivalent script in bpf-linker enables dynamic libs because I did in fact observe running out of space. Remember that each feature permutation links another copy of LLVM, and again for test binaries. It's not unusual for this to reach several gigabytes.

To be precise - you were running out of space when building? Here it doesn't seem to happen. And I don't think that the size of artifacts at the end exceeds 2 GB?

Member

Good point about cross compilation.

To be precise - you were running out of space when building? Here it doesn't seem to happen.

It doesn't happen here because you're not building bpf-linker.

And I don't think that the size of artifacts at the end exceeds 2 GB?

Best to check. Also, as I said, it is the size of the bpf-linker artifacts that is much bigger without dynamic linking.

I spent a lot of time making this work, so I strongly encourage you to test thoroughly; it's about to become much harder to iterate once these things span two repos.


      - name: Cache LLVM
        id: cache-llvm
        uses: actions/cache@v3
        with:
          path: llvm-install
          key: aya-llvm

      - name: Install Tools
        if: steps.cache-llvm.outputs.cache-hit != 'true'
        run: |
          set -euxo pipefail
          wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | \
            gpg --dearmor - | \
            sudo tee /usr/share/keyrings/kitware-archive-keyring.gpg >/dev/null
          echo 'deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ focal main' | \
            sudo tee /etc/apt/sources.list.d/kitware.list >/dev/null
          sudo apt-get update
          sudo apt-get -y install cmake ninja-build clang lld

      - name: Configure LLVM
        if: steps.cache-llvm.outputs.cache-hit != 'true'
        run: |
          set -euxo pipefail
          cmake \
            -S llvm \
            -B build \
            -G Ninja \
            -DCMAKE_BUILD_TYPE=RelWithDebInfo \
            -DCMAKE_C_COMPILER=clang \
            -DCMAKE_CXX_COMPILER=clang++ \
            -DCMAKE_INSTALL_PREFIX="${{ github.workspace }}/llvm-install" \
            -DLLVM_BUILD_LLVM_DYLIB=ON \
            -DLLVM_ENABLE_ASSERTIONS=ON \
            -DLLVM_ENABLE_PROJECTS= \
            -DLLVM_ENABLE_RUNTIMES= \
            -DLLVM_INSTALL_UTILS=ON \
            -DLLVM_LINK_LLVM_DYLIB=ON \
            -DLLVM_TARGETS_TO_BUILD=BPF \
            -DLLVM_USE_LINKER=lld

      - name: Install LLVM
        if: steps.cache-llvm.outputs.cache-hit != 'true'
        run: |
          set -euxo pipefail
          cmake --build build --target install

      - name: Remove all files except llvm-config from llvm-install/bin
        run: |
          find "${{ github.workspace }}/llvm-install/bin" ! -name 'llvm-config' -type f -exec rm -f {} +

      - name: Upload Artifacts
        # if: github.event_name == 'push'
        uses: actions/upload-artifact@v3
        with:
          name: llvm-artifacts
          path: |
            ${{ github.workspace }}/llvm-install/**