Building multiple charms simultaneously can cause caching errors #1845
Comments
Some more points:
@orfeas-k thanks for the update! The simultaneous building makes a lot of sense. In that case, I think what's happening is that while one instance is still downloading the wheel, another instance is trying to use that wheel file. A workaround is to set different […]
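The truncated suggestion above presumably means giving each parallel build its own cache. A minimal sketch of that idea, assuming the `CRAFT_SHARED_CACHE` environment variable controls where charmcraft keeps its shared cache (the variable name and the charm paths are assumptions, not confirmed in this thread):

```python
# Hypothetical sketch: run two charmcraft packs in parallel, each pointed at
# its own cache directory so they never race on a half-downloaded wheel.
# Assumes CRAFT_SHARED_CACHE is honoured by this charmcraft version.
import os
import subprocess
import tempfile

charm_dirs = ["./charms/istio-pilot", "./charms/istio-gateway"]  # illustrative paths

procs = []
for charm_dir in charm_dirs:
    env = dict(os.environ)
    env["CRAFT_SHARED_CACHE"] = tempfile.mkdtemp(prefix="charmcraft-cache-")
    procs.append(subprocess.Popen(["charmcraft", "pack"], cwd=charm_dir, env=env))

for proc in procs:
    proc.wait()
```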
Thank you for reporting your feedback! The internal ticket has been created: https://warthogs.atlassian.net/browse/CRAFT-3313.
Thinking more about this: I think the best way to go about it is to use hard links to copy the cache structure on a per-container basis. When we start a new container, copy the existing cache this way. Then when charmcraft completes, copy the new wheels from its cache back to the outer cache. @carlcsaposs-canonical since you've got a lot of experience manipulating charmcraft's cache, I'd like to hear from you about caching issues as well if you have time.
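A rough sketch of the hard-link idea described above, assuming both caches live on the same filesystem (hard links cannot cross filesystems); this is only an illustration, not the implementation that landed:

```python
# Illustrative sketch: seed a per-container cache from the outer cache using
# hard links (cheap, no data copied), then link newly downloaded wheels back.
import os
import shutil
from pathlib import Path


def seed_container_cache(outer_cache: Path, container_cache: Path) -> None:
    # copy_function=os.link recreates the tree with hard links instead of copies.
    shutil.copytree(outer_cache, container_cache, copy_function=os.link, dirs_exist_ok=True)


def merge_back(outer_cache: Path, container_cache: Path) -> None:
    # After the build, link back any files the container added; existing files
    # are left untouched, so only fully downloaded wheels ever appear in the
    # outer cache.
    for path in container_cache.rglob("*"):
        if not path.is_file():
            continue
        target = outer_cache / path.relative_to(container_cache)
        if not target.exists():
            target.parent.mkdir(parents=True, exist_ok=True)
            os.link(path, target)
```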
fyi, I think "hard links" and "copy" have very different meanings
Regarding partially downloaded wheels, charmcraftcache has handling for this: https://github.com/canonical/charmcraftcache/blob/0a2d8acbe8113d74dd6acdd9287e5511674e433c/charmcraftcache/main.py#L369-L370 In my personal opinion, if you want to build charms in parallel (e.g. on CI), you should use multiple hosts.
Bumping since we also encountered this same issue in the CI for kfp-operators; the logs of the run can be found here. I also reproduced the same error locally. When it comes to the version of […]
What we're going to do here is detect parallel builds and, if they're running, write a warning and disable the cache.
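The detect-and-disable behaviour described here could look roughly like the following; a sketch assuming an advisory `fcntl` lock file inside the cache directory (the actual charmcraft change may differ):

```python
# Sketch: try a non-blocking exclusive lock on the shared cache. If another
# build already holds it, warn and fall back to a throwaway cache, which
# effectively disables the shared cache for this run.
import fcntl
import logging
import tempfile
from pathlib import Path

logger = logging.getLogger(__name__)
_lock_handles = []  # keep handles alive so the lock is held for the whole build


def acquire_cache(shared_cache: Path) -> Path:
    shared_cache.mkdir(parents=True, exist_ok=True)
    handle = (shared_cache / ".lock").open("w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        _lock_handles.append(handle)
        return shared_cache  # we own the shared cache for this build
    except BlockingIOError:
        handle.close()
        logger.warning(
            "Another charmcraft build is using the shared cache; "
            "using a temporary cache for this build instead."
        )
        return Path(tempfile.mkdtemp(prefix="charmcraft-cache-"))
```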
Locks the shared cache directory to prevent concurrency issues. Fixes #1845 CRAFT-3313
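For reference, the locking approach named in the linked fix could be sketched as a blocking variant of the same idea: serialize access by waiting for an exclusive lock on the cache directory. Again an illustration only, with an illustrative cache path, not the actual craft code:

```python
# Sketch: block until the exclusive lock on the shared cache directory is
# free, so concurrent builds take turns instead of corrupting each other.
import fcntl
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def locked_cache(shared_cache: Path):
    shared_cache.mkdir(parents=True, exist_ok=True)
    with (shared_cache / ".lock").open("w") as handle:
        fcntl.flock(handle, fcntl.LOCK_EX)  # blocks until other builds finish
        try:
            yield shared_cache
        finally:
            fcntl.flock(handle, fcntl.LOCK_UN)


# Usage: wrap the part of the build that touches the cache (path illustrative).
# with locked_cache(Path.home() / ".cache" / "charmcraft"):
#     ...
```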
Bug Description
Using charmcraft from latest/edge in GH runners results in istio charms failing to be built. More specifically, this results in the following zipfile bug. This used to happen intermittently, but with latest/edge it happens 90% of the time (example where the CI has been re-run 7 times). Note that, as mentioned in canonical/bundle-kubeflow#1005 (comment):
A pointer by @lengau was that it could be an issue with the GH runner disk space, but freeing up some space before running the tests (canonical/istio-operators#532, 44G up from 21G) did not help; the issue persisted.
Apart from the logs pasted below, here are logs from another run for reference.
To Reproduce
Rerun the CI in this draft PR canonical/istio-operators#506 (where charmcraft from latest/edge is being used) compared to when using latest/candidate.
Environment
MicroK8s 1.25, 1.26
Juju 3.4.5
Some charmcraft versions that have been failing:
But the charms are built with 3.1.2.
charmcraft.yaml
Relevant log output