Remote exec is not resilient to remote build farm worker deaths #18319
Comments
I think our stance is that Bazel should fall back on transient or infrastructure failures, and fail otherwise. In order for that to happen, the remote protocol must be able to distinguish the two kinds of failure. bazelbuild/remote-apis#244 seems relevant.
bazelbuild/remote-apis#244 would not fix this. If I'm reading it correctly, that change distinguishes between process exit codes and signals, which is a fine thing to do, but is unrelated to this problem. For example: if an action invokes the compiler and the compiler crashes, it is good to report that to the user as a crash rather than as the numeric exit code of a graceful exit. But retrying a compiler crash is likely to result in another crash. However, if the action failed because the worker crashed (say the worker timed out saving the output to the cache and doesn't handle that timeout gracefully), then that is an infrastructure failure and a retry is warranted because it might succeed. There is a fine line between the two, though. For example, consider what happens if the remote workers have tight memory constraints. In such a situation, running a very memory-hungry compiler action on them could get the compiler killed. While this is not the compiler's fault (the worker decided to kill the process), it is still not an infrastructure failure, because a retry will result in the same crash.
Yes, I agree that bazelbuild/remote-apis#244 as it currently stands isn't sufficient; I was saying more generally that the process termination reason must contain enough information for Bazel to decide whether to retry and/or fall back.
This adds logic to treat remote actions that terminate due to a signal as retryable errors, assuming that such terminations are caused by a worker crash. Because this is a hack to paper over a current Remote Build deficiency, and because this heuristic may be wrong, this feature is hidden behind a new --experimental_remote_exit_signals_are_transient_errors flag. Mitigates bazelbuild#18319.
What would you think about something like Snowflake-Labs@c3e8c47 as a stopgap until the Remote Build protocol and its implementations are adjusted to deal with this issue properly?
This adds logic to treat remote actions that terminate due to a signal as retryable errors, assuming that such terminations are caused by a worker crash. Because this is a hack to paper over a current Remote Build deficiency, and because this heuristic may be wrong, this feature is hidden behind a new --snowflake_remote_exit_signals_are_transient_errors flag. Mitigates bazelbuild#18319. Author: Julio Merino <[email protected]> Date: Wed May 24 07:12:43 2023 -0700
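For reference, here is a hedged sketch of the heuristic that the commit message above describes. The actual patch modifies Bazel's Java remote execution code; this is only a minimal Go illustration of the idea, and the classify function, its parameters, and the use of a negative exit code to stand in for "terminated by a signal" are assumptions made for this example, not details of the patch itself.

```go
// Minimal sketch (not the actual Bazel patch) of the flag-gated heuristic:
// treat a signal-terminated remote action as a transient error so that the
// retrier or the local fallback can handle it. The negative-exit-code
// convention used here is an assumption for illustration only.
package main

import "fmt"

// classify decides what to do with a finished remote action.
func classify(exitCode int32, signalsAreTransient bool) string {
	switch {
	case exitCode == 0:
		return "success"
	case exitCode < 0 && signalsAreTransient:
		// The process did not exit on its own; assume the worker died and
		// let the retry / local-fallback machinery take another shot.
		return "transient failure: retry or fall back to local execution"
	default:
		return "permanent failure: report the action's exit code to the user"
	}
}

func main() {
	fmt.Println(classify(1, true))   // ordinary action failure: not retried
	fmt.Println(classify(-1, true))  // presumed worker death: retryable
	fmt.Println(classify(-1, false)) // flag disabled: current Bazel behavior
}
```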
Thank you for contributing to the Bazel repository! This issue has been marked as stale since it has not had any activity in the last 1+ years. It will be closed in the next 90 days unless any other activity occurs. If you think this issue is still relevant and should stay open, please post any comment here and the issue will no longer be marked as stale.
And this is intentional. Because if a worker dies, you should simply make sure to set
Who is "you" here?
Your infrastructure. Buildbarn, Buildfarm, Buildgrid, whatever you're using.
Question: how are you terminating this worker? Make sure you only send a termination signal to bb_worker/bb_runner itself. Do NOT send it to any of the processes that bb_runner spawns that belong to the build action. If you are using tini to launch bb_runner, do NOT pass in the -g flag.
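To make the distinction above concrete, here is a small sketch (my own illustration, not Buildbarn code) of the difference between signalling a single process and signalling its whole process group, which is roughly what tini's -g flag does:

```go
// Illustration only: signalling one process vs. its entire process group.
// Sending the signal to the negative PID (the group) also hits the child
// processes, which is the behavior to avoid for bb_worker/bb_runner.
package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60")
	// Put the child into its own process group, as runners typically do.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)

	pid := cmd.Process.Pid

	// Terminate only this process (the equivalent of signalling the runner alone):
	_ = syscall.Kill(pid, syscall.SIGTERM)

	// Terminate the whole group, children included (what forwarding with -g would do):
	// _ = syscall.Kill(-pid, syscall.SIGTERM)

	_ = cmd.Wait()
}
```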
Description of the bug:
We are observing random build breakages when our deployment of Build Barn is suffering from instability.
To further diagnose this, I've been trying to inject manual failures into our Build Barn (by forcibly terminating individual server processes while a build is ongoing) to see how Bazel reacts. In the vast majority of cases, Bazel correctly detects the failure and falls back to local execution (provided --remote_local_fallback is enabled) or retries the failed action. However, there seems to be one case from which Bazel cannot recover. If I terminate a remote Build Barn worker while a long-running action is being executed on it, Bazel will immediately fail the build with:
Neither remote retries nor the local fallback kick in, failing the build for what should be a retryable error.
I'm not sure whether this is something that ought to be fixed at the Bazel level or at the Build Barn level. I suspect this is a Build Barn issue based on what I'll explain below, but I'm not completely sure whether that's the case or whether Bazel could do better, which is why I'm starting here. cc @EdSchouten
What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Define a genrule whose command takes a while to run (e.g. sleep 120 ; touch $@). Build it with remote execution against Build Barn and with --remote_local_fallback enabled. While the action is running remotely, forcibly terminate the Build Barn worker executing it. Observe Bazel immediately abort the build as described above.
Which operating system are you running Bazel on?
N/A
What is the output of bazel info release?
bazel-6.1.1
If bazel info release returns development version or (@non-git), tell us how you built Bazel.
No response
What's the output of git remote get-url origin; git rev-parse master; git rev-parse HEAD?
No response
Have you found anything relevant by searching the web?
No response
Any other information, logs, or outputs that you want to share?
One detail that caught my attention in the above error message is Exit -1. Process exit codes are 8 bits and should not be negative, and if we run a genrule that does exit -1, Bazel (correctly) reports Exit 255 in the failure. But Bazel (and the RBE protocol) use 32-bit signed integers to propagate exit codes, so I suspect the -1 is coming from Build Barn itself or from the Bazel remote execution code. If that's true, we could potentially use this to discern real action failures from infrastructure failures, and implement retries or the local execution fallback accordingly.
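As a quick sanity check of the 8-bit claim, here is a small standalone sketch (my own illustration, not Bazel or Build Barn code; it uses exit 255 directly to sidestep shell portability quirks with negative arguments to exit):

```go
// Illustration of the 8-bit point above: a child can only report the low 8
// bits of its exit value, so exit(-1) reaches the parent as 255, which is
// exactly what Bazel prints as "Exit 255" for such a genrule. Any -1 that
// Bazel shows therefore has to be synthesized higher up the stack.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	fmt.Println(-1 & 0xff) // 255: what 8 bits make of -1

	cmd := exec.Command("sh", "-c", "exit 255")
	_ = cmd.Run()                            // returns an *exec.ExitError for non-zero statuses
	fmt.Println(cmd.ProcessState.ExitCode()) // 255, never -1, for a normal exit
}
```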
Bazel’s remote execution module seems to propagate the ExitCode of the remote worker verbatim in the SpawnResult instances, so I do not think Bazel is generating this fictitious -1. It has to come from Build Barn.
The only place where I see this -1 originating from is the pkg/runner/local_runner.go file in bb-remote-execution, where the result of cmd.ProcessState.ExitCode() is propagated as the exit code of the action result. Go has the following to say about this function: "ExitCode returns the exit code of the exited process, or -1 if the process hasn't exited or was terminated by a signal."
However… this is in the “local runner” code, which I understand runs within the worker. Forcibly terminating the worker process wouldn't give the worker code a chance to compute and return the -1, unless the worker caught a graceful termination signal and tried to do something with it. But the bb-remote-execution code does not seem to catch any signal… so this is very strange.
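To see this Go behavior in isolation, here is a standalone sketch (not code from bb-remote-execution) that kills its own child with SIGKILL, prints what ExitCode() reports, and shows that the actual signal can still be recovered from the Unix wait status:

```go
// Demonstrates that Go collapses "terminated by a signal" into an exit code
// of -1, even though the signal itself is still available via WaitStatus.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Simulate the process being killed out from under us (OOM killer,
	// dying worker, manual kill -9, ...).
	time.Sleep(100 * time.Millisecond)
	_ = cmd.Process.Kill() // delivers SIGKILL on Unix

	_ = cmd.Wait() // returns an error because the child did not exit cleanly

	fmt.Println("ExitCode():", cmd.ProcessState.ExitCode()) // -1

	// The real termination reason is still in the platform wait status.
	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		fmt.Println("terminated by signal:", ws.Signal()) // killed (SIGKILL)
	}
}
```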
This all means that we cannot use -1 as a discriminating factor for worker deaths. A genrule that does kill -9 $$ is also reported as Exit -1 from the build farm due to the above explanation from Go. (But, interestingly, running this same genrule locally reports Killed instead of Exit -1 because, well, the code in Bazel is doing the right thing regarding exit codes, unlike Go's simplistic view of the world.)