fix: Retry foghorn LighthouseJob update using original variable instead of DeepCopy variable #1591
Conversation
Hi @dippynark. Thanks for your PR. I'm waiting for a jenkins-x member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.
/ok-to-test
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: msvticket. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Failed to merge this PR due to:
We are using Lighthouse foghorn v1.14.4, and very occasionally it gets into a continuous loop trying to update a LighthouseJob when there is a conflict.

Initially there is a failure with the following message:

It does try again and appears to succeed:

`took 2 attempts to update Job`
However, I think this is a false positive, because `jobCopy` (instead of `&job`) is being passed into `retryModifyJob`, so on retry `r.client.Get` overwrites `jobCopy` and reverts the updated status, meaning the retry just updates the LighthouseJob to its current value.

This PR instead passes `&job` into `retryModifyJob`, which matches the Tekton controller (`lighthouse/pkg/engines/tekton/controller.go`, line 146 at 867b0b5). A minimal sketch of the failure mode is below.
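To make the aliasing problem concrete, here is a minimal, self-contained sketch of the failure mode. The names (`retryModifyJob`, `Status`, the fake `get`/`update` helpers) are placeholders modelled on the description above, not the actual Lighthouse code:

```go
package main

import "fmt"

type Job struct{ Status string }

// server stands in for the copy stored in the API server.
var server = Job{Status: "triggered"}
var conflictOnce = true

// get plays the role of r.client.Get: refresh obj from the server.
func get(obj *Job) { *obj = server }

// update plays the role of r.client.Update: fail once with a conflict,
// then persist obj to the server.
func update(obj *Job) error {
	if conflictOnce {
		conflictOnce = false
		return fmt.Errorf("conflict")
	}
	server = *obj
	return nil
}

// retryModifyJob mirrors the shape of the real helper: run f, and on
// conflict re-Get the object it was handed and try again.
func retryModifyJob(obj *Job, f func(*Job) error) {
	for f(obj) != nil {
		get(obj)
	}
}

func main() {
	var job Job
	get(&job)      // initial r.client.Get
	jobCopy := job // job.DeepCopy()
	jobCopy.Status = "success"

	f := func(j *Job) error {
		j.Status = jobCopy.Status // copy the desired status onto j
		return update(j)
	}

	// BUG: passing &jobCopy aliases j and jobCopy, so the retry's Get
	// reverts jobCopy.Status before f can re-apply it.
	retryModifyJob(&jobCopy, f)
	fmt.Println(server.Status) // prints "triggered": the retried update was a no-op
}
```

With `&job` passed instead (as this PR does), the retry's `get` only refreshes `job`, `jobCopy.Status` stays `"success"`, and the second update persists the intended status.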
I don't fully understand why foghorn gets into a continuous loop; I would have thought the update just wouldn't happen (perhaps the no-op update is actually triggering another event and foghorn is continuously conflicting with itself). Either way, this should fix the problem, because on the next reconciliation we shouldn't get past the `reflect.DeepEqual` check.

As a future improvement, Lighthouse foghorn and the other controllers could use `RetryOnConflict` from client-go to handle conflicts instead of the custom `retryModifyJob` function: https://pkg.go.dev/k8s.io/client-go/util/retry#RetryOnConflict
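For illustration, that could look roughly like the sketch below. This is a hypothetical helper, not a proposed diff: `updateJobStatus` and its parameters are assumptions, though `retry.RetryOnConflict` and `retry.DefaultRetry` are the real client-go API:

```go
package foghorn

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"

	lighthousev1alpha1 "github.com/jenkins-x/lighthouse/pkg/apis/lighthouse/v1alpha1"
)

// updateJobStatus is a hypothetical helper showing the RetryOnConflict shape.
func updateJobStatus(ctx context.Context, c client.Client, key client.ObjectKey,
	desired lighthousev1alpha1.LighthouseJobStatus) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-fetch on every attempt so the change is applied to the latest
		// resourceVersion; this avoids the stale-copy problem described above
		// because the desired status lives outside the object being refreshed.
		var job lighthousev1alpha1.LighthouseJob
		if err := c.Get(ctx, key, &job); err != nil {
			return err
		}
		job.Status = desired
		return c.Status().Update(ctx, &job)
	})
}
```

`RetryOnConflict` only retries on conflict errors and gives up after the backoff is exhausted, which would also bound the retry loop instead of relying on a hand-rolled attempt counter.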