[FIX] queue_job: max retry #622
base: 15.0
Conversation
When a job fails because of a concurrent update error, it does not respect the max retries set on the job. The problem is that the ``perform`` method logic that handles retries is never called, because ``runjob`` in the controller that triggers jobs catches the expected exception and silences it (this is done on purpose, to avoid polluting the logs). So for now, this adds an extra check before the job is run, to make sure the max retries limit is enforced once it has been reached.
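For illustration, here is a minimal sketch of the kind of pre-run guard described above. The helper name `_fail_job_if_retry_exhausted` and its exact placement are assumptions, not the actual patch, while `retry`, `max_retries` and `uuid` are standard attributes of the `queue_job` `Job` class.

```python
# Illustrative sketch only; not the actual patch in this PR.
# Idea: before performing an enqueued job, fail it explicitly if its retry
# budget is already spent, instead of silently postponing it yet again.
from odoo.addons.queue_job.exception import FailedJobError


def _fail_job_if_retry_exhausted(job):
    """Raise FailedJobError when the job already reached its retry limit.

    A falsy ``max_retries`` (0 / None) means "retry forever", so the guard
    only applies when an explicit limit was set on the job.
    """
    if job.max_retries and job.retry >= job.max_retries:
        raise FailedJobError(
            "Max. retries (%d) reached for job %s" % (job.max_retries, job.uuid)
        )
```

Such a check would run in the controller path right before the job is performed, so even failures that are swallowed for logging reasons still count against the limit.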
Hi @guewen,
There hasn't been any activity on this pull request in the past 4 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
@guewen can you check this?
Seems reasonable, but broad enough in scope that it should be covered by unit tests?
Some context:
It looks like the code that is supposed to handle max retries is never called. But I am not sure what the right way to bubble the exception up would be, as there is some logic here:
queue/queue_job/controllers/main.py, line 125 (commit e2c6bab)
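Roughly, the logic being pointed at looks like the following. This is a simplified paraphrase for illustration, not a verbatim copy of `main.py`; the constant and function names are placeholders, and only `Job.perform()`, `Job.postpone()` and `Job.store()` are real `queue_job` methods.

```python
# Simplified paraphrase of the controller-side handling, for illustration only.
from psycopg2 import OperationalError, errorcodes

# typical transaction-serialization / concurrency errors that get retried
CONCURRENCY_ERRORS = (
    errorcodes.LOCK_NOT_AVAILABLE,
    errorcodes.SERIALIZATION_FAILURE,
    errorcodes.DEADLOCK_DETECTED,
)


def run_enqueued_job(job):
    try:
        job.perform()  # a concurrent update surfaces as OperationalError
    except OperationalError as err:
        if err.pgcode not in CONCURRENCY_ERRORS:
            raise
        # The error is silenced on purpose so the logs stay clean: the job is
        # postponed and re-enqueued. The max_retries check inside
        # ``Job.perform()`` is tied to its own retry exception handling, so
        # this path never reaches it and the job can bounce forever.
        job.postpone(result=str(err), seconds=5)
        job.store()
```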
Not enforcing max retries can be very problematic if your jobs can hit many concurrent updates. I had an issue where, somehow, the same job record (yes, the job record itself, not some other record the job would update) was being updated by two job runners at the same time, so it would always fail and retry. It accumulated over 400 retries, and the only way to stop it was to restart Odoo.
For example, without this fix we can end up in a situation like this: