Inconsistent executions count with ActiveJob.retry_on #41
Comments
Checked commenting out the … — upon further thinking about this, I am not sure what the expected way of working should be. Sidekiq works this way also, but I haven't checked the execution count specifically, e.g. for a failing task that is …
I don't have the full background as to why ActiveJob decided to go with the … I'm going to close this issue until there is a strong use case for it, in which case we can reopen.
Well, for my part I agree it's a bit confusing, but providing an empty block (or one that only does error tracking) simply stops the backend mechanism from kicking in. I feel the overlap and confusion come from the backends all wanting to be usable without Rails or ActiveJob, so they "duplicate" its functionality (like cloudtasker's on_error and on_dead, which are already baked into ActiveJob). I understand if you are opposed to this change, so I am not pushing it, but I thought I would share my two cents.
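For context, here is a minimal sketch of the pattern described above, using only the standard ActiveJob API (the job name and the `ErrorTracker` call are illustrative placeholders, not part of any gem). Giving `retry_on` a block means that once the attempts are exhausted the error is handled in the block instead of being re-raised, so it never reaches the backend and the backend's own retry/dead mechanism does not kick in:

```ruby
class ProcessOrderJob < ApplicationJob
  queue_as :default

  # ActiveJob handles the retries itself; when the attempts run out, the
  # block is invoked instead of re-raising, so the failure never propagates
  # to the backend and its own retry mechanism stays out of the picture.
  retry_on Net::OpenTimeout, wait: 5.seconds, attempts: 5 do |job, error|
    ErrorTracker.notify(error, job_id: job.job_id) # illustrative placeholder
  end

  # Unrecoverable errors are simply dropped.
  discard_on ActiveJob::DeserializationError

  def perform(order_id)
    # ...
  end
end
```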
Just checked with Sidekiq and can confirm it works as described, e.g. with … Edit: and just to be explicit, without …
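A minimal probe job one might use to observe this (the job name is illustrative and it assumes the Sidekiq adapter is configured): under Sidekiq, the serialized job data is kept intact across retries, so the logged `executions` value increments 1, 2, 3 across the retries scheduled by `retry_on`:

```ruby
class CounterProbeJob < ApplicationJob
  retry_on StandardError, wait: 1.second, attempts: 3

  def perform
    Rails.logger.info("executions=#{executions}") # logs 1, then 2, then 3 under Sidekiq
    raise "boom" # always fail so the retry counter can be observed
  end
end
```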
Fair enough. I've reopened the issue. If you feel like opening a PR for it, feel free to do so. Otherwise I'll check how to address this some time in the future.
I can cook up a PR for this next week, do you have any suggestions? Should I just ignore the values from the headers (when using the JobWrapper only, so only for ActiveJob)? Should I add helper methods to ActiveJob (so users can actually get those values if they want to)?
It has been a while, but I've made a proposal for an intermediate solution based on some of the suggestions in this issue. It comes with a few sharp edges, but it would already provide an escape hatch around the override and could be further extended as needed. If you can spare some time, it would be great to get some feedback 🙏.
Rails 5.2.2, gem version 0.11.

I understand `retry_on` and company are not supported yet (though `discard_on` simply works), so this is not strictly a bug.

`executions` (along with `provider_id` and `priority`, though these two are not problematic) is filtered from serialization (`ActiveJob::QueueAdapters#build_worker` and `ActiveJob::QueueAdapters::SERIALIZATION_FILTERED_KEYS`) and supplied by Google Cloud Tasks (also see #6). However, because of retrying/rescheduling (new task id, retry count is 0) this value keeps resetting, which in turn leads to never-ending retrial.

If `retry_on` functionality is desirable, I was thinking that by putting this information in `job_meta` (or even in the root `worker_payload`) it could be retained. If you are not opposed to this change I would gladly try to make a PR for it.
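To make the failure mode concrete, the snippet below is a simplified paraphrase (not the framework's or the gem's exact code) of what `retry_on` does in Rails 5.2: it compares the job's `executions` counter against the configured number of attempts, so if the adapter keeps rebuilding `executions` from a Cloud Tasks retry count that starts at 0 for every newly scheduled task, the limit is never reached and the job retries forever.

```ruby
# Simplified paraphrase of ActiveJob::Exceptions.retry_on (Rails 5.2) --
# shown only to illustrate why a resetting `executions` counter causes
# endless retries; this is not the exact framework source.
def retry_on(*exceptions, wait: 3.seconds, attempts: 5, queue: nil, priority: nil)
  rescue_from(*exceptions) do |error|
    if executions < attempts
      # `executions` comes from the deserialized job data; if the adapter
      # rebuilds it from a task retry count that is 0 for every new task,
      # this branch is taken on every single attempt.
      retry_job wait: wait, queue: queue, priority: priority
    elsif block_given?
      yield self, error # user-supplied block runs once attempts are exhausted
    else
      raise error       # otherwise the error propagates to the backend
    end
  end
end
```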