When you need to handle failed jobs, there is no clear way to do it. For example, I have a group of jobs that are part of a chain, and that chain ends with a postprocess job. If all group jobs succeed, the postprocess job runs and we can handle the success path (e.g. sending a message informing users about the result). But there is no way to handle the failure path.
If any job fails, the last job that handles the group jobs is not run (which is OK in some cases), but then the remaining jobs are simply stuck. I know there are notifications about failed jobs, but those notifications are mainly aimed at technical people. In my case, regular users don't use queue jobs directly; they don't even have access to the queue jobs application. These jobs run without them even knowing it.
So it would be great if there were a way to trigger a job on failure, so it could postprocess all failed jobs in a graph. My case involves a group of jobs rather than a single job, since those group jobs do the same thing, just on separate batches of data (but such a feature would most likely handle any number of jobs; this is just an example).
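To make the "stuck" behaviour concrete, here is a minimal, self-contained simulation of how I understand dependency resolution to work today. It does not use the real queue_job API; the `run` helper and the job names are hypothetical, invented purely for illustration:

```python
def run(jobs, deps):
    """Run jobs in dependency order; a job only starts when every job
    it depends on is 'done'. Returns the final state of each job."""
    states = {name: "pending" for name in jobs}
    progressed = True
    while progressed:
        progressed = False
        for name, fn in jobs.items():
            if states[name] != "pending":
                continue
            dep_states = [states[d] for d in deps.get(name, [])]
            if all(s == "done" for s in dep_states):
                try:
                    fn()
                    states[name] = "done"
                except Exception:
                    states[name] = "failed"
                progressed = True
    return states


# A group of two batch jobs chained into a single postprocess job.
def batch_1():
    pass


def batch_2():
    raise RuntimeError("batch 2 failed")


def postprocess():
    pass


states = run(
    {"batch_1": batch_1, "batch_2": batch_2, "postprocess": postprocess},
    {"postprocess": ["batch_1", "batch_2"]},
)
# Because batch_2 failed, postprocess never becomes runnable and
# stays 'pending' forever -- this is the stuck situation described above.
print(states)
```

The key point the simulation shows: once one dependency ends up `failed`, the dependent job can never satisfy its "all dependencies done" condition, so the graph quietly stalls with no user-facing outcome.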
Describe the solution you'd like
GIVEN
A jobs graph where we can create a job in a chain that runs on failure (as far as I know, jobs currently proceed only on the done state, not the failed state).
WHEN
Any job that the "failure processing job" depends on fails
THEN
It runs, but only if at least one of the jobs it depends on failed (and it should wait until all jobs it depends on have finished, regardless of whether some of them have already failed).
GIVEN
A jobs graph where we can create a job in a chain that runs on failure (as far as I know, jobs currently proceed only on the done state, not the failed state).
WHEN
All jobs it depends on are done
THEN
The failure handling job is skipped (this might require a new skipped state?)
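The two scenarios above can be sketched as a runnable simulation. This is not the queue_job implementation; the `settle` helper, the `failure_handlers` set, and the `skipped` state are all hypothetical, modelling only the semantics proposed here:

```python
def settle(outcomes, deps, failure_handlers):
    """outcomes: job name -> 'ok' or 'fail' (what the job does if run).
    Returns the final state of every job under the proposed semantics."""
    states = {name: "pending" for name in outcomes}
    progressed = True
    while progressed:
        progressed = False
        for name in outcomes:
            if states[name] != "pending":
                continue
            dep_states = [states[d] for d in deps.get(name, [])]
            if name in failure_handlers:
                # Proposed rule: wait until every job it depends on has
                # finished, regardless of outcome.
                if any(s == "pending" for s in dep_states):
                    continue
                if any(s == "failed" for s in dep_states):
                    states[name] = "done"      # handler actually ran
                else:
                    states[name] = "skipped"   # proposed new state
                progressed = True
            elif all(s == "done" for s in dep_states):
                states[name] = "done" if outcomes[name] == "ok" else "failed"
                progressed = True
    return states


deps = {"handler": ["batch_1", "batch_2"]}

# Scenario 1: one batch fails -> the handler runs once both batches finish.
s1 = settle({"batch_1": "ok", "batch_2": "fail", "handler": "ok"},
            deps, {"handler"})
print(s1["handler"])  # the handler ran

# Scenario 2: everything succeeds -> the handler is skipped.
s2 = settle({"batch_1": "ok", "batch_2": "ok", "handler": "ok"},
            deps, {"handler"})
print(s2["handler"])  # the handler was skipped
```

Note how scenario 1 satisfies the "wait for all dependencies" requirement: the handler does not fire the moment `batch_2` fails; it only becomes eligible once no dependency is still pending.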
Describe alternatives you've considered
I've considered hooking into the failure message posting that already exists in queue jobs, which relates to my current problem (sending one aggregated message about all failed jobs instead of a message per job). But a job that handles failed jobs would be more universal, as there could be various cases where failure needs to be handled.
Also, maybe a simpler solution would be to allow the postprocessing job to run on both done and failed states (with some extra parameter passed). That way it would still be possible to handle failed jobs, but the developer would be responsible for deciding what to do inside that job and for detecting whether the related jobs failed or all passed.
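The simpler alternative could look roughly like this. The callback signature (a postprocess job receiving the states of the jobs it depends on) is an assumption of mine, not an existing queue_job API:

```python
def postprocess(dep_states):
    """Always runs once all jobs it depends on have finished, and
    receives their final states so it can branch on failure itself."""
    failed = [name for name, state in dep_states.items()
              if state == "failed"]
    if failed:
        return "notified users about failed batches: " + ", ".join(sorted(failed))
    return "notified users about success"


# All batches succeeded:
print(postprocess({"batch_1": "done", "batch_2": "done"}))
# One batch failed:
print(postprocess({"batch_1": "done", "batch_2": "failed"}))
```

The trade-off is visible here: the framework stays simpler (no new skipped state), but every postprocess job must carry its own success/failure branching logic.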