
[16.0][IMP] queue_job: run specific hook method after max_retries #674

Open · wants to merge 2 commits into base: 16.0
27 changes: 27 additions & 0 deletions queue_job/README.rst
@@ -397,6 +397,33 @@ Based on this configuration, we can tell that:
* retries 10 to 15 postponed 30 seconds later
* all subsequent retries postponed 5 minutes later

**Job function: hook on reaching max retries**

When a job fails and has reached its maximum number of retries,
its state is set to ``Failed``.
You can define a hook method to handle this event;
it must be named ``{method_name}_on_max_retries_reached``.

Here's an example:

.. code-block:: python

    from odoo import models

    class MyModel(models.Model):
        _name = 'my.model'

        def button_done(self):
            self.env['my.model'].with_delay().my_method('a', k=2)

        def my_method(self, a, k=None):
            ...  # the job's actual work

        def my_method_on_max_retries_reached(self):
            # Called when the job for ``my_method`` fails after
            # reaching the maximum number of retries.
            # Add your custom logic here.
            ...

In this example, ``my_method_on_max_retries_reached`` is called when the
job for ``my_method`` fails after reaching the maximum retries. Add your
custom handling logic inside this method.

**Job Context**

The context of the recordset of the job, or any recordset passed in arguments of
Expand Down
15 changes: 15 additions & 0 deletions queue_job/job.py
@@ -527,6 +527,21 @@

        elif not self.max_retries:  # infinite retries
            raise
        elif self.retry >= self.max_retries:
            hook = f"{self.method_name}_on_max_retries_reached"
            if hasattr(self.recordset, hook):
                recordset = self.recordset.with_context(
                    job_uuid=self.uuid, exc_info=self.exc_info
                )
                try:
                    getattr(recordset, hook)()
                except Exception as ex:
                    _logger.debug(
                        "Exception on %s.%s() for job UUID %s: %s",
                        self.recordset,
                        hook,
                        self.uuid,
                        ex,
                    )
            type_, value, traceback = sys.exc_info()
            # change the exception type but keep the original
            # traceback and message:
Contributor:

I am generally not a fan of interpolating method names. Pass on_exception as an additional argument to delayable/with_delay instead?

Perhaps the scope could be slightly broader as well? Give the developer a chance to handle all types of exception, not just FailedJobError?
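A minimal sketch of the suggested call shape (the on_exception keyword is hypothetical; it only illustrates the proposal):

    from odoo import models

    class MyModel(models.Model):
        _name = "my.model"

        def button_done(self):
            # Hypothetical keyword: name the handler explicitly instead
            # of deriving it from the job method's name.
            self.env["my.model"].with_delay(
                on_exception="_handle_my_method_failure",
            ).my_method("a", k=2)

        def my_method(self, a, k=None):
            ...  # the job's actual work

        def _handle_my_method_failure(self, exc):
            # Would receive any exception type, not just FailedJobError.
            ...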

Contributor:
  • interpolating method names is quite a common pattern in odoo code: see lots of getattr in the codebase :)
  • quite elegant imho to be able to define method_name and method_name_on_max_retries_reached nearby, but of course it's a bit subjective
  • regarding your last point, that's an interesting idea but it feels quite natural to handle exceptions in the job code itself, e.g. in the EDI framework here

Contributor:
A more declarative approach could be to use a decorator, but it would likely add complexity.
@QuocDuong1306 could you please update the docs?
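For illustration, a sketch of what such a decorator could look like (entirely hypothetical; no such decorator exists in queue_job or in this PR):

    from odoo import models

    def on_max_retries_reached(handler_name):
        # Hypothetical decorator: record the handler's name on the job
        # method so the queue could look it up declaratively.
        def decorator(method):
            method._on_max_retries_handler = handler_name
            return method
        return decorator

    class MyModel(models.Model):
        _name = "my.model"

        @on_max_retries_reached("_notify_failure")
        def my_method(self, a, k=None):
            ...  # the job's actual work

        def _notify_failure(self):
            ...  # e.g. post a message, alert a channel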

Author:

Hi @simahawk, I updated the docs.

Another commenter:
I would say that whenever a job reaches the failed state it would be useful to have a hook, not just when it fails after max retries but when it fails for any reason.

For example, the issue described here: #618

Contributor:
That's a good point. Yet I think you can subscribe to that particular event (the job switching to failed) easily.
In fact we could subscribe even in this case and check the max retry counter.
@guewen did you have something in mind regarding handling failures?
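One way such a subscription could look, assuming an override of the queue.job model (illustrative only, not part of this PR; state, retry and max_retries are existing fields on queue.job):

    from odoo import models

    class QueueJob(models.Model):
        _inherit = "queue.job"

        def write(self, vals):
            # React to any transition to the failed state, whatever the
            # cause, and inspect the retry counters to tell the cases apart.
            res = super().write(vals)
            if vals.get("state") == "failed":
                for job in self:
                    if job.retry >= job.max_retries:
                        ...  # failed after exhausting retries
                    else:
                        ...  # failed for another reason
            return res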

Member:
Previously this is the kind of thing we would add to the @job decorator; things that were configured on this decorator are now on queue.job.function. This is akin to the "related actions", where we store the method to execute there. Different jobs can be pointed to the same error handler, and we would be able to use a handler on "no-code jobs" easily (e.g. I call an existing method with with_delay in a script, and I want to notify Slack when max failures are reached using a handler that already exists in the code; I can create a queue job function and set this handler from the UI).

I agree with your points on triggering when switching to failed regardless of retries; then it would be worth providing the max retry and current retry count to the handler as well.

Something to pay real attention to in the implementation is the transaction handling: I think in the current form, if the job failed with any error that causes a rollback (such as a serialization error, for example), the transaction is unusable and the handler will probably fail as well! We should probably execute it in a new transaction, but then be aware that it will not be up to date with whatever happened in the current transaction, and could be subject to deadlocks depending on what the failed job did and what the failure handler does...

Considering that, I'd also be more comfortable if the handling happened somewhere in

    def _try_perform_job(self, env, job):
        """Try to perform the job."""
        job.set_started()
        job.store()
        env.cr.commit()
        _logger.debug("%s started", job)

        job.perform()
        # Triggers any stored computed fields before calling 'set_done'
        # so that will be part of the 'exec_time'
        env.flush_all()
        job.set_done()
        job.store()
        env.flush_all()
        env.cr.commit()
        _logger.debug("%s done", job)

So the transactional flow is more straightforward.
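A rough sketch of what isolating the handler in a new transaction could look like (the helper and its placement are invented; odoo.registry and api.Environment are the standard APIs for opening a separate cursor):

    import odoo
    from odoo import api

    def _run_failure_handler_in_new_transaction(env, job):
        # Invented helper: run the hook on a fresh cursor so a rolled-back
        # job transaction (e.g. after a serialization error) cannot poison
        # the handler's own work.
        registry = odoo.registry(env.cr.dbname)
        with registry.cursor() as new_cr:
            new_env = api.Environment(new_cr, env.uid, env.context)
            records = job.recordset.with_env(new_env)
            hook = f"{job.method_name}_on_max_retries_reached"
            if hasattr(records, hook):
                getattr(records, hook)()
            # the cursor commits on clean exit from the context manager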

27 changes: 27 additions & 0 deletions queue_job/readme/USAGE.rst
@@ -242,6 +242,33 @@ Based on this configuration, we can tell that:
* retries 10 to 15 postponed 30 seconds later
* all subsequent retries postponed 5 minutes later

**Job function: hook on reaching max retries**

When a job fails and has reached its maximum number of retries,
its state is set to ``Failed``.
You can define a hook method to handle this event;
it must be named ``{method_name}_on_max_retries_reached``.

Here's an example:

.. code-block:: python

    from odoo import models

    class MyModel(models.Model):
        _name = 'my.model'

        def button_done(self):
            self.env['my.model'].with_delay().my_method('a', k=2)

        def my_method(self, a, k=None):
            ...  # the job's actual work

        def my_method_on_max_retries_reached(self):
            # Called when the job for ``my_method`` fails after
            # reaching the maximum number of retries.
            # Add your custom logic here.
            ...

In this example, ``my_method_on_max_retries_reached`` is called when the
job for ``my_method`` fails after reaching the maximum retries. Add your
custom handling logic inside this method.

**Job Context**

The context of the recordset of the job, or any recordset passed in arguments of
Expand Down
33 changes: 29 additions & 4 deletions queue_job/static/description/index.html
@@ -8,10 +8,11 @@

/*
:Author: David Goodger ([email protected])
-:Id: $Id: html4css1.css 8954 2022-01-20 10:10:25Z milde $
+:Id: $Id: html4css1.css 9511 2024-01-13 09:50:07Z milde $
:Copyright: This stylesheet has been placed in the public domain.

Default cascading style sheet for the HTML output of Docutils.
Despite the name, some widely supported CSS2 features are used.

See https://docutils.sourceforge.io/docs/howto/html-stylesheets.html for how to
customize this style sheet.
@@ -274,7 +275,7 @@
margin-left: 2em ;
margin-right: 2em }

-pre.code .ln { color: grey; } /* line numbers */
+pre.code .ln { color: gray; } /* line numbers */
pre.code, code { background-color: #eeeeee }
pre.code .comment, code .comment { color: #5C6576 }
pre.code .keyword, code .keyword { color: #3B0D06; font-weight: bold }
@@ -300,7 +301,7 @@
span.pre {
white-space: pre }

-span.problematic {
+span.problematic, pre.problematic {
color: red }

span.section-subtitle {
@@ -715,6 +716,28 @@ <h3><a class="toc-backref" href="#toc-entry-7">Configure default options for job
<li>retries 10 to 15 postponed 30 seconds later</li>
<li>all subsequent retries postponed 5 minutes later</li>
</ul>
<p><strong>Job function: hook on reaching max retries</strong></p>
<p>When a job fails and has reached its maximum number of retries,
its state is set to <tt class="docutils literal">Failed</tt>.
You can define a hook method to handle this event;
it must be named <tt class="docutils literal">{method_name}_on_max_retries_reached</tt>.</p>
<p>Here’s an example:</p>
<pre class="code python literal-block">
<span class="kn">from</span> <span class="nn">odoo</span> <span class="kn">import</span> <span class="n">models</span><span class="p">,</span> <span class="n">fields</span><span class="p">,</span> <span class="n">api</span><span class="w">

</span><span class="k">class</span> <span class="nc">MyModel</span><span class="p">(</span><span class="n">models</span><span class="o">.</span><span class="n">Model</span><span class="p">):</span><span class="w">
</span> <span class="n">_name</span> <span class="o">=</span> <span class="s1">'my.model'</span><span class="w">

</span> <span class="k">def</span> <span class="nf">button_done</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span><span class="w">
</span> <span class="bp">self</span><span class="o">.</span><span class="n">env</span><span class="p">[</span><span class="s1">'my.model'</span><span class="p">]</span><span class="o">.</span><span class="n">with_delay</span><span class="p">()</span><span class="o">.</span><span class="n">my_method</span><span class="p">(</span><span class="s1">'a'</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span><span class="w">

</span> <span class="k">def</span> <span class="nf">my_method_on_max_retries_reached</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span><span class="w">
</span> <span class="c1"># This method is called when the job reaches the maximum retries and fails</span><span class="w">
</span> <span class="c1"># Add your custom logic here</span>
</pre>
<p>In this example, <tt class="docutils literal">my_method_on_max_retries_reached</tt> is called
when the job for <tt class="docutils literal">my_method</tt> fails after reaching the maximum retries.
Add your custom handling logic inside this method.</p>
<p><strong>Job Context</strong></p>
<p>The context of the recordset of the job, or any recordset passed in arguments of
a job, is transferred to the job according to an allow-list.</p>
@@ -958,7 +981,9 @@ <h2><a class="toc-backref" href="#toc-entry-17">Contributors</a></h2>
<div class="section" id="maintainers">
<h2><a class="toc-backref" href="#toc-entry-18">Maintainers</a></h2>
<p>This module is maintained by the OCA.</p>
<a class="reference external image-reference" href="https://odoo-community.org"><img alt="Odoo Community Association" src="https://odoo-community.org/logo.png" /></a>
<a class="reference external image-reference" href="https://odoo-community.org">
<img alt="Odoo Community Association" src="https://odoo-community.org/logo.png" />
</a>
<p>OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.</p>