Job Queue
This addon adds an integrated Job Queue to Odoo.
It allows postponing method calls so that they are executed asynchronously.
Jobs are executed in the background by a Jobrunner, in their own transaction.
Example:
    import logging

    from odoo import models, fields, api

    _logger = logging.getLogger(__name__)


    class MyModel(models.Model):
        _name = 'my.model'

        def my_method(self, a, k=None):
            _logger.info('executed with a: %s and k: %s', a, k)


    class MyOtherModel(models.Model):
        _name = 'my.other.model'

        def button_do_stuff(self):
            self.env['my.model'].with_delay().my_method('a', k=2)
In the snippet of code above, when we call button_do_stuff, a job capturing the method and arguments will be postponed. It will be executed as soon as the Jobrunner has a free bucket, which can be instantaneous if no other job is running.
Features:
- Views for jobs, jobs are stored in PostgreSQL
- Jobrunner: execute the jobs, highly efficient thanks to PostgreSQL’s NOTIFY
- Channels: give the root channel and its sub-channels a capacity and segregate jobs into them. This allows, for instance, restricting heavy jobs to run one at a time while small ones run 4 at a time.
- Retries: ability to retry jobs by raising a dedicated exception type (see the sketch after this list)
- Retry Pattern: for instance, the first 3 tries retry after 10 seconds, the next 5 tries retry after 1 minute, …
- Job properties: priorities, estimated time of arrival (ETA), custom description, number of retries
- Related Actions: link an action on the job view, such as open the record concerned by the job
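For example, a minimal sketch of a job method that asks to be retried by raising the addon's RetryableJobError (the model and the external call are hypothetical):

    from odoo import models
    from odoo.addons.queue_job.exception import RetryableJobError


    class MyModel(models.Model):
        _inherit = "my.model"  # hypothetical model

        def sync_with_external_service(self):
            response = self._call_external_api()  # hypothetical flaky call
            if response is None:
                # raising a retryable error makes the jobrunner retry the job
                # later, following the configured retry pattern
                raise RetryableJobError("External service unavailable, will retry")
            self.write({"external_ref": response["id"]})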
Table of contents
Installation
Be sure to have the requests library installed.
Configuration
- Using environment variables and command line:
- Adjust environment variables (optional):
- ODOO_QUEUE_JOB_CHANNELS=root:4 or any other channels configuration. The default is root:1
- if xmlrpc_port is not set: ODOO_QUEUE_JOB_PORT=8069
- Start Odoo with --load=web,queue_job and --workers greater than 1. [1]
- Using the Odoo configuration file:
    [options]
    (...)
    workers = 6
    server_wide_modules = web,queue_job
    (...)
    [queue_job]
    channels = root:2
- Confirm the runner is starting correctly by checking the odoo log file:
    ...INFO...queue_job.jobrunner.runner: starting
    ...INFO...queue_job.jobrunner.runner: initializing database connections
    ...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
    ...INFO...queue_job.jobrunner.runner: database connections ready
- Create jobs (e.g. using base_import_async) and observe that they start immediately and in parallel.
- Tip: to enable debug logging for the queue job, use --log-handler=odoo.addons.queue_job:DEBUG
[1] It works with the threaded Odoo server too, although this way of running Odoo is obviously not for production purposes.
Usage
To use this module, you need to:
- Go to the Job Queue menu
Developers
Delaying jobs
The fast way to enqueue a job for a method is to use with_delay() on a record or model:
    def button_done(self):
        self.with_delay().print_confirmation_document(self.state)
        self.write({"state": "done"})
        return True
Here, the method print_confirmation_document() will be executed asynchronously as a job. with_delay() can take several parameters to define more precisely how the job is executed (priority, …).
All the arguments passed to the method being delayed are stored in the job and passed to the method when it is executed asynchronously, including self, so the current record is maintained during the job execution (warning: the context is not kept).
Dependencies can be expressed between jobs. To start a graph of jobs, use delayable() on a record or model. The following is the equivalent of with_delay() but using the long form:
    def button_done(self):
        delayable = self.delayable()
        delayable.print_confirmation_document(self.state)
        delayable.delay()
        self.write({"state": "done"})
        return True
Methods of Delayable objects return the Delayable itself, so they can be chained as a builder pattern, which in some cases allows building jobs dynamically:
    def button_generate_simple_with_delayable(self):
        self.ensure_one()
        # Introduction of a delayable object, using a builder pattern
        # allowing to chain jobs or set properties. The delay() method
        # on the delayable object actually stores the delayable objects
        # in the queue_job table
        (
            self.delayable()
            .generate_thumbnail((50, 50))
            .set(priority=30)
            .set(description=_("generate xxx"))
            .delay()
        )
The simplest way to define a dependency is to use .on_done(job) on a Delayable:
    def button_chain_done(self):
        self.ensure_one()
        job1 = self.browse(1).delayable().generate_thumbnail((50, 50))
        job2 = self.browse(1).delayable().generate_thumbnail((50, 50))
        job3 = self.browse(1).delayable().generate_thumbnail((50, 50))
        # job 3 is executed when job 2 is done, which is executed when job 1 is done
        job1.on_done(job2.on_done(job3)).delay()
Delayables can be chained to form more complex graphs using the chain() and group() primitives. A chain represents a sequence of jobs to execute in order; a group represents jobs which can be executed in parallel. Using chain() has the same effect as several nested on_done() calls but is more readable. Both can be combined to form a graph: for instance, a group [A] of jobs can block another group [B] of jobs, so that the jobs of group [B] are executed only when all the jobs of group [A] are done. The code would look like:
    from odoo.addons.queue_job.delay import group, chain

    def button_done(self):
        group_a = group(self.delayable().method_foo(), self.delayable().method_bar())
        group_b = group(self.delayable().method_baz(1), self.delayable().method_baz(2))
        chain(group_a, group_b).delay()
        self.write({"state": "done"})
        return True
When a failure happens in a graph of jobs, the execution of the jobs that depend on the failed job stops. They remain in the wait_dependencies state until their “parent” job is successful. This can happen in two ways: either the parent job retries and succeeds on a later try, or the parent job is manually “set to done” by a user. In both cases, the dependency is resolved and the graph continues to be processed. Alternatively, the failed job and all its dependent jobs can be canceled by a user. The other jobs of the graph that do not depend on the failed job continue their execution in any case.
Note: delay() must be called on the delayable, chain, or group which is at the top of the graph. In the example above, if it was called on group_a, then group_b would never be delayed (but a warning would be shown).
Enqueueing Job Options
- priority: default is 10; the closer it is to 0, the sooner the job will be executed (a combined sketch of these options follows this list)
- eta: Estimated Time of Arrival of the job. It will not be executed before this date/time
- max_retries: default is 5, the maximum number of retries before giving up and setting the job state to ‘failed’. A value of 0 means infinite retries.
- description: human-readable description of the job. If not set, the description is computed from the function's docstring or method name
- channel: the complete name of the channel to use to process the function. If specified it overrides the one defined on the function
- identity_key: key uniquely identifying the job, if specified and a job with the same key has not yet been run, the new job will not be created
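A hedged sketch combining several of these options in a single call (the export_record method and the channel name are placeholders):

    from datetime import datetime, timedelta


    def button_export(self):
        self.with_delay(
            priority=5,  # runs before default-priority (10) jobs
            eta=datetime.now() + timedelta(hours=1),  # do not start before 1 hour
            max_retries=3,  # give up after 3 failed tries
            description="Export record to the external system",
            channel="root.exports",  # hypothetical channel
            identity_key="export-%s" % self.id,  # skip if an identical job is pending
        ).export_record()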
Configure default options for jobs
In earlier versions, jobs could be configured using the @job decorator. This is now obsolete; jobs can be configured using optional queue.job.function and queue.job.channel XML records.
Example of channel:
    <record id="channel_sale" model="queue.job.channel">
        <field name="name">sale</field>
        <field name="parent_id" ref="queue_job.channel_root" />
    </record>
Example of job function:
    <record id="job_function_sale_order_action_done" model="queue.job.function">
        <field name="model_id" ref="sale.model_sale_order" />
        <field name="method">action_done</field>
        <field name="channel_id" ref="channel_sale" />
        <field name="related_action" eval='{"func_name": "custom_related_action"}' />
        <field name="retry_pattern" eval="{1: 60, 2: 180, 3: 10, 5: 300}" />
    </record>
The general form for the name is: <model.name>.method.
The channel, related action and retry pattern options are optional, they are documented below.
When writing modules, if 2+ modules add a job function or channel with the same name (and parent for channels), they’ll be merged in the same record, even if they have different xmlids. On uninstall, the merged record is deleted when all the modules using it are uninstalled.
Job function: model
If the function is defined in an abstract model, you cannot write <field name="model_id" ref="xml_id_of_the_abstract_model" />; you have to define a job function for each model that inherits from the abstract model.
Job function: channel
The channel where the job will be delayed. The default channel is root.
Job function: related action
The Related Action appears as a button on the Job’s view. The button will execute the defined action.
The default one is to open the view of the record related to the job (form view when there is a single record, list view for several records). In many cases, the default related action is enough and doesn’t need customization, but it can be customized by providing a dictionary on the job function:
    {
        "enable": False,
        "func_name": "related_action_partner",
        "kwargs": {"name": "Partner"},
    }
- enable: when False, the button has no effect (default: True)
- func_name: name of the method on queue.job that returns an action
- kwargs: extra arguments to pass to the related action method
Example of related action code:
    class QueueJob(models.Model):
        _inherit = 'queue.job'

        def related_action_partner(self, name):
            self.ensure_one()
            model = self.model_name
            partner = self.records
            action = {
                'name': name,
                'type': 'ir.actions.act_window',
                'res_model': model,
                'view_type': 'form',
                'view_mode': 'form',
                'res_id': partner.id,
            }
            return action
Job function: retry pattern
When a job fails with a retryable error type, it is automatically retried later. By default, the retry is always 10 minutes later.
A retry pattern can be configured on the job function. A pattern means “from the Xth try onward, postpone by Y seconds”. It is expressed as a dictionary where keys are try counts and values are the number of seconds to postpone, as integers:
    {
        1: 10,
        5: 20,
        10: 30,
        15: 300,
    }
Based on this configuration, we can tell that (a rough sketch of this lookup follows the list):
- the first 5 retries are postponed by 10 seconds
- retries 5 to 10 are postponed by 20 seconds
- retries 10 to 15 are postponed by 30 seconds
- all subsequent retries are postponed by 5 minutes
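The sketch below (not the addon's actual implementation) resolves the postponement for a given try by taking the largest configured key that does not exceed the try count:

    def retry_delay(retry_pattern, try_count):
        # keys mean "from this try onward", so pick the largest key <= try_count
        applicable = [t for t in retry_pattern if t <= try_count]
        if not applicable:
            return 600  # assumption: fall back to the default 10 minutes
        return retry_pattern[max(applicable)]

    pattern = {1: 10, 5: 20, 10: 30, 15: 300}
    assert retry_delay(pattern, 3) == 10    # early tries wait 10 seconds
    assert retry_delay(pattern, 7) == 20
    assert retry_delay(pattern, 20) == 300  # later tries wait 5 minutes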
Job Context
The context of the recordset of the job, or any recordset passed in arguments of a job, is transferred to the job according to an allow-list.
The default allow-list is ("tz", "lang", "allowed_company_ids", "force_company", "active_test"). It can be customized in Base._job_prepare_context_before_enqueue_keys.
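For instance, a minimal sketch of extending the allow-list from a custom module (the extra context key is a placeholder):

    from odoo import api, models


    class Base(models.AbstractModel):
        _inherit = "base"

        @api.model
        def _job_prepare_context_before_enqueue_keys(self):
            # keep the default keys and propagate one more custom key
            return super()._job_prepare_context_before_enqueue_keys() + (
                "my_custom_key",  # hypothetical context key
            )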
Bypass jobs on running Odoo
When you are developing (e.g. connector modules) you might want to bypass the queue job and run your code immediately.
To do so you can set TEST_QUEUE_JOB_NO_DELAY=1 in your environment.
Bypass jobs in tests
When writing tests on job-related methods, it is always tricky to deal with delayed recordsets. To make your testing life easier, you can set test_queue_job_no_delay=True in the context.
Tip: you can do this at the test case level like this:
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.env = cls.env(context=dict(
            cls.env.context,
            test_queue_job_no_delay=True,  # no jobs thanks
        ))
Then all your tests execute the job methods synchronously without delaying any jobs.
Testing
Asserting enqueued jobs
The recommended way to test jobs, rather than running them directly and synchronously, is to split the tests into two parts:
- one test where the job is mocked (trap jobs with trap_jobs()) and which only verifies that the job has been delayed with the expected arguments
- one test that only calls the method of the job synchronously, to validate the proper behavior of this method only
Proceeding this way means that you can prove that jobs will be enqueued properly at runtime, and it ensures your code does not have a different behavior in tests and in production (because running your jobs synchronously may have a different behavior as they are in the same transaction / in the middle of the method). Additionally, it gives more control on the arguments you want to pass when calling the job’s method (synchronously, this time, in the second type of tests), and it makes tests smaller.
The best way to run such assertions on the enqueued jobs is to use odoo.addons.queue_job.tests.common.trap_jobs().
A very small example (more details in tests/common.py):
    # code
    def my_job_method(self, name, count):
        self.write({"name": " ".join([name] * count)})

    def method_to_test(self):
        count = self.env["other.model"].search_count([])
        self.with_delay(priority=15).my_job_method("Hi!", count=count)
        return count

    # tests
    from odoo.addons.queue_job.tests.common import trap_jobs

    # first test: only checks the expected behavior of the method and the
    # proper enqueuing of jobs
    def test_method_to_test(self):
        with trap_jobs() as trap:
            result = self.env["model"].method_to_test()
            expected_count = 12

            trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
            trap.assert_enqueued_job(
                self.env["model"].my_job_method,
                args=("Hi!",),
                kwargs=dict(count=expected_count),
                properties=dict(priority=15),
            )
            self.assertEqual(result, expected_count)

    # second test: validate the behavior of the job unitarily
    def test_my_job_method(self):
        record = self.env["model"].browse(1)
        record.my_job_method("Hi!", count=12)
        self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
If you prefer, you can still test the whole thing in a single test, by calling jobs_tester.perform_enqueued_jobs() in your test.
    def test_method_to_test(self):
        with trap_jobs() as trap:
            result = self.env["model"].method_to_test()
            expected_count = 12

            trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
            trap.assert_enqueued_job(
                self.env["model"].my_job_method,
                args=("Hi!",),
                kwargs=dict(count=expected_count),
                properties=dict(priority=15),
            )
            self.assertEqual(result, expected_count)

            trap.perform_enqueued_jobs()

            record = self.env["model"].browse(1)
            record.my_job_method("Hi!", count=12)
            self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
Execute jobs synchronously when running Odoo
When you are developing (e.g. connector modules) you might want to bypass the queue job and run your code immediately.
To do so you can set TEST_QUEUE_JOB_NO_DELAY=1 in your environment.
Warning
Do not do this in production
Execute jobs synchronously in tests
You should use trap_jobs, really, but if for any reason you could not use it, and still need to have job methods executed synchronously in your tests, you can do so by setting test_queue_job_no_delay=True in the context.
Tip: you can do this at the test case level like this:
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.env = cls.env(context=dict(
            cls.env.context,
            test_queue_job_no_delay=True,  # no jobs thanks
        ))
Then all your tests execute the job methods synchronously without delaying any jobs.
In tests you’ll have to mute the logger like:
    @mute_logger('odoo.addons.queue_job.models.base')
Note
In graphs of jobs, the test_queue_job_no_delay context key must be in the env of at least one job of the graph for the whole graph to be executed synchronously.
Tips and tricks
- Idempotency (https://www.restapitutorial.com/lessons/idempotency.html): jobs should be idempotent so they can be retried several times without impact on the data.
- The job should test its relevance at the very beginning: the moment when the job will be executed is unknown by design, so the first task of a job should be to check whether the related work is still relevant at the moment of execution (see the sketch after this list).
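For instance, a minimal sketch of a job method that checks its relevance first (model, field, and helper names are hypothetical):

    from odoo import models


    class MyModel(models.Model):
        _inherit = "my.model"  # hypothetical model

        def notify_partner(self):
            self.ensure_one()
            # the job may run long after it was enqueued, so check first
            # whether the work is still relevant before doing anything
            if not self.exists() or self.state != "confirmed":
                return "Nothing to do: record deleted or no longer confirmed"
            self._do_notify()  # hypothetical helper doing the actual work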
Patterns
Over time, two main patterns have emerged:
- For data exposed to users, a model should store the data and that model should be the creator of the job. The job is kept hidden from the users (see the sketch after this list).
- For technical data that is not exposed to users, it is generally fine to create jobs directly, with the data passed as arguments to the job, without an intermediary model.
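As an illustration of the first pattern, a minimal sketch where a dedicated model stores the user-facing data and creates the job itself (all names are hypothetical):

    from odoo import api, fields, models


    class ExportRequest(models.Model):
        _name = "my.export.request"
        _description = "Export request visible to users"

        state = fields.Selection(
            [("pending", "Pending"), ("done", "Done")], default="pending"
        )

        @api.model_create_multi
        def create(self, vals_list):
            records = super().create(vals_list)
            for record in records:
                # the model creates the job; users only see the request record
                record.with_delay().process()
            return records

        def process(self):
            self.ensure_one()
            # ... perform the actual export here ...
            self.write({"state": "done"})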
Known issues / Roadmap
- After creating a new database or installing queue_job on an existing database, Odoo must be restarted for the runner to detect it.
- When Odoo shuts down normally, it waits for running jobs to finish. However, when the Odoo server crashes or is otherwise force-stopped, running jobs are interrupted and the runner has no way to know they have been aborted. In such situations, jobs may remain in the started or enqueued state after the Odoo server is halted. Since the runner cannot know whether they are actually running, and cannot tell whether it is safe to restart them, it does not attempt to restart them automatically. Such stale jobs then fill the running queue and prevent other jobs from starting. You must therefore requeue them manually, either from the Jobs view, or by running the following SQL statement before starting Odoo:
update queue_job set state='pending' where state in ('started', 'enqueued')
Changelog
Next
- [ADD] Run jobrunner as a worker process instead of a thread in the main process (when running with --workers > 0)
- [REF] @job and @related_action deprecated, any method can be delayed, and configured using queue.job.function records
- [MIGRATION] from 13.0 branched at rev. e24ff4b
Bug Tracker
Bugs are tracked on GitHub Issues. In case of trouble, please check there whether your issue has already been reported. If you spotted it first, help us smash it by providing detailed and welcome feedback.
Do not contact contributors directly about support or help with technical issues.
Credits
Authors
- Camptocamp
- ACSONE SA/NV
Contributors
- Guewen Baconnier <[email protected]>
- Stéphane Bidoul <[email protected]>
- Matthieu Dietrich <[email protected]>
- Jos De Graeve <[email protected]>
- David Lefever <[email protected]>
- Laurent Mignon <[email protected]>
- Laetitia Gangloff <[email protected]>
- Cédric Pigeon <[email protected]>
- Tatiana Deribina <[email protected]>
- Souheil Bejaoui <[email protected]>
- Eric Antones <[email protected]>
- Simone Orsi <[email protected]>
Maintainers
This module is maintained by the OCA.

OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use.
Current maintainer:
This module is part of the OCA/queue project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.