CHANGES.md
- `unique=True` with `job_id` prevents duplicate jobs from being enqueued or scheduled. Thanks @selwin!
- `Result` now stores execution metadata (`execution_id`, `execution_started_at` and `execution_ended_at`). Thanks @selwin!
- `Retry` now supports `enqueue_at_front=True`, allowing retried jobs to be requeued at the front of the queue. Thanks @crazillagodzilla!
- `job_id` values may only contain letters, numbers, underscores and dashes. Thanks @selwin!
- `DeferredJobRegistry` is now scored by job creation time. Thanks @selwin!
- `CLIENT LIST` is not supported. Thanks @selwin and @djmaze!
- `CronScheduler` monitoring: you can now monitor each `CronJob`'s latest and next scheduled enqueue time. Thanks @selwin!
- `job.get_status()` now reports the correct final state inside `on_success` / `on_failure` callbacks. Thanks @Fridayai700!
- `CronScheduler` now accepts a `job_timeout` instead of a `timeout` argument. Thanks @selwin!
- `CronScheduler.heartbeat()` does not properly extend the key's TTL. Thanks @selwin!
- `CronScheduler.all()` that returns a list of active schedulers. Thanks @selwin!
- `CronScheduler` now supports running periodic jobs based on a cron string. Thanks @selwin!
- `SpawnWorker` does not properly register successful job executions. Thanks @selwin!
- `Worker` may fail to register custom job and queue classes. Thanks @armicron!
- `result.worker_name` to easily trace which `Worker` generated the result. Thanks @selwin!
- `Worker` will now automatically choose `TimerDeathPenalty` if `UnixSignalDeathPenalty` is not available. Thanks @selwin!
- `CREATED` job status for jobs that are not enqueued nor deferred. Thanks @selwin!
- `Worker` can now import `Job` and `Queue` classes from string. Thanks @selwin!
- `Group.cleanup()`. Thanks @dixoncrews-gdl!
- `rq cron` CLI command. Thanks @selwin!
- `job.cancel(remove_from_dependencies=True)`. Thanks @Marishka17!
- `WorkerPool` now accepts a `queue_class` argument. Thanks @amonsh1!
- redis-py=6.0.0. Thanks @selwin and @terencehonles!
- `log_job_description` is set to False. Thanks @danilopeixoto!
- `pubsub_thread` may die in the background. Thanks @ankush!
- `SpawnWorker` that uses `multiprocessing.spawn` to spawn worker processes. This makes RQ usable in operating systems without `os.fork()`, like Windows. Thanks @selwin!
- `StartedJobRegistry.cleanup()` now properly creates job results. Thanks @OlegZv!
- `WorkerPool` status is never set to `STARTED`. Thanks @taleinat!
- `Worker.monitor_work_horse()` now properly handles `InvalidJobOperation`. Thanks @fancyweb!
- `queue.enqueue_many` now always registers the queue in RQ's queue registry. Thanks @eswolinsky3241!
- `job.id` must not contain `:`. Thanks @sanurielf!
- `job.ended_at` should be set when a job is run synchronously. Thanks @alexprabhat99!
- `Group.all()` now properly handles a non-existing group. Thanks @eswolinsky3241!
- `ruff` instead of `black` as formatter. Thanks @hongquan!

New Features:
- `Worker(default_worker_ttl=10)` is deprecated in favor of `Worker(worker_ttl=10)`. Thanks @stv8!
- `cleanup` parameter to `registry.get_job_ids()` and `registry.get_job_count()`. Thanks @anton-daneyko-ultramarin!

Breaking Changes:
- `RoundRobinWorker` and `RandomWorker` are deprecated. Use `--dequeue-strategy <round-robin/random>` instead.
- `Job.__init__` requires both `id` and `connection` to be passed in.
- `Job.exists()` requires the `connection` argument to be passed in.
- `Queue.all()` requires the `connection` argument.
- `@job` decorator now requires the `connection` argument.

Bug Fixes:
- `name` attribute. Thanks @wckao!
- `job.get_status()` will now always return a `JobStatus` enum. Thanks @indepndnt!
- `worker_pool.get_worker_process()` to make `WorkerPool` easier to extend. Thanks @selwin!
- `job.latest_result(timeout=60)`. Thanks @ajnisbet!
- `stopped_callback` is not respected when a job is enqueued via `enqueue_many()`. Thanks @eswolinsky3241!
- `worker-pool` no longer ignores `--quiet`. Thanks @Mindiell!
- `worker-pool` now starts with scheduler. Thanks @chromium7!
- `Callback(on_stopped='my_callback')`. Thanks @eswolinsky3241!
- `Callback` now accepts a dotted path to a function as input. Thanks @rishabh-ranjan!
- `queue.enqueue_many()` now supports job dependencies. Thanks @eswolinsky3241!
- `rq worker` CLI script now configures logging based on the `DICT_CONFIG` key present in the config file. Thanks @juur!
- `Worker` now uses `lmove()` to implement the reliable queue pattern. Thanks @selwin!
- redis>=4.0.0
- `Scheduler` should only release locks that it successfully acquires. Thanks @xzander!
- `as_text()` function in v1.14. Thanks @tchapi!
- `job.meta` is loaded using the wrong serializer. Thanks @gabriels1234!
- `WorkerPool` (beta) that manages multiple workers in a single CLI. Thanks @selwin!
- `Callback` class that allows more flexibility in declaring job callbacks. Thanks @ronlut!
- `--dequeue-strategy` option to RQ's CLI. Thanks @ccrvlh!
- `--max-idle-time` option to RQ's worker CLI. Thanks @ronlut!
- `--maintenance-interval` option to RQ's worker CLI. Thanks @ronlut!
- `rq info` CLI command. Thanks @iggeehu!
- `queue.enqueue_jobs()` now properly accounts for job dependencies. Thanks @sim6!
- `TimerDeathPenalty` now properly handles negative/infinite timeout. Thanks @marqueurs404!
- `work_horse_killed_handler` argument to `Worker`. Thanks @ronlut!
- `result_ttl` is -1. Thanks @sim6!
- `dequeue_timeout` ignores `worker_ttl`. Thanks @ronlut!
- `job.return_value()` instead of `job.result` when processing callbacks. Thanks @selwin!
- `Worker` code more easily extendable. Thanks @lowercase00!
- `at_front` argument when jobs are scheduled. Thanks @gabriels1234!
- `job.worker_name` after job is finished. Thanks @eswolinsky3241!
- `queue.enqueue_many()` now supports `on_success` and `on_failure` arguments. Thanks @y4n9squared!
- `enqueue_at_front` to `Dependency()` objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
- `SETNAME` command. Thanks @yilmaz-burak!
- `Dependency(allow_failure=True)`. Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
- `job.requeue()` now supports an `at_front` argument. Thanks @buroa!
- `SimpleWorker` now works better on Windows. Thanks @caffeinatedMike!
- `on_failure` and `on_success` arguments to `@job` decorator. Thanks @nepta1998!
- `FAILED` and failure callbacks are now properly called when a job is run synchronously. Thanks @ericman93!
- `result_ttl=0`. Thanks @selwin!
- `ssl_cert_reqs` argument to be passed to Redis. Thanks @mgcdanny!
- `job.cancel()` should also remove itself from registries. Thanks @joshcoden!
- daemon mode. Thanks @mik3y!
- `CanceledJobRegistry` to keep track of canceled jobs. Thanks @selwin!
- `cancel_job(job_id, enqueue_dependents=True)` allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
- `job.get_meta()` to fetch fresh meta value directly from Redis. Thanks @aparcar!
- `job.exc_info`. Thanks @selwin!
- `queue.enqueue(foo, on_success=do_this, on_failure=do_that)`. Thanks @selwin!
- `queue.enqueue_many()` to enqueue many jobs in one go. Thanks @joshcoden!
- `Scheduler` now works with custom serializers. Thanks @alella!
- `RoundRobinWorker` and `RandomWorker` classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
- `--serializer` option to `rq worker` CLI. Thanks @f0cker!
- `STOPPED` job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
- `clean_worker_registry()` now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
- `job.worker_name` attribute that tells you which worker is executing a job. Thanks @selwin!
- `send_stop_job_command()` that tells a worker to stop executing a job. Thanks @selwin!
- `JSONSerializer` as an alternative to the default pickle-based serializer. Thanks @JackBoreczky!
- `RQScheduler` running on Redis with `ssl=True`. Thanks @BobReid!
- `send_shutdown_command()` and `send_kill_horse_command()`. Thanks @selwin!
- `job.last_heartbeat` property that's periodically updated when a job is running. Thanks @theambient!
- `FailedJobRegistry`. Thanks @selwin!
- `hset()` on redis-py >= 3.5.0. Thanks @aparcar!
- `job.get_position()` and `queue.get_job_position()`. Thanks @aparcar!
- `job.requeue()` now returns the modified job. Thanks @ericatkin!
- `hmset` command which causes workers on Redis server < 4 to crash. Thanks @selwin!
- `pickle.HIGHEST_PROTOCOL` for backward compatibility reasons. Thanks @bbayles!
- `delay()` now accepts a `job_id` argument. Thanks @grayshirt!
- `--sentry-ca-certs` and `--sentry-debug` parameters to `rq worker` CLI. Thanks @kichawa!
- `StartedJobRegistry` are given an exception info. Thanks @selwin!
- `__main__` file so you can now do `python -m rq.cli`. Thanks @bbayles!
- `job_id` is now passed to logger during failed jobs. Thanks @smaccona!
- `queue.enqueue_at()` and `queue.enqueue_in()` now support explicit `args` and `kwargs` function invocation. Thanks @selwin!
- `Job.fetch()` now properly handles unpickleable return values. Thanks @selwin!
- `enqueue_at()` and `enqueue_in()` now set job status to `scheduled`. Thanks @coolhacker170597!
- `RQScheduler` logging configuration. Thanks @FlorianPerucki!
- `--verbose` or `--quiet` CLI arguments should override `--logging-level`. Thanks @zyt312074545!
- `rq info` where it doesn't show workers for empty queues. Thanks @zyt312074545!
- `queue.enqueue_dependents()` on custom `Queue` classes. Thanks @van-ess0!
- RQ and Python versions are now stored in job metadata. Thanks @eoranged!
- `failure_ttl` argument to `@job` decorator. Thanks @pax0r!
- `max_jobs` to `Worker.work` and `--max-jobs` to `rq worker` CLI. Thanks @perobertson!
- `--disable-job-desc-logging` to `rq worker` now does what it's supposed to do. Thanks @janierdavila!
- `StartedJobRegistry` now properly handles jobs with infinite timeout. Thanks @macintoshpie!
- `rq info` CLI command now cleans up registries when it first runs. Thanks @selwin!
- `procname` with `setproctitle`. Thanks @j178!

Backward incompatible changes:
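One of the fixes above notes that `Worker` now uses `lmove()` to implement the reliable queue pattern. As a rough illustration (a pure-Python simulation with deques, not RQ's actual Redis-backed implementation; the function names here are hypothetical), the idea is that a job is never simply popped — it is atomically moved onto a per-worker "processing" list, so a crashed worker's in-flight job can still be found and recovered:

```python
from collections import deque


def dequeue_reliably(pending: deque, processing: deque):
    """Move one job from pending onto processing and return it (or None).

    With Redis, LMOVE performs this pop-and-push as a single atomic
    command, which is what makes the pattern crash-safe.
    """
    if not pending:
        return None
    job = pending.popleft()
    processing.append(job)
    return job


def acknowledge(processing: deque, job) -> None:
    """Remove a finished job from the processing list (akin to LREM)."""
    processing.remove(job)
```

If the worker dies before `acknowledge()`, the job is still sitting on the processing list instead of being lost, which is the property the changelog entry refers to.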
- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead. Thanks @selwin!
`FailedQueue` has been replaced with `FailedJobRegistry`:
- `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
- `move_to_failed_queue()` has been removed.

RQ's custom job exception handling mechanism has also changed slightly:
- The default exception handler (which moves failed jobs to `FailedJobRegistry`) can be disabled by doing `Worker(disable_default_exception_handler=True)`.
- Worker names are now randomized. Thanks @selwin!
- `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
Sentry integration has been reworked:
- `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating Sentry integration
- Fixed `Worker.total_working_time` accounting bug. Thanks @selwin!
- `job_timeout` argument to `queue.enqueue()`. This argument will eventually replace the `timeout` argument. Thanks @selwin!
- `job_id` argument to `BaseDeathPenalty` class. Thanks @loopbio!
- `SimpleWorker`. Thanks @selwin!
- `date_format` and `log_format` arguments to `Worker` and `rq worker` CLI. Thanks @shikharsg!
- Since `async` is a keyword in Python 3.7, `Queue(async=False)` has been changed to `Queue(is_async=False)`. The `async` keyword argument will still work, but raises a `DeprecationWarning`. Thanks @dchevell!
- `Worker` now periodically sends heartbeats and checks whether the child process is still alive while performing long-running jobs. Thanks @Kriechi!
- `Job.create` now accepts `timeout` in string format (e.g. `1h`). Thanks @theodesp!
- `worker.main_work_horse()` should exit with return code 0 even if job execution fails. Thanks @selwin!
- `job.delete(delete_dependents=True)` will delete the job along with its dependents. Thanks @olingerc!
- `@job` decorator now accepts `description`, `meta`, `at_front` and `depends_on` kwargs. Thanks @jlucas91 and @nlyubchich!
- `Worker.all(queue=queue)` and `Worker.count(queue=queue)`.
- `job.data` and `job.exc_info` are now stored in compressed format in Redis.
- `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!
- `worker.refresh()` may fail when upgrading from previous versions of RQ.
- `Worker` statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time`. Thanks @selwin!
- `Worker` now sends heartbeat during suspension check. Thanks @theodesp!
- `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
- `--logging-level` option to command line scripts. Thanks @jiajunhuang!
- `job.save()` may fail with unpickleable return value.
- `job.id` with `Job` instance in local `_job_stack`. Thanks @katichev!
- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
- `StopRequested` `worker.heartbeat()`. Thanks @fate0!
- `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
- `--path` parameter. Thanks @kirill and @sjtbham!
- `job.dependency` slightly more efficient. Thanks @liangsijian!
- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!
- `Connection`, `Job`, `Worker` and `Queue` classes via CLI. Thanks @jezdez!
- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
- `job.save_meta()` can now be used to persist custom job data. Thanks @katichev!
- `Worker.find_by_key()` more efficient. Thanks @selwin!
- `timeout` using strings such as `queue.enqueue(foo, timeout='1m')`. Thanks @luojiebin!
- `HerokuWorker` termination logic. Thanks @samuelcolvin!
- `FailedQueue` (#765). Thanks @jsurloppe!
- `enqueue_job` should use Redis pipeline when available (#761). Thanks mtdewulf!
- `fetch_job` now checks that a requested job actually comes from the intended queue (#728, #733)
- `request_force_stop_sigrtmin` failing for Python 3 (#727)
- `Job.cancel()` method on failed queue (#707)
- `cancel_job` now works properly. Thanks @jlopex!
- `Worker.work()` now accepts a `logging_level` argument. Thanks @jlopex!
- `@job` decorator now accepts a `ttl` argument. Thanks @javimb!
- `Worker.__init__` now accepts a `queue_class` keyword argument. Thanks @antoineleclair!
- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
- `Job.create()` and `queue.enqueue_call()` now accept a `meta` argument. Thanks @tornstrom!
- `job.started_at` property. Thanks @samuelcolvin!
- `job.cancel()` and `job.delete()`. Thanks @glaslos!
- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables. Thanks @mgk!
- `rqinfo` now accepts a `--config` option. Thanks @kfrendrich!
- `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
- `DEBUG` level. Thanks @tbaugis!
- `patch_connection` so Redis connection can be easily mocked
- `--exception-handler` command line flag

(July 8th, 2015)
(June 3rd, 2015)
- `FailedQueue` jobs now have their statuses set properly
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!

(April 14th, 2015)
(March 9th, 2015)
- `birth_date` and `death_date` on `Worker`
- `REDIS_SSL` config option

(Jan 30th, 2015)
- `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- `FinishedJobRegistry`. Thanks Nic Cope for helping!
- `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- `queue.enqueue(func, at_front=True)`. Thanks Travis Johnson!
- `click`. Thanks Lyon Zhang!
- `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- `--queue-class` and `--job-class` arguments to `rqworker` script. Thanks David Bonner!

(May 21st, 2014)
(May 8th, 2014)
(May 7th, 2014)
(May 6th, 2014)
(April 28th, 2014)
(April 22nd, 2014)
(April 22nd, 2014)
Emptying the failed queue from the command line is now as simple as running
rqinfo -X or rqinfo --empty-failed-queue.
Job data is unpickled lazily. Thanks, Malthe!
Removed dependency on the times library. Thanks, Malthe!
Job dependencies! Thanks, Selwin.
Custom worker classes, via the --worker-class=path.to.MyClass command line
argument. Thanks, Selwin.
Queue.all() and rqinfo now report empty queues, too. Thanks, Rob!
Fixed a performance issue in Queue.all() when issued in large Redis DBs.
Thanks, Rob!
Birth and death dates are now stored as proper datetimes, not timestamps.
Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
Fix: temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
Fix: Queue.empty() now correctly deletes job hashes from Redis.
(December 17th, 2013)
(December 16th, 2013)
(August 23rd, 2013)
(August 20th, 2013)
(August 20th, 2013)
Python 3 compatibility (Thanks, Alex!)
Minor bug fix where Sentry would break when func cannot be imported
(June 17th, 2013)
rqworker and rqinfo have a --url argument to connect to a Redis url.
rqworker and rqinfo have a --socket option to connect to a Redis server
through a Unix socket.
rqworker reads SENTRY_DSN from the environment, unless specifically
provided on the command line.
Queue has a new API that supports paging get_jobs(3, 7), which will
return at most 7 jobs, starting from the 3rd.
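A minimal sketch of the paging semantics just described — at most `length` jobs returned, starting at a given offset. This is an illustrative pure-Python helper (assuming a 0-based offset, as Redis's `LRANGE` uses), not RQ's internal implementation:

```python
def page(jobs, offset, length):
    """Return at most `length` jobs starting at index `offset`.

    Mirrors the described Queue.get_jobs(offset, length) paging
    behavior: short pages are returned as-is near the end of the list.
    """
    return jobs[offset:offset + length]
```

For a queue holding twenty jobs, `page(jobs, 3, 7)` yields seven jobs; near the end of the list, fewer than `length` jobs may be returned.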
(February 26th, 2013)
(February 18th, 2013)
Worker registrations now expire. This should prevent rqinfo from reporting
about ghosted workers. (Thanks, @yaniv-aknin!)
rqworker will automatically clean up ghosted worker registrations from
pre-0.3.6 runs.
rqworker grew a -q flag, to be more silent (only warnings/errors are shown)
(February 6th, 2013)
ended_at is now recorded for normally finished jobs, too. (Previously only
for failed jobs.)
Adds support for both Redis and StrictRedis connection types
Makes StrictRedis the default connection type if none is explicitly provided
(January 23rd, 2013)
(January 18th, 2013)
Fix bug where work was lost due to silently ignored unpickle errors.
Jobs can now access the current Job instance from within. Relevant
documentation here.
Custom properties can be set by modifying the job.meta dict. Relevant
documentation here.
rqworker now has an optional --password flag.
Remove logbook dependency (in favor of logging)
(September 3rd, 2012)
Fixes broken rqinfo command.
Improve compatibility with Python < 2.7.
(August 30th, 2012)
.enqueue() now takes a result_ttl keyword argument that can be used to
change the expiration time of results.
Queue constructor now takes an optional async=False argument to bypass the
worker (for testing purposes).
Jobs now carry status information. To get job status information, like
whether a job is queued, finished, or failed, use the property status, or
one of the new boolean accessor properties is_queued, is_finished or
is_failed.
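The accessor pattern described above can be sketched in plain Python. This is an illustrative model of the API shape (the real RQ `Job` reads its status from Redis; the class below is a stand-in):

```python
from enum import Enum


class JobStatus(str, Enum):
    QUEUED = "queued"
    FINISHED = "finished"
    FAILED = "failed"


class Job:
    """Sketch of a job exposing `status` plus boolean accessors."""

    def __init__(self, status: JobStatus):
        self.status = status

    @property
    def is_queued(self) -> bool:
        return self.status == JobStatus.QUEUED

    @property
    def is_finished(self) -> bool:
        return self.status == JobStatus.FINISHED

    @property
    def is_failed(self) -> bool:
        return self.status == JobStatus.FAILED
```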
Jobs' return values are always stored explicitly, even if they have no
explicit return value or return None (with given TTL of course). This
makes it possible to distinguish between a job that explicitly returned
None and a job that isn't finished yet (see the status property).
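The distinction works because "nothing stored yet" and "stored None" are different states. A common way to model this is a sentinel object; the sketch below is illustrative (the `UNSET` sentinel and `Result` class are not part of RQ's API):

```python
# Sentinel meaning "no result has been stored yet". Using a unique
# object (rather than None) lets None itself be a valid stored value.
UNSET = object()


class Result:
    """Sketch: explicit result storage distinguishes None from unfinished."""

    def __init__(self):
        self._value = UNSET  # nothing stored yet

    def store(self, value):
        self._value = value  # stored explicitly, even when value is None

    @property
    def is_finished(self) -> bool:
        return self._value is not UNSET

    @property
    def return_value(self):
        return None if self._value is UNSET else self._value
```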
Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation here and here.
rqworker now supports passing in configuration files instead of the
many command line options: rqworker -c settings will source
settings.py.
rqworker now supports one-flag setup to enable Sentry as its exception
handler: rqworker --sentry-dsn="http://public:[email protected]/1"
Alternatively, you can use a settings file and configure SENTRY_DSN = 'http://public:[email protected]/1' instead.
(August 5th, 2012)
Reliability improvements
.enqueue() does not consume the timeout kwarg anymore. Instead, to pass
RQ a timeout value while enqueueing a function, use the explicit invocation
instead:
```python
q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
```
Add a @job decorator, which can be used to do Celery-style delayed
invocations:
```python
from redis import StrictRedis
from rq.decorators import job
# Connect to Redis
redis = StrictRedis()
@job('high', timeout=10, connection=redis)
def some_work(x, y):
    return x + y
```
Then, in another module, you can call some_work:
```python
from foo.bar import some_work
some_work.delay(2, 3)
```
(August 1st, 2012)
(July 20th, 2012)
(July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too. Objects will be pickled along with the instance method, so beware.
- `q.enqueue()` accepts string specification of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.

(May 15, 2012)
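The string-specification form shown just above (`q.enqueue("my.math.lib.fibonacci", 5)`) implies that the worker resolves the dotted path into a callable at run time. A minimal sketch of such resolution (an illustrative helper, not RQ's internal function):

```python
from importlib import import_module


def resolve(dotted_path: str):
    """Resolve a dotted path like "my.math.lib.fibonacci" to a callable.

    Splits off the final attribute name, imports the module portion,
    and looks the attribute up on it.
    """
    module_path, _, attr = dotted_path.rpartition(".")
    return getattr(import_module(module_path), attr)
```

For example, `resolve("math.sqrt")` returns the `math.sqrt` function, which the worker can then call with the enqueued arguments.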
(May 14, 2012)
(March 28, 2012)