Pro-Changes.md
Sidekiq Changes | Sidekiq Pro Changes | Sidekiq Enterprise Changes
Please see sidekiq.org for more details and how to buy.
Add `autoflush` mode to Batches to flush every N jobs. [#6967]
The `jobs` block collects all jobs for the Batch and pushes them
all atomically at the same time but this can use a massive amount of
memory if your batch has millions of jobs. Enabling autoflush will
push every N collected jobs but will lose the atomicity.

```ruby
b = Sidekiq::Batch.new
b.autoflush = 1000 # flush to Redis every 1000 jobs
b.jobs do
  # push tons of jobs
end
```
Batches can now be tagged; use `Sidekiq::BatchSet#scan_tags` to find batches by tag:

```ruby
b = Sidekiq::Batch.new
b.tags = %w(customer:12345 france)

require "sidekiq/api"
Sidekiq::BatchSet.new.scan_tags("12345") do |bid|
  st = Sidekiq::Batch::Status.new(bid)
end
```
Batch callbacks are registered with `batch.on(event, target, options)`.
Callback options were always meant to be a Hash but the API never validated it.
This will raise an ArgumentError in Pro 9.0. [#6789]

```ruby
batch.on(:success, MyCallback, 1234) # bad
batch.on(:success, MyCallback, "document_id" => 1234) # good
```
Add a `sidekiq.jobs.perform_dist`
distribution metric. This type is proprietary to Datadog but provides
much better statistical visibility into job timing across a cluster.
You can disable this metric type with:

```ruby
Sidekiq.configure_server { |c| c[:use_datadog_extensions] = false }
```
Add `sidekiq.batch.duration` and `sidekiq.batch.duration_dist`
metrics for monitoring the total time to execute a batch. You can add tags for
visibility and filtering within the UI when defining your success callback:

```ruby
batch.on(:success, ..., {tags: ["batchtype:OrderProcess"]})
```
The `sidekiq.jobs.perform` timing
metric is no longer emitted when a job fails since failure can happen for so many reasons;
timing data is typically not useful for detecting or fixing failures.

Metric names are now prefixed with `sidekiq.`. [#6337]
Customers are advised to remove `namespace: 'sidekiq'` or similar from their Statsd configuration.

The `failure_info` structure and API have been replaced
with `failed_jids` in order to reduce duplicate data within Redis. [#6580]

- `created_at` is now stored as Integer milliseconds rather than a Float.
- `dead: false` [#6628]
- `distribution` and `distribution_time` metrics [#6534]
- `time` metrics no longer hold a Statsd connection while timing the block
- `success_at` in batch callbacks [#6463]
- `Sidekiq::Pro.gem_version` API
- `base64` gem dependency.
- `base64` gem dependency for Ruby 3.3 compatibility
- `enqueued_at` String values in the reliable scheduler [#4768]
- Add `complete_at`, `success_at` and `death_at` timestamps to `S::Batch::Status`, which track when that batch callback was triggered. [#5818]
- `*_at` Batch APIs now consistently return Time objects [#5837]
- `pending` fix [#5689]
- `pending` to 0 [#5659]
- `Sidekiq::Pro.statsd`: dogstatsd-ruby will be the only supported statsd library in 6.0. [#5212]
- `Sidekiq.via` API for targeting shards [#5269]:

  ```ruby
  SHARD1 = ConnectionPool.new { Redis.new(db: 0) }
  SHARD2 = ConnectionPool.new { Redis.new(db: 1) }

  Sidekiq.via(SHARD2) do
    Sidekiq::Queue.all.sum(&:size)
  end
  ```
Add an `error_type` tag to `job.failures` metrics [#5211].
Statsd middleware tags can be customized per job:

```ruby
# add to your initializer
Sidekiq::Middleware::Server::Statsd.options = ->(klass, job, q) do
  {tags: ["worker:#{klass}", "queue:#{q}"]}.tap do |h|
    h[:tags] << "tenant:#{job['tenant_id']}" if job["tenant_id"]
  end
end
```
- `SIDEKIQ_PRELOAD_APP=1` in sidekiqswarm. [#4733]
- `bundle install` much faster with Bundler 2.2+ [#4158]
- `Sidekiq::Rack::BatchStatus` renamed to `Sidekiq::Pro::BatchStatus` [#4655]
- Statsd metrics no longer include `WorkerName` in the name; it is now a tag [#4377]:
  - `job.WorkerName.count` -> `job.count` with tag `worker:WorkerName`
  - `job.WorkerName.perform` -> `job.perform` with tag `worker:WorkerName`
  - `job.WorkerName.failure` -> `job.failure` with tag `worker:WorkerName`
- `concurrent-ruby` gem dependency [#4586]
- `constantize` for batch callbacks. [#4469]
- `jobs.recovered.fetch` metric [#4594]
- `UNLINK` [#4155]
- `Sidekiq::Batch::Status#dead_jobs` API is deprecated in favor of `Sidekiq::Batch::Status#dead_jids`. [#4217]
- `reliable_push` when exiting. [#3823]
- Batches can register a callback for the `:death` event:

  ```ruby
  batch = Sidekiq::Batch.new
  batch.on(:death, ...)
  ```

- `sleep(1)` call and resulting latency [#3790]
- `freeze` calls on Strings [#3759]
- `config.super_fetch!`
- Datadog metrics can be configured with `Sidekiq::Pro.dogstatsd = ->{ Datadog::Statsd.new("metrics.example.com", 8125) }`
This release overhauls the Statsd metrics support and adds more metrics for tracking Pro feature usage. In your initializer:

```ruby
Sidekiq::Pro.statsd = ->{ ::Statsd.new("127.0.0.1", 8125) }
```

Sidekiq Pro will emit more metrics to Statsd:

- `jobs.expired` - when a job is expired
- `jobs.recovered.push` - when a job is recovered by reliable_push after a network outage
- `jobs.recovered.fetch` - when a job is recovered by super_fetch after a process crash
- `batch.created` - when a batch is created
- `batch.complete` - when a batch is completed
- `batch.success` - when a batch is successful

Sidekiq Pro's existing Statsd middleware has been rewritten to leverage the new API. Everything should be backwards compatible with one deprecation notice.
- `Status#completed?` when run against a Batch that had succeeded and was deleted. [#3519]
- `Sidekiq::Queue#delete_job` to avoid O(n) runtime [#3408]
- `_` in the name [#3339]
- `Batch::Status#invalidated?` API which returns true if any/all JIDs were invalidated within the batch. [#3326] (see the sketch after this list)
- Use `super_fetch` instead:

  ```ruby
  Sidekiq.configure_server do |config|
    config.super_fetch!
  end
  ```

  Also note that Sidekiq's `-i`/`--index` option is no longer used/relevant with super_fetch.
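A minimal sketch of checking `Batch::Status#invalidated?`; the BID shown is a placeholder for an existing batch:

```ruby
require "sidekiq/api"

bid = "0123abcd4567ef" # placeholder BID of an existing batch
status = Sidekiq::Batch::Status.new(bid)

# invalidated? returns true if any/all JIDs within the batch were invalidated
puts "batch contains invalidated jobs" if status.invalidated?
```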
Add the new `super_fetch` reliability algorithm, enabled with `config.super_fetch!`. This
algorithm will replace reliable_fetch in Pro 4.0. super_fetch is
bullet-proof across all environments, no longer requiring stable
hostnames or an index to be set per-process. [#3077]

```ruby
Sidekiq.configure_server do |config|
  config.super_fetch!
end
```

Thank you to @jonhyman for code review and the Sidekiq Pro customers that beta tested super_fetch.
- `Sidekiq::PendingSet#destroy(jid)` API to remove poison pill jobs [#3015] (see the sketch after this list)
- `NoSuchBatch` is now raised properly if `Sidekiq::Batch.new(bid)` is called on a batch no longer in Redis.
- `timed_fetch`. See the wiki documentation for trade-offs between the two reliability options. You should use this if you are on Heroku, Docker, Amazon ECS or EBS or another container-based system.
- `Sidekiq::Queue#delete_by_class` is now Ruby-based, to avoid O(N^2) performance and possible Redis failure. [#2806]
- `-q a,1 -q b,1`
- `enqueued_at` [#2414]
- `Sidekiq::Client.reliable_push!` [#2408]
- The Web UI can be mounted multiple times, each pointing at a different Redis shard:

  ```ruby
  POOL1 = ConnectionPool.new { Redis.new(:url => "redis://localhost:6379/0") }
  POOL2 = ConnectionPool.new { Redis.new(:url => "redis://localhost:6378/0") }

  mount Sidekiq::Pro::Web => '/sidekiq' # default
  mount Sidekiq::Pro::Web.with(redis_pool: POOL1), at: '/sidekiq1', as: 'sidekiq1' # shard1
  mount Sidekiq::Pro::Web.with(redis_pool: POOL2), at: '/sidekiq2', as: 'sidekiq2' # shard2
  ```
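A minimal sketch of removing a poison pill job with the `Sidekiq::PendingSet#destroy(jid)` API mentioned above; the JID is a placeholder:

```ruby
require "sidekiq/api"

# "0123456789ab" is a placeholder JID for the stuck (poison pill) job
Sidekiq::PendingSet.new.destroy("0123456789ab")
```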
- `batch.callback_queue` so batch callbacks can use a higher priority queue than jobs. [#2200] (see the sketch after this list)
- `SCRIPT FLUSH` on Redis. [#2240]
- `bulk_requeue`, allowing clean shutdown
- Use `Batch#on(:complete)` instead of `Batch#notify`. The specific Campfire, HipChat, email and other notification schemes will be removed in 2.0.0.
- Queues can now be paused:

  ```ruby
  q = Sidekiq::Queue.new("critical")
  q.pause!
  q.paused? # => true
  q.unpause!
  ```

  Sidekiq polls Redis every 10 seconds for paused queues so pausing will take a few seconds to take effect.
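A minimal sketch of routing batch callbacks to a dedicated queue, assuming `callback_queue` is assignable with a queue name as the entry above implies; `MyCallback` and `SomeWorker` are the illustrative classes used elsewhere in this changelog:

```ruby
b = Sidekiq::Batch.new
b.callback_queue = "critical"  # assumed setter; run callbacks ahead of regular jobs
b.on(:success, MyCallback)
b.jobs do
  SomeWorker.perform_async
end
```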
Batches can now set a custom expiration:

```ruby
b = Sidekiq::Batch.new
b.expires_in 5.days
...
```
Thanks to @jonhyman for his contributions to this Sidekiq Pro release.
This release includes new functionality based on the SCAN command newly added to Redis 2.8. Pro still works with Redis 2.4 but some functionality will be unavailable.
```ruby
Sidekiq::RetrySet.new.scan("Warehouse::OrderShip") do |job|
  job.delete
end
```
- `jobs.count`
- `jobs.success`
- `jobs.failure`
`delete_job` performance:

```
Sidekiq Pro API
  0.030000   0.020000   0.050000 (  1.640659)
Sidekiq API
 17.250000   2.220000  19.470000 ( 22.193300)
```
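For reference, a minimal sketch of deleting a single job by JID with the Pro `Sidekiq::Queue#delete_job` API; the queue name and JID are placeholders:

```ruby
require "sidekiq/api"

# Delete one enqueued job by JID without scanning the whole queue in Ruby
Sidekiq::Queue.new("default").delete_job("0123456789ab")
```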
- `sidekiq/pro/reliable_push`, which makes Sidekiq::Client resilient to Redis network failures. [#793]
- `sidekiq/reliable_fetch` renamed to `sidekiq/pro/reliable_fetch`
- `sidekiq/rack/batch_status`
- Batch support in the Web UI:

  ```ruby
  require 'sidekiq/web'
  require 'sidekiq/batch/web'
  mount Sidekiq::Web => '/sidekiq'
  ```
Jobs can be added to an existing batch by re-opening it with its `bid`:

```ruby
def perform(...)
  batch = Sidekiq::Batch.new(bid) # instantiate batch associated with this job
  batch.jobs do
    SomeWorker.perform_async # add another job
  end
end
```

```ruby
batch = Sidekiq::Batch.new(bid)
batch.jobs do
  # define more jobs here
end
```