release-notes/4.0.1.md
RabbitMQ 4.0 is a new major release.
Starting June 1st, 2024, community support for this series will only be provided to regularly contributing users and those who hold a valid commercial support license.
Some key improvements in this release are listed below.
See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.
After three years of deprecation, classic queue mirroring was completely removed in this version. Quorum queues and streams are two mature replicated data types offered by RabbitMQ 4.x. Classic queues continue to be supported without any breaking changes for client libraries and applications, but they are now a non-replicated queue type.
After an upgrade to 4.0, all classic queue mirroring-related parts of policies will have no effect. Classic queues will continue to work like before but with only one replica.
Clients will be able to connect to any node to publish to and consume from any non-replicated classic queues. Therefore applications will be able to use the same classic queues as before.
See Mirrored Classic Queues Migration to Quorum Queues for guidance on how to migrate to quorum queues for the parts of the system that really need to use replication.
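For example, policies that still carry mirroring-related keys (ha-mode, ha-params, ha-sync-mode) can be located and cleaned up; the policy name below is a placeholder:

```bash
# list policies and look for ha-* keys, which no longer have any effect
rabbitmqctl list_policies --vhost /

# delete (or redefine without the ha-* keys) each matching policy
rabbitmqctl clear_policy --vhost / old-mirroring-policy
```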
Quorum queues now have a default redelivery limit set to 20.
Messages that are redelivered 20 times or more will be dead-lettered or dropped (removed).
This limit is necessary to protect nodes from consumers that run into infinite fail-requeue-fail-requeue loops. Such consumers can drive a node out of disk space by making a quorum queue Raft log grow forever without allowing compaction of older entries to happen.
If 20 deliveries per message is a common scenario for a queue, a dead-lettering target or a higher limit must be configured for such queues. The recommended way of doing that is via a policy. See the Poison Message Handling section in the quorum queue documentation guide.
Note that increasing the limit is not recommended: the presence of messages that have been redelivered 20 times or more usually suggests that a consumer has entered a fail-requeue-fail-requeue loop, in which case even a much higher limit won't help avoid the dead-lettering.
For specific cases where the RabbitMQ configuration cannot be updated to include a dead letter policy, the delivery limit can be disabled by setting a delivery limit of -1. However, the RabbitMQ team strongly recommends keeping the delivery limit in place to ensure that cluster availability isn't accidentally sacrificed.
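A minimal sketch of both options as policies; the policy names and queue patterns (qq-dlx, ^orders\., and so on) are placeholders to adapt:

```bash
# dead-letter messages that exhaust the delivery limit instead of dropping them
# (the dlx exchange must exist and route to a target queue or stream)
rabbitmqctl set_policy --apply-to queues qq-dlx "^orders\." \
  '{"delivery-limit": 20, "dead-letter-exchange": "dlx"}'

# last resort: disable the limit entirely (strongly recommended against)
rabbitmqctl set_policy --apply-to queues qq-unlimited "^legacy\." \
  '{"delivery-limit": -1}'
```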
Up to RabbitMQ 3.13, when an AMQP 0.9.1 client (re-)published a message to RabbitMQ, RabbitMQ interpreted the
AMQP 0.9.1 x-death header in the published message's basic_message.content.properties.headers field.
RabbitMQ 4.x will not interpret this x-death header anymore when clients (re-)publish a message.
Note that RabbitMQ 4.x will continue to set and update the x-death header every time a message is dead-lettered, including when a client rejects the message.
Applications that rely on RabbitMQ incrementing the count fields within the x-death header array elements when a message is (re-)published as a new message (as opposed to an existing message being rejected) should introduce and increment their own separate x- header, with a name that RabbitMQ itself will not update.
CQv1, the original classic queue storage layer, was removed except for the part that's necessary for upgrades to CQv2 (the 2nd generation).
In case rabbitmq.conf explicitly sets classic_queue.default_version to 1 like so

```ini
# this configuration value is no longer supported,
# remove this line or set the version to 2
classic_queue.default_version = 1
```

nodes will now fail to start. Removing the line will make the node start and perform the migration from CQv1 to CQv2.
The following two deprecated rabbitmq.conf settings were removed:

 * cluster_formation.randomized_startup_delay_range.min
 * cluster_formation.randomized_startup_delay_range.max
RabbitMQ 4.0 will fail to boot if these settings are configured in rabbitmq.conf.
Several I/O-related metrics were dropped; they should be monitored at the infrastructure and kernel layers instead.
Default maximum message size is reduced to 16 MiB (from 128 MiB).
The limit can be increased via a rabbitmq.conf setting:

```ini
# 32 MiB
max_message_size = 33554432
```
However, it is recommended that such large multi-MiB messages are put into a blob store, and their IDs are passed around in messages instead of the entire payload.
The RabbitMQ 3.13 rabbitmq.conf setting rabbitmq_amqp1_0.default_vhost is unsupported in RabbitMQ 4.0.
Instead, default_vhost will be used to determine the default vhost an AMQP 1.0 client connects to (that is, when the AMQP 1.0 client does not set the vhost in the hostname field of the open frame).
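A minimal rabbitmq.conf sketch (the value shown is the default vhost):

```ini
# used for AMQP 1.0 clients whose open frame hostname field does not name a vhost
default_vhost = /
```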
Starting with RabbitMQ 4.0, RabbitMQ strictly validates that the delivery annotations, message annotations, and footer sections contain only non-reserved annotation keys. As a result, clients can only send symbolic keys that begin with x-.
Starting with RabbitMQ 4.0, RabbitMQ also strictly validates that an AMQP message contains the mandatory body.
The RabbitMQ 3.13 rabbitmq.conf settings mqtt.default_user, mqtt.default_password, and amqp1_0.default_user are unsupported in RabbitMQ 4.0.
Instead, set the new RabbitMQ 4.0 settings anonymous_login_user and anonymous_login_pass (both values default to guest).
For production scenarios, disallow anonymous logins.
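A minimal rabbitmq.conf sketch of the replacement settings, shown with their documented defaults:

```ini
# replaces mqtt.default_user, mqtt.default_password and amqp1_0.default_user;
# disallow anonymous logins in production
anonymous_login_user = guest
anonymous_login_pass = guest
```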
Starting with Erlang 26, client-side TLS peer certificate chain verification settings are enabled by default in most contexts: from federation links to shovels to TLS-enabled LDAP client connections.
If TLS peer certificate chain verification is not practical or not necessary, it can be disabled. Please refer to the docs of the feature in question, for example, the one on TLS-enabled LDAP client connections, or the two on TLS-enabled dynamic shovels and dynamic shovel URI query parameters.
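For example, a dynamic shovel source or destination URI can disable peer verification via a query parameter (host, credentials, and vhost below are placeholders):

```
# only do this when peer verification is genuinely impractical
amqps://user:password@remote.example.com/vhost?verify=verify_none
```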
RabbitMQ Shovels will be able to connect to a RabbitMQ 4.0 node via AMQP 1.0 only when the Shovel runs on a RabbitMQ node >= 3.13.7.
TLS-enabled Shovels will be affected by the TLS client default changes in Erlang 26 (see above).
This release requires Erlang 26.2.
Provisioning Latest Erlang Releases explains what package repositories and tools can be used to provision the latest patch versions of Erlang 26.x.
RabbitMQ releases are distributed via GitHub. Debian and RPM packages are available via repositories maintained by the RabbitMQ Core Team.
Community Docker image, Chocolatey package, and the Homebrew formula are other installation options. They are updated with a delay.
Generic binary builds of 4.0.1 incorrectly report their version as 4.0.0+2. This also applies to plugin versions. This was addressed in 4.0.2.
Other artifacts (Debian and RPM packages, the Windows installer) are not affected.
See the Upgrading guide for documentation on upgrades and GitHub releases for release notes of individual releases.
This release series only supports upgrades from 3.13.x.
This release requires all feature flags in the 3.x series (specifically 3.13.x) to be enabled before upgrading; there is no direct upgrade path from 3.12.14 (or a later patch release) straight to a 4.0.x version.
Blue/Green Deployment-style upgrades are available for migrations from 3.12.14 to 4.0.x.
This release graduates all feature flags introduced up to 3.13.0.
All users must enable all stable feature flags before upgrading to 4.0 from the latest available 3.13.x patch release.
Khepri was an experimental feature in the 3.13.x series. There is no direct upgrade path from clusters on 3.13.x with Khepri enabled to 4.0.x, because the internal data model used to store various metadata (users, virtual hosts, queues, streams, policies, and so on) has changed dramatically.
Such clusters should be migrated using the Blue/Green deployment strategy.
RabbitMQ 4.0.0 nodes can run alongside 3.13.x nodes. 4.0.x-specific features can only be made available when all nodes in the cluster
upgrade to 4.0.0 or a later patch release in the new series.
While operating in mixed version mode, some aspects of the system may not behave as expected. Once all nodes are upgraded to 4.0.0, these irregularities will go away.
Mixed version clusters are a mechanism that allows rolling upgrades; they are not meant to be run for extended periods of time (no more than a few hours).
In environments where messages can experience 20 redeliveries, the affected queues should have dead-lettering configured (usually via a policy) so that messages redelivered 20 times are moved to a separate queue (or stream) instead of being dropped (removed) by the crash-requeue-redelivery loop protection mechanism.
Alternatively, the limit can be increased using a policy. This option is not recommended: the presence of messages that have been redelivered 20 times or more usually suggests that a consumer has entered a fail-requeue-fail-requeue loop, in which case even a much higher limit won't help avoid the dead-lettering.
A whole category of issues with binding inconsistency are addressed with the stabilization of Khepri, a new metadata store that uses a tree of nested objects instead of multiple tables.
With Mnesia, the original metadata store, bindings are stored in two tables, one for durable bindings (between durable exchanges and durable queues or streams) and another for semi-durable and transient ones (where either the queue is transient or both the queue and the exchange are).
When a node was stopped or failed, all non-replicated transient queues on that node were deleted by the remaining cluster peers. Due to high lock contention around these tables with Mnesia, this could take a while. If the restarted (or failed) node came back online before all bindings were removed, or clients began creating new bindings concurrently, the bindings table rows could end up inconsistent, resulting in obscure "binding not found" errors.
Khepri avoids this problem entirely by only supporting durable entities and using a very different tree-based data model that makes bindings removal much more efficient and lock contention-free.
Mnesia users can work around this problem by using quorum queues or durable classic queues and durable exchanges. Their durable bindings will not be removed when a node stops. Queues that are transient in nature can be declared as durable classic ones with a TTL of a few hours.
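A sketch of that workaround as a policy, assuming a queue naming convention (the tmp. prefix and the six-hour TTL are examples):

```bash
# durable classic queues matching ^tmp\. are deleted after 6 hours without use
rabbitmqctl set_policy --apply-to queues transient-like "^tmp\." '{"expires": 21600000}'
```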
Efficient sub-linear quorum queue recovery on node startup using checkpoints.
GitHub issue: #10637
Classic queue storage v2 (CQv2) optimizations. For example, CQv2 recovery time on node boot is now twice as fast for some data sets.
GitHub issue: #11112
Node startup time improvements. For some environments, nodes with very small on-disk data sets now start about 25% quicker.
GitHub issue: #10989
Quorum queues now support priorities. However, there are differences in how priorities work compared to classic queues.
GitHub issue: #10637
Per-message metadata stored in the quorum queue Raft log now uses less disk space.
GitHub issue: #8261
Single Active Consumer (SAC) implementation of quorum queues now respects consumer priorities.
GitHub issue: #8261
rabbitmq.conf now supports encrypted values with a prefix:

```ini
default_user = bunnies-444
default_pass = encrypted:F/bjQkteQENB4rMUXFKdgsJEpYMXYLzBY/AmcYG83Tg8AOUwYP7Oa0Q33ooNEpK9
```
GitHub issue: #11989
All feature flags up to 3.13.0 have graduated and are now mandatory.
GitHub issue: #11659
Quorum queues now use a default redelivery limit of 20.
GitHub issue: #11937
The queue_master_locator queue setting has been deprecated in favor of queue_leader_locator, used by quorum queues and streams.
GitHub issue: #10702
AMQP 0-9-1 to AMQP 1.0 string data type conversion improvements.
GitHub issue: #11715
AMQP 1.0 is now a core protocol that is always enabled. Its plugin is now a no-op that only exists to simplify upgrades.
The AMQP 1.0 implementation is now significantly more efficient: its peak throughput is more than double that of 3.13.x on some workloads.
GitHub issue: #9022
For AMQP 1.0, resource alarms only block inbound TRANSFER frames instead of blocking all traffic.
GitHub issue: #9022
AMQP 1.0 clients now can manage topologies (queues, exchanges, bindings).
GitHub issue: #10559
AMQP 1.0 implementation now supports a new (v2) address format for referencing queues, exchanges, and so on.
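For illustration, v2 addresses reference target queues and exchanges along these lines (the queue, exchange, and routing key names are placeholders):

```
/queues/orders                      # a queue
/exchanges/events                   # an exchange, with an empty routing key
/exchanges/events/invoice.created   # an exchange with a routing key
```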
AMQP 1.0 implementation now supports consumer priorities.
GitHub issue: #11705
Client-provided connection name will now be logged for AMQP 1.0 connections.
GitHub issue: #11958
Stream filtering is now supported for AMQP 1.0 clients.
GitHub issue: #10098
Detailed memory breakdown metrics are now exposed via the Prometheus scraping endpoint.
GitHub issue: #11743
Several new metrics for streams, for example, stream_consumer_max_offset_lag.
Contributed by @markus812498, @gomoripeti.
GitHub issue: #10275
New per-exchange and per-queue metrics.
Contributed by @LoisSotoLopez.
GitHub issue: #11559
Shovel and Federation metrics are now available via two new plugins: rabbitmq_shovel_prometheus and rabbitmq_federation_prometheus.
Contributed by @SimonUnge.
GitHub issue: #11942
Shovels now can be configured to use pre-declared topologies. This is primarily useful in environments where the schema (queues, exchanges, bindings) comes from imported definitions.
GitHub issue: #10501
This is the initial release to include the Local Random Exchange.
STOMP now supports consumer priorities.
GitHub issue: #11947
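A hedged sketch of a prioritized subscription, assuming the SUBSCRIBE frame accepts an x-priority header (the destination and value are examples; consult the STOMP plugin docs for the exact header name):

```
SUBSCRIBE
id:sub-0
destination:/queue/orders
x-priority:10

^@
```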
To obtain source code of the entire distribution, please download the archive named rabbitmq-server-4.0.1.tar.xz instead of the source tarball produced by GitHub.