doc/releases/bobtail.rst
Bobtail is the second stable release of Ceph. It is named after the bobtail squid (order Sepiolida), a group of cephalopods closely related to cuttlefish.
This bobtail update fixes a range of radosgw bugs (including an easily triggered crash from multi-delete), a possible data corruption issue with power failure on XFS, and several OSD problems, including a memory "leak" that will affect aged clusters.
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.7.txt>`.
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.6.txt>`.
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.5.txt>`.
There is a fix to the syntax of the output of the 'ceph osd tree --format=json' command.
The MDS disk format has changed from prior releases and from v0.57. In particular, upgrades to v0.56.4 are safe, but you cannot move from v0.56.4 to v0.57 if you are using the MDS for CephFS; you must upgrade directly to v0.58 (or later) instead.
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.4.txt>`.
This release has several bug fixes surrounding OSD stability. Most significantly, an issue with OSDs being unresponsive shortly after startup (and occasionally crashing due to an internal heartbeat check) is resolved. Please upgrade.
A bug was fixed in which the OSDMap epoch for PGs without any IO
requests was not recorded. If there are pools in the cluster that
are completely idle (for example, the data and metadata
pools normally used by CephFS), and a large number of OSDMap epochs
have elapsed since the ceph-osd daemon was last restarted, those
maps will get reprocessed when the daemon restarts. This process
can take a while if there are a lot of maps. A workaround is to
'touch' any idle pools with IO prior to restarting the daemons after
packages are upgraded::
  rados bench 10 write -t 1 -b 4096 -p {POOLNAME}
This will typically generate enough IO to touch every PG in the pool without generating significant cluster load, and also cleans up any temporary objects it creates.
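If several pools are idle, a small shell loop can run the same command against each of them. This is only a sketch; it assumes every pool returned by 'rados lspools' may safely receive a short burst of small benchmark writes::

  # touch every pool with a little IO before restarting upgraded daemons
  for pool in $(rados lspools); do
      rados bench 10 write -t 1 -b 4096 -p ${pool}
  done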
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.3.txt>`.
This release has a wide range of bug fixes, stability improvements, and some performance improvements. Please upgrade.
The meaning of the 'osd scrub min interval' and 'osd scrub max interval' has changed slightly. The min interval used to be meaningless, while the max interval would only trigger a scrub if the load was sufficiently low. Now, the min interval option works the way the old max interval did (it will trigger a scrub after this amount of time if the load is low), while the max interval will force a scrub regardless of load. The default options have been adjusted accordingly. If you have customized these in ceph.conf, please review their values when upgrading.
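For example, the intervals can be set in the [osd] section of ceph.conf; the values below are purely illustrative and are expressed in seconds::

  [osd]
          ; scrub a PG after this long if the load is sufficiently low
          osd scrub min interval = 86400
          ; force a scrub after this long regardless of load
          osd scrub max interval = 604800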
CRUSH maps that are generated by default when calling ceph-mon --mkfs
directly now distribute replicas across hosts instead of across OSDs.
Any provisioning tools that are being used with Ceph may be affected,
although probably for the better, as distributing across hosts is a
much more commonly sought behavior. If you use mkcephfs to create the
cluster, the default CRUSH rule is still inferred from the number of
hosts and/or racks in the initial ceph.conf.
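To verify which behavior a cluster ended up with, the compiled CRUSH map can be downloaded and decompiled for inspection (the file names below are arbitrary)::

  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt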
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.2.txt>`.
This release has two critical fixes. Please upgrade.
For more detailed information, see :download:`the complete changelog <../changelog/v0.56.1.txt>`.
Bobtail is the second stable release of Ceph, named in honor of the
Bobtail Squid: https://en.wikipedia.org/wiki/Bobtail_squid.
Please refer to the document `Upgrading from Argonaut to Bobtail`_ for details.
.. _Upgrading from Argonaut to Bobtail: ../install/upgrading-ceph/#upgrading-from-argonaut-to-bobtail
Cephx authentication is now enabled by default (since v0.55). Upgrading a cluster without adjusting the Ceph configuration will likely prevent the system from starting up on its own. We recommend first modifying the configuration to indicate that authentication is disabled, and only then upgrading to the latest version::
  auth client required = none
  auth service required = none
  auth cluster required = none
Ceph daemons can be upgraded one-by-one while the cluster is online and in service.
The ceph-osd daemons must be upgraded and restarted before any
radosgw daemons are restarted, as they depend on some new
ceph-osd functionality. (The ceph-mon, ceph-osd, and
ceph-mds daemons can be upgraded and restarted in any order.)
Once each individual daemon has been upgraded and restarted, it cannot be downgraded.
The cluster of ceph-mon daemons will migrate to a new internal
on-wire protocol once all daemons in the quorum have been upgraded.
Upgrading only a majority of the nodes (e.g., two out of three) may
expose the cluster to a situation where a single additional failure
may compromise availability (because the non-upgraded daemon cannot
participate in the new protocol). We recommend not waiting for an
extended period of time between ceph-mon upgrades.
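After each monitor has been upgraded and restarted, it is worth confirming that it has rejoined the quorum before moving on to the next one; for example::

  ceph mon stat
  ceph quorum_status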
The ops log and usage log for radosgw are now off by default. If
you need these logs (e.g., for billing purposes), you must enable
them explicitly. For logging of all operations to objects in the
.log pool (see radosgw-admin log ...)::
  rgw enable ops log = true
For usage logging of aggregated bandwidth usage (see radosgw-admin usage ...)::
  rgw enable usage log = true
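Both options live in the radosgw section of ceph.conf. The section name below is only an example; use the name of your own radosgw instance::

  [client.radosgw.gateway]
          rgw enable ops log = true
          rgw enable usage log = true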
You should not create or use "format 2" RBD images until after all
ceph-osd daemons have been upgraded. Note that "format 1" is
still the default. You can use the new ceph osd ls and
ceph tell osd.N version commands to double-check your cluster.
ceph osd ls will give a list of all OSD IDs that are part of the
cluster, and you can use that to write a simple shell loop to display
all the OSD version strings::

  for i in $(ceph osd ls); do
      ceph tell osd.${i} version
  done
The 'ceph osd create [<uuid>]' command now rejects an argument that is not a UUID. (Previously it would take an optional integer OSD id.) The correct syntax has been 'ceph osd create [<uuid>]' since v0.47, but the older calling convention was being silently ignored.
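For example, a fresh UUID can be generated on the spot when creating a new OSD; using uuidgen here is just one convenient way to produce one::

  ceph osd create $(uuidgen)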
The CRUSH map root nodes now have type root instead of type
pool. This avoids confusion with RADOS pools, which are not
directly related. Any scripts or tools that use the ceph osd crush ... commands may need to be adjusted accordingly.
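In a decompiled CRUSH map, the top-level bucket is now declared with type root rather than pool; the excerpt below is illustrative, with made-up names and weights::

  root default {
          id -1
          alg straw
          hash 0
          item host1 weight 1.000
          item host2 weight 1.000
  }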
The ceph osd pool create <poolname> <pgnum> command now requires
the pgnum argument. Previously this was optional, and would
default to 8, which was almost never a good number.
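For example, to create a pool with an explicit PG count (the pool name and the value 128 are only illustrative; pick a pg_num suited to your cluster)::

  ceph osd pool create mypool 128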
Degraded mode (when there are fewer than the desired number of replicas) is now more configurable on a per-pool basis, with the min_size parameter. By default, with min_size 0, this allows I/O to objects with N - floor(N/2) replicas, where N is the total number of expected copies. Argonaut behavior was equivalent to having min_size = 1, so I/O would always be possible if any completely up to date copy remained. min_size = 1 could result in lower overall availability in certain cases, such as flapping network partitions.
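For example, assuming min_size can be adjusted through the usual per-pool set command in this release, a pool's value could be changed like this (pool name and value are illustrative)::

  ceph osd pool set mypool min_size 1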
The sysvinit start/stop script now defaults to adjusting the max
open files ulimit to 16384. On most systems the default is 1024, so
this is an increase and won't break anything. If some system has a
higher initial value, however, this change will lower the limit.
The value can be adjusted explicitly by adding an entry to the
ceph.conf file in the appropriate section. For example::
  [global]
          max open files = 32768
'rbd lock list' and 'rbd showmapped' no longer use tabs as separators in their output.
There is a configurable limit on the number of PGs when creating a new pool, to prevent a user from accidentally specifying a ridiculous number for pg_num. It can be adjusted via the 'mon max pool pg num' option on the monitor, and defaults to 65536 (the current max supported by the Linux kernel client).
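If a larger pool is genuinely needed, the cap can be raised in the monitor section of ceph.conf; the value shown here is only an example::

  [mon]
          mon max pool pg num = 131072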
The osd capabilities associated with a rados user have changed syntax since 0.48 argonaut. The new format is mostly backwards compatible, but there are two backwards-incompatible changes:
specifying a list of pools in one grant, i.e. 'allow r pool=foo,bar' is now done in separate grants, i.e. 'allow r pool=foo, allow r pool=bar'.
restricting pool access by pool owner ('allow r uid=foo') is removed. This feature was not very useful and unused in practice.
The new format is documented in the ceph-authtool man page.
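As a hedged example of the new format, read access to two pools can be granted with ceph-authtool on an existing keyring; the keyring path, entity name, and pool names are all illustrative::

  ceph-authtool /etc/ceph/keyring -n client.foo \
          --cap mon 'allow r' \
          --cap osd 'allow r pool=foo, allow r pool=bar'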
'rbd cp' and 'rbd rename' use rbd as the default destination pool, regardless of what pool the source image is in. Previously they would default to the same pool as the source image.
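If the copy should stay in the source image's pool, name the destination pool explicitly; the pool and image names here are only examples::

  rbd cp mypool/image1 mypool/image1-copy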
'rbd export' no longer prints a message for each object written. It just reports percent complete like other long-lasting operations.
'ceph osd tree' now uses 4 decimal places for weight so output is nicer for humans.
Several monitor operations are now idempotent:
Bug fixes to the new osd capability format parsing properly validate the allowed operations. If an existing rados user gets permissions errors after upgrading, its capabilities were probably misconfigured. See the ceph-authtool man page for details on osd capabilities.