==========
Cuttlefish
==========

Cuttlefish is the 3rd stable release of Ceph. It is named after a type of cephalopod (order Sepiida) characterized by a unique internal shell, the cuttlebone, which is used for control of buoyancy.
v0.61.9 "Cuttlefish"
====================

This point release resolves several low- to medium-impact bugs across the code base, and fixes a performance problem (CPU utilization) with radosgw. We recommend that all production cuttlefish users upgrade.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.9.txt>`.
v0.61.8 "Cuttlefish"
====================

This release includes a number of important fixes, including for rare race conditions in the OSD, a few monitor bugs, and RBD flush behavior. We recommend that production users upgrade at their convenience.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.8.txt>`.
v0.61.7 "Cuttlefish"
====================

This release fixes another regression that prevented monitors from starting after certain upgrade sequences, as well as some corner cases with Paxos and support for unusual device names in ceph-disk/ceph-deploy.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.7.txt>`.
v0.61.6 "Cuttlefish"
====================

This release fixes a regression in v0.61.5 that could prevent monitors from restarting. This affects any cluster that was upgraded from a previous version of Ceph (and not freshly created with v0.61.5).

All users are strongly recommended to upgrade.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.6.txt>`.
v0.61.5 "Cuttlefish"
====================

This release mainly improves stability of the monitor and fixes a few bugs with the ceph-disk utility (used by ceph-deploy). We recommend that all v0.61.x users upgrade.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.5.txt>`.
v0.61.4 "Cuttlefish"
====================

This release resolves a possible data corruption on power-cycle when using XFS, a few outstanding problems with monitor sync, several problems with ceph-disk and ceph-deploy operation, and a problem with OSD memory usage during scrub.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.4.txt>`.
v0.61.3 "Cuttlefish"
====================

This release resolves a number of problems with the monitors and leveldb that users have been seeing. Please upgrade.

There is one known problem with mon upgrades from bobtail. If the ceph-mon conversion on startup is aborted or fails for some reason, we do not correctly error out, but instead continue with (in certain cases) odd results. Please be careful if you have to restart the mons during the upgrade. A v0.61.4 release with a fix will be out shortly.

In the meantime, for current cuttlefish users, v0.61.3 is safe to use.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.3.txt>`.
v0.61.2 "Cuttlefish"
====================

This release disables a monitor debug log that consumes disk space and fixes a bug encountered when upgrading some monitors from bobtail to cuttlefish.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.2.txt>`.
v0.61.1 "Cuttlefish"
====================

This release fixes a problem when upgrading a bobtail cluster that had snapshots to cuttlefish.

For more detailed information, see :download:`the complete changelog <../changelog/v0.61.1.txt>`.
v0.61 "Cuttlefish"
==================

Upgrading from v0.60
--------------------

* The ceph-deploy tool is now the preferred method of provisioning
  new clusters. For existing clusters created via mkcephfs that
  would like to transition to the new tool, there is a migration
  path, documented at `Transitioning to ceph-deploy`_.
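As a rough illustration only (the hostnames and device below are placeholders, and the exact subcommand syntax may differ by ceph-deploy version), bootstrapping a new cluster looks something like:

```shell
# create a new cluster definition with one initial monitor
ceph-deploy new mon1
# install ceph packages on the target hosts
ceph-deploy install mon1 osd1
# deploy the monitor, then provision an OSD on a data disk
ceph-deploy mon create mon1
ceph-deploy osd create osd1:/dev/sdb
```

These commands must be run from an admin host with SSH access to the targets; consult the ceph-deploy documentation for the authoritative workflow.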
* The sysvinit script (/etc/init.d/ceph) will now verify (and, if
  necessary, update) the OSD's position in the CRUSH map on startup.
  (The upstart script has always worked this way.) By default, this
  ensures that the OSD is under a 'host' with a name that matches the
  hostname (``hostname -s``). Legacy clusters created with mkcephfs do
  this by default, so this should not cause any problems, but legacy
  clusters with customized CRUSH maps with an alternate structure
  should set ``osd crush update on start = false``.
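For a cluster with a customized CRUSH hierarchy, the option can be set in the ``[osd]`` section of ceph.conf so that startup leaves OSD positions alone, e.g.:

```ini
[osd]
    ; do not move this OSD under its hostname's 'host' bucket on start
    osd crush update on start = false
```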
* radosgw-admin now uses the term zone instead of cluster to describe each instance of the radosgw data store (and the corresponding collection of radosgw daemons). The usage for the radosgw-admin command and the 'rgw zone root pool' config option have changed accordingly.

* rbd progress indicators now go to standard error instead of standard out. (You can disable progress with --no-progress.)

* The 'rbd resize ...' command now requires the --allow-shrink option when resizing to a smaller size. Expanding images to a larger size is unchanged.
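For example (the pool and image names here are hypothetical), growing an image works as before, while shrinking one now requires the explicit flag:

```shell
# grow an image to 20 GB (unchanged behavior)
rbd resize --size 20480 mypool/myimage
# shrink an image to 1 GB (now refused without --allow-shrink)
rbd resize --size 1024 --allow-shrink mypool/myimage
```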
* Please review the changes going back to v0.56.4 if you are upgrading all the way from bobtail.

* The old 'ceph stop_cluster' command has been removed.

* The sysvinit script now uses the ceph.conf file on the remote host when starting remote daemons via the '-a' option. Note that if '-a' is used in conjunction with '-c path', the path must also be present on the remote host (it is not copied to a temporary file, as it was previously).
Upgrading from v0.56.4 "Bobtail"
--------------------------------

Please see `Upgrading from Bobtail to Cuttlefish`_ for details.

.. _Upgrading from Bobtail to Cuttlefish: ../install/upgrading-ceph/#upgrading-from-bobtail-to-cuttlefish

.. _Transitioning to ceph-deploy: ../rados/deployment/ceph-deploy-transition
* The sysvinit script (/etc/init.d/ceph) will now verify (and, if
  necessary, update) the OSD's position in the CRUSH map on startup.
  (The upstart script has always worked this way.) By default, this
  ensures that the OSD is under a 'host' with a name that matches the
  hostname (``hostname -s``). Legacy clusters created with mkcephfs do
  this by default, so this should not cause any problems, but legacy
  clusters with customized CRUSH maps with an alternate structure
  should set ``osd crush update on start = false``.
* radosgw-admin now uses the term zone instead of cluster to describe each instance of the radosgw data store (and the corresponding collection of radosgw daemons). The usage for the radosgw-admin command and the 'rgw zone root pool' config option have changed accordingly.

* rbd progress indicators now go to standard error instead of standard out. (You can disable progress with --no-progress.)

* The 'rbd resize ...' command now requires the --allow-shrink option when resizing to a smaller size. Expanding images to a larger size is unchanged.

* Please review the changes going back to v0.56.4 if you are upgrading all the way from bobtail.

* The old 'ceph stop_cluster' command has been removed.

* The sysvinit script now uses the ceph.conf file on the remote host when starting remote daemons via the '-a' option. Note that if '-a' is used in conjunction with '-c path', the path must also be present on the remote host (it is not copied to a temporary file, as it was previously).
* The monitor is using a completely new storage strategy and intra-cluster protocol. This means that cuttlefish and bobtail monitors do not talk to each other. When you upgrade each one, it will convert its local data store to the new format. Once you upgrade a majority, the quorum will be formed using the new protocol and the old monitors will be blocked out until they too get upgraded. For this reason, we recommend not running a mixed-version cluster for very long.

* ceph-mon now requires the creation of its data directory prior to --mkfs, similarly to what happens on ceph-osd. This directory is no longer automatically created, and custom scripts should be adjusted accordingly.
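A minimal sketch of the required order, assuming the default data directory layout, a cluster named ceph, a monitor id of ``a``, and placeholder monmap/keyring paths:

```shell
# the data directory must exist before --mkfs; it is no longer auto-created
mkdir -p /var/lib/ceph/mon/ceph-a
# then initialize the monitor store (monmap and keyring paths are examples)
ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/mon.keyring
```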
* The monitor now enforces that MDS names be unique. If you have
  multiple daemons that start with the same id (e.g., mds.a), the
  second one will implicitly mark the first as failed. This makes
  things less confusing and makes daemon restarts faster (we no
  longer wait for the stopped daemon to time out), but existing
  multi-mds configurations may need to be adjusted accordingly to give
  daemons unique names.
* The 'ceph osd pool delete <poolname>' and 'rados rmpool <poolname>' commands now have safety interlocks with loud warnings that make you confirm pool removal. Any scripts that currently rely on these functions zapping data without confirmation need to be adjusted accordingly.
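For example (the pool name is hypothetical), deleting a pool now requires repeating the pool name and passing an explicit confirmation flag, so scripts must opt in deliberately:

```shell
# the name is given twice and the flag acknowledges permanent data loss
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
```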
v0.60
=====

* The monitor is using a completely new storage strategy and intra-cluster protocol. This means that v0.59 and pre-v0.59 monitors do not talk to each other. When you upgrade each one, it will convert its local data store to the new format. Once you upgrade a majority, the quorum will be formed using the new protocol and the old monitors will be blocked out until they too get upgraded. For this reason, we recommend not running a mixed-version cluster for very long.

* ceph-mon now requires the creation of its data directory prior to --mkfs, similarly to what happens on ceph-osd. This directory is no longer automatically created, and custom scripts should be adjusted accordingly.

* The monitor now enforces that MDS names be unique. If you have multiple daemons that start with the same id (e.g., mds.a), the second one will implicitly mark the first as failed. This makes things less confusing and makes daemon restarts faster (we no longer wait for the stopped daemon to time out), but existing multi-mds configurations may need to be adjusted accordingly to give daemons unique names.

This development release has a lot of additional functionality accumulated over the last couple months. Most of the bug fixes (with the notable exception of the MDS-related work) have already been backported to v0.56.x, and are not mentioned here.