doc/releases/giant.rst
Giant is the 7th stable release of Ceph. It is named after the giant squid (Architeuthis dux).
This is the second (and possibly final) point release for Giant.
We recommend all v0.87.x Giant users upgrade to this release.
For more detailed information, see :download:`the complete changelog <../changelog/v0.87.2.txt>`.
This is the first (and possibly final) point release for Giant. Our focus on stability fixes will be directed towards Hammer and Firefly.
We recommend that all v0.87 Giant users upgrade to this release.
For more detailed information, see :download:`the complete changelog <../changelog/v0.87.1.txt>`.
This release will form the basis for the stable release Giant, v0.87.x. Highlights for Giant include:
If your existing cluster is running a version older than v0.80.x Firefly, please first upgrade to the latest Firefly release before moving on to Giant. We have not tested upgrades directly from Emperor, Dumpling, or older releases.
We have tested:
Please upgrade daemons in the following order:
#. Monitors
#. OSDs
#. MDSs and/or radosgw
Note that the relative ordering of OSDs and monitors should not matter, but we primarily tested upgrading monitors first.
The client-side caching for librbd is now enabled by default (``rbd cache = true``). A safety option (``rbd cache writethrough until flush = true``) is also enabled so that writeback caching is not used until the library observes a 'flush' command, indicating that the librbd user is passing that operation through from the guest VM. This avoids potential data loss when used with older versions of qemu that do not support flush.
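If this behavior is undesirable (for example, with guests that never issue flushes), the defaults above can be overridden in ``ceph.conf``; a minimal sketch, using only the option names mentioned above (the values shown mirror the new defaults):

```ini
[client]
# Giant defaults, written out explicitly for illustration:
rbd cache = true
rbd cache writethrough until flush = true
# To opt out of client-side caching entirely (not recommended):
# rbd cache = false
```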
Mon-specific and osd-specific leveldb options have been removed. From this point onward users should use the ``leveldb_*`` generic options and add the options in the appropriate sections of their configuration files. Monitors will still maintain the following monitor-specific defaults::

    leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
    leveldb_cache_size        = 512*1024*1024 = 536870912 // 512MB
    leveldb_block_size        = 64*1024       = 65536     // 64KB
    leveldb_compression       = false
    leveldb_log               = ""

OSDs will still maintain the following osd-specific defaults::

    leveldb_log = ""
The ``rados getxattr ...`` command used to add a gratuitous newline to the attr value; it now does not.
* The ``*_kb`` perf counters on the monitor have been removed. These are
  replaced with a new set of ``*_bytes`` counters (e.g., ``cluster_osd_kb``
  is replaced by ``cluster_osd_bytes``).

* The ``rd_kb`` and ``wr_kb`` fields in the JSON dumps for pool stats
  (accessed via the ``ceph df detail -f json-pretty`` and related commands)
  have been replaced with corresponding ``*_bytes`` fields. Similarly, the
  ``total_space``, ``total_used``, and ``total_avail`` fields are replaced
  with ``total_bytes``, ``total_used_bytes``, and ``total_avail_bytes``
  fields.

* The ``rados df --format=json`` output ``read_bytes`` and ``write_bytes``
  fields were incorrectly reporting ops; this is now fixed.

* The ``rados df --format=json`` output previously included ``read_kb`` and
  ``write_kb`` fields; these have been removed. Please use ``read_bytes``
  and ``write_bytes`` instead (and divide by 1024 if appropriate).
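Scripts that consumed the removed ``read_kb``/``write_kb`` fields can derive the legacy figures from the new byte counters; a minimal shell sketch (the ``read_bytes`` value here is hypothetical, not taken from a real cluster):

```shell
# hypothetical value as it would appear in `rados df --format=json`
read_bytes=1572864
# reconstruct the legacy kb figure: bytes divided by 1024
read_kb=$((read_bytes / 1024))
echo "$read_kb"   # 1536
```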
The experimental keyvaluestore-dev OSD backend had an on-disk format change that prevents existing OSD data from being upgraded. This affects developers and testers only.
Mon-specific and osd-specific leveldb options have been removed. From
this point onward users should use the ``leveldb_*`` generic options and
add the options in the appropriate sections of their configuration
files. Monitors will still maintain the following monitor-specific
defaults::

    leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
    leveldb_cache_size        = 512*1024*1024 = 536870912 // 512MB
    leveldb_block_size        = 64*1024       = 65536     // 64KB
    leveldb_compression       = false
    leveldb_log               = ""

OSDs will still maintain the following osd-specific defaults::

    leveldb_log = ""
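With the daemon-specific variants gone, the generic options go in the corresponding sections of ``ceph.conf``; a hypothetical sketch (the values shown are arbitrary examples, not recommendations):

```ini
[mon]
# generic leveldb_* options, now set per daemon section
leveldb_cache_size = 536870912
leveldb_log = ""

[osd]
leveldb_log = ""
```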
CephFS support for the legacy anchor table has finally been removed. Users with file systems created before Firefly should ensure that inodes with multiple hard links are modified prior to the upgrade so that their backtraces are written properly. For example::

    sudo find /mnt/cephfs -type f -links +1 -exec touch {} \;
We disallow nonsensical 'tier cache-mode' transitions. From this point onward, 'writeback' can only transition to 'forward', and 'forward' can transition to 1) 'writeback' if there are dirty objects, or 2) any mode if there are no dirty objects.
This is a release candidate for Giant, which will hopefully be out in another week or two. We did a feature freeze about a month ago and since then have been doing only stabilization and bug fixing (and a handful of low-risk enhancements). A fair bit of new functionality went into the final sprint, but it has been baking for quite a while now and we're feeling pretty good about it.
Major items include:
There are still a handful of known bugs in this release, but nothing severe enough to prevent a release. By and large we are pretty pleased with the stability and expect the final Giant release to be quite reliable.
Please try this out on your non-production clusters for a preview.
This is the second-to-last development release before Giant that contains new functionality. The big items to land during this cycle are the messenger refactoring from Matt Benjamin that lays some groundwork for RDMA support, a performance improvement series from SanDisk that improves performance on SSDs, lots of improvements to our new standalone civetweb-based RGW frontend, and a new 'osd blocked-by' mon command that allows admins to easily identify which OSDs are blocking peering progress. The other big change is that the OSDs and Monitors now distinguish between "misplaced" and "degraded" objects: the latter means there are fewer copies than we'd like, while the former simply means they are not stored in the locations where we want them to be.
Also of note is a change to librbd that enables client-side caching by default. This is coupled with another option that makes the cache write-through until a "flush" operation is observed: this implies that the librbd user (usually a VM guest OS) supports barriers and flush and that it is safe for the cache to switch into writeback mode without compromising data safety or integrity. It has long been recommended practice that these options be enabled (e.g., in OpenStack environments) but until now it has not been the default.
We have frozen the tree for the looming Giant release, and the next development release will be a release candidate with a final batch of new functionality.
The client-side caching for librbd is now enabled by default (``rbd cache = true``). A safety option (``rbd cache writethrough until flush = true``) is also enabled so that writeback caching is not used until the library observes a 'flush' command, indicating that the librbd user is passing that operation through from the guest VM. This avoids potential data loss when used with older versions of qemu that do not support flush.
Mon-specific and osd-specific leveldb options have been removed. From this point onward users should use the ``leveldb_*`` generic options and add the options in the appropriate sections of their configuration files. Monitors will still maintain the following monitor-specific defaults::

    leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
    leveldb_cache_size        = 512*1024*1024 = 536870912 // 512MB
    leveldb_block_size        = 64*1024       = 65536     // 64KB
    leveldb_compression       = false
    leveldb_log               = ""

OSDs will still maintain the following osd-specific defaults::

    leveldb_log = ""
The ``rados getxattr ...`` command used to add a gratuitous newline to the attr value; it now does not.
The next Ceph development release is here! This release contains several meaty items, including some MDS improvements for journaling, the ability to remove the CephFS file system (and name it), several mon cleanups with tiered pools, several OSD performance branches, a new "read forward" RADOS caching mode, a prototype Kinetic OSD backend, and various radosgw improvements (especially with the new standalone civetweb frontend). And there are a zillion OSD bug fixes. Things are looking pretty good for the Giant release that is coming up in the next month.
* The ``*_kb`` perf counters on the monitor have been removed. These are
  replaced with a new set of ``*_bytes`` counters (e.g., ``cluster_osd_kb``
  is replaced by ``cluster_osd_bytes``).

* The ``rd_kb`` and ``wr_kb`` fields in the JSON dumps for pool stats
  (accessed via the ``ceph df detail -f json-pretty`` and related commands)
  have been replaced with corresponding ``*_bytes`` fields. Similarly, the
  ``total_space``, ``total_used``, and ``total_avail`` fields are replaced
  with ``total_bytes``, ``total_used_bytes``, and ``total_avail_bytes``
  fields.

* The ``rados df --format=json`` output ``read_bytes`` and ``write_bytes``
  fields were incorrectly reporting ops; this is now fixed.

* The ``rados df --format=json`` output previously included ``read_kb`` and
  ``write_kb`` fields; these have been removed. Please use ``read_bytes``
  and ``write_bytes`` instead (and divide by 1024 if appropriate).
Another Ceph development release! This has been a longer cycle, so there has been quite a bit of bug fixing and stabilization in this round. There is also a bunch of packaging fixes for RPM distros (RHEL/CentOS, Fedora, and SUSE) and for systemd. We've also added a new librados-striper library from Sebastien Ponce that provides a generic striping API for applications to code to.
The experimental keyvaluestore-dev OSD backend had an on-disk format change that prevents existing OSD data from being upgraded. This affects developers and testers only.
Mon-specific and osd-specific leveldb options have been removed. From
this point onward users should use the ``leveldb_*`` generic options and
add the options in the appropriate sections of their configuration
files. Monitors will still maintain the following monitor-specific
defaults::

    leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
    leveldb_cache_size        = 512*1024*1024 = 536870912 // 512MB
    leveldb_block_size        = 64*1024       = 65536     // 64KB
    leveldb_compression       = false
    leveldb_log               = ""

OSDs will still maintain the following osd-specific defaults::

    leveldb_log = ""
This is the second post-firefly development release. It includes a range of bug fixes and some usability improvements. There are some MDS debugging and diagnostic tools, an improved 'ceph df', and some OSD backend refactoring and cleanup.
This is the first development release since Firefly. It includes a lot of work that we delayed merging while stabilizing things. Lots of new functionality, as well as several fixes that are baking a bit before getting backported.
CephFS support for the legacy anchor table has finally been removed. Users with file systems created before Firefly should ensure that inodes with multiple hard links are modified prior to the upgrade so that their backtraces are written properly. For example::

    sudo find /mnt/cephfs -type f -links +1 -exec touch {} \;
Disallow nonsensical 'tier cache-mode' transitions. From this point onward, 'writeback' can only transition to 'forward', and 'forward' can transition to 1) 'writeback' if there are dirty objects, or 2) any mode if there are no dirty objects.