doc/releases/firefly.rst
Firefly is the 6th stable release of Ceph. It is named after the firefly squid (Watasenia scintillans).
This is a bugfix release for Firefly. As the Firefly 0.80.x series is nearing its planned end of life in January 2016, it may also be the last.
We recommend that all Firefly users upgrade.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.11.txt>`.
* `issue#11140 <http://tracker.ceph.com/issues/11140>`_, `pr#5831 <http://github.com/ceph/ceph/pull/5831>`_ (Dmitry Yatsushkevich)
* `issue#13417 <http://tracker.ceph.com/issues/13417>`_, `pr#6207 <http://github.com/ceph/ceph/pull/6207>`_ (Boris Ranto)
* `issue#12034 <http://tracker.ceph.com/issues/12034>`_, `pr#5217 <http://github.com/ceph/ceph/pull/5217>`_ (Nathan Cutler)
* `issue#12301 <http://tracker.ceph.com/issues/12301>`_, `pr#5224 <http://github.com/ceph/ceph/pull/5224>`_ (Nathan Cutler)
* `issue#12166 <http://tracker.ceph.com/issues/12166>`_, `pr#5225 <http://github.com/ceph/ceph/pull/5225>`_ (Nathan Cutler)
* `issue#12351 <http://tracker.ceph.com/issues/12351>`_, `pr#5394 <http://github.com/ceph/ceph/pull/5394>`_ (Nathan Cutler)
* `issue#10728 <http://tracker.ceph.com/issues/10728>`_, `pr#6203 <http://github.com/ceph/ceph/pull/6203>`_ (Ken Dreyer, Loic Dachary)
* `issue#11798 <http://tracker.ceph.com/issues/11798>`_, `pr#5992 <http://github.com/ceph/ceph/pull/5992>`_ (Sage Weil)
* `issue#11535 <http://tracker.ceph.com/issues/11535>`_, `pr#4633 <http://github.com/ceph/ceph/pull/4633>`_ (Jon Bernard)
* `issue#12512 <http://tracker.ceph.com/issues/12512>`_, `pr#5529 <http://github.com/ceph/ceph/pull/5529>`_ (Danny Al-Gaaf, Kefu Chai, Jianpeng Ma)
* `issue#13088 <http://tracker.ceph.com/issues/13088>`_, `pr#6038 <http://github.com/ceph/ceph/pull/6038>`_ (Sage Weil)
* `issue#7387 <http://tracker.ceph.com/issues/7387>`_, `pr#4635 <http://github.com/ceph/ceph/pull/4635>`_ (Kefu Chai, Tim Serong)
* `issue#11762 <http://tracker.ceph.com/issues/11762>`_, `pr#5403 <http://github.com/ceph/ceph/pull/5403>`_ (Ketor Meng)
* `issue#12570 <http://tracker.ceph.com/issues/12570>`_, `pr#6325 <http://github.com/ceph/ceph/pull/6325>`_ (Piotr Dałek, Zheng Qiankun)
* `issue#12662 <http://tracker.ceph.com/issues/12662>`_, `pr#5991 <http://github.com/ceph/ceph/pull/5991>`_ (Jason Dillaman)
* `issue#12252 <http://tracker.ceph.com/issues/12252>`_, `pr#5388 <http://github.com/ceph/ceph/pull/5388>`_ (Haomai Wang)
* `issue#12465 <http://tracker.ceph.com/issues/12465>`_, `pr#5406 <http://github.com/ceph/ceph/pull/5406>`_ (Samuel Just)
* `issue#12614 <http://tracker.ceph.com/issues/12614>`_, `pr#5814 <http://github.com/ceph/ceph/pull/5814>`_ (Josh Durgin)
* `issue#11602 <http://tracker.ceph.com/issues/11602>`_, `pr#4769 <http://github.com/ceph/ceph/pull/4769>`_ (Sage Weil)
* `issue#11090 <http://tracker.ceph.com/issues/11090>`_, `pr#5307 <http://github.com/ceph/ceph/pull/5307>`_ (Loic Dachary, Sage Weil)
* `issue#13162 <http://tracker.ceph.com/issues/13162>`_, `pr#5993 <http://github.com/ceph/ceph/pull/5993>`_ (Alfredo Deza)
* `issue#11590 <http://tracker.ceph.com/issues/11590>`_, `pr#5199 <http://github.com/ceph/ceph/pull/5199>`_ (Kefu Chai)
* `issue#13032 <http://tracker.ceph.com/issues/13032>`_, `pr#6087 <http://github.com/ceph/ceph/pull/6087>`_ (Josh Durgin, Sage Weil)
* `issue#7385 <http://tracker.ceph.com/issues/7385>`_, `pr#4639 <http://github.com/ceph/ceph/pull/4639>`_ (Jason Dillaman)
* `issue#11056 <http://tracker.ceph.com/issues/11056>`_, `pr#4854 <http://github.com/ceph/ceph/pull/4854>`_ (Haomai Wang, Sage Weil, Jason Dillaman)
* `issue#12176 <http://tracker.ceph.com/issues/12176>`_, `pr#5171 <http://github.com/ceph/ceph/pull/5171>`_ (Jason Dillaman)
* `issue#11877 <http://tracker.ceph.com/issues/11877>`_, `pr#4867 <http://github.com/ceph/ceph/pull/4867>`_ (Thorsten Behrens)
* `issue#11650 <http://tracker.ceph.com/issues/11650>`_, `pr#5389 <http://github.com/ceph/ceph/pull/5389>`_ (Samuel Just)
* `issue#11800 <http://tracker.ceph.com/issues/11800>`_, `pr#4788 <http://github.com/ceph/ceph/pull/4788>`_ (Sage Weil)
* `issue#11786 <http://tracker.ceph.com/issues/11786>`_, `pr#5360 <http://github.com/ceph/ceph/pull/5360>`_ (Joao Eduardo Luis)
* `issue#11470 <http://tracker.ceph.com/issues/11470>`_, `pr#5358 <http://github.com/ceph/ceph/pull/5358>`_ (Joao Eduardo Luis)
* `issue#12638 <http://tracker.ceph.com/issues/12638>`_, `pr#5698 <http://github.com/ceph/ceph/pull/5698>`_ (Kefu Chai)
* `issue#11493 <http://tracker.ceph.com/issues/11493>`_, `pr#5236 <http://github.com/ceph/ceph/pull/5236>`_ (Sage Weil, Samuel Just)
* `issue#11576 <http://tracker.ceph.com/issues/11576>`_, `pr#5129 <http://github.com/ceph/ceph/pull/5129>`_ (Kefu Chai)
* `issue#13089 <http://tracker.ceph.com/issues/13089>`_, `pr#6091 <http://github.com/ceph/ceph/pull/6091>`_ (Sage Weil)
* `issue#12402 <http://tracker.ceph.com/issues/12402>`_, `pr#5410 <http://github.com/ceph/ceph/pull/5410>`_ (renhwztetecs)
* `issue#13255 <http://tracker.ceph.com/issues/13255>`_, `pr#6010 <http://github.com/ceph/ceph/pull/6010>`_ (Sage Weil)
* `issue#12401 <http://tracker.ceph.com/issues/12401>`_, `pr#5409 <http://github.com/ceph/ceph/pull/5409>`_ (huangjun)
* `issue#12210 <http://tracker.ceph.com/issues/12210>`_, `pr#5404 <http://github.com/ceph/ceph/pull/5404>`_ (Xinze Chi)
* `issue#8815 <http://tracker.ceph.com/issues/8815>`_, `issue#8674 <http://tracker.ceph.com/issues/8674>`_, `issue#9064 <http://tracker.ceph.com/issues/9064>`_, `pr#5200 <http://github.com/ceph/ceph/pull/5200>`_ (Sage Weil, Zhiqiang Wang, Samuel Just)
* `issue#12251 <http://tracker.ceph.com/issues/12251>`_, `pr#5408 <http://github.com/ceph/ceph/pull/5408>`_ (Joao Eduardo Luis)
* `issue#11026 <http://tracker.ceph.com/issues/11026>`_, `pr#4597 <http://github.com/ceph/ceph/pull/4597>`_ (Jianpeng Ma, Sage Weil)
* `issue#9008 <http://tracker.ceph.com/issues/9008>`_, `pr#5043 <http://github.com/ceph/ceph/pull/5043>`_ (Guang Yang)
* `issue#9806 <http://tracker.ceph.com/issues/9806>`_, `pr#5062 <http://github.com/ceph/ceph/pull/5062>`_ (Josh Durgin, Samuel Just)
* `issue#9983 <http://tracker.ceph.com/issues/9983>`_, `pr#5039 <http://github.com/ceph/ceph/pull/5039>`_ (William A. Kennington III)
* `issue#10052 <http://tracker.ceph.com/issues/10052>`_, `pr#5050 <http://github.com/ceph/ceph/pull/5050>`_ (Sage Weil)
* `issue#12437 <http://tracker.ceph.com/issues/12437>`_, `pr#5815 <http://github.com/ceph/ceph/pull/5815>`_ (David Zafman)
* `issue#9614 <http://tracker.ceph.com/issues/9614>`_, `pr#5044 <http://github.com/ceph/ceph/pull/5044>`_ (Guang Yang)
* `issue#12809 <http://tracker.ceph.com/issues/12809>`_, `pr#5988 <http://github.com/ceph/ceph/pull/5988>`_ (Samuel Just)
* `issue#11069 <http://tracker.ceph.com/issues/11069>`_, `pr#4631 <http://github.com/ceph/ceph/pull/4631>`_ (Samuel Just)
* `issue#11358 <http://tracker.ceph.com/issues/11358>`_, `pr#5287 <http://github.com/ceph/ceph/pull/5287>`_ (Samuel Just)
* `issue#12223 <http://tracker.ceph.com/issues/12223>`_, `pr#5822 <http://github.com/ceph/ceph/pull/5822>`_ (Samuel Just)
* `issue#10006 <http://tracker.ceph.com/issues/10006>`_, `pr#5051 <http://github.com/ceph/ceph/pull/5051>`_ (Xinze Chi, Zhiqiang Wang)
* `issue#12429 <http://tracker.ceph.com/issues/12429>`_, `pr#5526 <http://github.com/ceph/ceph/pull/5526>`_ (John Spray)
* `issue#10911 <http://tracker.ceph.com/issues/10911>`_, `pr#4960 <http://github.com/ceph/ceph/pull/4960>`_ (Sage Weil)
* `issue#11771 <http://tracker.ceph.com/issues/11771>`_, `issue#10399 <http://tracker.ceph.com/issues/10399>`_, `pr#5726 <http://github.com/ceph/ceph/pull/5726>`_ (Samuel Just, Jason Dillaman)
* `issue#11439 <http://tracker.ceph.com/issues/11439>`_, `pr#5823 <http://github.com/ceph/ceph/pull/5823>`_ (Samuel Just)
* `issue#11507 <http://tracker.ceph.com/issues/11507>`_, `pr#4632 <http://github.com/ceph/ceph/pull/4632>`_ (Jianpeng Ma, Loic Dachary)
* `issue#12943 <http://tracker.ceph.com/issues/12943>`_, `pr#5619 <http://github.com/ceph/ceph/pull/5619>`_ (Xie Rui)
* `issue#12652 <http://tracker.ceph.com/issues/12652>`_, `pr#5820 <http://github.com/ceph/ceph/pull/5820>`_ (Sage Weil)
* `issue#12309 <http://tracker.ceph.com/issues/12309>`_, `pr#5235 <http://github.com/ceph/ceph/pull/5235>`_ (Sage Weil)
* `issue#12467 <http://tracker.ceph.com/issues/12467>`_, `pr#4583 <http://github.com/ceph/ceph/pull/4583>`_ (Daniel J. Hofmann)
* `issue#11851 <http://tracker.ceph.com/issues/11851>`_, `pr#5234 <http://github.com/ceph/ceph/pull/5234>`_ (Yehuda Sadeh)
* `issue#11367 <http://tracker.ceph.com/issues/11367>`_, `pr#4765 <http://github.com/ceph/ceph/pull/4765>`_ (Anton Aksola)
* `issue#11639 <http://tracker.ceph.com/issues/11639>`_, `pr#4762 <http://github.com/ceph/ceph/pull/4762>`_ (Javier M. Mellid)
* `issue#11860 <http://tracker.ceph.com/issues/11860>`_, `issue#12537 <http://tracker.ceph.com/issues/12537>`_, `pr#5730 <http://github.com/ceph/ceph/pull/5730>`_ (Yehuda Sadeh, Wido den Hollander)
* `issue#11036 <http://tracker.ceph.com/issues/11036>`_, `pr#5170 <http://github.com/ceph/ceph/pull/5170>`_ (Radoslaw Zarzynski)
* `issue#10701 <http://tracker.ceph.com/issues/10701>`_, `pr#5997 <http://github.com/ceph/ceph/pull/5997>`_ (Yehuda Sadeh)
* `issue#11149 <http://tracker.ceph.com/issues/11149>`_, `pr#4641 <http://github.com/ceph/ceph/pull/4641>`_ (Orit Wasserman)
* `issue#8911 <http://tracker.ceph.com/issues/8911>`_, `pr#4584 <http://github.com/ceph/ceph/pull/4584>`_ (Yehuda Sadeh)
* `issue#11455 <http://tracker.ceph.com/issues/11455>`_, `pr#5729 <http://github.com/ceph/ceph/pull/5729>`_ (Yehuda Sadeh)
* `issue#12073 <http://tracker.ceph.com/issues/12073>`_, `pr#5233 <http://github.com/ceph/ceph/pull/5233>`_ (Thorsten Behrens)
* `issue#12043 <http://tracker.ceph.com/issues/12043>`_, `pr#5390 <http://github.com/ceph/ceph/pull/5390>`_ (wuxingyi)
* `issue#11323 <http://tracker.ceph.com/issues/11323>`_, `pr#4642 <http://github.com/ceph/ceph/pull/4642>`_ (Sergey Arkhipov)
* `issue#11091 <http://tracker.ceph.com/issues/11091>`_, `issue#11438 <http://tracker.ceph.com/issues/11438>`_, `issue#12939 <http://tracker.ceph.com/issues/12939>`_, `issue#12157 <http://tracker.ceph.com/issues/12157>`_, `issue#12158 <http://tracker.ceph.com/issues/12158>`_, `issue#12363 <http://tracker.ceph.com/issues/12363>`_, `pr#5532 <http://github.com/ceph/ceph/pull/5532>`_ (Radoslaw Zarzynski, Orit Wasserman, Robin H. Johnson)
* `issue#11416 <http://tracker.ceph.com/issues/11416>`_, `pr#4535 <http://github.com/ceph/ceph/pull/4535>`_ (Yehuda Sadeh)
* `issue#12673 <http://tracker.ceph.com/issues/12673>`_, `pr#5813 <http://github.com/ceph/ceph/pull/5813>`_ (Loic Dachary)
* `issue#11758 <http://tracker.ceph.com/issues/11758>`_, `pr#6000 <http://github.com/ceph/ceph/pull/6000>`_ (Greg Farnum)
* `issue#13420 <http://tracker.ceph.com/issues/13420>`_, `pr#6328 <http://github.com/ceph/ceph/pull/6328>`_ (Yuan Zhou, Sage Weil)
* `issue#11143 <http://tracker.ceph.com/issues/11143>`_, `pr#4636 <http://github.com/ceph/ceph/pull/4636>`_ (Thorsten Behrens, Owen Synge)
* `issue#10146 <http://tracker.ceph.com/issues/10146>`_, `pr#5541 <http://github.com/ceph/ceph/pull/5541>`_ (Dan van der Ster)
* `issue#11612 <http://tracker.ceph.com/issues/11612>`_, `pr#4771 <http://github.com/ceph/ceph/pull/4771>`_ (Ilja Slepnev)
* `issue#11836 <http://tracker.ceph.com/issues/11836>`_, `pr#5037 <http://github.com/ceph/ceph/pull/5037>`_ (Joseph McDonald, Sage Weil)
* `issue#11543 <http://tracker.ceph.com/issues/11543>`_, `pr#4582 <http://github.com/ceph/ceph/pull/4582>`_ (Thorsten Behrens)
* `issue#10983 <http://tracker.ceph.com/issues/10983>`_, `pr#4630 <http://github.com/ceph/ceph/pull/4630>`_ (Loic Dachary)

This is a bugfix release for Firefly.
We recommend that all Firefly users upgrade.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.10.txt>`.
* `issue#11955 <http://tracker.ceph.com/issues/11955>`_, `pr#4924 <http://github.com/ceph/ceph/pull/4924>`_ (Ken Dreyer)
* `issue#11673 <http://tracker.ceph.com/issues/11673>`_, `pr#4766 <http://github.com/ceph/ceph/pull/4766>`_ (Ken Dreyer)
* `issue#11453 <http://tracker.ceph.com/issues/11453>`_, `pr#4638 <http://github.com/ceph/ceph/pull/4638>`_ (Ken Dreyer)
* `issue#9193 <http://tracker.ceph.com/issues/9193>`_, `pr#3944 <http://github.com/ceph/ceph/pull/3944>`_ (Sage Weil)
* `issue#10153 <http://tracker.ceph.com/issues/10153>`_, `pr#3963 <http://github.com/ceph/ceph/pull/3963>`_ (Federico Simoncelli)
* `issue#10080 <http://tracker.ceph.com/issues/10080>`_, `pr#3915 <http://github.com/ceph/ceph/pull/3915>`_ (Greg Farnum)
* `issue#10817 <http://tracker.ceph.com/issues/10817>`_, `pr#3941 <http://github.com/ceph/ceph/pull/3941>`_ (Samuel Just)
* `issue#10353 <http://tracker.ceph.com/issues/10353>`_, `pr#3824 <http://github.com/ceph/ceph/pull/3824>`_ (Loic Dachary)
* `issue#10724 <http://tracker.ceph.com/issues/10724>`_, `pr#3936 <http://github.com/ceph/ceph/pull/3936>`_ (Nilamdyuti Goswami)
* `issue#10676 <http://tracker.ceph.com/issues/10676>`_, `pr#3996 <http://github.com/ceph/ceph/pull/3996>`_ (David Zafman)
* `issue#10351 <http://tracker.ceph.com/issues/10351>`_, `pr#3927 <http://github.com/ceph/ceph/pull/3927>`_ (Yan, Zheng)
* `issue#10723 <http://tracker.ceph.com/issues/10723>`_, `pr#3935 <http://github.com/ceph/ceph/pull/3935>`_ (Josh Durgin)
* `issue#10425 <http://tracker.ceph.com/issues/10425>`_, `pr#3828 <http://github.com/ceph/ceph/pull/3828>`_ (Radoslaw Zarzynski)
* `issue#10497 <http://tracker.ceph.com/issues/10497>`_, `pr#3930 <http://github.com/ceph/ceph/pull/3930>`_ (Matt Richards)
* `issue#5488 <http://tracker.ceph.com/issues/5488>`_, `pr#4206 <http://github.com/ceph/ceph/pull/4206>`_ (Jason Dillaman)
* `issue#11113 <http://tracker.ceph.com/issues/11113>`_, `pr#4245 <http://github.com/ceph/ceph/pull/4245>`_ (Jason Dillaman)
* `issue#11053 <http://tracker.ceph.com/issues/11053>`_, `pr#3970 <http://github.com/ceph/ceph/pull/3970>`_ (Yan, Zheng)
* `issue#10762 <http://tracker.ceph.com/issues/10762>`_, `pr#3937 <http://github.com/ceph/ceph/pull/3937>`_ (Sage Weil)
* `issue#10844 <http://tracker.ceph.com/issues/10844>`_, `pr#3942 <http://github.com/ceph/ceph/pull/3942>`_ (Joao Eduardo Luis)
* `issue#10546 <http://tracker.ceph.com/issues/10546>`_, `pr#3932 <http://github.com/ceph/ceph/pull/3932>`_ (Joao Eduardo Luis)
* `issue#10787 <http://tracker.ceph.com/issues/10787>`_, `pr#3823 <http://github.com/ceph/ceph/pull/3823>`_ (Sage Weil)
* `issue#9538 <http://tracker.ceph.com/issues/9538>`_, `pr#4475 <http://github.com/ceph/ceph/pull/4475>`_ (Loic Dachary)
* `issue#10257 <http://tracker.ceph.com/issues/10257>`_, `pr#3826 <http://github.com/ceph/ceph/pull/3826>`_ (Joao Eduardo Luis)
* `issue#9986 <http://tracker.ceph.com/issues/9986>`_, `pr#3952 <http://github.com/ceph/ceph/pull/3952>`_ (Ding Dinghua)
* `issue#9915 <http://tracker.ceph.com/issues/9915>`_, `pr#3949 <http://github.com/ceph/ceph/pull/3949>`_ (Zhiqiang Wang)
* `issue#11244 <http://tracker.ceph.com/issues/11244>`_, `pr#4415 <http://github.com/ceph/ceph/pull/4415>`_ (Samuel Just)
* `issue#9555 <http://tracker.ceph.com/issues/9555>`_, `pr#3947 <http://github.com/ceph/ceph/pull/3947>`_ (Sage Weil)
* `issue#9891 <http://tracker.ceph.com/issues/9891>`_, `pr#3948 <http://github.com/ceph/ceph/pull/3948>`_ (Samuel Just)
* `issue#10617 <http://tracker.ceph.com/issues/10617>`_, `pr#3933 <http://github.com/ceph/ceph/pull/3933>`_ (Sage Weil)
* `issue#11199 <http://tracker.ceph.com/issues/11199>`_, `pr#4385 <http://github.com/ceph/ceph/pull/4385>`_ (Samuel Just)
* `issue#11144 <http://tracker.ceph.com/issues/11144>`_, `pr#4383 <http://github.com/ceph/ceph/pull/4383>`_ (Loic Dachary)
* `issue#11156 <http://tracker.ceph.com/issues/11156>`_, `pr#4185 <http://github.com/ceph/ceph/pull/4185>`_ (Samuel Just)
* `issue#6003 <http://tracker.ceph.com/issues/6003>`_, `pr#3960 <http://github.com/ceph/ceph/pull/3960>`_ (Samuel Just)
* `issue#7737 <http://tracker.ceph.com/issues/7737>`_, `pr#4021 <http://github.com/ceph/ceph/pull/4021>`_ (Guang Yang)
* `issue#9985 <http://tracker.ceph.com/issues/9985>`_, `pr#3950 <http://github.com/ceph/ceph/pull/3950>`_ (Sage Weil)
* `issue#11429 <http://tracker.ceph.com/issues/11429>`_, `pr#4556 <http://github.com/ceph/ceph/pull/4556>`_ (Samuel Just)
* `issue#10014 <http://tracker.ceph.com/issues/10014>`_, `pr#3954 <http://github.com/ceph/ceph/pull/3954>`_ (Jianpeng Ma)
* `issue#10259 <http://tracker.ceph.com/issues/10259>`_, `pr#3827 <http://github.com/ceph/ceph/pull/3827>`_ (Samuel Just)
* `issue#11454 <http://tracker.ceph.com/issues/11454>`_, `pr#4453 <http://github.com/ceph/ceph/pull/4453>`_ (Guang Yang)
* `issue#10976 <http://tracker.ceph.com/issues/10976>`_, `pr#4416 <http://github.com/ceph/ceph/pull/4416>`_ (Mykola Golub)
* `issue#10059 <http://tracker.ceph.com/issues/10059>`_, `pr#3955 <http://github.com/ceph/ceph/pull/3955>`_ (Samuel Just)
* `issue#10718 <http://tracker.ceph.com/issues/10718>`_, `pr#4382 <http://github.com/ceph/ceph/pull/4382>`_ (Samuel Just)
* `issue#10157 <http://tracker.ceph.com/issues/10157>`_, `pr#3964 <http://github.com/ceph/ceph/pull/3964>`_ (Samuel Just)
* `issue#11197 <http://tracker.ceph.com/issues/11197>`_, `pr#4384 <http://github.com/ceph/ceph/pull/4384>`_ (Samuel Just)
* `issue#8011 <http://tracker.ceph.com/issues/8011>`_, `pr#3943 <http://github.com/ceph/ceph/pull/3943>`_ (Samuel Just)
* `issue#8753 <http://tracker.ceph.com/issues/8753>`_, `pr#3940 <http://github.com/ceph/ceph/pull/3940>`_ (Samuel Just)
* `issue#10150 <http://tracker.ceph.com/issues/10150>`_, `pr#3962 <http://github.com/ceph/ceph/pull/3962>`_ (Samuel Just)
* `issue#10512 <http://tracker.ceph.com/issues/10512>`_, `pr#3931 <http://github.com/ceph/ceph/pull/3931>`_ (Sage Weil)
* `issue#10062 <http://tracker.ceph.com/issues/10062>`_, `pr#3958 <http://github.com/ceph/ceph/pull/3958>`_ (Abhishek Lekshmanan)
* `issue#11720 <http://tracker.ceph.com/issues/11720>`_, `pr#4780 <http://github.com/ceph/ceph/pull/4780>`_ (Orit Wasserman)
* `issue#11890 <http://tracker.ceph.com/issues/11890>`_, `pr#4829 <http://github.com/ceph/ceph/pull/4829>`_ (Yehuda Sadeh)
* `issue#10698 <http://tracker.ceph.com/issues/10698>`_, `pr#3966 <http://github.com/ceph/ceph/pull/3966>`_ (Yehuda Sadeh)
* `issue#10106 <http://tracker.ceph.com/issues/10106>`_, `pr#3961 <http://github.com/ceph/ceph/pull/3961>`_ (Yehuda Sadeh)
* `issue#11256 <http://tracker.ceph.com/issues/11256>`_, `pr#4571 <http://github.com/ceph/ceph/pull/4571>`_ (Yehuda Sadeh)
* `issue#11871,11891 <http://tracker.ceph.com/issues/11871,11891>`_, `pr#4851 <http://github.com/ceph/ceph/pull/4851>`_ (Radoslaw Zarzynski)
* `issue#11125 <http://tracker.ceph.com/issues/11125>`_, `pr#4414 <http://github.com/ceph/ceph/pull/4414>`_ (Yehuda Sadeh)
* `issue#11622 <http://tracker.ceph.com/issues/11622>`_, `pr#4697 <http://github.com/ceph/ceph/pull/4697>`_ (Yehuda Sadeh)
* `issue#10770 <http://tracker.ceph.com/issues/10770>`_, `pr#3938 <http://github.com/ceph/ceph/pull/3938>`_ (Yehuda Sadeh)
* `issue#11160 <http://tracker.ceph.com/issues/11160>`_, `pr#4275 <http://github.com/ceph/ceph/pull/4275>`_ (Yehuda Sadeh)
* `issue#10665 <http://tracker.ceph.com/issues/10665>`_, `pr#3934 <http://github.com/ceph/ceph/pull/3934>`_ (Dmytro Iurchenko)
* `issue#10475 <http://tracker.ceph.com/issues/10475>`_, `pr#3929 <http://github.com/ceph/ceph/pull/3929>`_ (Dmytro Iurchenko)
* `issue#11416 <http://tracker.ceph.com/issues/11416>`_, `pr#4379 <http://github.com/ceph/ceph/pull/4379>`_ (Yehuda Sadeh)
* `issue#11157 <http://tracker.ceph.com/issues/11157>`_, `pr#4079 <http://github.com/ceph/ceph/pull/4079>`_ (Loic Dachary)
* `issue#12327 <http://tracker.ceph.com/issues/12327>`_, `pr#3866 <http://github.com/ceph/ceph/pull/3866>`_ (David Zafman)
* `issue#11176 <http://tracker.ceph.com/issues/11176>`_, `pr#4126 <http://github.com/ceph/ceph/pull/4126>`_ (David Zafman)
* `issue#11139 <http://tracker.ceph.com/issues/11139>`_, `pr#4129 <http://github.com/ceph/ceph/pull/4129>`_ (David Zafman)
* `issue#11303 <http://tracker.ceph.com/issues/11303>`_, `pr#4247 <http://github.com/ceph/ceph/pull/4247>`_ (Alfredo Deza)

This is a bugfix release for Firefly. It fixes a performance regression in librbd, an important CRUSH misbehavior (see below), and several RGW bugs. We have also backported support for flock/fcntl locks to ceph-fuse and libcephfs.
We recommend that all Firefly users upgrade.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.9.txt>`.
This point release fixes several issues with CRUSH that trigger excessive data migration when adjusting OSD weights. These are most obvious when a very small weight change (e.g., a change from 0 to .01) triggers a large amount of movement, but the same set of bugs can also lead to excessive (though less noticeable) movement in other cases.
However, because the bug may already have affected your cluster, fixing it may trigger movement back to the more correct location. For this reason, you must manually opt-in to the fixed behavior.
To set the new tunable and correct the behavior::

   ceph osd crush set-tunable straw_calc_version 1
Note that this change will have no immediate effect. However, from this point forward, any 'straw' bucket in your CRUSH map that is adjusted will get non-buggy internal weights, and that transition may trigger some rebalancing.
You can estimate how much rebalancing will eventually be necessary on your cluster with::

   ceph osd getcrushmap -o /tmp/cm
   crushtool -i /tmp/cm --num-rep 3 --test --show-mappings > /tmp/a 2>&1
   crushtool -i /tmp/cm --set-straw-calc-version 1 -o /tmp/cm2
   crushtool -i /tmp/cm2 --reweight -o /tmp/cm2
   crushtool -i /tmp/cm2 --num-rep 3 --test --show-mappings > /tmp/b 2>&1
   wc -l /tmp/a                          # num total mappings
   diff -u /tmp/a /tmp/b | grep -c ^+    # num changed mappings
Divide the number of changed lines by the total number of lines in /tmp/a. We've found that most clusters are under 10%.
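As a concrete sketch of that arithmetic, with made-up counts rather than output from a real cluster:

```shell
# Hypothetical numbers: suppose /tmp/a holds 7488 total mappings and
# `diff -u /tmp/a /tmp/b | grep -c ^+` reported 512 changed lines.
total=7488
changed=512
# Integer percentage of mappings that would move after reweighting:
pct=$(( changed * 100 / total ))
echo "${pct}% of mappings will move"
```

For these sample numbers the script prints ``6% of mappings will move``, comfortably under the roughly 10% observed on most clusters.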
You can force all of this rebalancing to happen at once with::

   ceph osd crush reweight-all
Otherwise, it will happen at some unknown point in the future when CRUSH weights are next adjusted.
This is a long-awaited bugfix release for firefly. It has several important (but relatively rare) OSD peering fixes, performance issues when snapshots are trimmed, several RGW fixes, a paxos corner case fix, and some packaging updates.
We recommend that all v0.80.x Firefly users upgrade when it is convenient to do so.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.8.txt>`.
This release fixes a few critical issues with v0.80.6, particularly with clusters running mixed versions.
We recommend that all v0.80.x Firefly users upgrade to this release.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.7.txt>`.
This is a major bugfix release for firefly, fixing a range of issues in the OSD and monitor, particularly with cache tiering. There are also important fixes in librados, with the watch/notify mechanism used by librbd, and in radosgw.
A few pieces of new functionality have been backported, including improved 'ceph df' output (view amount of writeable space per pool), support for non-default cluster names when using sysvinit or systemd, and improved (and fixed) support for dmcrypt.
We recommend that all v0.80.x Firefly users upgrade to this release.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.6.txt>`.
This release fixes a few important bugs in the radosgw and fixes several packaging and environment issues, including OSD log rotation, systemd environments, and daemon restarts on upgrade.
We recommend that all v0.80.x Firefly users upgrade, particularly if they are using upstart, systemd, or radosgw.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.5.txt>`.
This Firefly point release fixes a potential data corruption problem when ceph-osd daemons run on top of XFS and service Firefly librbd clients. A recently added allocation hint that RBD utilizes triggers an XFS bug on some kernels (Linux 3.2, and likely others) that leads to data corruption and deep-scrub errors (and inconsistent PGs). This release avoids the situation by disabling the allocation hint until we can validate which kernels are affected and/or are known to be safe to use the hint on.
We recommend that all v0.80.x Firefly users urgently upgrade, especially if they are using RBD.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.4.txt>`.
This is the third Firefly point release. It includes a single fix for a radosgw regression that was discovered in v0.80.2 right after it was released.
We recommend that all v0.80.x Firefly users upgrade.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.3.txt>`.
This is the second Firefly point release. It contains a range of important fixes, including several bugs in the OSD cache tiering, some compatibility checks that affect upgrade situations, several radosgw bugs, and an irritating and unnecessary feature bit check that prevents older clients from communicating with a cluster with any erasure coded pools.
One somewhat large change in this point release is that the ceph RPM package is separated into a ceph and ceph-common package, similar to Debian. The ceph-common package contains just the client libraries without any of the server-side daemons.
We recommend that all v0.80.x Firefly users skip this release and use v0.80.3.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.2.txt>`.
This first Firefly point release fixes a few bugs, the most visible being a problem that prevents scrub from completing in some cases.
For more detailed information, see :download:`the complete changelog <../changelog/v0.80.1.txt>`.
This release will form the basis for our long-term supported release Firefly, v0.80.x. The big new features are support for erasure coding and cache tiering, although a broad range of other features, fixes, and improvements have been made across the code base.
We expect to maintain a series of stable releases based on v0.80 Firefly for as much as a year. In the meantime, development of Ceph continues with the next release, Giant, which will feature work on the CephFS distributed file system, more alternative storage backends (like RocksDB and f2fs), RDMA support, support for pyramid erasure codes, and additional functionality in the block device (RBD) like copy-on-read and multisite mirroring.
If your existing cluster is running a version older than v0.67 Dumpling, please first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly. Please refer to the :ref:`dumpling-upgrade` documentation.
We recommend adding the following to the [mon] section of your ceph.conf prior to upgrade::

   mon warn on legacy crush tunables = false
This will prevent health warnings due to the use of legacy CRUSH placement. Although it is possible to rebalance existing data across your cluster (see the upgrade notes below), we do not normally recommend it for production environments as a large amount of data will move and there is a significant performance impact from the rebalancing.
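In context, the relevant ceph.conf fragment would look like this (only the option itself comes from the text above; the surrounding layout is illustrative):

```ini
[mon]
    mon warn on legacy crush tunables = false
```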
Upgrade daemons in the following order:
#. Monitors
#. OSDs
#. MDSs and/or radosgw
If the ceph-mds daemon is restarted first, it will wait until all OSDs have been upgraded before finishing its startup sequence. If the ceph-mon daemons are not restarted prior to the ceph-osd daemons, they will not correctly register their new capabilities with the cluster and new features may not be usable until they are restarted a second time.
Upgrade radosgw daemons together. There is a subtle change in behavior for multipart uploads that prevents a multipart request that was initiated with a new radosgw from being completed by an old radosgw.
The OSDMap's JSON-formatted dump changed for the keys 'full' and 'nearfull': values previously output as the strings 'true' or 'false' are now output as the booleans true and false, per JSON syntax.
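Scripts that pattern-match this output need updating accordingly. A minimal sketch, using a fabricated OSDMap fragment rather than real ``ceph osd dump`` output:

```shell
# Fabricated fragment showing the new boolean form:
osdmap_json='{"full": false, "nearfull": false}'
# A pre-firefly script would have matched the quoted string form,
# e.g. grep '"full": "false"' -- that pattern no longer matches.
# The bare JSON boolean is what appears now:
echo "$osdmap_json" | grep -c '"full": false'
```

For the sample fragment above the final command prints ``1`` (one matching line).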
HEALTH_WARN on 'mon osd down out interval == 0'. Having this option set to zero on the leader acts much like having the 'noout' flag set. This warning will only be reported if the monitor getting the 'health' or 'status' request has this option set to zero.
Monitor 'auth' commands now require the mon 'x' capability. This matches dumpling v0.67.x and earlier, but differs from emperor v0.72.x.
A librados WATCH operation on a non-existent object now returns ENOENT; previously it did not.
Librados interface change: As there are no partial writes, the rados_write() and rados_append() operations now return 0 on success like rados_write_full() always has. This includes the C++ interface equivalents and AIO return values for the aio variants.
The radosgw init script (sysvinit) now requires that the 'host = ...' line in ceph.conf, if present, match the short hostname (the output of 'hostname -s'), not the fully qualified hostname or the (occasionally non-short) output of 'hostname'. Failure to adjust this when upgrading from emperor or dumpling may prevent the radosgw daemon from starting.
See notes above.
The 'ceph -s' or 'ceph status' command's 'num_in_osds' field in the JSON and XML output has been changed from a string to an int.
The recently added 'ceph mds set allow_new_snaps' command's syntax has changed slightly; it is now 'ceph mds set allow_new_snaps true'. The 'unset' command has been removed; instead, set the value to 'false'.
The syntax for allowing snapshots is now 'mds set allow_new_snaps <true|false>' instead of 'mds <set,unset> allow_new_snaps'.
'rbd ls' on a pool which never held rbd images now exits with code 0. It outputs nothing in plain format, or an empty list in non-plain format. This is consistent with the behavior for a pool which used to hold images, but contains none. Scripts relying on this behavior should be updated.
The MDS requires a new OSD operation TMAP2OMAP, added in this release. When upgrading, be sure to upgrade and restart the ceph-osd daemons before the ceph-mds daemon. The MDS will refuse to start if any up OSDs do not support the new feature.
The 'ceph mds set_max_mds N' command is now deprecated in favor of 'ceph mds set max_mds N'.
The 'osd pool create ...' syntax has changed for erasure pools.
The default CRUSH rules and layouts are now using the 'bobtail' tunables and defaults. Upgraded clusters using the old values will now present with a health WARN state. This can be disabled by adding 'mon warn on legacy crush tunables = false' to ceph.conf and restarting the monitors. Alternatively, you can switch to the new tunables with 'ceph osd crush tunables firefly', but keep in mind that this will involve moving a significant portion of the data already stored in the cluster, and in a large cluster may take several days to complete. We do not recommend adjusting tunables on a production cluster.
We now default to the 'bobtail' CRUSH tunable values that are first supported by Ceph clients in bobtail (v0.56) and Linux kernel version v3.9. If you plan to access a newly created Ceph cluster with an older kernel client, you should use 'ceph osd crush tunables legacy' to switch back to the legacy behavior. Note that making that change will likely result in some data movement in the system, so adjust the setting before populating the new cluster with data.
We now set the HASHPSPOOL flag on newly created pools (and new clusters) by default. Support for this flag first appeared in v0.64; v0.67 Dumpling is the first major release that supports it. It is first supported by the Linux kernel version v3.9. If you plan to access a newly created Ceph cluster with an older kernel or clients (e.g., librados, librbd) from a pre-dumpling Ceph release, you should add 'osd pool default flag hashpspool = false' to the '[global]' section of your 'ceph.conf' prior to creating your monitors (e.g., after 'ceph-deploy new' but before 'ceph-deploy mon create ...').
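The resulting ceph.conf fragment (only needed for clusters that must serve pre-dumpling clients or older kernels; the option name comes from the text above, the layout is illustrative):

```ini
[global]
    osd pool default flag hashpspool = false
```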
The configuration option 'osd pool default crush rule' is deprecated and replaced with 'osd pool default crush replicated ruleset'. 'osd pool default crush rule' takes precedence for backward compatibility and a deprecation warning is displayed when it is used.
As part of fix for #6796, 'ceph osd pool set <pool> <var> <arg>' now receives <arg> as an integer instead of a string. This affects how 'hashpspool' flag is set/unset: instead of 'true' or 'false', it now must be '0' or '1'.
The behavior of the CRUSH 'indep' choose mode has been changed. No ceph cluster should have been using this behavior unless someone has manually extracted a crush map, modified a CRUSH rule to replace 'firstn' with 'indep', recompiled, and reinjected the new map into the cluster. If the 'indep' mode is currently in use on a cluster, the rule should be modified to use 'firstn' instead, and the administrator should wait until any data movement completes before upgrading.
The 'osd dump' command now dumps pool snaps as an array instead of an object.
See notes above.
ceph-fuse and radosgw now use the same default values for the admin socket and log file paths that the other daemons (ceph-osd, ceph-mon, etc.) do. If you run these daemons as non-root, you may need to adjust your ceph.conf to disable these options or to adjust the permissions on /var/run/ceph and /var/log/ceph.
The MDS now disallows snapshots by default as they are not considered stable. The command 'ceph mds set allow_snaps' will enable them.
For clusters that were created before v0.44 (pre-argonaut, Spring 2012) and store radosgw data, the auto-upgrade from TMAP to OMAP objects has been disabled. Before upgrading, make sure that any buckets created on pre-argonaut releases have been modified (e.g., by PUTing and then DELETEing an object from each bucket). Any cluster created with argonaut (v0.48) or a later release or not using radosgw never relied on the automatic conversion and is not affected by this change.
Any direct users of the 'tmap' portion of the librados API should be aware that the automatic tmap -> omap conversion functionality has been removed.
Most output that used K or KB (e.g., for kilobyte) now uses a lower-case k to match the official SI convention. Any scripts that parse output and check for an upper-case K will need to be modified.
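A hedged sketch of the kind of change such scripts need; the unit table and helper below are illustrative only, not Ceph code:

```python
import re

# Sizes formerly printed as "4K" are now printed as "4k" (SI convention).
# Accept both forms so the script works across the upgrade.
UNITS = {"k": 1024, "M": 1024**2, "G": 1024**3}

def parse_size(text):
    # Match "123k", "123M", "123GB", etc.; "B" suffix is optional.
    m = re.fullmatch(r"(\d+)\s*([kKMG])B?", text.strip())
    if not m:
        raise ValueError("unrecognized size: %r" % text)
    value, unit = int(m.group(1)), m.group(2)
    if unit == "K":          # legacy upper-case form from older releases
        unit = "k"
    return value * UNITS[unit]
```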
librados::Rados::pool_create_async() and librados::Rados::pool_delete_async() don't drop a reference to the completion object on error; the caller needs to take care of that. This has never really worked correctly, and we were leaking an object.
'ceph osd crush set <id> <weight> <loc..>' no longer adds the osd to the specified location, as that's a job for 'ceph osd crush add'. It will, however, continue to work just the same as long as the osd already exists in the crush map.
The OSD now enforces that class write methods cannot both mutate an object and return data. The rbd.assign_bid method, the lone offender, has been removed. This breaks compatibility with pre-bobtail librbd clients by preventing them from creating new images.
librados now returns on commit instead of ack for synchronous calls. This is a bit safer in the case where both OSDs and the client crash, and is probably how it should have been acting from the beginning. Users are unlikely to notice but it could result in lower performance in some circumstances. Those who care should switch to using the async interfaces, which let you specify safety semantics precisely.
The C++ librados AioComplete::get_version() method was incorrectly returning an int (usually 32-bits). To avoid breaking library compatibility, a get_version64() method is added that returns the full-width value. The old method is deprecated and will be removed in a future release. Users of the C++ librados API that make use of the get_version() method should modify their code to avoid getting a value that is truncated from 64 to 32 bits.
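A small illustration of the truncation hazard; the helper below only emulates the old 32-bit return behavior and is not the librados API:

```python
# A 64-bit object version squeezed through a 32-bit int loses its high bits.
def truncated_version(version64):
    return version64 & 0xFFFFFFFF  # what a 32-bit return value preserves

v = (5 << 32) | 42              # a version number larger than 2^32
assert truncated_version(v) == 42   # old get_version(): wrong answer
assert v == 21474836522             # get_version64(): full value
```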
This release is intended to serve as a release candidate for firefly, which will hopefully be v0.80. No changes are being made to the code base at this point except those that fix bugs. Please test this release if you intend to make use of the new erasure-coded pools or cache tiers in firefly.
This release fixes a range of bugs found in v0.78 and streamlines the user experience when creating erasure-coded pools. There is also a raft of fixes for the MDS (multi-mds, directory fragmentation, and large directories). The main notable new piece of functionality is a small change to allow radosgw to use an erasure-coded pool for object data.
Erasure pools created with v0.78 will no longer function with v0.79. You will need to delete the old pool and create a new one.
A bug was fixed in the authentication handshake with big-endian architectures that prevented authentication between big- and little-endian machines in the same cluster. If you have a cluster that consists entirely of big-endian machines, you will need to upgrade all daemons and clients and restart.
The 'ceph.file.layout' and 'ceph.dir.layout' extended attributes are no longer included in the listxattr(2) results to prevent problems with 'cp -a' and similar tools.
Monitor 'auth' read-only commands now expect the user to have 'rx' caps. This is the same behavior that was present in dumpling, but in emperor and more recent development releases the 'r' cap was sufficient. The affected commands are::
  ceph auth export
  ceph auth get
  ceph auth get-key
  ceph auth print-key
  ceph auth list
This development release includes two key features: erasure coding and cache tiering. A huge amount of code was merged for this release and several additional weeks were spent stabilizing the code base, and it is now in a state where it is ready to be tested by a broader user base.
This is not the firefly release. Firefly will be delayed for at least another sprint so that we can get some operational experience with the new code and do some additional testing before committing to long term support.
.. note:: Please note that while it is possible to create and test erasure coded pools in this release, the pools will not be usable when you upgrade to v0.79 as the OSDMap encoding will subtly change. Please do not populate your test pools with important data that can't be reloaded.
Upgrade daemons in the following order:
#. Monitors
#. OSDs
#. MDSs and/or radosgw
If the ceph-mds daemon is restarted first, it will wait until all OSDs have been upgraded before finishing its startup sequence. If the ceph-mon daemons are not restarted prior to the ceph-osd daemons, they will not correctly register their new capabilities with the cluster and new features may not be usable until they are restarted a second time.
Upgrade radosgw daemons together. There is a subtle change in behavior for multipart uploads that prevents a multipart request that was initiated with a new radosgw from being completed by an old radosgw.
CephFS recently added support for a new 'backtrace' attribute on file data objects that is used for lookup by inode number (i.e., NFS reexport and hard links), and will later be used by fsck repair. This replaces the existing anchor table mechanism that is used for hard link resolution. In order to completely phase that out, any inode that has an outdated backtrace attribute will get updated when the inode itself is modified. This will result in some extra workload after a legacy CephFS file system is upgraded.
The per-op return code in librados' ObjectWriteOperation interface is now filled in.
The librados cmpxattr operation now handles xattrs containing null bytes as data rather than null-terminated strings.
Compound operations in librados that create and then delete the same object are now explicitly disallowed (they fail with -EINVAL).
The default leveldb cache size for the ceph-osd daemon has been increased from 4 MB to 128 MB. This will increase the memory footprint of that process but tends to increase performance of omap (key/value) objects (used for CephFS and the radosgw). If memory in your deployment is tight, you can preserve the old behavior by adding::
  leveldb write buffer size = 0
  leveldb cache size = 0
to your ceph.conf to get back the (leveldb) defaults.
This is the final development release before the Firefly feature freeze. The main items in this release include some additional refactoring work in the OSD IO path (including some locking improvements), per-user quotas for the radosgw, a switch to civetweb from mongoose for the prototype radosgw standalone mode, and a prototype leveldb-based backend for the OSD. The C librados API also got support for atomic write operations (read side transactions will appear in v0.78).
The 'ceph -s' or 'ceph status' command's 'num_in_osds' field in the JSON and XML output has been changed from a string to an int.
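A defensive pattern for scripts that consume this field; the sample document below is trimmed, and the exact JSON path is an assumption for illustration:

```python
import json

# Coercing with int() keeps a script working on both the old (string)
# and new (int) encodings of num_in_osds.
old_style = json.loads('{"osdmap": {"num_in_osds": "3"}}')
new_style = json.loads('{"osdmap": {"num_in_osds": 3}}')

def num_in_osds(status):
    return int(status["osdmap"]["num_in_osds"])

assert num_in_osds(old_style) == 3
assert num_in_osds(new_style) == 3
```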
The recently added 'ceph mds set allow_new_snaps' command's syntax has changed slightly; it is now 'ceph mds set allow_new_snaps true'. The 'unset' command has been removed; instead, set the value to 'false'.
The syntax for allowing snapshots is now 'mds set allow_new_snaps <true|false>' instead of 'mds <set,unset> allow_new_snaps'.
This release includes another batch of updates for firefly functionality. Most notably, the cache pool infrastructure now supports snapshots, the OSD backfill functionality has been generalized to include multiple targets (necessary for the coming erasure pools), and there were performance improvements to the erasure code plugin on capable processors. The MDS now properly utilizes (and seamlessly migrates to) the OSD key/value interface (aka omap) for storing directory objects. There continue to be many other fixes and improvements for usability and code portability across the tree.
'rbd ls' on a pool which never held rbd images now exits with code 0. It outputs nothing in plain format, or an empty list in non-plain format. This is consistent with the behavior for a pool which used to hold images, but contains none. Scripts relying on this behavior should be updated.
The MDS requires a new OSD operation TMAP2OMAP, added in this release. When upgrading, be sure to upgrade and restart the ceph-osd daemons before the ceph-mds daemon. The MDS will refuse to start if any up OSDs do not support the new feature.
The 'ceph mds set_max_mds N' command is now deprecated in favor of 'ceph mds set max_mds N'.
This is a big release, with lots of infrastructure going in for firefly. The big items include a prototype standalone frontend for radosgw (which does not require apache or fastcgi), tracking for read activity on the osds (to inform tiering decisions), preliminary cache pool support (no snapshots yet), and lots of bug fixes and other work across the tree to get ready for the next batch of erasure coding patches.
For comparison, here are the diff stats for the last few versions::
  v0.75  291 files changed, 82713 insertions(+), 33495 deletions(-)
  v0.74  192 files changed, 17980 insertions(+), 1062 deletions(-)
  v0.73  148 files changed, 4464 insertions(+), 2129 deletions(-)
The 'osd pool create ...' syntax has changed for erasure pools.
The default CRUSH rules and layouts are now using the latest and greatest tunables and defaults. Clusters using the old values will now present with a health WARN state. This can be disabled by adding 'mon warn on legacy crush tunables = false' to ceph.conf.
This release includes a few substantial pieces for Firefly, including a long-overdue switch to 3x replication by default and a switch to the "new" CRUSH tunables by default (supported since bobtail). There is also a fix for a long-standing radosgw bug (stalled GET) that has already been backported to emperor and dumpling.
We now default to the 'bobtail' CRUSH tunable values that are first supported by Ceph clients in bobtail (v0.56) and Linux kernel version v3.9. If you plan to access a newly created Ceph cluster with an older kernel client, you should use 'ceph osd crush tunables legacy' to switch back to the legacy behavior. Note that making that change will likely result in some data movement in the system, so adjust the setting before populating the new cluster with data.
We now set the HASHPSPOOL flag on newly created pools (and new clusters) by default. Support for this flag first appeared in v0.64; v0.67 Dumpling is the first major release that supports it. It is first supported by the Linux kernel version v3.9. If you plan to access a newly created Ceph cluster with an older kernel or clients (e.g., librados, librbd) from a pre-dumpling Ceph release, you should add 'osd pool default flag hashpspool = false' to the '[global]' section of your 'ceph.conf' prior to creating your monitors (e.g., after 'ceph-deploy new' but before 'ceph-deploy mon create ...').
The configuration option 'osd pool default crush rule' is deprecated and replaced with 'osd pool default crush replicated ruleset'. 'osd pool default crush rule' takes precedence for backward compatibility and a deprecation warning is displayed when it is used.
This release, the first development release after emperor, includes many bug fixes and a few additional pieces of functionality. The first batch of larger changes will be landing in the next version, v0.74.
As part of the fix for #6796, 'ceph osd pool set <pool> <var> <arg>' now receives <arg> as an integer instead of a string. This affects how the 'hashpspool' flag is set/unset: instead of 'true' or 'false', it must now be '0' or '1'.
The behavior of the CRUSH 'indep' choose mode has been changed. No ceph cluster should have been using this behavior unless someone has manually extracted a crush map, modified a CRUSH rule to replace 'firstn' with 'indep', recompiled, and reinjected the new map into the cluster. If the 'indep' mode is currently in use on a cluster, the rule should be modified to use 'firstn' instead, and the administrator should wait until any data movement completes before upgrading.
The 'osd dump' command now dumps pool snaps as an array instead of an object.
The radosgw init script (sysvinit) now requires that the 'host = ...' line in ceph.conf, if present, match the short hostname (the output of 'hostname -s'), not the fully qualified hostname or the (occasionally non-short) output of 'hostname'. Failure to adjust this when upgrading from emperor or dumpling may prevent the radosgw daemon from starting.