design/ceph/osd-migration.md
Migration of OSDs with the following configurations is deferred for now and will be considered in the future:
Add `spec.storage.migration` in the CephCluster resource:

```yaml
storage:
  migration:
    confirmation: yes-really-migrate-osds
```

- `confirmation`: Confirmation from the user that they really want to migrate the OSDs. This field can only take the value `yes-really-migrate-osds`.

For example, consider a cluster where the OSDs were initially created without encryption:

```yaml
storage:
  storageClassDeviceSets:
  - name: set1
    count: 3
    encrypted: false
```
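To make the gating behavior of the confirmation field concrete, here is a minimal sketch (in Python, purely illustrative; the helper name `is_migration_requested` is an assumption, not Rook's API) of how an operator might validate it:

```python
# The exact confirmation string required by spec.storage.migration.
MIGRATION_CONFIRMATION = "yes-really-migrate-osds"

def is_migration_requested(storage_spec: dict) -> bool:
    """Return True only when the user set the exact confirmation value.

    Any other value (or an absent migration section) means migration
    must not be attempted. Hypothetical helper for illustration.
    """
    confirmation = storage_spec.get("migration", {}).get("confirmation", "")
    return confirmation == MIGRATION_CONFIRMATION
```

Requiring an exact, deliberately verbose string guards against accidental migrations triggered by a casual `true`/`yes` in the spec.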
To migrate these OSDs to encrypted OSDs, the user adds the migration confirmation and updates the setting:

```yaml
storage:
  migration:
    confirmation: "yes-really-migrate-osds"
  storageClassDeviceSets:
  - name: set1
    count: 3
    encrypted: true # changed to true
```
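Given the two specs above, the detection step could be sketched as follows (illustrative Python; the dict-based OSD representation and helper name are assumptions, not Rook's actual data model):

```python
def osds_needing_encryption_migration(osd_deployments: list, device_set_spec: dict) -> list:
    """Return the IDs of OSDs whose recorded encryption setting no longer
    matches the desired spec (e.g. created with encrypted: false while the
    spec now says encrypted: true). Hypothetical helper for illustration.
    """
    desired = device_set_spec["encrypted"]
    return [osd["id"] for osd in osd_deployments if osd["encrypted"] != desired]
```

For example, with OSDs 0 and 2 unencrypted and the spec changed to `encrypted: true`, the helper would select OSDs 0 and 2 for migration.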
The operator will use the setting `encrypted: false` to identify the OSDs that are not encrypted.

Similarly, consider a cluster where the OSDs use the current backend store:

```yaml
storage:
  store:
    type: bluestore
```
To migrate these OSDs to a new backend store, the user updates the spec:

```yaml
storage:
  migration:
    confirmation: "yes-really-migrate-osds"
  store:
    type: <new-backend-store> # changed to new backend store type (such as seastore in the future)
```
The operator will use the `osd-store` label on the OSD deployments to identify the OSDs that have a different backend store.

Add `status.storage.osd` in the CephCluster status:

```yaml
status:
  storage:
    osd:
      migrationStatus:
        pending: 5
```
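The pending count reported in the status could be derived from the same label comparison described above. A minimal sketch (illustrative Python; the function name and dict shapes are assumptions):

```python
def migration_status(osd_deployments: list, desired_store: str) -> dict:
    """Count OSDs whose 'osd-store' label differs from the desired
    spec.storage.store.type and report them as pending migration.
    Hypothetical helper mirroring the status.storage.osd structure.
    """
    pending = sum(1 for osd in osd_deployments
                  if osd.get("osd-store") != desired_store)
    return {"migrationStatus": {"pending": pending}}
```

With five `bluestore` OSDs and the spec changed to a new store type, this yields `{"migrationStatus": {"pending": 5}}`, matching the status example above.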
- `osd.migrationStatus.pending`: Total number of OSDs that are pending migration.

The CephCluster `phase` should be set to `Progressing` while OSDs are migrating.

OSD replacement steps:
1. Identify the OSDs where the deployment label `osd-store: <osd store type>` does not match `spec.storage.store.type`.
2. If the PGs are not `active+clean`, do not proceed.
3. If the PGs are `active+clean` but a previous OSD migration is not completed, do not proceed.
4. If the PGs are `active+clean` and no migration is in progress, then select an OSD to be migrated.
5. Delete the OSD deployment, destroy the OSD (`ceph osd destroy {id} --yes-i-really-mean-it`) and prepare it again using the same OSD ID. Refer to Destroy OSD for details.
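The per-reconcile decision in steps 2-4 can be sketched as follows (illustrative Python; the function name and parameters are assumptions, not Rook's implementation):

```python
def select_osd_to_migrate(pgs_active_clean: bool,
                          migration_in_progress: bool,
                          pending_osd_ids: list):
    """Return the ID of the single OSD to migrate next, or None.

    Migration proceeds one OSD at a time, and only when all PGs are
    active+clean and no earlier migration is still underway.
    """
    if not pgs_active_clean:
        return None  # PGs not active+clean: do not proceed
    if migration_in_progress:
        return None  # previous OSD migration not completed: wait
    if not pending_osd_ids:
        return None  # nothing left to migrate
    return pending_osd_ids[0]  # migrate exactly one OSD per pass
```

Migrating one OSD at a time and waiting for `active+clean` between passes keeps data redundancy intact throughout the migration.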