# Prerequisites
Rook can be installed on any existing Kubernetes cluster as long as it meets the minimum version and Rook is granted the required privileges (see below for more information).
Kubernetes versions v1.30 through v1.35 are supported.
Architectures supported are amd64 / x86_64 and arm64.
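As a quick check before installing (plain `kubectl` commands shown as an illustration, not Rook-specific tooling), confirm the server version and the architecture of each node:

```console
# Client and server versions; the server version must be in the supported range
$ kubectl version
# One line per node with its CPU architecture (amd64 / arm64)
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
```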
To configure the Ceph storage cluster, at least one of these local storage types is required:

- Raw devices (no partitions or formatted filesystem)
- Raw partitions (no formatted filesystem)
- LVM Logical Volumes (no formatted filesystem)
- Persistent Volumes available from a storage class in `block` mode

Confirm whether the partitions or devices are formatted with filesystems with the following command:
```console
$ lsblk -f
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       >eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb
```
If the `FSTYPE` field is not empty, there is a filesystem on top of the corresponding device. In this example, `vdb` is available to Rook, while `vda` and its partitions have a filesystem and are not available.
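If a device you intend to give to Rook still carries an old filesystem signature, it will not be available. Clearing the signature is outside these prerequisites, but for illustration it can be done with the standard util-linux `wipefs` tool (destructive; `/dev/vdb` here is just the example device from the listing above):

```console
# Erases ALL filesystem/RAID/partition-table signatures on the device -- data is lost
$ sudo wipefs --all /dev/vdb
```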
Ceph OSDs have a dependency on LVM in the following scenarios:

- If encryption is enabled (`encryptedDevice: "true"` in the cluster CR)
- A `metadata` device is specified
- `osdsPerDevice` is greater than 1

LVM is not required for OSDs in these scenarios:

- OSDs are created on raw devices or partitions
- OSDs are created on PVCs using the `storageClassDeviceSets`

If LVM is required, LVM needs to be available on the hosts where OSDs will be running. The sketch after this list shows where these settings live in the cluster CR.
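For illustration, a minimal sketch of the `CephCluster` CR fields that trigger the LVM dependency (the field names come from Rook's cluster spec; the concrete values and the device name `nvme0n1` are assumptions, not recommendations):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      encryptedDevice: "true"    # encryption at rest: OSDs are backed by LVM
      osdsPerDevice: "2"         # more than one OSD per device: LVM required
      metadataDevice: "nvme0n1"  # separate metadata device: LVM required
```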
Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in the Kubernetes cluster to run Ceph OSDs. Without it, Rook will still be able to create the Ceph OSDs successfully, but when a node is rebooted the OSD pods running on that node will fail to start. Install LVM with your Linux distribution's package manager. For example:
CentOS:

```console
sudo yum install -y lvm2
```

Ubuntu:

```console
sudo apt-get install -y lvm2
```

RancherOS:

```yaml
runcmd:
- [ "vgchange", "-ay" ]
```
Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimized OS (COS) does not have RBD. Test your Kubernetes nodes by running `modprobe rbd`. If the rbd module is 'not found', rebuild the kernel to include the rbd module, install a newer kernel, or choose a different Linux distribution.
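A quick per-node check might look like this (plain shell run on each node, shown for illustration):

```console
# Loads the rbd module; exits non-zero if the kernel lacks it
$ sudo modprobe rbd
# Verify that the module is now loaded
$ lsmod | grep '^rbd'
```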
Rook's default RBD configuration specifies only the `layering` feature, for broad compatibility with older kernels. If your Kubernetes nodes run a 5.4 or later kernel, additional feature flags can be enabled in the storage class. The `fast-diff` and `object-map` features are especially useful.

```yaml
imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
```
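For context, this is a sketch of where `imageFeatures` sits in an RBD StorageClass, modeled on Rook's example manifests (`rook-ceph-block` and `replicapool` are the usual sample names, the provisioner prefix depends on the operator namespace, and the CSI secret parameters a real storage class also needs are omitted for brevity):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph   # namespace of the Rook cluster
  pool: replicapool      # CephBlockPool backing the volumes
  imageFormat: "2"
  imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
reclaimPolicy: Delete
allowVolumeExpansion: true
```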
If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. On kernels older than 4.17, the requested PVC sizes will not be enforced; storage quotas are only enforced on newer kernels.
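To check the kernel version on every node from the cluster (standard `kubectl`, shown for convenience):

```console
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion
```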
Specific configurations are required for some distributions.

For NixOS, the kernel modules are found in the non-standard path `/run/current-system/kernel-modules/lib/modules/`, and they are symlinked inside the likewise non-standard path `/nix`. Rook containers require read access to those locations to load the required modules, so the locations have to be bind-mounted as volumes in the CephFS and RBD plugin pods.
If installing Rook with Helm, uncomment these example settings in `values.yaml`:

- `csi.csiCephFSPluginVolume`
- `csi.csiCephFSPluginVolumeMount`
- `csi.csiRBDPluginVolume`
- `csi.csiRBDPluginVolumeMount`

If deploying without Helm, add those same values to the settings in the `rook-ceph-operator-config` ConfigMap found in `operator.yaml` (a sketch follows the list below):
- `CSI_CEPHFS_PLUGIN_VOLUME`
- `CSI_CEPHFS_PLUGIN_VOLUME_MOUNT`
- `CSI_RBD_PLUGIN_VOLUME`
- `CSI_RBD_PLUGIN_VOLUME_MOUNT`
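As an illustration, the NixOS bind mounts might be expressed like this (a sketch modeled on the commented examples in Rook's `operator.yaml`; the volume names `lib-modules` and `host-nix` are placeholders):

```yaml
CSI_CEPHFS_PLUGIN_VOLUME: |
  - name: lib-modules
    hostPath:
      path: /run/current-system/kernel-modules/lib/modules/
  - name: host-nix
    hostPath:
      path: /nix
CSI_CEPHFS_PLUGIN_VOLUME_MOUNT: |
  - name: host-nix
    mountPath: /nix
    readOnly: true
```

The same pattern applies to the `CSI_RBD_PLUGIN_VOLUME` and `CSI_RBD_PLUGIN_VOLUME_MOUNT` settings.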
If using containerd, remove `LimitNOFILE` from the containerd service config to avoid issues like slow ceph commands or mons falling out of quorum. On NixOS:

```nix
systemd.services.containerd.serviceConfig = {
  LimitNOFILE = lib.mkForce null;
};
```