# Ceph Operator Helm Chart

Documentation/Helm-Charts/operator-chart.md

<!--- Document is generated by `make helm-docs`. DO NOT EDIT. Edit the corresponding *.gotmpl.md file instead -->

Installs rook to create, configure, and manage Ceph clusters on Kubernetes.

## Introduction

This chart bootstraps a `rook-ceph-operator` deployment on a Kubernetes cluster using the Helm package manager.

## Prerequisites

* Helm 3.13+

See the Helm support matrix for more details.

## Installing

The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.

  1. Install the Helm chart
  2. Create a Rook cluster.

The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace (you will install your clusters into separate namespaces).

### Release

The release channel is the most recent release of Rook that is considered stable for the community.

```console
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
```

For example settings, see the next section or `values.yaml`.
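
For instance, a minimal `values.yaml` might look like the following. Every key comes from the configuration table below, but the values themselves are illustrative examples, not recommendations; in particular, the image tag shown is only a placeholder for a real Rook release:

```yaml
# Illustrative overrides for the rook-ceph operator chart -- adjust to your environment
image:
  tag: v1.19.5              # example only: pin a released operator image instead of the default "master"
logLevel: DEBUG             # one of ERROR, WARNING, INFO, DEBUG
enableDiscoveryDaemon: true # run the device discovery daemon
csi:
  provisionerReplicas: 1    # a single provisioner replica, e.g. for a small test cluster
```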

## Configuration

The following table lists the configurable parameters of the rook-operator chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `allowLoopDevices` | If true, loop devices are allowed to be used for osds in test clusters | `false` |
| `annotations` | Pod annotations | `{}` |
| `ceph-csi-operator.controllerManager.manager.env.csiServiceAccountPrefix` |  | `"ceph-csi-"` |
| `ceph-csi-operator.fullnameOverride` |  | `"ceph-csi"` |
| `ceph-csi-operator.nameOverride` |  | `"ceph-csi"` |
| `cephCommandsTimeoutSeconds` | The timeout for ceph commands in seconds | `"15"` |
| `containerSecurityContext` | Set the container security context for the operator | `{"capabilities":{"drop":["ALL"]},"runAsGroup":2016,"runAsNonRoot":true,"runAsUser":2016}` |
| `crds.enabled` | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see the disaster recovery guide to restore them. | `true` |
| `csi.attacher.repository` | Kubernetes CSI Attacher image repository | `"registry.k8s.io/sig-storage/csi-attacher"` |
| `csi.attacher.tag` | Attacher image tag | `"v4.11.0"` |
| `csi.cephFSAttachRequired` | Whether to skip any attach operation altogether for CephFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for CephFS RWO volumes. Refer to this issue for more details. | `true` |
| `csi.cephFSFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. Supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `"File"` |
| `csi.cephFSKernelMountOptions` | Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR | `nil` |
| `csi.cephFSPluginUpdateStrategy` | CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` |
| `csi.cephFSPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy. | `1` |
| `csi.cephcsi.repository` | Ceph CSI image repository | `"quay.io/cephcsi/cephcsi"` |
| `csi.cephcsi.tag` | Ceph CSI image tag | `"v3.16.2"` |
| `csi.cephfsLivenessMetricsPort` | CSI CephFS driver metrics port | `9081` |
| `csi.cephfsPodLabels` | Labels to add to the CSI CephFS Deployments and DaemonSets Pods | `nil` |
| `csi.clusterName` | Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster | `nil` |
| `csi.crossNamespaceVolumeDataSource.enabled` | Enable cross namespace volume data source provisioning | `false` |
| `csi.csiAddons.enabled` | Enable CSIAddons | `false` |
| `csi.csiAddons.repository` | CSIAddons sidecar image repository | `"quay.io/csiaddons/k8s-sidecar"` |
| `csi.csiAddons.tag` | CSIAddons sidecar image tag | `"v0.14.0"` |
| `csi.csiAddonsCephFSProvisionerPort` | CSI Addons server port for the Ceph FS provisioner | `9070` |
| `csi.csiAddonsPort` | CSI Addons server port | `9070` |
| `csi.csiAddonsRBDProvisionerPort` | CSI Addons server port for the RBD provisioner | `9070` |
| `csi.csiCephFSPluginResource` | CEPH CSI CephFS plugin resource requirement list | see values.yaml |
| `csi.csiCephFSPluginVolume` | The volume of the CephCSI CephFS plugin DaemonSet | `nil` |
| `csi.csiCephFSPluginVolumeMount` | The volume mounts of the CephCSI CephFS plugin DaemonSet | `nil` |
| `csi.csiCephFSProvisionerResource` | CEPH CSI CephFS provisioner resource requirement list | see values.yaml |
| `csi.csiDriverNamePrefix` | CSI driver name prefix for cephfs, rbd and nfs. | namespace name where rook-ceph operator is deployed |
| `csi.csiLeaderElectionLeaseDuration` | Duration in seconds that non-leader candidates will wait to force acquire leadership. | `137s` |
| `csi.csiLeaderElectionRenewDeadline` | Deadline in seconds that the acting leader will retry refreshing leadership before giving up. | `107s` |
| `csi.csiLeaderElectionRetryPeriod` | Retry period in seconds the LeaderElector clients should wait between tries of actions. | `26s` |
| `csi.csiNFSPluginResource` | CEPH CSI NFS plugin resource requirement list | see values.yaml |
| `csi.csiNFSProvisionerResource` | CEPH CSI NFS provisioner resource requirement list | see values.yaml |
| `csi.csiRBDPluginResource` | CEPH CSI RBD plugin resource requirement list | see values.yaml |
| `csi.csiRBDPluginVolume` | The volume of the CephCSI RBD plugin DaemonSet | `nil` |
| `csi.csiRBDPluginVolumeMount` | The volume mounts of the CephCSI RBD plugin DaemonSet | `nil` |
| `csi.csiRBDProvisionerResource` | CEPH CSI RBD provisioner resource requirement list. csi-omap-generator resources will be applied only if enableOMAPGenerator is set to true | see values.yaml |
| `csi.disableCsiDriver` | Disable the CSI driver. | `"false"` |
| `csi.enableCSIEncryption` | Enable Ceph CSI PVC encryption support | `false` |
| `csi.enableCSIHostNetwork` | Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance | `true` |
| `csi.enableCephfsDriver` | Enable Ceph CSI CephFS driver | `true` |
| `csi.enableCephfsSnapshotter` | Enable Snapshotter in CephFS provisioner pod | `true` |
| `csi.enableLiveness` | Enable Ceph CSI Liveness sidecar deployment | `false` |
| `csi.enableMetadata` | Enable adding volume metadata on the CephFS subvolumes and RBD images. Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images. Hence enable metadata is false by default | `false` |
| `csi.enableNFSSnapshotter` | Enable Snapshotter in NFS provisioner pod | `true` |
| `csi.enableOMAPGenerator` | OMAP generator generates the omap mapping between the PV name and the RBD image which helps CSI to identify the rbd images for CSI operations. CSI_ENABLE_OMAP_GENERATOR needs to be enabled when we are using rbd mirroring feature. By default OMAP generator is disabled and when enabled, it will be deployed as a sidecar with CSI provisioner pod, to enable set it to true. | `false` |
| `csi.enablePluginSelinuxHostMount` | Enable Host mount for /etc/selinux directory for Ceph CSI nodeplugins | `false` |
| `csi.enableRBDSnapshotter` | Enable Snapshotter in RBD provisioner pod | `true` |
| `csi.enableRbdDriver` | Enable Ceph CSI RBD driver | `true` |
| `csi.enableVolumeGroupSnapshot` | Enable volume group snapshot feature. This feature is enabled by default as long as the necessary CRDs are available in the cluster. | `true` |
| `csi.forceCephFSKernelClient` | Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the upgrade guide | `true` |
| `csi.grpcTimeoutInSeconds` | Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150 | `150` |
| `csi.imagePullPolicy` | Image pull policy | `"IfNotPresent"` |
| `csi.kubeApiBurst` | Burst to use while communicating with the kubernetes apiserver. | `nil` |
| `csi.kubeApiQPS` | QPS to use while communicating with the kubernetes apiserver. | `nil` |
| `csi.kubeletDirPath` | Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag) | `/var/lib/kubelet` |
| `csi.logLevel` | Set logging level for cephCSI containers maintained by the cephCSI. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | `0` |
| `csi.nfs.enabled` | Enable the nfs csi driver | `false` |
| `csi.nfsAttachRequired` | Whether to skip any attach operation altogether for NFS PVCs. See more details here. If set to false it skips the volume attachments and makes the creation of pods using the NFS PVC fast. **WARNING** It's highly discouraged to use this for NFS RWO volumes. Refer to this issue for more details. | `true` |
| `csi.nfsFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted. Supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `"File"` |
| `csi.nfsPluginUpdateStrategy` | CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` |
| `csi.nfsPodLabels` | Labels to add to the CSI NFS Deployments and DaemonSets Pods | `nil` |
| `csi.pluginNodeAffinity` | The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1] | `nil` |
| `csi.pluginPriorityClassName` | PriorityClassName to be set on csi driver plugin pods | `"system-node-critical"` |
| `csi.pluginTolerations` | Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet | `nil` |
| `csi.provisioner.repository` | Kubernetes CSI provisioner image repository | `"registry.k8s.io/sig-storage/csi-provisioner"` |
| `csi.provisioner.tag` | Provisioner image tag | `"v6.1.1"` |
| `csi.provisionerNodeAffinity` | The node labels for affinity of the CSI provisioner deployment [^1] | `nil` |
| `csi.provisionerPriorityClassName` | PriorityClassName to be set on csi driver provisioner pods | `"system-cluster-critical"` |
| `csi.provisionerReplicas` | Set replicas for csi provisioner deployment | `2` |
| `csi.provisionerTolerations` | Array of tolerations in YAML format which will be added to CSI provisioner deployment | `nil` |
| `csi.rbdAttachRequired` | Whether to skip any attach operation altogether for RBD PVCs. See more details here. If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast. **WARNING** It's highly discouraged to use this for RWO volumes as it can cause data corruption. csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on. Refer to this issue for more details. | `true` |
| `csi.rbdFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted. Supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `"File"` |
| `csi.rbdLivenessMetricsPort` | Ceph CSI RBD driver metrics port | `8080` |
| `csi.rbdPluginUpdateStrategy` | CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` |
| `csi.rbdPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. | `1` |
| `csi.rbdPodLabels` | Labels to add to the CSI RBD Deployments and DaemonSets Pods | `nil` |
| `csi.registrar.repository` | Kubernetes CSI registrar image repository | `"registry.k8s.io/sig-storage/csi-node-driver-registrar"` |
| `csi.registrar.tag` | Registrar image tag | `"v2.16.0"` |
| `csi.resizer.repository` | Kubernetes CSI resizer image repository | `"registry.k8s.io/sig-storage/csi-resizer"` |
| `csi.resizer.tag` | Resizer image tag | `"v2.1.0"` |
| `csi.rookUseCsiOperator` |  | `true` |
| `csi.serviceMonitor.enabled` | Enable ServiceMonitor for Ceph CSI drivers | `false` |
| `csi.serviceMonitor.interval` | Service monitor scrape interval | `"10s"` |
| `csi.serviceMonitor.labels` | ServiceMonitor additional labels | `{}` |
| `csi.serviceMonitor.namespace` | Use a different namespace for the ServiceMonitor | `nil` |
| `csi.sidecarLogLevel` | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. | `0` |
| `csi.snapshotter.repository` | Kubernetes CSI snapshotter image repository | `"registry.k8s.io/sig-storage/csi-snapshotter"` |
| `csi.snapshotter.tag` | Snapshotter image tag | `"v8.5.0"` |
| `csi.topology.domainLabels` | domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains | `nil` |
| `csi.topology.enabled` | Enable topology based provisioning | `false` |
| `currentNamespaceOnly` | Whether the operator should watch cluster CRD in its own namespace or not | `false` |
| `customHostnameLabel` | Custom label to identify node hostname. If not set kubernetes.io/hostname will be used | `nil` |
| `disableDeviceHotplug` | Disable automatic orchestration when new devices are discovered. | `false` |
| `discover.nodeAffinity` | The node labels for affinity of discover-agent [^1] | `nil` |
| `discover.podLabels` | Labels to add to the discover pods | `nil` |
| `discover.resources` | Add resources to discover daemon pods | `nil` |
| `discover.toleration` | Toleration for the discover pods. Options: NoSchedule, PreferNoSchedule or NoExecute | `nil` |
| `discover.tolerationKey` | The specific key of the taint to tolerate | `nil` |
| `discover.tolerations` | Array of tolerations in YAML format which will be added to discover deployment | `nil` |
| `discoverDaemonUdev` | Blacklist certain disks according to the regex provided. | `nil` |
| `discoveryDaemonInterval` | Set the discovery daemon device discovery interval (default to 60m) | `"60m"` |
| `enableDiscoveryDaemon` | Enable discovery daemon | `false` |
| `enableOBCWatchOperatorNamespace` | Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used | `true` |
| `enforceHostNetwork` | Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled | `false` |
| `hostpathRequiresPrivileged` | Runs Ceph Pods as privileged to be able to write to hostPaths in OpenShift with SELinux restrictions. | `false` |
| `image.pullPolicy` | Image pull policy | `"IfNotPresent"` |
| `image.repository` | Image | `"docker.io/rook/ceph"` |
| `image.tag` | Image tag | `master` |
| `imagePullSecrets` | imagePullSecrets option allows pulling docker images from a private docker registry. Option will be passed to all service accounts. | `nil` |
| `logLevel` | Global log level for the operator. Options: ERROR, WARNING, INFO, DEBUG | `"INFO"` |
| `monitoring.enabled` | Enable monitoring. Requires Prometheus to be pre-installed. Enabling will also create RBAC rules to allow Operator to create ServiceMonitors | `false` |
| `nodeSelector` | Kubernetes nodeSelector to add to the Deployment. | `{}` |
| `obcAllowAdditionalConfigFields` | Many OBC additional config fields may be risky for administrators to allow users control over. The safe and default-allowed fields are 'maxObjects' and 'maxSize'. Other fields should be considered risky. To allow all additional configs, use this value: "maxObjects,maxSize,bucketMaxObjects,bucketMaxSize,bucketPolicy,bucketLifecycle,bucketOwner" | `"maxObjects,maxSize"` |
| `obcProvisionerNamePrefix` | Specify the prefix for the OBC provisioner in place of the cluster namespace | ceph cluster namespace |
| `operatorPodLabels` | Custom pod labels for the operator | `{}` |
| `priorityClassName` | Set the priority class for the rook operator deployment if desired | `nil` |
| `rbacAggregate.enableOBCs` | If true, create a ClusterRole aggregated to user facing roles for objectbucketclaims | `false` |
| `rbacEnable` | If true, create & use RBAC resources | `true` |
| `reconcileConcurrentClusters` | Number of clusters the operator reconciles concurrently | `1` |
| `resources` | Pod resource requests & limits | `{"limits":{"memory":"512Mi"},"requests":{"cpu":"200m","memory":"128Mi"}}` |
| `revisionHistoryLimit` | The revision history limit for all pods created by Rook. If blank, the K8s default is 10. | `nil` |
| `scaleDownOperator` | If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. | `false` |
| `tolerations` | List of Kubernetes tolerations to add to the Deployment. | `[]` |
| `unreachableNodeTolerationSeconds` | Delay to use for the node.kubernetes.io/unreachable pod failure toleration to override the Kubernetes default of 5 minutes | `5` |
| `useOperatorHostNetwork` | If true, run rook operator on the host network | `nil` |
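
Any parameter in the table can also be overridden at install time with `--set` instead of a `values.yaml` file. For example (the chosen values are illustrative only, not recommendations):

```console
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph \
  --set enableDiscoveryDaemon=true \
  --set csi.provisionerReplicas=1
```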

## Development Build

To deploy from a local build from your development environment:

1. Build the Rook docker image: `make`
2. Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands
3. Install the helm chart:

```console
cd deploy/charts/rook-ceph
helm install --create-namespace --namespace rook-ceph rook-ceph .
```
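
Step 2 varies by environment. On a cluster whose nodes are reachable over SSH, it might look like the sketch below; the image tag, user, and node name are assumptions (check the tag actually produced by `make` with `docker images`):

```console
# Export the locally built image, copy it to a node, and load it there
docker save rook/ceph:master | gzip > rook-ceph.tar.gz
scp rook-ceph.tar.gz user@k8s-node:/tmp/
ssh user@k8s-node 'gunzip -c /tmp/rook-ceph.tar.gz | docker load'
```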

## Uninstalling the Chart

To see the currently installed Rook chart:

```console
helm ls --namespace rook-ceph
```

To uninstall/delete the `rook-ceph` deployment:

```console
helm delete --namespace rook-ceph rook-ceph
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

After uninstalling, you may want to clean up the CRDs as described in the teardown documentation.

## Footnotes

[^1]: `nodeAffinity` and `*NodeAffinity` options should have the format `"role=storage,rook; storage=ceph"` or `storage;role=rook-example` or `storage;` (checks only for presence of key)
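
As a hedged illustration of that format, the affinity settings could be given in `values.yaml` like this (the label keys and values are hypothetical; match them to the labels actually applied to your nodes):

```yaml
# Hypothetical node labels -- illustrative only
discover:
  nodeAffinity: "role=storage,rook; storage=ceph" # role is storage or rook, and storage=ceph
csi:
  provisionerNodeAffinity: "storage;"             # checks only that the "storage" label key is present
```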