# Ceph Cluster Helm Chart

<!--- Document is generated by `make helm-docs`. DO NOT EDIT. Edit the corresponding *.gotmpl.md file instead -->

Creates Rook resources to configure a Ceph cluster using the Helm package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as:

- CephCluster, CephFilesystem, and CephObjectStore CRs
- Storage classes to expose Ceph RBD volumes, CephFS volumes, and RGW buckets
- Ingress for external access to the dashboard
- Toolbox

## Prerequisites

- A Kubernetes cluster with Helm installed
- The Rook Operator Helm Chart installed first (the cluster chart depends on the CRDs the operator chart creates)

## Installing

The `helm install` command deploys Rook on the Kubernetes cluster in the default configuration. The [Configuration](#configuration) section lists the parameters that can be configured during installation. It is recommended that the Rook operator be installed into the `rook-ceph` namespace. The clusters can be installed into the same namespace as the operator or a separate namespace.

Before installing, review `values.yaml` to confirm whether the default settings need to be updated:

- If the operator was installed in a namespace other than `rook-ceph`, the namespace must be set in the `operatorNamespace` variable.
- Set the desired settings in the `cephClusterSpec`. The defaults are only an example and are not likely to apply to your cluster.
- The `monitoring` section should be removed from the `cephClusterSpec`, as it is specified separately in the helm settings.
- The default values for `cephBlockPools`, `cephFileSystems`, and `cephObjectStores` will create one of each, and their corresponding storage classes.
- All Ceph components now have default values for the pod resources. The resources may need to be adjusted in production clusters depending on the load. The resources can also be disabled if Ceph should not be limited (e.g. test clusters). A minimal override sketch follows this list.
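
As a sketch only (the keys come from the tables below; the values are illustrative, not recommendations), a minimal override file might look like this:

```yaml
# Hypothetical minimal values.yaml -- only settings that differ from the chart defaults.
operatorNamespace: rook-ceph   # namespace where the Rook operator chart was installed

toolbox:
  enabled: true                # deploy the Ceph debugging toolbox pod

monitoring:
  enabled: false               # set to true only if Prometheus is pre-installed
```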

### Release

The release channel is the most recent release of Rook that is considered stable for the community.

The example install assumes you have first installed the Rook Operator Helm Chart and created your customized `values.yaml`.

!!! tip
    Instead of copying the entire default `values.yaml`, create a new `values.yaml` file that only includes the settings you want to override.

```console
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
```

!!! note
    `--namespace` specifies the CephCluster namespace, which may be different from the Rook operator namespace.
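
Once the install completes, a quick sanity check (plain `kubectl`, not part of the chart) is to confirm the CephCluster CR was created and watch its phase:

```console
kubectl --namespace rook-ceph get cephcluster
```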

## Configuration

The following table lists the configurable parameters of the rook-ceph-cluster chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `cephBlockPools` | A list of CephBlockPool configurations to deploy | See below |
| `cephBlockPoolsVolumeSnapshotClass` | Settings for the block pool snapshot class | See RBD Snapshots |
| `cephClusterMetadata.annotations` |  | `{}` |
| `cephClusterMetadata.labels` |  | `{}` |
| `cephClusterSpec` | Cluster configuration. | See below |
| `cephFileSystemVolumeSnapshotClass` | Settings for the filesystem snapshot class | See CephFS Snapshots |
| `cephFileSystems` | A list of CephFileSystem configurations to deploy | See below |
| `cephImage.allowUnsupported` |  | `false` |
| `cephImage.repository` |  | `"quay.io/ceph/ceph"` |
| `cephImage.tag` |  | `"v19.2.3"` |
| `cephObjectStores` | A list of CephObjectStore configurations to deploy | See below |
| `clusterName` | The metadata.name of the CephCluster CR | The same as the namespace |
| `configOverride` | Cluster ceph.conf override | `nil` |
| `csiDriverNamePrefix` | CSI driver name prefix for cephfs, rbd and nfs. | namespace name where rook-ceph operator is deployed |
| `ingress.dashboard` | Enable an ingress for the ceph-dashboard | `{}` |
| `kubeVersion` | Optional override of the target kubernetes version | `nil` |
| `monitoring.createPrometheusRules` | Whether to create the Prometheus rules for Ceph alerts | `false` |
| `monitoring.enabled` | Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors. Monitoring requires Prometheus to be pre-installed | `false` |
| `monitoring.metricsDisabled` | Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled | `false` |
| `monitoring.prometheusRule.annotations` | Annotations applied to PrometheusRule | `{}` |
| `monitoring.prometheusRule.labels` | Labels applied to PrometheusRule | `{}` |
| `monitoring.prometheusRuleOverrides` | Edit Prometheus rules for Ceph alerts | `{}` |
| `monitoring.rulesNamespaceOverride` | The namespace in which to create the prometheus rules, if different from the rook cluster namespace. If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, the namespace with prometheus deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions. | `nil` |
| `operatorNamespace` | Namespace of the main rook operator | `"rook-ceph"` |
| `route.dashboard` | Enable an HTTPRoute for the ceph-dashboard | `{}` |
| `toolbox.affinity` | Toolbox affinity | `{}` |
| `toolbox.containerSecurityContext` | Toolbox container security context | `{"capabilities":{"drop":["ALL"]},"runAsGroup":2016,"runAsNonRoot":true,"runAsUser":2016}` |
| `toolbox.enabled` | Enable Ceph debugging pod deployment. See toolbox | `false` |
| `toolbox.image` | Toolbox image, defaults to the image used by the Ceph cluster | `nil` |
| `toolbox.labels` | Toolbox labels | `{}` |
| `toolbox.priorityClassName` | Set the priority class for the toolbox if desired | `nil` |
| `toolbox.resources` | Toolbox resources | `{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}}` |
| `toolbox.tolerations` | Toolbox tolerations | `[]` |

### Ceph Cluster Spec

The `CephCluster` CRD takes its spec from `cephClusterSpec.*`. This is not an exhaustive list of parameters. For the full list, see the Cluster CRD topic.

The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (`cluster.yaml`). For a different configuration such as a PVC-based cluster (`cluster-on-pvc.yaml`), external cluster (`cluster-external.yaml`), or stretch cluster (`cluster-stretched.yaml`), replace this entire `cephClusterSpec` with the specs from those examples.
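
For illustration, a host-based `cephClusterSpec` fragment might look like the following sketch; the field names are from the Cluster CRD, and the values are examples only:

```yaml
cephClusterSpec:
  dataDirHostPath: /var/lib/rook   # host path for Rook config and mon data
  mon:
    count: 3                       # odd number for quorum; example only
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  storage:
    useAllNodes: true              # consume all schedulable nodes
    useAllDevices: true            # consume all empty raw devices
```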

### Ceph Block Pools

The `cephBlockPools` array in the values file will define a list of CephBlockPool as described in the table below.

| Parameter | Description | Default |
| --- | --- | --- |
| `name` | The name of the CephBlockPool | `ceph-blockpool` |
| `spec` | The CephBlockPool spec, see the CephBlockPool documentation. | `{}` |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephBlockPool | `true` |
| `storageClass.isDefault` | Whether the storage class will be the default storage class for PVCs. See PersistentVolumeClaim documentation for details. | `true` |
| `storageClass.name` | The name of the storage class | `ceph-block` |
| `storageClass.annotations` | Additional storage class annotations | `{}` |
| `storageClass.labels` | Additional storage class labels | `{}` |
| `storageClass.parameters` | See Block Storage documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `storageClass.allowVolumeExpansion` | Whether volume expansion is allowed by default. | `true` |
| `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` |
| `storageClass.allowedTopologies` | Specifies the allowedTopologies for storageClass | `[]` |
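
For example, a `cephBlockPools` entry that mirrors the defaults in the table above might look like this sketch (the `spec` fields come from the CephBlockPool CRD; the values are illustrative):

```yaml
cephBlockPools:
  - name: ceph-blockpool
    spec:
      failureDomain: host        # spread replicas across separate hosts
      replicated:
        size: 3                  # three copies of each object
    storageClass:
      enabled: true
      name: ceph-block
      isDefault: true
      reclaimPolicy: Delete
      allowVolumeExpansion: true
```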

### Ceph File Systems

The `cephFileSystems` array in the values file will define a list of CephFileSystem as described in the table below.

| Parameter | Description | Default |
| --- | --- | --- |
| `name` | The name of the CephFileSystem | `ceph-filesystem` |
| `spec` | The CephFileSystem spec, see the CephFilesystem CRD documentation. | see values.yaml |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephFileSystem | `true` |
| `storageClass.name` | The name of the storage class | `ceph-filesystem` |
| `storageClass.annotations` | Additional storage class annotations | `{}` |
| `storageClass.labels` | Additional storage class labels | `{}` |
| `storageClass.pool` | The name of the data pool, without the filesystem name prefix | `data0` |
| `storageClass.parameters` | See Shared Filesystem documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` |
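
Similarly, a `cephFileSystems` entry might look like the following sketch (fields from the CephFilesystem CRD; note that `storageClass.pool` matches the data pool name `data0` without the filesystem-name prefix):

```yaml
cephFileSystems:
  - name: ceph-filesystem
    spec:
      metadataPool:
        replicated:
          size: 3
      dataPools:
        - name: data0            # referenced by storageClass.pool below
          failureDomain: host
          replicated:
            size: 3
      metadataServer:
        activeCount: 1
        activeStandby: true
    storageClass:
      enabled: true
      name: ceph-filesystem
      pool: data0                # data pool name without the filesystem prefix
```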

### Ceph Object Stores

The `cephObjectStores` array in the values file will define a list of CephObjectStore as described in the table below.

| Parameter | Description | Default |
| --- | --- | --- |
| `name` | The name of the CephObjectStore | `ceph-objectstore` |
| `spec` | The CephObjectStore spec, see the CephObjectStore CRD documentation. | see values.yaml |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephObjectStore | `true` |
| `storageClass.name` | The name of the storage class | `ceph-bucket` |
| `storageClass.annotations` | Additional storage class annotations | `{}` |
| `storageClass.labels` | Additional storage class labels | `{}` |
| `storageClass.parameters` | See Object Store storage class documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `ingress.enabled` | Enable an ingress for the object store | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.host.name` | Ingress hostname | `""` |
| `ingress.host.path` | Ingress path prefix | `/` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `ingress.ingressClassName` | Ingress class name | `""` |
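
And a `cephObjectStores` entry might look like this sketch (fields from the CephObjectStore CRD; the erasure-coding values are illustrative):

```yaml
cephObjectStores:
  - name: ceph-objectstore
    spec:
      metadataPool:
        failureDomain: host
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        erasureCoded:
          dataChunks: 2          # illustrative EC profile
          codingChunks: 1
      gateway:
        port: 80
        instances: 1
    storageClass:
      enabled: true
      name: ceph-bucket
```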

## Existing Clusters

If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster:

  1. Extract the `spec` section of your existing CephCluster CR and copy it to the `cephClusterSpec` section in `values.yaml`.

  2. Add the following annotations and label to your existing CephCluster CR:

```yaml
annotations:
  meta.helm.sh/release-name: rook-ceph-cluster
  meta.helm.sh/release-namespace: rook-ceph
labels:
  app.kubernetes.io/managed-by: Helm
```
  3. Run the `helm install` command in the [Installing](#installing) section to create the chart.

  4. In the future when updates to the cluster are needed, ensure the `values.yaml` always contains the desired CephCluster spec.
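
For step 2, one way to apply the annotations and label without editing the CR by hand is with `kubectl`, assuming the CephCluster CR is named `rook-ceph` (the name defaults to the namespace):

```console
kubectl -n rook-ceph annotate cephcluster rook-ceph \
  meta.helm.sh/release-name=rook-ceph-cluster \
  meta.helm.sh/release-namespace=rook-ceph
kubectl -n rook-ceph label cephcluster rook-ceph \
  app.kubernetes.io/managed-by=Helm
```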

## Development Build

To deploy from a local build in your development environment, there are two steps:

  1. Deploy the operator chart, in particular to get the CRDs.
  2. Deploy the cluster chart:

```console
cd deploy/charts/rook-ceph-cluster
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster -f values.yaml .
```

## Uninstalling the Chart

To see the currently installed Rook chart:

```console
helm ls --namespace rook-ceph
```

To uninstall/delete the rook-ceph-cluster chart:

```console
helm delete --namespace rook-ceph rook-ceph-cluster
```

The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (`/var/lib/rook` by default) and on OSD raw devices is kept. To reuse disks, you will have to wipe them before recreating the cluster.
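
As a sketch of what wiping a disk involves (destructive; `/dev/sdX` is a placeholder for your OSD device; these commands follow the Rook teardown guidance):

```console
sgdisk --zap-all /dev/sdX                                       # destroy the partition table
dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct,dsync  # zero the start of the disk
```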

See the teardown documentation for more information.