## ⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

# NFS Server Provisioner

NFS Server Provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.

This chart will deploy the Kubernetes external-storage project's nfs-provisioner. This provisioner includes a built-in NFS server, and is not intended for connecting to a pre-existing NFS server. If you have a pre-existing NFS server, please consider using the NFS Client Provisioner instead.

## DEPRECATION NOTICE

This chart is deprecated and no longer supported.

## TL;DR;

```console
$ helm install stable/nfs-server-provisioner
```

Warning: While installing in the default configuration will work, any data stored on the dynamic volumes provisioned by this chart will not be persistent!

## Introduction

This chart bootstraps an nfs-server-provisioner deployment on a Kubernetes cluster using the Helm package manager.

## Installing the Chart

To install the chart with the release name my-release:

```console
$ helm install stable/nfs-server-provisioner --name my-release
```

The command deploys nfs-server-provisioner on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
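Once deployed, workloads can request shared storage from the provisioner. A minimal sketch of a PersistentVolumeClaim against the chart's default StorageClass (named `nfs` per the configuration table below); the claim name is a placeholder:

```yaml
# Hypothetical claim; assumes the chart's default StorageClass name "nfs".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany   # NFS volumes support shared read-write access from many pods
  resources:
    requests:
      storage: 1Mi
```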

## Uninstalling the Chart

To uninstall/delete the my-release deployment:

```console
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the nfs-server-provisioner chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `extraArgs` | Additional command line arguments | `{}` |
| `imagePullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.repository` | The image repository to pull from | `quay.io/kubernetes_incubator/nfs-provisioner` |
| `image.tag` | The image tag to pull from | `v2.2.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `service.type` | Service type | `ClusterIP` |
| `service.nfsPort` | TCP port on which the nfs-server-provisioner NFS service is exposed | `2049` |
| `service.mountdPort` | TCP port on which the nfs-server-provisioner mountd service is exposed | `20048` |
| `service.rpcbindPort` | TCP port on which the nfs-server-provisioner RPC service is exposed | `111` |
| `service.nfsNodePort` | If `service.type` is `NodePort` and this is non-empty, sets the node port of the NFS service | `nil` |
| `service.mountdNodePort` | If `service.type` is `NodePort` and this is non-empty, sets the node port of the mountd service | `nil` |
| `service.rpcbindNodePort` | If `service.type` is `NodePort` and this is non-empty, sets the node port of the RPC service | `nil` |
| `persistence.enabled` | Enable config persistence using a PVC | `false` |
| `persistence.storageClass` | PVC Storage Class for the config volume | `nil` |
| `persistence.accessMode` | PVC Access Mode for the config volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for the config volume | `1Gi` |
| `storageClass.create` | Enable creation of a StorageClass to consume this nfs-server-provisioner instance | `true` |
| `storageClass.provisionerName` | The provisioner name for the StorageClass | `cluster.local/{release-name}-{chart-name}` |
| `storageClass.defaultClass` | Whether to set the created StorageClass as the cluster's default StorageClass | `false` |
| `storageClass.name` | The name to assign the created StorageClass | `nfs` |
| `storageClass.allowVolumeExpansion` | Allow the base storage PVC to be dynamically resized (set to `null` to disable) | `true` |
| `storageClass.parameters` | Parameters for the StorageClass | `{}` |
| `storageClass.mountOptions` | Mount options for the StorageClass | `["vers=3"]` |
| `storageClass.reclaimPolicy` | `ReclaimPolicy` field of the class, which can be either `Delete` or `Retain` | `Delete` |
| `resources` | Resource limits for the nfs-server-provisioner pod | `{}` |
| `nodeSelector` | Map of node labels for pod assignment | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `affinity` | Map of node/pod affinities | `{}` |
| `podSecurityContext` | Security context settings for the nfs-server-provisioner pod (see https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```console
$ helm install stable/nfs-server-provisioner --name my-release \
  --set=image.tag=v1.0.8,resources.limits.cpu=200m
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
$ helm install stable/nfs-server-provisioner --name my-release -f values.yaml
```

Tip: You can use the default values.yaml as an example.
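As an illustration of the `service.*` parameters listed above, here is a hypothetical values.yaml sketch that exposes the NFS endpoints via NodePort (the port numbers are placeholders chosen from the default NodePort range and are not defaults of this chart):

```yaml
# Hypothetical values.yaml fragment; assumes the chosen node ports are free
# and that the cluster's NodePort range includes them.
service:
  type: NodePort
  nfsNodePort: 30249      # external port for the NFS service (cluster port 2049)
  mountdNodePort: 30048   # external port for mountd (cluster port 20048)
  rpcbindNodePort: 30111  # external port for rpcbind (cluster port 111)
```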

## Persistence

The nfs-server-provisioner image stores its configuration data, and importantly, the dynamic volumes it manages, at the /export path of the container.

The chart mounts a Persistent Volume at this location. The volume can be created using dynamic volume provisioning. However, it is highly recommended to explicitly specify a StorageClass to use rather than accepting the cluster's default, or to pre-create a volume for each replica.

If this chart is deployed with more than 1 replica, with storageClass.defaultClass=true and persistence.storageClass left unset, then the 2nd and subsequent replicas will end up using the 1st replica to provision their storage, which is almost never the desired outcome.

The following is a recommended configuration example when another storage class exists to provide persistence:

```yaml
persistence:
  enabled: true
  storageClass: "standard"
  size: 200Gi

storageClass:
  defaultClass: true
```

On many clusters, the cloud provider integration will create a "standard" storage class which will create a volume (e.g. a Google Compute Engine Persistent Disk or Amazon EBS volume) to provide persistence.


The following is a recommended configuration example when another storage class does not exist to provide persistence:

```yaml
persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true
```

In this configuration, a PersistentVolume must be created for each replica to use. Installing the Helm chart and then inspecting the PersistentVolumeClaims created will provide the necessary names for your PersistentVolumes to bind to.

An example of the necessary PersistentVolume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "data-nfs-server-provisioner-0"
  claimRef:
    namespace: kube-system
    name: data-nfs-server-provisioner-0
```

The following is a recommended configuration example for running on bare metal with a hostPath volume:

```yaml
persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true

nodeSelector:
  kubernetes.io/hostname: {node-name}
```

In this configuration, a PersistentVolume must be created for each replica to use. Installing the Helm chart and then inspecting the PersistentVolumeClaims created will provide the necessary names for your PersistentVolumes to bind to.

An example of the necessary PersistentVolume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: kube-system
    name: data-nfs-server-provisioner-0
```

Warning: hostPath volumes cannot be migrated between machines by Kubernetes; as such, in this example we have restricted the nfs-server-provisioner pod to run on a single node. This is unsuitable for production deployments.
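Once a claim provisioned by this chart is bound, multiple pods can mount it simultaneously, which is the main reason to use NFS-backed storage. A minimal sketch of a consumer pod (the pod and claim names are hypothetical placeholders):

```yaml
# Hypothetical consumer pod; assumes a bound PVC named "example-claim"
# provisioned by this chart's StorageClass.
apiVersion: v1
kind: Pod
metadata:
  name: example-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # shared NFS-backed directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-claim
```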