MinIO

NOTICE: This chart has moved!

Due to the deprecation and obsoletion plan for the Helm charts repository, this chart has moved to a new repository. The source for the MinIO chart now lives in the MinIO Helm Charts repository, and the chart is hosted at https://hub.helm.sh/charts?q=minio.

MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

MinIO supports distributed mode. In distributed mode, you can pool multiple drives (even on different machines) into a single object storage server.

For more detailed documentation, please visit here.

Introduction

This chart bootstraps a MinIO deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.4+ with Beta APIs enabled for default standalone mode.
  • Kubernetes 1.5+ with Beta APIs enabled to run MinIO in distributed mode.
  • PV provisioner support in the underlying infrastructure.

Installing the Chart

Install this chart using:

```bash
$ helm install stable/minio
```

The command deploys MinIO on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Release name

An instance of a chart running in a Kubernetes cluster is called a release. Each release is identified by a unique name within the cluster. Helm automatically assigns a unique release name after installing the chart. You can also set your preferred name with the --name flag:

```bash
$ helm install --name my-release stable/minio
```

Access and Secret keys

By default, a pre-generated access key and secret key will be used. To override the default keys, pass the access and secret keys as arguments to helm install:

```bash
$ helm install --set accessKey=myaccesskey,secretKey=mysecretkey \
    stable/minio
```
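
The same override can be kept in a values file instead of `--set` flags; a minimal sketch (the file name and key values are illustrative placeholders):

```yaml
# custom-values.yaml -- override the pre-generated credentials
accessKey: myaccesskey
secretKey: mysecretkey
```

Pass it with `helm install -f custom-values.yaml stable/minio`.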

Updating MinIO configuration via Helm

ConfigMap allows injecting containers with configuration data even while a Helm release is deployed.

To update your MinIO server configuration while it is deployed in a release, you need to

  1. Check all the configurable values in the MinIO chart using `helm inspect values stable/minio`.
  2. Override the `minio_server_config` settings in a YAML formatted file, and then pass that file to `helm upgrade -f config.yaml stable/minio`.
  3. Restart the MinIO server(s) for the changes to take effect.
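
For step 2, a sketch of such an override file (the variable name comes from the chart's default `environment` values; the file name `config.yaml` and the `10s` value are illustrative):

```yaml
# config.yaml -- values applied with `helm upgrade -f config.yaml <release> stable/minio`
environment:
  MINIO_API_READY_DEADLINE: "10s"
```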

You can also check the history of upgrades to a release using helm history my-release. Replace my-release with the actual release name.

Uninstalling the Chart

Assuming your release is named my-release, delete it using the command:

```bash
$ helm delete my-release
```

or, with Helm 3:

```bash
$ helm uninstall my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Upgrading the Chart

You can use Helm to update the MinIO version in a live release. Assuming your release is named my-release, get the current values using the command:

```bash
$ helm get values my-release > old_values.yaml
```

Then change the image.tag field in the old_values.yaml file to the MinIO image tag you want to use. Now update the chart using:

```bash
$ helm upgrade -f old_values.yaml my-release stable/minio
```
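
After editing, the relevant part of old_values.yaml might look like this (the tag shown is the chart's default; substitute the release you want to upgrade to):

```yaml
# old_values.yaml -- bump image.tag, then run `helm upgrade -f old_values.yaml ...`
image:
  repository: minio/minio
  tag: RELEASE.2020-06-14T18-32-17Z
```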

Default upgrade strategies are specified in the values.yaml file. Update these fields if you'd like to use a different strategy.

Configuration

The following table lists the configurable parameters of the MinIO chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `nameOverride` | Provide a name in place of `minio` | `""` |
| `fullnameOverride` | Provide a name to substitute for the full names of resources | `""` |
| `image.repository` | Image repository | `minio/minio` |
| `image.tag` | MinIO image tag. Possible values listed here. | `RELEASE.2020-06-14T18-32-17Z` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `imagePullSecrets` | List of container registry secrets | `[]` |
| `mcImage.repository` | Client image repository | `minio/mc` |
| `mcImage.tag` | `mc` image tag. Possible values listed here. | `RELEASE.2020-05-28T23-43-36Z` |
| `mcImage.pullPolicy` | `mc` image pull policy | `IfNotPresent` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.labels` | Ingress labels | `{}` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.hosts` | Ingress accepted hostnames | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `mode` | MinIO server mode (`standalone` or `distributed`) | `standalone` |
| `extraArgs` | Additional command line arguments to pass to the MinIO server | `[]` |
| `replicas` | Number of nodes (applicable only for MinIO distributed mode) | `4` |
| `zones` | Number of zones (applicable only for MinIO distributed mode) | `1` |
| `drivesPerNode` | Number of drives per node (applicable only for MinIO distributed mode) | `1` |
| `existingSecret` | Name of existing secret with access and secret key | `""` |
| `accessKey` | Default access key (5 to 20 characters) | `AKIAIOSFODNN7EXAMPLE` |
| `secretKey` | Default secret key (8 to 40 characters) | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| `certsPath` | Default certs path location | `/etc/minio/certs` |
| `configPathmc` | Default config file location for the MinIO client `mc` | `/etc/minio/mc` |
| `mountPath` | Default mount location for the persistent drive | `/export` |
| `bucketRoot` | Directory from where MinIO should serve buckets | Value of `.mountPath` |
| `clusterDomain` | Domain name of the Kubernetes cluster where the pod is running | `cluster.local` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.port` | Kubernetes port where the service is exposed | `9000` |
| `service.externalIPs` | Service external IP addresses | `nil` |
| `service.annotations` | Service annotations | `{}` |
| `serviceAccount.create` | Toggle creation of a new service account | `true` |
| `serviceAccount.name` | Name of the service account to create and/or use | `""` |
| `persistence.enabled` | Use a persistent volume to store data | `true` |
| `persistence.size` | Size of the persistent volume claim | `500Gi` |
| `persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.storageClass` | Storage class name of the PVC | `nil` |
| `persistence.accessMode` | `ReadWriteOnce` or `ReadOnly` | `ReadWriteOnce` |
| `persistence.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `resources` | Memory resource requests | Memory: `4Gi` |
| `priorityClassName` | Pod priority settings | `""` |
| `securityContext.enabled` | Enable to run containers as non-root. NOTE: if `persistence.enabled=false` then securityContext will be automatically disabled | `true` |
| `securityContext.runAsUser` | User ID of the user for the container | `1000` |
| `securityContext.runAsGroup` | Group ID of the user for the container | `1000` |
| `securityContext.fsGroup` | Group ID of the persistent volume mount for the container | `1000` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `podAnnotations` | Pod annotations | `{}` |
| `podLabels` | Pod labels | `{}` |
| `tls.enabled` | Enable TLS for the MinIO server | `false` |
| `tls.certSecret` | Kubernetes Secret with `public.crt` and `private.key` files | `""` |
| `livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `5` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `5` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `1` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `1` |
| `readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `60` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `5` |
| `readinessProbe.timeoutSeconds` | When the probe times out (should be 1s higher than your `MINIO_API_READY_DEADLINE` timeout) | `6` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `defaultBucket.enabled` | If set to `true`, a bucket will be created after MinIO installs | `false` |
| `defaultBucket.name` | Bucket name | `bucket` |
| `defaultBucket.policy` | Bucket policy | `none` |
| `defaultBucket.purge` | Purge the bucket if it already exists | `false` |
| `buckets` | List of buckets to create after MinIO installs | `[]` |
| `makeBucketJob.annotations` | Additional annotations for the Kubernetes Batch job (`make-bucket-job`) | `""` |
| `s3gateway.enabled` | Use MinIO as an S3 gateway | `false` |
| `s3gateway.replicas` | Number of S3 gateway instances to run in parallel | `4` |
| `s3gateway.serviceEndpoint` | Endpoint of the S3-compatible service | `""` |
| `s3gateway.accessKey` | Access key of the S3-compatible service | `""` |
| `s3gateway.secretKey` | Secret key of the S3-compatible service | `""` |
| `azuregateway.enabled` | Use MinIO as an Azure gateway | `false` |
| `azuregateway.replicas` | Number of Azure gateway instances to run in parallel | `4` |
| `gcsgateway.enabled` | Use MinIO as a Google Cloud Storage gateway | `false` |
| `gcsgateway.gcsKeyJson` | Credential JSON file of the service account key | `""` |
| `gcsgateway.projectId` | Google Cloud project ID | `""` |
| `ossgateway.enabled` | Use MinIO as an Alibaba Cloud Object Storage Service gateway | `false` |
| `ossgateway.replicas` | Number of OSS gateway instances to run in parallel | `4` |
| `ossgateway.endpointURL` | OSS server endpoint | `""` |
| `nasgateway.enabled` | Use MinIO as a NAS gateway | `false` |
| `nasgateway.replicas` | Number of NAS gateway instances to be run in parallel on a PV | `4` |
| `b2gateway.enabled` | Use MinIO as a Backblaze B2 gateway | `false` |
| `b2gateway.replicas` | Number of B2 gateway instances to run in parallel | `4` |
| `environment` | Set MinIO server relevant environment variables in the `values.yaml` file. MinIO containers will be passed these variables when they start. | `MINIO_API_READY_DEADLINE: "5s"` |
| `metrics.serviceMonitor.enabled` | Set this to `true` to create a ServiceMonitor for the Prometheus operator | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels so the ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.namespace` | Optional namespace in which to create the ServiceMonitor | `nil` |
| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` |
| `etcd.endpoints` | Endpoints of etcd | `[]` |
| `etcd.pathPrefix` | Prefix for all etcd keys | `""` |
| `etcd.corednsPathPrefix` | Prefix for all CoreDNS etcd keys | `""` |
| `etcd.clientCert` | Certificate used for SSL/TLS connections to etcd (etcd Security) | `""` |
| `etcd.clientCertKey` | Key for the certificate (etcd Security) | `""` |

Some of the parameters above map to the environment variables defined in the MinIO DockerHub image.

You can specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

```bash
$ helm install --name my-release \
  --set persistence.size=1Ti \
  stable/minio
```

The above command deploys a MinIO server with a 1Ti backing persistent volume.

Alternately, you can provide a YAML file that specifies parameter values while installing the chart. For example,

```bash
$ helm install --name my-release -f values.yaml stable/minio
```

Tip: You can use the default values.yaml
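
As a sketch, a custom values file combining several parameters from the table above might look like this (the file name and all values are illustrative):

```yaml
# my-values.yaml -- illustrative overrides built from documented parameters
mode: standalone
service:
  type: ClusterIP
  port: 9000
persistence:
  enabled: true
  size: 100Gi
```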

Distributed MinIO

This chart provisions a MinIO server in standalone mode by default. To provision a MinIO server in distributed mode, set the mode field to distributed:

```bash
$ helm install --set mode=distributed stable/minio
```

This provisions a MinIO server in distributed mode with 4 nodes. To change the number of nodes in your distributed MinIO server, set the replicas field:

```bash
$ helm install --set mode=distributed,replicas=8 stable/minio
```

This provisions a MinIO server in distributed mode with 8 nodes. Note that replicas must be at least 4; there is no upper limit on the number of servers you can run.

You can also expand an existing deployment by adding new zones. The following command creates a total of 16 nodes, with each zone running 8 nodes:

```bash
$ helm install --set mode=distributed,replicas=8,zones=2 stable/minio
```
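
The same distributed topology can be expressed in a values file; a sketch (2 zones of 8 nodes each, i.e. 16 servers; the file name is illustrative):

```yaml
# distributed-values.yaml
mode: distributed
replicas: 8
zones: 2
```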

StatefulSet limitations applicable to distributed MinIO

  1. StatefulSets need persistent storage, so the persistence.enabled flag is ignored when mode is set to distributed.
  2. When uninstalling a distributed MinIO release, you'll need to manually delete volumes associated with the StatefulSet.

NAS Gateway

Prerequisites

MinIO in NAS gateway mode can be used to create multiple MinIO instances backed by a single PV in ReadWriteMany mode. Currently only a few Kubernetes volume plugins support ReadWriteMany mode. To deploy the MinIO NAS gateway with this Helm chart, you'll need a Persistent Volume running with one of the supported volume plugins. This document outlines the steps to create an NFS PV in a Kubernetes cluster.

Provision NAS Gateway MinIO instances

To provision MinIO servers in NAS gateway mode, set the nasgateway.enabled field to true:

```bash
$ helm install --set nasgateway.enabled=true stable/minio
```

This provisions 4 MinIO NAS gateway instances backed by a single storage volume. To change the number of instances in your MinIO deployment, set the nasgateway.replicas field:

```bash
$ helm install --set nasgateway.enabled=true,nasgateway.replicas=8 stable/minio
```

This provisions MinIO NAS gateway with 8 instances.
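
In a values file, the NAS gateway settings above can be sketched as (the file name is illustrative):

```yaml
# nas-values.yaml
nasgateway:
  enabled: true
  replicas: 8
```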

Persistence

This chart provisions a PersistentVolumeClaim and mounts the corresponding persistent volume at the default location /export. You'll need physical storage available in the Kubernetes cluster for this to work. If you'd rather use emptyDir, disable the PersistentVolumeClaim:

```bash
$ helm install --set persistence.enabled=false stable/minio
```

"An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever."

Existing PersistentVolumeClaim

If a Persistent Volume Claim already exists, specify it during installation.

  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart

```bash
$ helm install --set persistence.existingClaim=PVC_NAME stable/minio
```
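
The same can be expressed in a values file (`PVC_NAME` is the placeholder for your pre-created claim, as above):

```yaml
persistence:
  enabled: true
  existingClaim: PVC_NAME  # name of the pre-created PersistentVolumeClaim
```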

NetworkPolicy

To enable network policy for MinIO, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for all pods in the namespace:

```bash
kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```

With NetworkPolicy enabled, traffic will be limited to just port 9000.

For more precise policy, set networkPolicy.allowExternal=false. This will only allow pods with the generated client label to connect to MinIO. This label will be displayed in the output of a successful install.
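
In a values file, enabling the policy can be sketched as (requires a networking plugin that implements the Kubernetes NetworkPolicy spec, as noted above):

```yaml
networkPolicy:
  enabled: true
```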

Existing secret

Instead of having this chart create the secret for you, you can supply a preexisting secret, much like an existing PersistentVolumeClaim.

First, create the secret:

```bash
$ kubectl create secret generic my-minio-secret --from-literal=accesskey=foobarbaz --from-literal=secretkey=foobarbazqux
```

Then install the chart, specifying that you want to use an existing secret:

```bash
$ helm install --set existingSecret=my-minio-secret stable/minio
```

The following fields are expected in the secret:

  1. `accesskey` - the access key ID
  2. `secretkey` - the secret key
  3. `gcs_key.json` - the GCS key if you are using the GCS gateway feature (optional)
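
Equivalently, the secret created by the kubectl command above can be written as a manifest (the values are the same illustrative placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-minio-secret
type: Opaque
stringData:               # stringData lets you supply the values unencoded
  accesskey: foobarbaz
  secretkey: foobarbazqux
```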

Configure TLS

To enable TLS for MinIO containers, acquire TLS certificates from a CA or create self-signed certificates. While creating or acquiring certificates, ensure the corresponding domain names are set as per the standard DNS naming conventions in a Kubernetes StatefulSet (for a distributed MinIO setup). Then create a secret using:

```bash
$ kubectl create secret generic tls-ssl-minio --from-file=path/to/private.key --from-file=path/to/public.crt
```

Then install the chart, specifying that you want to use the TLS secret:

```bash
$ helm install --set tls.enabled=true,tls.certSecret=tls-ssl-minio stable/minio
```
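
The equivalent values-file settings for the TLS options above:

```yaml
tls:
  enabled: true
  certSecret: tls-ssl-minio  # secret containing public.crt and private.key
```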

Pass environment variables to MinIO containers

To pass environment variables to MinIO containers when deploying via the Helm chart, use the following format:

```bash
$ helm install --set environment.MINIO_BROWSER=on,environment.MINIO_DOMAIN=domain-name stable/minio
```

You can add as many environment variables as required using the above format. Just add environment.<VARIABLE_NAME>=<value> under the --set flag.
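
In a values file, the same variables go under the `environment` key (`domain-name` is the placeholder from the example command):

```yaml
environment:
  MINIO_BROWSER: "on"
  MINIO_DOMAIN: domain-name
```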

Create buckets after install

Install the chart, specifying the buckets you want to create after install:

```bash
$ helm install --set buckets[0].name=bucket1,buckets[0].policy=none,buckets[0].purge=false stable/minio
```

Description of the configuration parameters used above:

  1. `buckets[].name` - name of the bucket to create; must be a non-empty string
  2. `buckets[].policy` - one of `none`, `download`, `upload`, or `public`
  3. `buckets[].purge` - purge the bucket if it already exists
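
The same bucket list can be kept in a values file; a sketch mirroring the command above:

```yaml
buckets:
  - name: bucket1
    policy: none    # one of none|download|upload|public
    purge: false
```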