incubator/vault/README.md

⚠️ Repo Archive Notice
As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice and the related update.

Vault Helm Chart

This directory contains a Kubernetes chart to deploy a Vault server.

DEPRECATION NOTICE

This chart is deprecated and no longer supported.

Prerequisites Details

  • Kubernetes 1.6+

Chart Details

This chart will do the following:

  • Implement a Vault deployment
  • Optionally, deploy a consul agent in the pod

Please note that a backend service for Vault (for example, Consul) must be deployed beforehand and configured with the vault.config option. YAML provided under this option will be converted to JSON for the final Vault config.json file.

See https://www.vaultproject.io/docs/configuration/ for more information.
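For instance, pointing Vault at a pre-existing Consul service (the service name below is a placeholder) could look like the following; the YAML under vault.config is rendered into config.json as {"storage": {"consul": {"address": "myconsul-svc-name:8500", "path": "vault"}}}:

```yaml
vault:
  config:
    storage:
      consul:
        address: "myconsul-svc-name:8500"  # placeholder; use your Consul service name
        path: "vault"
```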

Installing the Chart

To install the chart, run the following; this example backs Vault with a Consul cluster:

```console
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install incubator/vault --set vault.dev=false --set vault.config.storage.consul.address="myconsul-svc-name:8500",vault.config.storage.consul.path="vault"
```

Alternatively, the Amazon S3 backend can be configured with:

```yaml
vault:
  config:
    storage:
      s3:
        access_key: "AWS-ACCESS-KEY"
        secret_key: "AWS-SECRET-KEY"
        bucket: "AWS-BUCKET"
        region: "eu-central-1"
```

Configuration

The following table lists the configurable parameters of the Vault chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `imagePullSecret` | The name of the secret to use if pulling from a private registry | `nil` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.repository` | Container image to use | `vault` |
| `image.tag` | Container image tag to deploy | `.Chart.appVersion` |
| `vault.backendPolicy` | If custom backend needed | `{}` |
| `vault.dev` | Use Vault in dev mode | `true` (set to `false` in production) |
| `vault.extraArgs` | Additional arguments for the vault server command | `[]` |
| `vault.extraEnv` | Extra env vars for Vault pods | `{}` |
| `vault.extraContainers` | Sidecar containers to add to the vault pod | `{}` |
| `vault.extraInitContainers` | Init containers to be added to the vault pod | `{}` |
| `vault.extraVolumes` | Additional volumes for the controller pod | `{}` |
| `vault.extraVolumeMounts` | Extra volumes to mount to the controller pod | `{}` |
| `vault.existingConfigName` | Location of existing Vault configuration | `nil` |
| `vault.podApiAddress` | Set the VAULT_API_ADDR environment variable to the pod IP address. This is the address (full URL) to advertise to other Vault servers in the cluster for client redirection. | `true` |
| `vault.config` | Vault configuration | No default backend |
| `vault.liveness.aliveIfUninitialized` | Consider the liveness probe alive even if the cluster is not initialized | `"true"` |
| `vault.liveness.aliveIfSealed` | Consider the liveness probe alive even if the cluster is sealed | `"true"` |
| `vault.liveness.initialDelaySeconds` | Number of seconds after the container has started before liveness probes are initiated | `"30"` |
| `vault.liveness.periodSeconds` | How often (in seconds) to perform the probe | `"10"` |
| `vault.liveness.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `"3"` |
| `vault.liveness.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `"1"` |
| `vault.liveness.timeoutSeconds` | Number of seconds after which the probe times out | `"1"` |
| `vault.readiness.readyIfSealed` | Consider the readiness probe ready even if the cluster is sealed | `"false"` |
| `vault.readiness.readyIfStandby` | Consider the readiness probe ready even if the node is on standby | `"true"` |
| `vault.readiness.readyIfUninitialized` | Consider the readiness probe ready even if the cluster is not initialized | `"true"` |
| `vault.readiness.initialDelaySeconds` | Number of seconds after the container has started before readiness probes are initiated | `"10"` |
| `vault.readiness.periodSeconds` | How often (in seconds) to perform the probe | `"10"` |
| `vault.readiness.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `"3"` |
| `vault.readiness.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `"1"` |
| `vault.readiness.timeoutSeconds` | Number of seconds after which the probe times out | `"1"` |
| `replicaCount` | k8s replicas | `3` |
| `resources.limits.cpu` | Container CPU limit | `nil` |
| `resources.limits.memory` | Container memory limit | `nil` |
| `affinity` | Affinity settings | See values.yaml |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Tolerations for node taints | `[]` |
| `service.loadBalancerIP` | Assign a static IP to the load balancer | `nil` |
| `service.loadBalancerSourceRanges` | IP whitelist for service type LoadBalancer | `[]` |
| `service.annotations` | Annotations for service | `{}` |
| `service.externalPort` | External port for the service | `8200` |
| `service.port` | The API port Vault is using | `8200` |
| `service.clusterExternalPort` | External cluster port for the service | `nil` |
| `service.clusterPort` | The cluster port Vault is using | `8201` |
| `service.additionalSelector` | Additional selector for the Vault service | `{}` |
| `annotations` | Annotations for deployment | `{}` |
| `labels` | Extra labels for deployment | `{}` |
| `ingress.labels` | Labels for ingress | `{}` |
| `podAnnotations` | Annotations for pods | `{}` |
| `priorityClassName` | Priority class name for pods | `""` |
| `minReadySeconds` | Minimum number of seconds that newly created replicas must be ready without any containers crashing | `0` |
| `podLabels` | Extra labels for pods | `{}` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated from fullname template |
| `serviceAccount.annotations` | Annotations for the created ServiceAccount | `{}` |
| `rbac.create` | Specifies whether RBAC resources should be created | `true` |
| `consulAgent.join` | If set, start a consul agent | `nil` |
| `consulAgent.repository` | Container image for consul agent | `consul` |
| `consulAgent.tag` | Container image tag for consul agent | `1.4.0` |
| `consulAgent.pullPolicy` | Container pull policy for consul agent | `IfNotPresent` |
| `consulAgent.gossipKeySecretName` | k8s secret containing gossip key | `nil` (see values.yaml for details) |
| `consulAgent.HttpPort` | HTTP port for consul agent API | `8500` |
| `consulAgent.resources` | Container resources for consul agent | `nil` |
| `vaultExporter.enabled` | Enable or disable vault exporter | `false` |
| `vaultExporter.repository` | Container image for vault exporter | `grapeshot/vault_exporter` |
| `vaultExporter.tag` | Container image tag for vault exporter | `v0.1.2` |
| `vaultExporter.pullPolicy` | Image pull policy that should be used | `IfNotPresent` |
| `vaultExporter.vaultAddress` | Vault address that the exporter should use | `127.0.0.1:8200` |
| `vaultExporter.tlsCAFile` | Vault TLS CA certificate mount path | `/vault/tls/ca.crt` |
| `serviceMonitor.enabled` | Specifies whether a Prometheus ServiceMonitor should be created | `false` |
| `serviceMonitor.additionalLabels` | Additional labels for the ServiceMonitor | `{}` |
| `serviceMonitor.podPortName` | Name of the port of the pod to scrape | `metrics` |
| `serviceMonitor.interval` | Prometheus scrape interval | `10s` |
| `serviceMonitor.jobLabel` | Prometheus job label | `vault-exporter` |
| `prometheusRules.enabled` | Specifies whether Prometheus alert rules should be created | `false` |
| `prometheusRules.defaultRules.vaultUp` | Specifies whether the vaultUp rule should be included | `true` |
| `prometheusRules.defaultRules.vaultUninitialized` | Specifies whether the vaultUninitialized rule should be included | `true` |
| `prometheusRules.defaultRules.vaultSealed` | Specifies whether the vaultSealed rule should be included | `true` |
| `prometheusRules.defaultRules.vaultStandby` | Specifies whether the vaultStandby rule should be included | `false` |
| `prometheusRules.extraRules` | Custom extra rules | `[]` |

Specify each parameter using the --set key=value[,key=value] argument to helm install.

Optional Consul Agent

If you are using the consul storage for vault, you might want a local consul agent to handle health checks. By setting consulAgent.join to your consul server, an agent will be started in the vault pod. In this case, you should configure vault to connect to consul over localhost. For example:

```yaml
vault:
  dev: False
  config:
    storage:
      consul:
        address: "localhost:8500"
consulAgent:
  join: consul.service.consul
```

If you are using the stable/consul helm chart, consul communications are encrypted with a gossip key. You can configure a secret with the same format as that chart and specify it in the consulAgent.gossipKeySecretName parameter.
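As a sketch, such a secret could be created from a manifest like the one below. The secret name is your choice (reference it via consulAgent.gossipKeySecretName), but the key name gossip-key is an assumption here; verify the exact format expected against values.yaml and the stable/consul chart:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: consul-gossip-key   # hypothetical name; set consulAgent.gossipKeySecretName to match
type: Opaque
stringData:
  gossip-key: "REPLACE_WITH_consul_keygen_OUTPUT"   # assumed key name; verify against values.yaml
```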

Optional Vault Exporter

If you want to monitor Vault with Prometheus, you can enable the Vault exporter, which runs as a sidecar container in the same pod as Vault itself. To use the exporter, set vaultExporter.enabled to true and adjust the other variables to your needs.

If your Vault is set up with TLS, make sure to specify the CA certificate path properly via the vaultExporter.tlsCAFile parameter.
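A sketch of wiring this up, assuming the CA certificate lives in a secret named vault-tls (a hypothetical name) and that the chart applies extraVolumeMounts to the exporter sidecar as well (verify against the chart templates):

```yaml
vault:
  extraVolumes:
    - name: vault-tls
      secret:
        secretName: vault-tls   # hypothetical secret holding ca.crt
  extraVolumeMounts:
    - name: vault-tls
      mountPath: /vault/tls
      readOnly: true
vaultExporter:
  enabled: true
  tlsCAFile: /vault/tls/ca.crt  # matches the mountPath above
```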

If you want to use the exporter with the Prometheus Operator, enable the ServiceMonitor with an additional label matching your Prometheus label selector. For example:

```yaml
serviceMonitor:
  enabled: true
  additionalLabels:
    prometheus-scraper: "default"
  interval: 10s
  jobLabel: "vault-exporter"
```

If you do not want to use the default vaultExporter container but your own, you can declare it under vault.extraContainers. You must then expose a named port for the metrics and set that name in serviceMonitor.podPortName. For example:

```yaml
vaultExporter:
  enabled: false

serviceMonitor:
  enabled: true
  additionalLabels:
    prometheus-scraper: "default"
  podPortName: "metricPort"

vault:
  extraContainers:
  - name: my-vault-exporter
    image: my-vault-exporter:latest
    ports:
    - containerPort: 8080
      name: metricPort
```

If you want to add Prometheus alerting rules, enable the alerts and enable or disable the default rules as needed. You can add as many custom rules as you want. For example:

```yaml
prometheusRules:
  enabled: true
  defaultRules:
    vaultUp: true
    vaultUninitialized: true
    vaultSealed: true
    vaultStandby: false
  extraRules:
  - alert: VaultHTTPErrorRateIsHigh
    annotations:
      description: The ingress is failing more than 15% of the requests for 5m
    expr: sum(rate(nginx_ingress_controller_requests{ingress="vault",status!~"[4-5].*"}[2m])) by (ingress) / sum(rate(nginx_ingress_controller_requests{ingress="vault"}[2m])) by (ingress) < 0.85
    for: 5m
    labels:
      severity: critical
```

Using Vault

Once the Vault pod is ready, it can be accessed using a kubectl port-forward:

```console
$ kubectl port-forward vault-pod 8200
$ export VAULT_ADDR=http://127.0.0.1:8200
$ vault status
```
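With vault.dev=false the server starts sealed, so it must still be initialized and unsealed before use; a typical first session (the key share counts below are illustrative) looks roughly like:

```console
$ vault operator init -key-shares=5 -key-threshold=3
$ vault operator unseal   # repeat with 3 different unseal keys
$ vault status
```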

Migrating Custom Secrets

Previous versions of this chart had a configuration option vault.customSecrets. Custom secrets should now be expressed with vault.extraVolumeMounts. For example:

```yaml
vault:
  customSecrets:
    - secretName: vault-tls
      mountPath: /vault/tls
```

Would be expressed as:

```yaml
vault:
  extraVolumes:
    - name: vault-tls
      secret:
        secretName: vault-tls
  extraVolumeMounts:
    - name: vault-tls
      mountPath: /vault/tls
      readOnly: true
```