Redis

Redis is an advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs.

TL;DR

```bash
helm repo add dandydev https://dandydeveloper.github.io/charts
helm install dandydev/redis-ha
```

By default this chart installs 3 pods total:

  • one pod containing a redis master and a sentinel container (optional prometheus metrics exporter sidecar available)
  • two pods, each containing a redis slave and a sentinel container (optional prometheus metrics exporter sidecars available)

Introduction

This chart bootstraps a Redis highly available master/slave statefulset in a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.8+ with Beta APIs enabled
  • PV provisioner support in the underlying infrastructure

Upgrading the Chart

Please note that there have been a number of changes simplifying the redis management strategy (for better failover and elections) in the 3.x version of this chart. These changes allow the use of official redis images that do not require special RBAC or ServiceAccount roles. As a result, when upgrading from version >=2.0.1 to >=3.0.0 of this chart, the Role, RoleBinding, and ServiceAccount resources should be deleted manually.

Upgrading the chart from 3.x to 4.x

Starting from version 4.x, the HAProxy sidecar prometheus-exporter has been removed and replaced by the embedded HAProxy metrics endpoint. As a result, when upgrading from version 3.x to 4.x, the haproxy.exporter section should be removed and haproxy.metrics needs to be configured to fit your needs.
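For example, an upgrade might drop a 3.x exporter fragment in favour of the embedded metrics endpoint. A sketch (the commented-out exporter keys are an assumption about the old 3.x layout; the metrics keys are from the configuration table below):

```yml
haproxy:
  # 3.x (remove this section on upgrade):
  # exporter:
  #   enabled: true
  # 4.x replacement: embedded HAProxy metrics endpoint
  metrics:
    enabled: true
    port: 9101
    scrapePath: /metrics
```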

Installing the Chart

To install the chart:

```bash
helm repo add dandydev https://dandydeveloper.github.io/charts
helm install dandydev/redis-ha
```

The command deploys Redis on the Kubernetes cluster in the default configuration. By default this chart installs one master pod containing a redis master container and a sentinel container, along with 2 redis slave pods, each containing their own sentinel sidecars. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the release:

```bash
helm delete <release-name>
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

The following table lists the configurable parameters of the Redis chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `image.repository` | Redis image repository | `redis` |
| `image.tag` | Redis image tag | `6.2.5-alpine` |
| `image.pullPolicy` | Redis image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Reference to one or more secrets to be used when pulling redis images | `[]` |
| `tag` | Redis tag | `6.2.5-alpine` |
| `replicas` | Number of redis master/slave pods | `3` |
| `podManagementPolicy` | The statefulset pod management policy | `OrderedReady` |
| `ro_replicas` | Comma-separated list of slaves which never get promoted to be master. Count starts with 0. Allowed values 1-9. E.g. `3,4`: the 3rd and 4th redis slaves never become master, where master is index 0. | `` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the `redis-ha.fullname` template |
| `serviceAccount.automountToken` | Opt in/out of automounting API credentials into container | `false` |
| `serviceAnnotations` | Annotations to set on Redis HA service | `null` |
| `serviceLabels` | Labels to set on Redis HA service | `{}` |
| `rbac.create` | Create and use RBAC resources | `true` |
| `redis.port` | Port to access the redis service | `6379` |
| `redis.tlsPort` | TLS port to access the redis service | `` |
| `redis.tlsReplication` | Configures redis with the tls-replication parameter; if true, sets `tls-replication yes` in redis.conf | `` |
| `redis.authClients` | It is possible to disable client-side certificate authentication when `authClients` is set to `no` | `` |
| `redis.livenessProbe.initialDelaySeconds` | Initial delay in seconds for liveness probe | `30` |
| `redis.livenessProbe.periodSeconds` | Period in seconds after which liveness probe will be repeated | `15` |
| `redis.livenessProbe.timeoutSeconds` | Timeout seconds for liveness probe | `15` |
| `redis.livenessProbe.successThreshold` | Success threshold for liveness probe | `1` |
| `redis.livenessProbe.failureThreshold` | Failure threshold for liveness probe | `5` |
| `redis.readinessProbe.initialDelaySeconds` | Initial delay in seconds for readiness probe | `30` |
| `redis.readinessProbe.periodSeconds` | Period in seconds after which readiness probe will be repeated | `15` |
| `redis.readinessProbe.timeoutSeconds` | Timeout seconds for readiness probe | `15` |
| `redis.readinessProbe.successThreshold` | Success threshold for readiness probe | `1` |
| `redis.readinessProbe.failureThreshold` | Failure threshold for readiness probe | `5` |
| `redis.masterGroupName` | Redis convention for naming the cluster group: must match `^[\w-\.]+$` and can be templated | `mymaster` |
| `redis.disableCommands` | Array with commands to disable | `["FLUSHDB","FLUSHALL"]` |
| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml |
| `redis.customConfig` | Allows for custom redis.conf files to be applied. If this is used then `redis.config` is ignored | `` |
| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` |
| `redis.lifecycle` | Container lifecycle hooks for redis container | see values.yaml |
| `redis.annotations` | Annotations for the redis statefulset | `{}` |
| `redis.updateStategy.type` | Update strategy for redis statefulSet | `RollingUpdate` |
| `redis.extraVolumeMounts` | Extra volume mounts for Redis container | `[]` |
| `sentinel.port` | Port to access the sentinel service | `26379` |
| `sentinel.bind` | Configure the `bind` directive to bind to a list of network interfaces | `` |
| `sentinel.tlsPort` | TLS port to access the sentinel service | `` |
| `sentinel.tlsReplication` | Configures sentinel with the tls-replication parameter; if true, sets `tls-replication yes` in sentinel.conf | `` |
| `sentinel.authClients` | It is possible to disable client-side certificate authentication when `authClients` is set to `no` | `` |
| `sentinel.livenessProbe.initialDelaySeconds` | Initial delay in seconds for liveness probe | `30` |
| `sentinel.livenessProbe.periodSeconds` | Period in seconds after which liveness probe will be repeated | `15` |
| `sentinel.livenessProbe.timeoutSeconds` | Timeout seconds for liveness probe | `15` |
| `sentinel.livenessProbe.successThreshold` | Success threshold for liveness probe | `1` |
| `sentinel.livenessProbe.failureThreshold` | Failure threshold for liveness probe | `5` |
| `sentinel.readinessProbe.initialDelaySeconds` | Initial delay in seconds for readiness probe | `30` |
| `sentinel.readinessProbe.periodSeconds` | Period in seconds after which readiness probe will be repeated | `15` |
| `sentinel.readinessProbe.timeoutSeconds` | Timeout seconds for readiness probe | `15` |
| `sentinel.readinessProbe.successThreshold` | Success threshold for readiness probe | `3` |
| `sentinel.readinessProbe.failureThreshold` | Failure threshold for readiness probe | `5` |
| `sentinel.auth` | Enables or disables sentinel AUTH (requires `sentinel.password` to be set) | `false` |
| `sentinel.password` | A password that configures a `requirepass` in the conf parameters (requires `sentinel.auth: enabled`) | `` |
| `sentinel.existingSecret` | An existing secret containing a key defined by `sentinel.authKey` that configures `requirepass` in the conf parameters (requires `sentinel.auth: enabled`; cannot be used in conjunction with `.Values.sentinel.password`) | `` |
| `sentinel.authKey` | The key holding the sentinel password in an existing secret. | `sentinel-password` |
| `sentinel.quorum` | Minimum number of servers necessary to maintain quorum | `2` |
| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml |
| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. If this is used then `sentinel.config` is ignored | `` |
| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` |
| `sentinel.lifecycle` | Container lifecycle hooks for sentinel container | `{}` |
| `sentinel.extraVolumeMounts` | Extra volume mounts for Sentinel container | `[]` |
| `init.resources` | CPU/Memory for init container node resource requests/limits | `{}` |
| `auth` | Enables or disables redis AUTH (requires `redisPassword` to be set) | `false` |
| `redisPassword` | A password that configures `requirepass` and `masterauth` in the conf parameters (requires `auth: enabled`) | `` |
| `authKey` | The key holding the redis password in an existing secret. | `auth` |
| `existingSecret` | An existing secret containing a key defined by `authKey` that configures `requirepass` and `masterauth` in the conf parameters (requires `auth: enabled`; cannot be used in conjunction with `.Values.redisPassword`) | `` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `hardAntiAffinity` | Whether the Redis server pods should be forced to run on separate nodes. | `true` |
| `additionalAffinities` | Additional affinities to add to the Redis server pods. | `{}` |
| `securityContext` | Security context to be added to the Redis StatefulSet. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` |
| `containerSecurityContext` | Security context to be added to the Redis containers. | `{runAsNonRoot: true, allowPrivilegeEscalation: false, seccompProfile: {type: RuntimeDefault}, capabilities: {drop: ["ALL"]}}` |
| `affinity` | Override all other affinity settings with a string. | `""` |
| `labels` | Labels for the Redis pod. | `{}` |
| `configmap.labels` | Labels for the Redis configmap. | `{}` |
| `configmapTest.image.repository` | Repository of the configmap shellcheck test image. | `koalaman/shellcheck` |
| `configmapTest.image.tag` | Tag of the configmap shellcheck test image. | `v0.5.0` |
| `configmapTest.resources` | Resources for the ConfigMap tests. | `{}` |
| `persistentVolume.size` | Size for the volume | `10Gi` |
| `persistentVolume.annotations` | Annotations for the volume | `{}` |
| `persistentVolume.labels` | Labels for the volume | `{}` |
| `emptyDir` | Configuration of emptyDir; used only if persistentVolume is disabled and no hostPath is specified | `{}` |
| `exporter.enabled` | If true, the prometheus exporter sidecar is enabled | `false` |
| `exporter.image` | Exporter image | `oliver006/redis_exporter` |
| `exporter.tag` | Exporter tag | `v1.27.0` |
| `exporter.port` | Exporter port | `9121` |
| `exporter.portName` | Exporter port name | `exporter-port` |
| `exporter.address` | Redis instance hostname/address. Exists to circumvent some issues with IPv6 hostname resolution | `localhost` |
| `exporter.annotations` | Prometheus scrape annotations | `{prometheus.io/path: /metrics, prometheus.io/port: "9121", prometheus.io/scrape: "true"}` |
| `exporter.extraArgs` | Additional args for the exporter | `{}` |
| `exporter.script` | A custom Lua script that will be mounted to the exporter for collection of custom metrics. Creates a ConfigMap and sets the env var `REDIS_EXPORTER_SCRIPT`. | |
| `exporter.serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
| `exporter.serviceMonitor.namespace` | Namespace the service monitor is created in | `default` |
| `exporter.serviceMonitor.interval` | Scrape interval; if not set, the Prometheus default scrape interval is used | `nil` |
| `exporter.serviceMonitor.telemetryPath` | Path to redis-exporter telemetry-path | `/metrics` |
| `exporter.serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
| `exporter.serviceMonitor.timeout` | How long until a scrape request times out; if not set, the Prometheus default scrape timeout is used | `nil` |
| `exporter.serviceMonitor.endpointAdditionalProperties` | Set additional properties for the ServiceMonitor endpoints, such as relabeling, scrapeTimeout, tlsConfig, and more | `{}` |
| `haproxy.enabled` | Enable HAProxy LoadBalancing/Proxy | `false` |
| `haproxy.replicas` | Number of HAProxy instances | `3` |
| `haproxy.servicePort` | Modify HAProxy service port | `6379` |
| `haproxy.containerPort` | Modify HAProxy deployment container port | `6379` |
| `haproxy.image.repository` | HAProxy image repository | `haproxy` |
| `haproxy.image.tag` | HAProxy image tag | `2.4.2` |
| `haproxy.image.pullPolicy` | HAProxy image pull policy | `IfNotPresent` |
| `haproxy.imagePullSecrets` | Reference to one or more secrets to be used when pulling haproxy images | `[]` |
| `haproxy.tls.enabled` | If `true`, this will enable TLS termination on haproxy | `false` |
| `haproxy.tls.secretName` | Secret containing the .pem file | `""` |
| `haproxy.tls.certMountPath` | Path to mount the secret that contains the certificates | |
| `haproxy.annotations` | HAProxy template annotations | `{}` |
| `haproxy.customConfig` | Allows for a custom config-haproxy.cfg file to be applied. If this is used then the default config will be overwritten | `` |
| `haproxy.extraConfig` | Allows placing any additional configuration section to add to the default config-haproxy.cfg | `` |
| `haproxy.resources` | HAProxy resources | `{}` |
| `haproxy.emptyDir` | Configuration of emptyDir | `{}` |
| `haproxy.labels` | Labels for the HAProxy pod | `{}` |
| `haproxy.serviceAccountName` | HAProxy serviceAccountName | `default` |
| `haproxy.service.type` | HAProxy service type: "ClusterIP", "LoadBalancer" or "NodePort" | `ClusterIP` |
| `haproxy.service.nodePort` | HAProxy service nodePort value (`haproxy.service.type` must be NodePort) | not set |
| `haproxy.service.externalTrafficPolicy` | HAProxy service externalTrafficPolicy value (`haproxy.service.type` must be LoadBalancer) | not set |
| `haproxy.service.annotations` | HAProxy service annotations | `{}` |
| `haproxy.service.labels` | HAProxy service labels | `{}` |
| `haproxy.service.loadBalancerIP` | HAProxy service loadbalancer IP | not set |
| `haproxy.service.externalIPs` | HAProxy external IPs | `{}` |
| `haproxy.stickyBalancing` | HAProxy sticky load balancing to Redis nodes. Helps with connections shutdown. | `false` |
| `haproxy.hapreadport.enable` | Enable a read-only port for redis slaves | `false` |
| `haproxy.hapreadport.port` | HAProxy port for read-only redis slaves | `6380` |
| `haproxy.metrics.enabled` | Enable HAProxy prometheus metric scraping | `false` |
| `haproxy.metrics.port` | HAProxy prometheus metrics scraping port | `9101` |
| `haproxy.metrics.portName` | HAProxy metrics scraping port name | `http-exporter-port` |
| `haproxy.metrics.scrapePath` | HAProxy prometheus metrics scraping path | `/metrics` |
| `haproxy.metrics.serviceMonitor.enabled` | Use servicemonitor from prometheus operator for HAProxy metrics | `false` |
| `haproxy.metrics.serviceMonitor.namespace` | Namespace the service monitor for HAProxy metrics is created in | `default` |
| `haproxy.metrics.serviceMonitor.interval` | Scrape interval; if not set, the Prometheus default scrape interval is used | `nil` |
| `haproxy.metrics.serviceMonitor.telemetryPath` | Path to HAProxy metrics telemetry-path | `/metrics` |
| `haproxy.metrics.serviceMonitor.labels` | Labels for the HAProxy metrics servicemonitor passed to Prometheus Operator | `{}` |
| `haproxy.metrics.serviceMonitor.timeout` | How long until a scrape request times out; if not set, the Prometheus default scrape timeout is used | `nil` |
| `haproxy.metrics.serviceMonitor.endpointAdditionalProperties` | Set additional properties for the ServiceMonitor endpoints, such as relabeling, scrapeTimeout, tlsConfig, and more | `{}` |
| `haproxy.init.resources` | Extra init resources | `{}` |
| `haproxy.timeout.connect` | haproxy.cfg `timeout connect` setting | `4s` |
| `haproxy.timeout.server` | haproxy.cfg `timeout server` setting | `30s` |
| `haproxy.timeout.client` | haproxy.cfg `timeout client` setting | `30s` |
| `haproxy.timeout.check` | haproxy.cfg `timeout check` setting | `2s` |
| `haproxy.checkInterval` | haproxy.cfg `check inter` setting | `1s` |
| `haproxy.checkFall` | haproxy.cfg `check fall` setting | `1` |
| `haproxy.priorityClassName` | priorityClassName for haproxy deployment | not set |
| `haproxy.securityContext` | Security context to be added to the HAProxy deployment. | `{runAsUser: 99, fsGroup: 99, runAsNonRoot: true}` |
| `haproxy.containerSecurityContext` | Security context to be added to the HAProxy containers. | `{runAsNonRoot: true, allowPrivilegeEscalation: false, seccompProfile: {type: RuntimeDefault}, capabilities: {drop: ["ALL"]}}` |
| `haproxy.hardAntiAffinity` | Whether the haproxy pods should be forced to run on separate nodes. | `true` |
| `haproxy.affinity` | Override all other haproxy affinity settings with a string. | `""` |
| `haproxy.additionalAffinities` | Additional affinities to add to the haproxy server pods. | `{}` |
| `haproxy.tests.resources` | Pod resources for the tests against HAProxy. | `{}` |
| `haproxy.IPv6.enabled` | Disables certain binding options to support non-IPv6 environments. | `true` |
| `networkPolicy.enabled` | Create NetworkPolicy for Haproxy pods | `false` |
| `networkPolicy.labels` | Labels for Haproxy NetworkPolicy | `{}` |
| `networkPolicy.annotations` | Annotations for Haproxy NetworkPolicy | `{}` |
| `networkPolicy.ingressRules[].selectors` | Label selector query to define resources for this ingress rule | `[]` |
| `networkPolicy.ingressRules[].ports` | The destination ports for the ingress rule | `[{port: redis.port, protocol: TCP}, {port: sentinel.port, protocol: TCP}]` |
| `networkPolicy.egressRules[].selectors` | Label selector query to define resources for this egress rule | `[]` |
| `networkPolicy.egressRules[].ports` | The destination ports for the egress rule | `` |
| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` |
| `nameOverride` | Override the chart name | `""` |
| `fullnameOverride` | Fully override the release name and chart name | `""` |
| `priorityClassName` | priorityClassName for redis-ha-statefulset | not set |
| `hostPath.path` | Use this path on the host for data storage | not set |
| `hostPath.chown` | Run an init-container as root to set ownership on the hostPath | `true` |
| `sysctlImage.enabled` | Enable an init container to modify kernel settings | `false` |
| `sysctlImage.command` | sysctlImage command to execute | `[]` |
| `sysctlImage.registry` | sysctlImage init container registry | `docker.io` |
| `sysctlImage.repository` | sysctlImage init container name | `busybox` |
| `sysctlImage.tag` | sysctlImage init container tag | `1.31.1` |
| `sysctlImage.pullPolicy` | sysctlImage init container pull policy | `Always` |
| `sysctlImage.mountHostSys` | Mount the host /sys folder to /host-sys | `false` |
| `sysctlImage.resources` | sysctlImage resources | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
| `tls.secretName` | The name of the secret to use for your own TLS certificates. The secret should contain keys named by `tls.certFile` (the certificate), `tls.keyFile` (the private key), `tls.caCertFile` (the CA certificate) and `tls.dhParamsFile` (the DH parameters file) | `` |
| `tls.certFile` | Name of certificate file | `redis.crt` |
| `tls.keyFile` | Name of key file | `redis.key` |
| `tls.dhParamsFile` | Name of Diffie-Hellman (DH) key exchange parameters file | `` |
| `tls.caCertFile` | Name of CA certificate file | `ca.crt` |
| `restore.s3.source` | Restore init container: AWS S3 location of dump, e.g. `s3://bucket/dump.rdb` | `false` |
| `restore.existingSecret` | Set to true to use `existingSecret` for the AWS S3 or SSH credentials | `false` |
| `topologySpreadConstraints.enabled` | Enable topology spread constraints | `false` |
| `topologySpreadConstraints.maxSkew` | Max skew of pods tolerated | `1` |
| `topologySpreadConstraints.topologyKey` | Topology key for spread | `topology.kubernetes.io/zone` |
| `topologySpreadConstraints.whenUnsatisfiable` | Enforcement policy, hard or soft | `ScheduleAnyway` |
| `restore.s3.access_key` | Restore init container: AWS `AWS_ACCESS_KEY_ID` to access `restore.s3.source` | `` |
| `restore.s3.secret_key` | Restore init container: AWS `AWS_SECRET_ACCESS_KEY` to access `restore.s3.source` | `` |
| `restore.s3.region` | Restore init container: AWS `AWS_REGION` to access `restore.s3.source` | `` |
| `restore.ssh.source` | Restore init container: SSH scp location of dump, e.g. `user@server:/path/dump.rdb` | `false` |
| `restore.ssh.key` | Restore init container: SSH private key to scp `restore.ssh.source` to the init container. The key should be on one line, separated with `\n`, e.g. `-----BEGIN RSA PRIVATE KEY-----\n...\n...\n-----END RSA PRIVATE KEY-----` | `` |
| `extraContainers` | Extra containers to include in StatefulSet | `[]` |
| `extraInitContainers` | Extra init containers to include in StatefulSet | `[]` |
| `extraVolumes` | Extra volumes to include in StatefulSet | `[]` |
| `extraLabels` | Labels that should be applied to all created resources | `{}` |
| `networkPolicy.enabled` | Create NetworkPolicy for Redis StatefulSet pods | `false` |
| `networkPolicy.labels` | Labels for NetworkPolicy | `{}` |
| `networkPolicy.annotations` | Annotations for NetworkPolicy | `{}` |
| `networkPolicy.ingressRules[].selectors` | Label selector query to define resources for this ingress rule | `[]` |
| `networkPolicy.ingressRules[].ports` | The destination ports for the ingress rule | `[{port: redis.port, protocol: TCP}, {port: sentinel.port, protocol: TCP}]` |
| `networkPolicy.egressRules[].selectors` | Label selector query to define resources for this egress rule | `[]` |
| `networkPolicy.egressRules[].ports` | The destination ports for the egress rule | `` |
| `splitBrainDetection.interval` | Interval between redis sentinel and server split brain checks (in seconds) | `60` |
| `splitBrainDetection.resources` | splitBrainDetection resources | `{}` |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

```bash
helm repo add dandydev https://dandydeveloper.github.io/charts
helm install \
  --set image=redis \
  --set tag=5.0.5-alpine \
    dandydev/redis-ha
```

The above command installs the Redis server in the default namespace with the specified image and tag.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```bash
helm install -f values.yaml dandydev/redis-ha
```

Tip: You can use the default values.yaml
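For instance, a small custom values.yaml using only parameters from the table above might enable authentication, persistence sizing, and the metrics exporter (the secret name is a placeholder; you must create that secret yourself):

```yml
auth: true
existingSecret: my-redis-secret  # hypothetical secret; must contain a key named by authKey ("auth")
persistentVolume:
  size: 20Gi
exporter:
  enabled: true
```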

Custom Redis and Sentinel config options

This chart allows for most redis or sentinel config options to be passed as a key value pair through the values.yaml under redis.config and sentinel.config. See links below for all available options.

Example redis.conf Example sentinel.conf

For example repl-timeout 60 would be added to the redis.config section of the values.yaml as:

```yml
repl-timeout: "60"
```

Note:

  1. Some config options are renamed between redis versions, e.g.:

     ```yml
     # In redis 5.x, see https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf
     min-replicas-to-write: 1
     min-replicas-max-lag: 5

     # In redis 4.x and redis 3.x, see https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf and https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf
     min-slaves-to-write: 1
     min-slaves-max-lag: 5
     ```

Sentinel options supported must be in the sentinel <option> <master-group-name> <value> format. For example, sentinel down-after-milliseconds 30000 would be added to the sentinel.config section of the values.yaml as:

```yml
down-after-milliseconds: 30000
```

If more control is needed from either the redis or sentinel config then an entire config can be defined under redis.customConfig or sentinel.customConfig. Please note that these values will override any configuration options under their respective section. For example, if you define sentinel.customConfig then the sentinel.config is ignored.
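For full control, the entire sentinel configuration could be supplied as a multi-line string (a sketch; the directives shown are illustrative, and sentinel.config would be ignored):

```yml
sentinel:
  customConfig: |
    dir "/data"
    sentinel down-after-milliseconds mymaster 30000
    sentinel failover-timeout mymaster 180000
```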

Host Kernel Settings

Redis may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set up a privileged initContainer with the sysctlImage config values, for example:

```yml
sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -xc
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```

HAProxy startup

When HAProxy is enabled, it will attempt to connect to each announce-service of each redis replica instance in its init container before starting. It will fail if an announce-service IP is not available fast enough (10 seconds max per announce-service). Such a case could happen if the orchestrator is still pending the nomination of redis pods. The risk is limited because announce-service uses publishNotReadyAddresses: true; in such a case, the HAProxy pod will be rescheduled afterward by the orchestrator.

PodDisruptionBudgets are not configured by default; you may need to set the haproxy.podDisruptionBudget parameter in values.yaml to enable them.
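Assuming haproxy.podDisruptionBudget accepts the standard PodDisruptionBudget spec fields, a minimal sketch might be:

```yml
haproxy:
  podDisruptionBudget:
    maxUnavailable: 1
```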

Network policies

If networkPolicy.enabled is set to true, then a NetworkPolicy resource is created with default rules to allow inter-Redis and Sentinel connectivity. This is a requirement for Redis Pods to come up successfully.

You will need to define ingressRules to permit your application connectivity to Redis. The selectors block should be in the format of a label selector. Templating is also supported in the selectors. See such a configuration below.

```yaml
networkPolicy:
  enabled: true
  ingressRules:
    - selectors:
        - namespaceSelector:
            matchLabels:
              name: my-redis-client-namespace
          podSelector:
            matchLabels:
              # template example
              app: |-
                {{- .App.Name }}
      ## ports block is optional (defaults to below); define the block to override the defaults
      # ports:
      #   - port: 6379
      #     protocol: TCP
      #   - port: 26379
      #     protocol: TCP
```

Should your Pod require additional egress rules, define them in an egressRules key, which is structured identically to the ingressRules key.
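For example, an egress rule permitting DNS lookups could look like this (the namespace label and port are illustrative):

```yml
networkPolicy:
  enabled: true
  egressRules:
    - selectors:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - port: 53
          protocol: UDP
```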

Sentinel and redis server split brain detection

Under circumstances that are not yet entirely understood, redis sentinel and its corresponding redis server can reach a condition that this chart's authors call "split brain" (for short). The observed behaviour is the following: the sentinel switches to the newly re-elected master, but does not switch its redis server. The majority of the original discussion on the problem happened in https://github.com/DandyDeveloper/charts/issues/121.

The proposed solution is currently implemented as a sidecar container that runs a bash script with the following logic:

  1. Every splitBrainDetection.interval seconds, the master (as known by sentinel) is determined.
  2. If it is the current node: ensure the redis server's role is master as well.
  3. If it is not the current node: ensure the redis server also replicates from that node.

If any of the checks above fails, the redis server is reinitialised (it regenerates its configs the same way as during pod init) and is then instructed to shut down. Kubernetes then restarts the container immediately.
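The decision logic above can be sketched as a small shell function (a simplified illustration only, not the chart's actual sidecar script; querying sentinel and redis via redis-cli and the config regeneration are omitted):

```shell
# decide SENTINEL_MASTER_IP MY_IP MY_ROLE MY_MASTER_IP -> prints "ok" or "reinit"
decide() {
  sentinel_master="$1"; my_ip="$2"; my_role="$3"; my_master="$4"
  if [ "$sentinel_master" = "$my_ip" ]; then
    # Sentinel elected this node: the local server must report role "master".
    if [ "$my_role" = "master" ]; then echo ok; else echo reinit; fi
  else
    # Another node is master: the local server must replicate from exactly that node.
    if [ "$my_role" = "slave" ] && [ "$my_master" = "$sentinel_master" ]; then
      echo ok
    else
      echo reinit
    fi
  fi
}
```

When the function yields reinit, the sidecar would regenerate the configs and instruct the server to shut down, letting kubernetes restart the container.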

Change Log

4.14.9 - **POTENTIAL BREAKING CHANGE**

Introduced the ability to change the HAProxy Deployment container port:

  • The container port in redis-haproxy-deployment.yaml has been changed from redis.port to haproxy.containerPort. Default value is 6379.
  • The port in redis-haproxy-service.yaml has been changed from redis.port to haproxy.servicePort. Default value is 6379.

4.21.0 - BREAKING CHANGES (Kubernetes Deprecation)

This version introduced the deprecation of the PSP and subsequently added fields to the securityContexts that were introduced in Kubernetes v1.19:

https://kubernetes.io/docs/tutorials/security/seccomp/

As a result, from this version onwards, Kubernetes versions older than 1.19 will fail to install without the removal of .Values.containerSecurityContext.seccompProfile and .Values.haproxy.containerSecurityContext.seccompProfile (if HAProxy is enabled).
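On such older clusters, one workaround is to null out the field in a values override (a sketch; whether a null entry actually removes the key depends on your Helm version's null-override behaviour):

```yml
containerSecurityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  seccompProfile: null  # drop the v1.19+ field on clusters older than 1.19
  capabilities:
    drop: ["ALL"]
```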