RabbitMQ

RabbitMQ is an open source message broker software that implements the Advanced Message Queuing Protocol (AMQP).

This Helm chart is deprecated

Given the stable deprecation timeline, the Bitnami maintained RabbitMQ Helm chart is now located at bitnami/charts.

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc. that we've been keeping here these years. Installation instructions are very similar: just add the bitnami repo and use it during the installation (bitnami/<chart> instead of stable/<chart>).

bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2

To update an existing stable deployment with a chart hosted in the bitnami repository you can execute:

bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>

Issues and PRs related to the chart itself will be redirected to bitnami/charts GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue created as a common place for discussion.

TL;DR

bash
$ helm install my-release stable/rabbitmq

Introduction

This chart bootstraps a RabbitMQ deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the BKPR.

Prerequisites

  • Kubernetes 1.12+
  • Helm 2.11+ or Helm 3.0-beta3+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

bash
$ helm install my-release stable/rabbitmq

The command deploys RabbitMQ on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

bash
$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Parameters

The following table lists the configurable parameters of the RabbitMQ chart and their default values.

Parameter | Description | Default
--------- | ----------- | -------
global.imageRegistry | Global Docker image registry | nil
global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods)
global.storageClass | Global storage class for dynamic provisioning | nil
image.registry | RabbitMQ image registry | docker.io
image.repository | RabbitMQ image name | bitnami/rabbitmq
image.tag | RabbitMQ image tag | {TAG_NAME}
image.pullPolicy | Image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | nil
image.debug | Specify if debug values should be set | false
nameOverride | String to partially override rabbitmq.fullname template with a string (will prepend the release name) | nil
fullnameOverride | String to fully override rabbitmq.fullname template with a string | nil
rbacEnabled | Specify if RBAC is enabled in your cluster | true
podManagementPolicy | Pod management policy | OrderedReady
rabbitmq.username | RabbitMQ application username | user
rabbitmq.password | RabbitMQ application password | random 10 character long alphanumeric string
rabbitmq.existingPasswordSecret | Existing secret with RabbitMQ credentials | nil
rabbitmq.erlangCookie | Erlang cookie | random 32 character long alphanumeric string
rabbitmq.existingErlangSecret | Existing secret with RabbitMQ Erlang cookie | nil
rabbitmq.plugins | List of plugins to enable | rabbitmq_management rabbitmq_peer_discovery_k8s
rabbitmq.extraPlugins | Extra plugins to enable | nil
rabbitmq.clustering.address_type | Switch clustering mode | ip or hostname
rabbitmq.clustering.k8s_domain | Customize internal k8s cluster domain | cluster.local
rabbitmq.clustering.rebalance | Rebalance master for queues in cluster when new replica is created | false
rabbitmq.logs | Value for the RABBITMQ_LOGS environment variable | -
rabbitmq.setUlimitNofiles | Specify if max file descriptor limit should be set | true
rabbitmq.ulimitNofiles | Max file descriptor limit | 65536
rabbitmq.maxAvailableSchedulers | RabbitMQ maximum available scheduler threads | 2
rabbitmq.onlineSchedulers | RabbitMQ online scheduler threads | 1
rabbitmq.env | RabbitMQ environment variables | {}
rabbitmq.configuration | Required cluster configuration | See values.yaml
rabbitmq.extraConfiguration | Extra configuration to add to rabbitmq.conf | See values.yaml
rabbitmq.advancedConfiguration | Extra configuration (in classic format) to add to advanced.config | See values.yaml
rabbitmq.tls.enabled | Enable TLS support for RabbitMQ | false
rabbitmq.tls.failIfNoPeerCert | When set to true, TLS connections are rejected if the client fails to provide a certificate | true
rabbitmq.tls.sslOptionsVerify | Should peer verification be enabled? | verify_peer
rabbitmq.tls.caCertificate | CA certificate | Certificate Authority (CA) bundle content
rabbitmq.tls.serverCertificate | Server certificate | Server certificate content
rabbitmq.tls.serverKey | Server key | Server private key content
rabbitmq.tls.existingSecret | Existing secret with certificate content for RabbitMQ credentials | nil
ldap.enabled | Enable LDAP support | false
ldap.server | LDAP server | ""
ldap.port | LDAP port | 389
ldap.user_dn_pattern | DN used to bind to LDAP | cn=${username},dc=example,dc=org
ldap.tls.enabled | Enable TLS for LDAP connections | false (if set to true, check the advancedConfiguration parameter in values.yaml)
service.type | Kubernetes Service type | ClusterIP
service.port | AMQP port | 5672
service.loadBalancerIP | LoadBalancerIP for the service | nil
service.tlsPort | AMQP TLS port | 5671
service.distPort | Erlang distribution server port | 25672
service.nodePort | Node port override, if serviceType is NodePort | random available between 30000-32767
service.nodeTlsPort | Node port override for TLS, if serviceType is NodePort | random available between 30000-32767
service.managerPort | RabbitMQ Manager port | 15672
service.extraPorts | Extra ports to expose in the service | nil
service.extraContainerPorts | Extra ports to be included in the container spec, primarily informational | nil
persistence.enabled | Use a PVC to persist data | true
service.annotations | Service annotations | {}
schedulerName | Name of the k8s scheduler (other than default) | nil
persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation)
persistence.existingClaim | RabbitMQ data Persistent Volume existing claim name, evaluated as a template | ""
persistence.accessMode | Use volume as ReadOnly or ReadWrite | ReadWriteOnce
persistence.size | Size of data volume | 8Gi
persistence.path | Mount path of the data volume | /opt/bitnami/rabbitmq/var/lib/rabbitmq
securityContext.enabled | Enable security context | true
securityContext.fsGroup | Group ID for the container | 1001
securityContext.runAsUser | User ID for the container | 1001
resources | Resource needs and limits to apply to the pod | {}
replicas | Replica count | 1
priorityClassName | Pod priority class name | ""
networkPolicy.enabled | Enable NetworkPolicy | false
networkPolicy.allowExternal | Don't require client label for connections | true
networkPolicy.additionalRules | Additional NetworkPolicy rules | nil
nodeSelector | Node labels for pod assignment | {}
affinity | Affinity settings for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
updateStrategy | StatefulSet update strategy policy | RollingUpdate
ingress.enabled | Enable ingress resource for Management console | false
ingress.hostName | Hostname for your RabbitMQ installation | nil
ingress.path | Path within the URL structure | /
ingress.tls | Enable ingress with TLS | false
ingress.tlsSecret | TLS secret to be used | myTlsSecret
ingress.annotations | Ingress annotations as an array | []
livenessProbe.enabled | Enable the liveness probe | true
livenessProbe.initialDelaySeconds | Initial delay in seconds | 120
livenessProbe.timeoutSeconds | Timeout in seconds | 20
livenessProbe.periodSeconds | Period in seconds | 30
livenessProbe.failureThreshold | Number of failures | 6
livenessProbe.successThreshold | Number of successes | 1
podDisruptionBudget | Pod Disruption Budget settings | {}
readinessProbe.enabled | Enable the readiness probe | true
readinessProbe.initialDelaySeconds | Initial delay in seconds | 10
readinessProbe.timeoutSeconds | Timeout in seconds | 20
readinessProbe.periodSeconds | Period in seconds | 30
readinessProbe.failureThreshold | Number of failures | 3
readinessProbe.successThreshold | Number of successes | 1
metrics.enabled | Start a side-car Prometheus exporter | false
metrics.image.registry | Exporter image registry | docker.io
metrics.image.repository | Exporter image name | bitnami/rabbitmq-exporter
metrics.image.tag | Exporter image tag | {TAG_NAME}
metrics.image.pullPolicy | Exporter image pull policy | IfNotPresent
metrics.livenessProbe.enabled | Enable the exporter liveness probe | true
metrics.livenessProbe.initialDelaySeconds | Initial delay in seconds | 15
metrics.livenessProbe.timeoutSeconds | Timeout in seconds | 5
metrics.livenessProbe.periodSeconds | Period in seconds | 30
metrics.livenessProbe.failureThreshold | Number of failures | 6
metrics.livenessProbe.successThreshold | Number of successes | 1
metrics.readinessProbe.enabled | Enable the exporter readiness probe | true
metrics.readinessProbe.initialDelaySeconds | Initial delay in seconds | 5
metrics.readinessProbe.timeoutSeconds | Timeout in seconds | 5
metrics.readinessProbe.periodSeconds | Period in seconds | 30
metrics.readinessProbe.failureThreshold | Number of failures | 3
metrics.readinessProbe.successThreshold | Number of successes | 1
metrics.serviceMonitor.enabled | Create a ServiceMonitor resource for scraping metrics using PrometheusOperator | false
metrics.serviceMonitor.namespace | Namespace where the ServiceMonitor resource should be created | nil
metrics.serviceMonitor.interval | Interval at which metrics should be scraped | 30s
metrics.serviceMonitor.scrapeTimeout | Timeout after which the scrape is ended | nil
metrics.serviceMonitor.relabellings | Metric relabellings to add to the scrape endpoint | nil
metrics.serviceMonitor.honorLabels | honorLabels chooses the metric's labels on collisions with target labels | false
metrics.serviceMonitor.additionalLabels | Used to pass labels that are required by the installed Prometheus Operator | {}
metrics.serviceMonitor.release | Used to pass a release label that sometimes should be custom for the Prometheus Operator | nil
metrics.prometheusRule.enabled | Set this to true to create prometheusRules for the Prometheus Operator | false
metrics.prometheusRule.additionalLabels | Additional labels that can be used so prometheusRules will be discovered by Prometheus | {}
metrics.prometheusRule.namespace | Namespace where the prometheusRules resource should be created | same namespace as rabbitmq
metrics.prometheusRule.rules | Rules to be created; check values for an example | []
metrics.port | Prometheus metrics exporter port | 9419
metrics.env | Exporter configuration environment variables | {}
metrics.resources | Exporter resource requests/limits | nil
metrics.capabilities | Comma-separated list of extended scraping capabilities supported by the target RabbitMQ server | bert,no_sort
podLabels | Additional labels for the statefulset pod(s) | {}
volumePermissions.enabled | Enable init container that changes volume permissions in the data directory (for cases where the default k8s runAsUser and fsUser values do not work) | false
volumePermissions.image.registry | Init container volume-permissions image registry | docker.io
volumePermissions.image.repository | Init container volume-permissions image name | bitnami/minideb
volumePermissions.image.tag | Init container volume-permissions image tag | buster
volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | Always
volumePermissions.resources | Init container resource requests/limits | nil
forceBoot.enabled | Executes 'rabbitmqctl force_boot' to force boot a cluster shut down unexpectedly in an unknown order. Use it only if you prefer availability over integrity. | false
extraSecrets | Optionally specify extra secrets to be created by the chart | {}

The above parameters map to the env variables defined in bitnami/rabbitmq. For more information please refer to the bitnami/rabbitmq image documentation.

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

bash
$ helm install my-release \
  --set rabbitmq.username=admin,rabbitmq.password=secretpassword,rabbitmq.erlangCookie=secretcookie \
    stable/rabbitmq

The above command sets the RabbitMQ admin username and password to admin and secretpassword respectively. Additionally, the Erlang cookie is set to secretcookie.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

bash
$ helm install my-release -f values.yaml stable/rabbitmq

Tip: You can use the default values.yaml

Configuration and installation details

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.

Production configuration and horizontal scaling

This chart includes a values-production.yaml file where you can find some parameters oriented to production configuration in comparison to the regular values.yaml. You can use this file instead of the default one.

  • Resource needs and limits to apply to the pod:
diff
- resources: {}
+ resources:
+   requests:
+     memory: 256Mi
+     cpu: 100m
  • Replica count:
diff
- replicas: 1
+ replicas: 3
  • Node labels for pod assignment:
diff
- nodeSelector: {}
+ nodeSelector:
+   beta.kubernetes.io/arch: amd64
  • Enable ingress with TLS:
diff
- ingress.tls: false
+ ingress.tls: true
  • Start a side-car prometheus exporter:
diff
- metrics.enabled: false
+ metrics.enabled: true
  • Enable init container that changes volume permissions in the data directory:
diff
- volumePermissions.enabled: false
+ volumePermissions.enabled: true
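
To try the production configuration, one approach is to download the chart files locally so that values-production.yaml is available. A sketch (helm fetch is the Helm 2 command; Helm 3 renames it to helm pull):

bash
$ helm fetch --untar stable/rabbitmq
$ helm install my-release -f rabbitmq/values-production.yaml stable/rabbitmq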

To horizontally scale this chart once it has been deployed you have two options:

  • Use the kubectl scale command

  • Upgrade the chart with the following parameters:

console
replicas=3
rabbitmq.password="$RABBITMQ_PASSWORD"
rabbitmq.erlangCookie="$RABBITMQ_ERLANG_COOKIE"

Note: it is mandatory to specify the password and erlangCookie that were set the first time the chart was installed when upgrading the chart. Otherwise, new pods won't be able to join the cluster.
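
For example, a sketch assuming the release name my-release and the StatefulSet name my-release-rabbitmq (the actual name is derived from the chart's fullname template, so verify it with kubectl get statefulsets first):

bash
# Option 1: scale the StatefulSet directly
$ kubectl scale statefulset my-release-rabbitmq --replicas=3

# Option 2: upgrade the release, passing the original credentials
$ helm upgrade my-release \
    --set replicas=3 \
    --set rabbitmq.password="$RABBITMQ_PASSWORD" \
    --set rabbitmq.erlangCookie="$RABBITMQ_ERLANG_COOKIE" \
    stable/rabbitmq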

Load Definitions

It is possible to load a RabbitMQ definitions file to configure RabbitMQ. Because definitions may contain RabbitMQ credentials, store the JSON as a Kubernetes secret. Within the secret's data, choose a key name that corresponds to the desired load definitions filename (e.g. load_definition.json) and use the JSON object as the value. For example:

yaml
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ]
    }

Then, specify the management.load_definitions property as an extraConfiguration pointing to the load definition file path within the container (e.g. /app/load_definition.json) and set loadDefinition.enabled to true.

Any load definitions specified will be available in the container at /app.

Loading a definition will take precedence over any configuration done through Helm values.
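
Putting this together with the rabbitmq-load-definition secret created above, the relevant values would look like the following sketch:

yaml
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json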

If needed, you can use extraSecrets to let the chart create the secret for you. This way, you don't need to manually create it before deploying a release. For example:

yaml
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json

Enabling TLS support

To enable TLS support, you must generate the certificates as described in the RabbitMQ documentation.

You must include the caCertificate, serverCertificate and serverKey contents in your values.yaml:

yaml
  caCertificate: |-
    -----BEGIN CERTIFICATE-----
    MIIDRTCCAi2gAwIBAgIJAJPh+paO6a3cMA0GCSqGSIb3DQEBCwUAMDExIDAeBgNV
    ...
    -----END CERTIFICATE-----
  serverCertificate: |-
    -----BEGIN CERTIFICATE-----
    MIIDqjCCApKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAxMSAwHgYDVQQDDBdUTFNH
    ...
    -----END CERTIFICATE-----
  serverKey: |-
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEA2iX3M4d3LHrRAoVUbeFZN3EaGzKhyBsz7GWwTgETiNj+AL7p
    ....
    -----END RSA PRIVATE KEY-----

This will generate a secret with the certificates, but it is also possible to specify an existing secret using existingSecret: name-of-existing-secret-to-rabbitmq. The secret must be of type kubernetes.io/tls.
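
For example, a minimal values sketch pointing the chart at a pre-existing TLS secret:

yaml
rabbitmq:
  tls:
    enabled: true
    existingSecret: name-of-existing-secret-to-rabbitmq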

Disabling failIfNoPeerCert allows a TLS connection even if the client fails to provide a certificate.

sslOptionsVerify: when set to verify_peer, the node performs peer verification on the certificate presented by the client. When set to verify_none, peer verification is disabled and no certificate exchange is performed.

LDAP

LDAP support can be enabled in the chart by specifying the ldap.* parameters while creating a release. The following parameters should be configured to properly enable LDAP support in the chart.

  • ldap.enabled: Enable LDAP support. Defaults to false.
  • ldap.server: LDAP server host. No defaults.
  • ldap.port: LDAP server port. Defaults to 389.
  • ldap.user_dn_pattern: DN used to bind to LDAP. Defaults to cn=${username},dc=example,dc=org.
  • ldap.tls.enabled: Enable TLS for LDAP connections. Defaults to false.

For example:

console
ldap.enabled="true"
ldap.server="my-ldap-server"
ldap.port="389"
ldap.user_dn_pattern="cn=${username},dc=example,dc=org"

If ldap.tls.enabled is set to true, consider using ldap.port=636 and checking the settings in the advancedConfiguration.
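
For instance, a sketch of the LDAPS variant described above (my-ldap-server is a placeholder; TLS-specific options still belong in advancedConfiguration):

yaml
ldap:
  enabled: true
  server: my-ldap-server
  port: "636"
  tls:
    enabled: true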

Common issues

  • Changing the password through RabbitMQ's UI can make the pod fail due to the default liveness probes. If you do so, remember to make the chart aware of the new password. Updating the default secret with the password you set through RabbitMQ's UI will automatically recreate the pods. If you are using your own secret, you may have to manually recreate the pods.
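
For example, to patch the new password into the default secret, a sketch (the my-release-rabbitmq secret name and rabbitmq-password key follow the chart's usual conventions, so verify them with kubectl get secret first):

bash
$ kubectl patch secret my-release-rabbitmq \
    -p "{\"data\":{\"rabbitmq-password\":\"$(echo -n "$NEW_PASSWORD" | base64)\"}}"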

Persistence

The Bitnami RabbitMQ image stores the RabbitMQ data and configurations at the /opt/bitnami/rabbitmq/var/lib/rabbitmq/ path of the container.

The chart mounts a Persistent Volume at this location. By default, the volume is created using dynamic volume provisioning. An existing PersistentVolumeClaim can also be defined.

Existing PersistentVolumeClaims

  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
bash
$ helm install my-release --set persistence.existingClaim=PVC_NAME stable/rabbitmq
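
For reference, a minimal PersistentVolumeClaim sketch matching the chart defaults (ReadWriteOnce access, 8Gi; PVC_NAME is a placeholder):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: PVC_NAME
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi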

Adjust permissions of the persistence volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
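
For example:

bash
$ helm install my-release --set volumePermissions.enabled=true stable/rabbitmq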

Upgrading

To 6.0.0

This new version updates the RabbitMQ image to a new version based on bash instead of node.js. However, since this chart overwrites the container's command, the changes to the container shouldn't affect the chart. To upgrade, you may need to enable the fastBoot option, as is already the case when upgrading from 5.X to 5.Y.
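
Assuming the option referred to is the forceBoot.enabled parameter from the table above (this mapping is an assumption), an upgrade sketch would be:

bash
$ helm upgrade my-release --set forceBoot.enabled=true stable/rabbitmq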

To 5.0.0

This major release changes the clustering method from ip to hostname. This change is needed to fix persistence: the data directory now depends on the hostname, which is stable, instead of the pod IP, which might change.

IMPORTANT: Note that if you upgrade from a previous version you will lose your data.

To 3.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 3.0.0. The following example assumes that the release name is rabbitmq:

console
$ kubectl delete statefulset rabbitmq --cascade=false
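
Once the StatefulSet is deleted (--cascade=false leaves its pods running), upgrade the release as usual; a sketch assuming the rabbitmq release name from above:

console
$ helm upgrade rabbitmq stable/rabbitmq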