NATS

NATS is an open-source, cloud-native messaging system. It provides a lightweight server that is written in the Go programming language.

This Helm chart is deprecated

Given the stable deprecation timeline, the Bitnami-maintained NATS Helm chart is now located at bitnami/charts.

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc. that we've been keeping here these years. Installation instructions are very similar; just add the bitnami repo and use it during the installation (bitnami/<chart> instead of stable/<chart>):

bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2

To update an existing stable deployment with a chart hosted in the bitnami repository, execute:

bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>

Issues and PRs related to the chart itself will be redirected to bitnami/charts GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue created as a common place for discussion.

TL;DR

bash
$ helm install my-release stable/nats

Introduction

This chart bootstraps a NATS deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the BKPR.

Prerequisites

  • Kubernetes 1.12+
  • Helm 2.11+ or Helm 3.0-beta3+

Installing the Chart

To install the chart with the release name my-release:

bash
$ helm install my-release stable/nats

The command deploys NATS on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

bash
$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Parameters

The following table lists the configurable parameters of the NATS chart and their default values.

Parameter | Description | Default
--- | --- | ---
global.imageRegistry | Global Docker image registry | nil
global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods)
image.registry | NATS image registry | docker.io
image.repository | NATS image name | bitnami/nats
image.tag | NATS image tag | {TAG_NAME}
image.pullPolicy | Image pull policy | IfNotPresent
image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods)
nameOverride | String to partially override nats.fullname template with a string (will prepend the release name) | nil
fullnameOverride | String to fully override nats.fullname template with a string | nil
auth.enabled | Switch to enable/disable client authentication | true
auth.user | Client authentication user | nats_client
auth.password | Client authentication password | random alphanumeric string (10)
auth.token | Client authentication token | nil
clusterAuth.enabled | Switch to enable/disable cluster authentication | true
clusterAuth.user | Cluster authentication user | nats_cluster
clusterAuth.password | Cluster authentication password | random alphanumeric string (10)
clusterAuth.token | Cluster authentication token | nil
debug.enabled | Switch to enable/disable debug on logging | false
debug.trace | Switch to enable/disable trace debug level on logging | false
debug.logtime | Switch to enable/disable logtime on logging | false
maxConnections | Max. number of client connections | nil
maxControlLine | Max. protocol control line | nil
maxPayload | Max. payload | nil
writeDeadline | Duration the server can block on a socket write to a client | nil
replicaCount | Number of NATS nodes | 1
resourceType | NATS cluster resource type under Kubernetes (supported: StatefulSet or Deployment) | statefulset
securityContext.enabled | Enable security context | true
securityContext.fsGroup | Group ID for the container | 1001
securityContext.runAsUser | User ID for the container | 1001
statefulset.updateStrategy | StatefulSet update strategy | OnDelete
statefulset.rollingUpdatePartition | Partition for RollingUpdate strategy | nil
podLabels | Additional labels to be added to pods | {}
priorityClassName | Name of pod priority class | nil
podAnnotations | Annotations to be added to pods | {}
nodeSelector | Node labels for pod assignment | nil
schedulerName | Name of an alternate scheduler | nil
antiAffinity | Anti-affinity for pod assignment | soft
tolerations | Toleration labels for pod assignment | nil
resources | CPU/Memory resource requests/limits | {}
extraArgs | Optional flags for NATS | []
natsFilename | Filename used by several NATS files (binary, configuration file, and pid file) | nats-server
livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
livenessProbe.periodSeconds | How often to perform the probe | 10
livenessProbe.timeoutSeconds | When the probe times out | 5
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1
livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6
readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 5
readinessProbe.periodSeconds | How often to perform the probe | 10
readinessProbe.timeoutSeconds | When the probe times out | 5
readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1
client.service.type | Kubernetes Service type (NATS client) | ClusterIP
client.service.port | NATS client port | 4222
client.service.nodePort | Port to bind to for NodePort service type (NATS client) | nil
client.service.annotations | Annotations for NATS client service | {}
client.service.loadBalancerIP | loadBalancerIP if NATS client service type is LoadBalancer | nil
cluster.service.type | Kubernetes Service type (NATS cluster) | ClusterIP
cluster.service.port | NATS cluster port | 6222
cluster.service.nodePort | Port to bind to for NodePort service type (NATS cluster) | nil
cluster.service.annotations | Annotations for NATS cluster service | {}
cluster.service.loadBalancerIP | loadBalancerIP if NATS cluster service type is LoadBalancer | nil
cluster.connectRetries | Configure number of connect retries for implicit routes | nil
monitoring.service.type | Kubernetes Service type (NATS monitoring) | ClusterIP
monitoring.service.port | NATS monitoring port | 8222
monitoring.service.nodePort | Port to bind to for NodePort service type (NATS monitoring) | nil
monitoring.service.annotations | Annotations for NATS monitoring service | {}
monitoring.service.loadBalancerIP | loadBalancerIP if NATS monitoring service type is LoadBalancer | nil
ingress.enabled | Enable ingress controller resource | false
ingress.hosts[0].name | Hostname for NATS monitoring | nats.local
ingress.hosts[0].path | Path within the URL structure | /
ingress.hosts[0].tls | Utilize TLS backend in ingress | false
ingress.hosts[0].tlsSecret | TLS Secret (certificates) | nats.local-tls-secret
ingress.hosts[0].annotations | Annotations for this host's ingress record | []
ingress.secrets[0].name | TLS Secret name | nil
ingress.secrets[0].certificate | TLS Secret certificate | nil
ingress.secrets[0].key | TLS Secret key | nil
networkPolicy.enabled | Enable NetworkPolicy | false
networkPolicy.allowExternal | Allow external connections | true
metrics.enabled | Enable Prometheus metrics via exporter side-car | false
metrics.image.registry | Prometheus metrics exporter image registry | docker.io
metrics.image.repository | Prometheus metrics exporter image name | bitnami/nats-exporter
metrics.image.tag | Prometheus metrics exporter image tag | {TAG_NAME}
metrics.image.pullPolicy | Prometheus metrics image pull policy | IfNotPresent
metrics.image.pullSecrets | Prometheus metrics image pull secrets | [] (does not add image pull secrets to deployed pods)
metrics.port | Prometheus metrics exporter port | 7777
metrics.podAnnotations | Prometheus metrics exporter annotations | prometheus.io/scrape: "true", prometheus.io/port: "7777"
metrics.resources | Prometheus metrics exporter resource requests/limits | {}
sidecars | Attach additional containers to the pod | nil

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

bash
$ helm install my-release \
  --set auth.enabled=true,auth.user=my-user,auth.password=T0pS3cr3t \
  stable/nats

The above command enables NATS client authentication with my-user as the username and T0pS3cr3t as the password.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

bash
$ helm install my-release -f values.yaml stable/nats

Tip: You can use the default values.yaml
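
As an illustration, a minimal values.yaml overriding the client credentials (equivalent to the --set example above; my-user and T0pS3cr3t are placeholder values) might look like this:

yaml
auth:
  enabled: true
  user: my-user
  password: T0pS3cr3t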

Configuration and installation details

Rolling vs. immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart with updated containers whenever a new version of the main container is published, or when significant changes or critical vulnerabilities require it.

Production configuration and horizontal scaling

This chart includes a values-production.yaml file where you can find some parameters oriented to production configuration in comparison to the regular values.yaml. You can use this file instead of the default one.

  • Number of NATS nodes
diff
- replicaCount: 1
+ replicaCount: 3
  • Enable and set the max. number of client connections, protocol control line, payload and duration the server can block on a socket write to a client
diff
- # maxConnections: 100
- # maxControlLine: 512
- # maxPayload: 65536
- # writeDeadline: "2s"
+ maxConnections: 100
+ maxControlLine: 512
+ maxPayload: 65536
+ writeDeadline: "2s"
  • Enable NetworkPolicy:
diff
- networkPolicy.enabled: false
+ networkPolicy.enabled: true
  • Allow external connections:
diff
- networkPolicy.allowExternal: true
+ networkPolicy.allowExternal: false
  • Enable ingress controller resource:
diff
- ingress.enabled: false
+ ingress.enabled: true
  • Enable Prometheus metrics via exporter side-car:
diff
- metrics.enabled: false
+ metrics.enabled: true
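
Taken together, the production-oriented overrides above correspond to a values file along these lines (a sketch for illustration; the chart's values-production.yaml is the authoritative reference):

yaml
replicaCount: 3

maxConnections: 100
maxControlLine: 512
maxPayload: 65536
writeDeadline: "2s"

networkPolicy:
  enabled: true
  allowExternal: false

ingress:
  enabled: true

metrics:
  enabled: true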

To horizontally scale this chart, modify the replicaCount parameter (for example, --set replicaCount=3 on install or upgrade) to change the number of nodes in your NATS cluster.

Sidecars

If you have a need for additional containers to run within the same pod as NATS (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter. Simply define your container according to the Kubernetes container spec.

yaml
sidecars:
- name: your-image-name
  image: your-image
  imagePullPolicy: Always
  ports:
  - name: portname
    containerPort: 1234

Upgrading

Deploy chart with NATS version 1.x.x

NATS version 2.0.0 renamed the server binary from gnatsd to nats-server, and the chart's defaults have been updated accordingly. However, it is still possible to use the chart to deploy NATS version 1.x.x by setting the natsFilename property:

bash
$ helm install nats-v1 --set natsFilename=gnatsd --set image.tag=1.4.1 stable/nats

To 1.0.0

Backwards compatibility is not guaranteed because this version modifies the labels used on the chart's deployments. Use the workaround below to upgrade from versions prior to 1.0.0. The following example assumes that the release name is nats:

console
$ kubectl delete statefulset nats-nats --cascade=false