
⚠️ DEPRECATED

This chart is deprecated and no longer maintained.

It is recommended to use the Bitnami maintained MongoDB chart which has a similar feature set.

Prerequisites Details

  • Kubernetes 1.9+
  • Kubernetes beta APIs enabled only if podDisruptionBudget is enabled
  • PV support on the underlying infrastructure

StatefulSet Details

StatefulSet Caveats

Chart Details

This chart implements a dynamically scalable MongoDB replica set using Kubernetes StatefulSets and Init Containers.

Installing the Chart

To install the chart with the release name my-release:

```console
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install --name my-release stable/mongodb-replicaset
```

Configuration

The following table lists the configurable parameters of the mongodb chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `replicas` | Number of replicas in the replica set | `3` |
| `replicaSetName` | The name of the replica set | `rs0` |
| `skipInitialization` | If `true`, skip replica set initialization during bootstrapping | `false` |
| `podDisruptionBudget` | Pod disruption budget | `{}` |
| `updateStrategy` | Update strategy | `nil` |
| `port` | MongoDB port | `27017` |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `installImage.repository` | Image name for the install container | `unguiculus/mongodb-install` |
| `installImage.tag` | Image tag for the install container | `0.7` |
| `installImage.pullPolicy` | Image pull policy for the init container that establishes the replica set | `IfNotPresent` |
| `copyConfigImage.repository` | Image name for the copy config init container | `busybox` |
| `copyConfigImage.tag` | Image tag for the copy config init container | `1.29.3` |
| `copyConfigImage.pullPolicy` | Image pull policy for the copy config init container | `IfNotPresent` |
| `image.repository` | MongoDB image name | `mongo` |
| `image.tag` | MongoDB image tag | `3.6` |
| `image.pullPolicy` | MongoDB image pull policy | `IfNotPresent` |
| `serviceAccount` | Name of a ServiceAccount to be created and applied to StatefulSet pods. Won't be created by default. | `""` |
| `podAnnotations` | Annotations to be added to MongoDB pods | `{}` |
| `statefulSetAnnotations` | Annotations to be added to the MongoDB StatefulSet | `{}` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `999` |
| `securityContext.runAsUser` | User ID for the container | `999` |
| `securityContext.runAsNonRoot` | Run the container as a non-root user | `true` |
| `resources` | Pod resource requests and limits | `{}` |
| `persistentVolume.enabled` | If `true`, persistent volume claims are created | `true` |
| `persistentVolume.storageClass` | Persistent volume storage class | `""` |
| `persistentVolume.accessModes` | Persistent volume access modes | `[ReadWriteOnce]` |
| `persistentVolume.size` | Persistent volume size | `10Gi` |
| `persistentVolume.annotations` | Persistent volume annotations | `{}` |
| `terminationGracePeriodSeconds` | Duration in seconds the pod needs to terminate gracefully | `30` |
| `tls.enabled` | Enable MongoDB TLS support, including authentication | `false` |
| `tls.mode` | Set the SSL operation mode (`disabled`, `allowSSL`, `preferSSL`, `requireSSL`) | `requireSSL` |
| `tls.cacert` | The CA certificate used for the members | Our self-signed CA certificate |
| `tls.cakey` | The CA key used for the members | Our key for the self-signed CA certificate |
| `init.resources` | Pod resource requests and limits (for init containers) | `{}` |
| `init.timeout` | The amount of time in seconds to wait for bootstrap to finish | `900` |
| `metrics.enabled` | Enable Prometheus-compatible metrics for pods and replica sets | `false` |
| `metrics.image.repository` | Image name for the metrics exporter | `bitnami/mongodb-exporter` |
| `metrics.image.tag` | Image tag for the metrics exporter | `0.9.0-debian-9-r2` |
| `metrics.image.pullPolicy` | Image pull policy for the metrics exporter | `IfNotPresent` |
| `metrics.port` | Port for the metrics exporter | `9216` |
| `metrics.path` | URL path to expose metrics | `/metrics` |
| `metrics.resources` | Metrics pod resource requests and limits | `{}` |
| `metrics.securityContext.enabled` | Enable security context | `true` |
| `metrics.securityContext.fsGroup` | Group ID for the metrics container | `1001` |
| `metrics.securityContext.runAsUser` | User ID for the metrics container | `1001` |
| `metrics.prometheusServiceDiscovery` | Adds annotations for Prometheus service discovery | `true` |
| `auth.enabled` | If `true`, keyfile access control is enabled | `false` |
| `auth.key` | Key for internal authentication | `""` |
| `auth.existingKeySecret` | If set, an existing secret with this name for the key is used | `""` |
| `auth.adminUser` | MongoDB admin user | `""` |
| `auth.adminPassword` | MongoDB admin password | `""` |
| `auth.metricsUser` | MongoDB clusterMonitor user | `""` |
| `auth.metricsPassword` | MongoDB clusterMonitor password | `""` |
| `auth.existingMetricsSecret` | If set, an existing secret with this name is used for the metrics user | `""` |
| `auth.existingAdminSecret` | If set, an existing secret with this name is used for the admin user | `""` |
| `secretAnnotations` | Annotations to be added to the secret if auth is enabled | `{}` |
| `serviceAnnotations` | Annotations to be added to the service | `{}` |
| `configmap` | Content of the MongoDB config file | `""` |
| `initMongodStandalone` | If set, the initContainer executes the given script in standalone mode | `""` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Node/pod affinities | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `priorityClassName` | Pod priority class name | `""` |
| `livenessProbe.failureThreshold` | Liveness probe failure threshold | `3` |
| `livenessProbe.initialDelaySeconds` | Liveness probe initial delay seconds | `30` |
| `livenessProbe.periodSeconds` | Liveness probe period seconds | `10` |
| `livenessProbe.successThreshold` | Liveness probe success threshold | `1` |
| `livenessProbe.timeoutSeconds` | Liveness probe timeout seconds | `5` |
| `readinessProbe.failureThreshold` | Readiness probe failure threshold | `3` |
| `readinessProbe.initialDelaySeconds` | Readiness probe initial delay seconds | `5` |
| `readinessProbe.periodSeconds` | Readiness probe period seconds | `10` |
| `readinessProbe.successThreshold` | Readiness probe success threshold | `1` |
| `readinessProbe.timeoutSeconds` | Readiness probe timeout seconds | `1` |
| `startupProbe.failureThreshold` | Startup probe failure threshold | `60` |
| `startupProbe.initialDelaySeconds` | Startup probe initial delay seconds | `5` |
| `startupProbe.periodSeconds` | Startup probe period seconds | `10` |
| `startupProbe.successThreshold` | Startup probe success threshold | `2` |
| `startupProbe.timeoutSeconds` | Startup probe timeout seconds | `5` |
| `extraContainers` | Additional containers to add to the StatefulSet | `[]` |
| `extraVars` | Set environment variables for the main container | `{}` |
| `extraLabels` | Additional labels to add to resources | `{}` |
| `extraVolumes` | Additional volumes to add to the resources | `[]` |
| `clientService.enabled` | Enables the headless client service | `true` |
| `global.namespaceOverride` | Override the deployment namespace | Not set (`Release.Namespace`) |

MongoDB config file

All options that depend on the chart configuration are supplied as command-line arguments to mongod. By default, the chart creates an empty config file. Entries may be added via the configmap configuration value.

Specify each parameter using the --set key=value[,key=value] argument to helm install.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
helm install --name my-release -f values.yaml stable/mongodb-replicaset
```

Tip: You can use the default values.yaml
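
For example, a minimal override file might look like the following (all values shown are purely illustrative, not recommendations):

```yaml
# values.yaml (example overrides)
replicas: 5
auth:
  enabled: true
persistentVolume:
  size: 20Gi
```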

Once all 3 pods are running, you can run the test.sh script in this directory, which will insert a key into the primary and check the secondaries for output. This script requires that the $RELEASE_NAME environment variable be set in order to access the pods.

Authentication

By default, this chart creates a MongoDB replica set without authentication. Authentication can be enabled using the parameter auth.enabled. Once enabled, keyfile access control is set up and an admin user with root privileges is created. User credentials and the keyfile may be specified directly. Alternatively, existing secrets may be provided: the secret for the admin user must contain the keys user and password, and the secret for the keyfile must contain key.txt. The user is created with full root permissions but is restricted to the admin database for security purposes. It can be used to create additional users with more specific permissions.
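
A keyfile suitable for auth.key can be generated with openssl. The secret names in the comments below are illustrative only; the chart expects the keys user/password in the admin secret and key.txt in the keyfile secret, as noted above:

```shell
# Generate a random key for keyfile-based internal authentication
openssl rand -base64 756 > key.txt

# Illustrative secret names for auth.existingKeySecret / auth.existingAdminSecret:
#   kubectl create secret generic mongo-keyfile --from-file=key.txt
#   kubectl create secret generic mongo-admin \
#       --from-literal=user=admin --from-literal=password=changeme
```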

To connect to the mongo shell with authentication enabled, use a command similar to the following (substituting values as appropriate):

```shell
kubectl exec -it mongodb-replicaset-0 -- mongo mydb -u admin -p password --authenticationDatabase admin
```

TLS support

To enable full TLS encryption set tls.enabled to true. It is recommended to create your own CA by executing:

```console
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
```

After that, paste the base64-encoded (`cat ca.key | base64 -w0`) cert and key into the fields tls.cacert and tls.cakey. Adapt the configmap for the replica set as follows:

```yaml
configmap:
  storage:
    dbPath: /data/db
  net:
    port: 27017
    ssl:
      mode: requireSSL
      CAFile: /data/configdb/tls.crt
      PEMKeyFile: /work-dir/mongo.pem
      # Set to false to require mutual TLS encryption
      allowConnectionsWithoutCertificates: true
  replication:
    replSetName: rs0
  security:
    authorization: enabled
    # Uncomment to enable mutual TLS encryption
    # clusterAuthMode: x509
    keyFile: /keydir/key.txt
```
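
The CA generation and base64-encoding steps can also be scripted end to end; this sketch writes a values file (the name tls-values.yaml is just an example) that could be passed to helm install -f:

```shell
# Create the self-signed CA (same commands as above)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"

# Base64-encode the cert and key into the tls.cacert / tls.cakey values
cat <<EOF > tls-values.yaml
tls:
  enabled: true
  cacert: $(base64 -w0 < ca.crt)
  cakey: $(base64 -w0 < ca.key)
EOF
```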

To access the cluster you need one of the certificates generated during cluster setup (found at /work-dir/mongo.pem inside any of the containers), or you can generate your own:

```console
$ cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $HOSTNAME1
DNS.2 = $HOSTNAME2
EOL
$ openssl genrsa -out mongo.key 2048
$ openssl req -new -key mongo.key -out mongo.csr -subj "/CN=$HOSTNAME" -config openssl.cnf
$ openssl x509 -req -in mongo.csr \
    -CA $MONGOCACRT -CAkey $MONGOCAKEY -CAcreateserial \
    -out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
$ rm mongo.csr
$ cat mongo.crt mongo.key > mongo.pem
$ rm mongo.key mongo.crt
```

Please ensure that you replace $HOSTNAME with your actual hostname and $HOSTNAME1, $HOSTNAME2, etc. with any alternative hostnames that should be allowed to access the MongoDB replica set. You should now be able to authenticate to MongoDB with your mongo.pem certificate:

```console
mongo --ssl --sslCAFile=ca.crt --sslPEMKeyFile=mongo.pem --eval "db.adminCommand('ping')"
```

Prometheus metrics

Enabling metrics as follows will allow each replica set pod to export Prometheus-compatible metrics on server status, individual replica set information, replication oplogs, and the storage engine.

```yaml
metrics:
  enabled: true
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: "/metrics"
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
```

More information on the metrics exposed by MongoDB Exporter is available in its documentation.

Deep dive

Because the pod names depend on the release name chosen, the following examples use the environment variable $RELEASE_NAME. For example, if the Helm release name is messy-hydra, one would need to set the following before proceeding. The example scripts below assume 3 pods only.

```console
export RELEASE_NAME=messy-hydra
```

Cluster Health

```console
for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(db.serverStatus())"'; done
```

Failover

One can check the roles being played by each node by using the following:

```console
$ for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done

MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
{
  "hosts" : [
    "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
    "messy-hydra-mongodb-1.messy-hydra-mongodb.default.svc.cluster.local:27017",
    "messy-hydra-mongodb-2.messy-hydra-mongodb.default.svc.cluster.local:27017"
  ],
  "setName" : "rs0",
  "setVersion" : 3,
  "ismaster" : true,
  "secondary" : false,
  "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
  "me" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
  "electionId" : ObjectId("7fffffff0000000000000001"),
  "maxBsonObjectSize" : 16777216,
  "maxMessageSizeBytes" : 48000000,
  "maxWriteBatchSize" : 1000,
  "localTime" : ISODate("2016-09-13T01:10:12.680Z"),
  "maxWireVersion" : 4,
  "minWireVersion" : 0,
  "ok" : 1
}
```

This lets us see which member is primary.

Let us now test persistence and failover. First, we insert a key (in the example below, we assume pod 0 is the primary):

```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-0 -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"

MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "nInserted" : 1 }
```

Watch existing members:

```console
$ kubectl run --attach bbox --image=mongo:3.6 --restart=Never --env="RELEASE_NAME=$RELEASE_NAME" -- sh -c 'while true; do for i in 0 1 2; do echo $RELEASE_NAME-mongodb-replicaset-$i $(mongo --host=$RELEASE_NAME-mongodb-replicaset-$i.$RELEASE_NAME-mongodb-replicaset --eval="printjson(rs.isMaster())" | grep primary); sleep 1; done; done';

Waiting for pod default/bbox2 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```

Kill the primary and watch as a new primary is elected:

```console
$ kubectl delete pod $RELEASE_NAME-mongodb-replicaset-0

pod "messy-hydra-mongodb-0" deleted
```

Delete all pods and let the StatefulSet controller bring them back up:

```console
$ kubectl delete po -l "app=mongodb-replicaset,release=$RELEASE_NAME"
$ kubectl get po --watch-only
NAME                    READY     STATUS        RESTARTS   AGE
messy-hydra-mongodb-0   0/1       Pending   0         0s
messy-hydra-mongodb-0   0/1       Pending   0         0s
messy-hydra-mongodb-0   0/1       Pending   0         7s
messy-hydra-mongodb-0   0/1       Init:0/2   0         7s
messy-hydra-mongodb-0   0/1       Init:1/2   0         27s
messy-hydra-mongodb-0   0/1       Init:1/2   0         28s
messy-hydra-mongodb-0   0/1       PodInitializing   0         31s
messy-hydra-mongodb-0   0/1       Running   0         32s
messy-hydra-mongodb-0   1/1       Running   0         37s
messy-hydra-mongodb-1   0/1       Pending   0         0s
messy-hydra-mongodb-1   0/1       Pending   0         0s
messy-hydra-mongodb-1   0/1       Init:0/2   0         0s
messy-hydra-mongodb-1   0/1       Init:1/2   0         20s
messy-hydra-mongodb-1   0/1       Init:1/2   0         21s
messy-hydra-mongodb-1   0/1       PodInitializing   0         24s
messy-hydra-mongodb-1   0/1       Running   0         25s
messy-hydra-mongodb-1   1/1       Running   0         30s
messy-hydra-mongodb-2   0/1       Pending   0         0s
messy-hydra-mongodb-2   0/1       Pending   0         0s
messy-hydra-mongodb-2   0/1       Init:0/2   0         0s
messy-hydra-mongodb-2   0/1       Init:1/2   0         21s
messy-hydra-mongodb-2   0/1       Init:1/2   0         22s
messy-hydra-mongodb-2   0/1       PodInitializing   0         25s
messy-hydra-mongodb-2   0/1       Running   0         26s
messy-hydra-mongodb-2   1/1       Running   0         30s

...
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```

Check the previously inserted key:

```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-1 -- mongo --eval="rs.slaveOk(); db.test.find({key1:{\$exists:true}}).forEach(printjson)"

MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "_id" : ObjectId("57b180b1a7311d08f2bfb617"), "key1" : "value1" }
```

Scaling

Scaling should be managed with helm upgrade; this is the recommended way to change the number of replicas.
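
For example, assuming the release name my-release from the install example above, growing the replica set from 3 to 5 members is a single upgrade; the chart is dynamically scalable, so new pods join the existing replica set as they come up:

```console
helm upgrade my-release stable/mongodb-replicaset --set replicas=5
```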

Indexes and Maintenance

You can run MongoDB in standalone mode and execute JavaScript code on each replica at initContainer time using initMongodStandalone. This allows you to create indexes on replica sets following best practices.

Example: Creating Indexes

```js
initMongodStandalone: |+
  db = db.getSiblingDB("mydb")
  db.my_users.createIndex({email: 1})
```

Tail the logs to debug running index builds or to follow their progress:

```sh
kubectl exec -it $RELEASE_NAME-mongodb-replicaset-0 -c bootstrap -- tail -f /work-dir/log.txt
```

Migrate existing ReplicaSets into Kubernetes

If you have an existing ReplicaSet that is currently deployed outside of Kubernetes and want to move it into a cluster, you can do so by using the skipInitialization flag.

First, set the skipInitialization variable to true in values.yaml and install the Helm chart. That way you end up with uninitialized MongoDB pods that can be added to the existing ReplicaSet.
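
The corresponding values override (matching the skipInitialization parameter in the configuration table) is simply:

```yaml
skipInitialization: true
```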

Next, ensure correct DNS resolution of all ReplicaSet members. In Kubernetes you can, for example, use an ExternalName service.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb01
  namespace: mongo
spec:
  type: ExternalName
  externalName: mongodb01.mydomain.com
```

If you also put each StatefulSet member behind a load balancer, the ReplicaSet members outside the cluster will also be able to reach the pods inside the cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-0
  namespace: mongo
spec:
  selector:
    statefulset.kubernetes.io/pod-name: mongodb-0
  ports:
    - port: 27017
      targetPort: 27017
  type: LoadBalancer
```

Now all that is left to do is put the LoadBalancer IPs into the /etc/hosts file (or set up the DNS resolution in another way):

```
1.2.3.4       mongodb-0
5.6.7.8       mongodb-1
```

With a setup like this, each replica set member can resolve the DNS entries of the others, and you can add the new pods to your existing MongoDB cluster as if they were normal nodes.

Of course, you need to get your security settings right. Enforced TLS is a good idea in a setup like this; also make sure that you enable authentication and configure your firewall appropriately.

Once the migration is complete, remove the old nodes from the replica set.
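
Removal is done from the current primary with rs.remove; $PRIMARY_HOST below is a placeholder for your primary's address, and the member address matches the ExternalName example above:

```console
mongo --host $PRIMARY_HOST --eval 'rs.remove("mongodb01.mydomain.com:27017")'
```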