⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

RethinkDB 2.3.5 Helm Chart

DEPRECATION NOTICE

This chart is deprecated and no longer supported.

Prerequisites Details

  • Kubernetes 1.5+ with Beta APIs enabled.
  • PV support on the underlying infrastructure.

StatefulSet Details

StatefulSet Caveats

Acknowledgment of Previous Works

I have heavily borrowed and extended code (peer discovery and probe) from the following project to build this Helm Chart and Docker image: https://github.com/rosskukulinski/kubernetes-rethinkdb-cluster

Chart Details

This chart implements a dynamically scalable RethinkDB Cluster using Kubernetes StatefulSets.

Installing the Chart

To install the chart with the release name my-release:

console
$ helm install --name my-release stable/rethinkdb

Configuration

The following table lists the configurable parameters of the rethinkdb chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| image.name | Custom RethinkDB image name for auto-joining and probe | codylundquist/helm-rethinkdb-cluster |
| image.tag | Custom RethinkDB image tag | 0.1.0 |
| image.pullPolicy | Custom RethinkDB image pull policy | IfNotPresent |
| cluster.replicas | Number of RethinkDB Cluster replicas | 3 |
| cluster.resources | Resource configuration for each RethinkDB Cluster Pod | {} |
| cluster.podAnnotations | Annotations to be added to RethinkDB Cluster Pods | {} |
| cluster.service.annotations | Annotations to be added to the RethinkDB Cluster Service | {} |
| cluster.storageClass.enabled | If true, create a StorageClass for the cluster. Note: you must set a provisioner | false |
| cluster.storageClass.provisioner | Provisioner definition for the StorageClass | undefined |
| cluster.storageClass.parameters | Parameters for the StorageClass | undefined |
| cluster.persistentVolume.enabled | If true, persistent volume claims are created | true |
| cluster.persistentVolume.storageClass | Persistent volume storage class | default |
| cluster.persistentVolume.accessMode | Persistent volume access modes | [ReadWriteOnce] |
| cluster.persistentVolume.size | Persistent volume size | 1Gi |
| cluster.persistentVolume.annotations | Persistent volume annotations | {} |
| cluster.rethinkCacheSize | RethinkDB cache-size value in MB | 100 |
| proxy.replicas | Number of RethinkDB Proxy replicas | 1 |
| proxy.resources | Resource configuration for each RethinkDB Proxy Pod | {} |
| proxy.podAnnotations | Annotations to be added to RethinkDB Proxy Pods | {} |
| proxy.service.type | RethinkDB Proxy Service type | ClusterIP |
| proxy.service.annotations | Annotations to be added to the RethinkDB Proxy Service | {} |
| proxy.service.clusterIP | Internal proxy service IP | "" |
| proxy.service.externalIPs | Proxy service external IP addresses | [] |
| proxy.service.loadBalancerIP | IP address to assign to the load balancer (if supported) | "" |
| proxy.service.loadBalancerSourceRanges | List of IP CIDRs allowed access to the load balancer (if supported) | [] |
| proxy.driverTLS.enabled | If true, enable TLS on the RethinkDB Proxy driver port. Note: you must set a key and cert | false |
| proxy.driverTLS.key | RSA private key | undefined |
| proxy.driverTLS.cert | Certificate | undefined |
| ports.cluster | RethinkDB Cluster port | 29015 |
| ports.driver | RethinkDB Driver port | 28015 |
| ports.admin | RethinkDB Admin port | 8080 |
| rethinkdbPassword | Password for the RethinkDB admin user | rethinkdb |

Specify each parameter using the --set key=value[,key=value] argument to helm install.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

console
$ helm install --name my-release -f values.yaml stable/rethinkdb

Tip: You can use the default values.yaml
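A values file like the following could then be passed with -f. This is only a sketch: the field names come from the configuration table above, while the specific values are illustrative, not recommendations.

```shell
# Write an example values.yaml that overrides a few chart defaults.
cat > values.yaml <<'EOF'
cluster:
  replicas: 5
  rethinkCacheSize: 512
  persistentVolume:
    size: 10Gi
proxy:
  replicas: 2
EOF

# Then install with it (requires a configured Helm 2 client):
# helm install --name my-release -f values.yaml stable/rethinkdb
```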

Important: Admin Password Management

The initial admin password is set by the config value rethinkdbPassword. This value is also used by the probe that periodically checks whether the RethinkDB Cluster and Proxy are still running. If you change the RethinkDB admin password via a query (e.g. r.db('rethinkdb').table('users').update({password: 'new-password'})), the probe will start failing, causing the pods to be restarted over and over. To stabilize the cluster, you also need to run helm upgrade to update the password stored in the Kubernetes Secret:

console
$ helm upgrade --set rethinkdbPassword=new-password my-release stable/rethinkdb
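One way to keep the two steps in sync is to generate the new password first and use the same string in both the ReQL update and the helm upgrade. A minimal sketch (the generation command is an illustration, not part of the chart):

```shell
# Generate a random 32-character hex password.
NEW_PASSWORD=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "generated a ${#NEW_PASSWORD}-character password"

# Use the same value in both places so the probe's secret stays in sync
# with the server (commands shown for reference only):
# r.db('rethinkdb').table('users').update({password: NEW_PASSWORD})
# helm upgrade --set rethinkdbPassword="${NEW_PASSWORD}" my-release stable/rethinkdb
```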

Opening Up the RethinkDB Admin Console

The admin port is not available outside of the cluster for security reasons. The only way to access the admin console is to use a Kubernetes Proxy. To open up the admin console:

console
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Then open the following URL, making sure to replace NAMESPACE with the correct namespace and RELEASE_NAME with the release name used when installing the chart: http://localhost:8001/api/v1/namespaces/NAMESPACE/services/RELEASE_NAME-rethinkdb-admin/proxy

Open that URL in your browser and you should see the admin console.
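The proxy URL can be assembled from your namespace and release name; a small sketch with placeholder values:

```shell
# Placeholder values; substitute your actual namespace and Helm release name.
NAMESPACE=default
RELEASE_NAME=my-release
ADMIN_URL="http://localhost:8001/api/v1/namespaces/${NAMESPACE}/services/${RELEASE_NAME}-rethinkdb-admin/proxy"
echo "${ADMIN_URL}"
```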

Cleanup orphaned Persistent Volumes

Deleting a StatefulSet does not delete its associated Persistent Volume Claims (or the underlying Persistent Volumes).

After deleting the chart release, run the following to clean up the orphaned claims:

console
$ kubectl delete pvc -l release=my-release

Failover

If a RethinkDB server fails, it eventually rejoins the cluster. You can test this scenario by killing the RethinkDB process in one of the pods:

console
$ kubectl get pods -l release=my-release
NAME                                          READY     STATUS    RESTARTS   AGE
my-release-rethinkdb-cluster-0                1/1       Running   0          1m
my-release-rethinkdb-cluster-1                1/1       Running   0          2m
my-release-rethinkdb-cluster-2                1/1       Running   0          2m
my-release-rethinkdb-proxy-2517940628-81dxd   1/1       Running   1          1m

$ kubectl exec -it my-release-rethinkdb-cluster-0 -- ps aux | grep 'rethinkdb'
root         7  0.1  2.1 233496 43408 ?        Ssl  16:56   0:00 rethinkdb --ser
root        26  0.0  0.4 146948  8204 ?        S    16:56   0:00 rethinkdb --ser
root       100  0.0  0.7 157192 16060 ?        S    16:56   0:00 rethinkdb --ser

$ kubectl exec -it my-release-rethinkdb-cluster-0 -- kill 7

Scaling

Scaling should be managed with helm upgrade, which is the recommended way:

console
$ helm upgrade --set cluster.replicas=4 my-release stable/rethinkdb
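Because the cluster is a StatefulSet, scaling up adds pods with the next ordinals. As a sketch, these are the pod names to expect after scaling to four replicas (assuming the release name my-release used in the examples above):

```shell
RELEASE=my-release
REPLICAS=4
# StatefulSet pods are named <statefulset-name>-<ordinal>, starting at 0,
# so scaling from 3 to 4 replicas adds my-release-rethinkdb-cluster-3.
for i in $(seq 0 $((REPLICAS - 1))); do
  echo "${RELEASE}-rethinkdb-cluster-${i}"
done
```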