
⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

Patroni Helm Chart

This directory contains a Kubernetes chart to deploy a five-node Patroni cluster using Spilo and a StatefulSet.

DEPRECATION NOTICE

This chart is deprecated and no longer supported.

Prerequisites Details

  • Kubernetes 1.9+
  • PV support on the underlying infrastructure

StatefulSet Details

StatefulSet Caveats

Todo

  • Make namespace configurable

Chart Details

This chart will do the following:

  • Implement an HA, scalable PostgreSQL 10 cluster using a Kubernetes StatefulSet.

Installing the Chart

To install the chart with the release name my-release:

```console
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
$ helm dependency update
$ helm install --name my-release incubator/patroni
```

To install the chart with randomly generated passwords:

```console
$ helm install --name my-release incubator/patroni \
  --set credentials.superuser="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)",credentials.admin="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)",credentials.standby="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)"
```
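The inline command substitutions above are hard to read and audit. As a sketch, the same `/dev/urandom` recipe can be wrapped in a small helper (`genpw` is a hypothetical name, not part of the chart) and the passwords captured in variables first:

```shell
# Hypothetical helper wrapping the same /dev/urandom recipe used above;
# emits 32 random characters drawn from the set _A-Z-a-z-0-9.
genpw() { < /dev/urandom tr -dc '_A-Z-a-z-0-9' | head -c32; }

SUPERUSER_PW=$(genpw)
ADMIN_PW=$(genpw)
STANDBY_PW=$(genpw)

echo "generated a ${#SUPERUSER_PW}-character superuser password"
```

The variables can then be passed to `helm install` as `--set credentials.superuser="$SUPERUSER_PW"` and so on, matching the one-liner above.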

Connecting to PostgreSQL

Your access point is a cluster IP. To access it, spin up another pod:

```console
$ kubectl run -i --tty --rm psql --image=postgres --restart=Never -- bash -il
```

Then, from inside the pod, connect to PostgreSQL:

```console
$ psql -U admin -h my-release-patroni.default.svc.cluster.local postgres
<admin password from values.yaml>
postgres=>
```
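To avoid typing the password interactively, psql can read it from a password file. A minimal sketch, run inside the client pod; the host is the default service name from above, and the password is the chart's default admin password (`cola`) — substitute your own values:

```shell
# Point psql at a password file via PGPASSFILE (a standard libpq
# environment variable). Host and password here are the chart defaults,
# shown only for illustration.
PGHOST=my-release-patroni.default.svc.cluster.local
export PGPASSFILE="$(mktemp)"
printf '%s:5432:postgres:admin:cola\n' "$PGHOST" > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless permissions are 0600

# psql -U admin -h "$PGHOST" postgres   # now connects without a prompt
```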

Configuration

The following table lists the configurable parameters of the patroni chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `nameOverride` | Override the name of the chart | `nil` |
| `fullnameOverride` | Override the fullname of the chart | `nil` |
| `replicaCount` | Amount of pods to spawn | `5` |
| `image.repository` | The image to pull | `registry.opensource.zalan.do/acid/spilo-10` |
| `image.tag` | The version of the image to pull | `1.5-p5` |
| `image.pullPolicy` | The pull policy | `IfNotPresent` |
| `credentials.superuser` | Password of the superuser | `tea` |
| `credentials.admin` | Password of the admin | `cola` |
| `credentials.standby` | Password of the replication user | `pinacolada` |
| `kubernetes.dcs.enable` | Using Kubernetes as DCS | `true` |
| `kubernetes.configmaps.enable` | Using Kubernetes configmaps instead of endpoints | `false` |
| `etcd.enable` | Using etcd as DCS | `false` |
| `etcd.deployChart` | Deploy etcd chart | `false` |
| `etcd.host` | Host name of etcd cluster | `nil` |
| `etcd.discovery` | Domain name of etcd cluster | `nil` |
| `zookeeper.enable` | Using ZooKeeper as DCS | `false` |
| `zookeeper.deployChart` | Deploy ZooKeeper chart | `false` |
| `zookeeper.hosts` | List of ZooKeeper cluster members | `host1:port1,host2:port,etc...` |
| `consul.enable` | Using Consul as DCS | `false` |
| `consul.deployChart` | Deploy Consul chart | `false` |
| `consul.host` | Host name of Consul cluster | `nil` |
| `env` | Extra custom environment variables | `{}` |
| `walE.enable` | Use of the WAL-E tool for base backup/restore | `false` |
| `walE.scheduleCronJob` | Schedule of WAL-E backups | `00 01 * * *` |
| `walE.retainBackups` | Number of base backups to retain | `2` |
| `walE.s3Bucket` | Amazon S3 bucket used for WAL-E backups | `nil` |
| `walE.gcsBucket` | GCS storage used for WAL-E backups | `nil` |
| `walE.kubernetesSecret` | K8s secret name for provider bucket | `nil` |
| `walE.backupThresholdMegabytes` | Maximum size of the WAL segments accumulated after the base backup to consider WAL-E restore instead of pg_basebackup | `1024` |
| `walE.backupThresholdPercentage` | Maximum ratio (in percent) of the accumulated WAL files to the base backup to consider WAL-E restore instead of pg_basebackup | `30` |
| `resources` | Any resources you wish to assign to the pod | `{}` |
| `nodeSelector` | Node labels to use for scheduling | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `affinityTemplate` | A template string used to generate the affinity settings | Anti-affinity preferred on hostname |
| `affinity` | Affinity settings. Overrides `affinityTemplate` if set. | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
| `persistentVolume.accessModes` | Persistent Volume access modes | `[ReadWriteOnce]` |
| `persistentVolume.annotations` | Annotations for Persistent Volume Claim | `{}` |
| `persistentVolume.mountPath` | Persistent Volume mount root path | `/home/postgres/pgdata` |
| `persistentVolume.size` | Persistent Volume size | `2Gi` |
| `persistentVolume.storageClass` | Persistent Volume Storage Class | `volume.alpha.kubernetes.io/storage-class: default` |
| `persistentVolume.subPath` | Subdirectory of Persistent Volume to mount | `""` |
| `rbac.create` | Create required role and rolebindings | `true` |
| `serviceAccount.create` | If true, create a new service account | `true` |
| `serviceAccount.name` | Service account to be used. If not set and `serviceAccount.create` is `true`, a name is generated using the fullname template | `nil` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
$ helm install --name my-release -f values.yaml incubator/patroni
```

Tip: You can use the default values.yaml
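As a sketch, a minimal override file might set just a few of the parameters from the table above (the file name and values here are illustrative, not recommendations):

```shell
# Write a hypothetical my-values.yaml overriding a few chart defaults.
cat > my-values.yaml <<'EOF'
replicaCount: 3
credentials:
  admin: change-me
persistentVolume:
  size: 10Gi
EOF

# helm install --name my-release -f my-values.yaml incubator/patroni
```

Unlisted parameters keep their defaults from the chart's own values.yaml.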

Cleanup

To remove the spawned pods you can run a simple `helm delete <release-name>`.

Helm will, however, preserve created persistent volume claims. To remove them as well, execute the commands below.

```console
$ release=<release-name>
$ helm delete $release
$ kubectl delete pvc -l release=$release
```

Internals

Patroni is responsible for electing a PostgreSQL master pod by leveraging the DCS of your choice. After election, it adds a spilo-role=master label to the elected master and sets spilo-role=replica on all replicas. Simultaneously, it updates the <release-name>-patroni endpoint so that the service routes traffic to the elected master.

```console
$ kubectl get pods -l spilo-role -L spilo-role
NAME                   READY     STATUS    RESTARTS   AGE       SPILO-ROLE
my-release-patroni-0   1/1       Running   0          9m        replica
my-release-patroni-1   1/1       Running   0          9m        master
my-release-patroni-2   1/1       Running   0          8m        replica
my-release-patroni-3   1/1       Running   0          8m        replica
my-release-patroni-4   1/1       Running   0          8m        replica
```