This guide covers day-2 operations for Zitadel on Kubernetes.
Update your `values.yaml` with any required changes, refresh the chart repository, and run the upgrade:

```bash
helm repo update
helm upgrade my-zitadel zitadel/zitadel --values values.yaml --version <target-version>
```

Watch the rollout until all pods are ready:

```bash
kubectl get pods --watch
```
Check the Helm release status:

```bash
helm status my-zitadel
```
Adjust the replica count in your values:

```yaml
replicaCount: 3
```
Or scale directly:

```bash
kubectl scale deployment my-zitadel --replicas=3
```
Enable HPA for automatic scaling based on resource utilization:

```yaml
zitadel:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
```
This creates an HPA that scales between 2 and 10 replicas, adding pods when average CPU or memory utilization across the deployment exceeds 80% of the configured requests.
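The HPA controller chooses the replica count with the standard utilization formula, `desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)`. A quick shell sketch of that arithmetic (the replica count and utilization figures are illustrative, not taken from a live cluster):

```shell
# HPA scaling formula, sketched with illustrative numbers:
#   desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
current_replicas=2
current_cpu_util=120   # average CPU, as a percent of the pod requests
target_cpu_util=80     # matches targetCPUUtilizationPercentage above
# integer ceiling division
desired=$(( (current_replicas * current_cpu_util + target_cpu_util - 1) / target_cpu_util ))
echo "desired replicas: $desired"   # desired replicas: 3
```

At 120% average CPU against an 80% target, the HPA scales from 2 to 3 replicas; the result is always clamped between `minReplicas` and `maxReplicas`.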
View HPA status:

```bash
kubectl get hpa
```

Get detailed HPA information:

```bash
kubectl describe hpa my-zitadel
```
Configure appropriate resource allocations:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 1Gi
```
Recommendations by deployment size:
| Size | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| Small (dev) | 100m | 500m | 256Mi | 512Mi |
| Medium | 250m | 1000m | 512Mi | 1Gi |
| Large | 500m | 2000m | 1Gi | 2Gi |
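To see what a given size costs at the cluster level, multiply the per-pod requests by the replica count. A small sketch using the Medium row above (the three-replica count is an assumption for illustration):

```shell
# Total resource requests for a Medium deployment at 3 replicas
# (per-pod figures from the sizing table; replica count is illustrative)
replicas=3
cpu_request_m=250     # millicores per pod
mem_request_mi=512    # MiB per pod
total_cpu=$(( replicas * cpu_request_m ))
total_mem=$(( replicas * mem_request_mi ))
echo "total requests: ${total_cpu}m CPU, ${total_mem}Mi memory"
# total requests: 750m CPU, 1536Mi memory
```

The scheduler must find nodes with this much unreserved capacity, so check it against your node pool before raising `replicaCount` or `maxReplicas`.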
Ensure availability during voluntary disruptions by using `minAvailable`:

```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1
```

Alternatively, use `maxUnavailable`:

```yaml
podDisruptionBudget:
  enabled: true
  maxUnavailable: 1
```
This ensures at least one pod remains available during node drains, upgrades, or other voluntary disruptions.
When scaling Zitadel horizontally, ensure your PostgreSQL database can handle the increased connection load:
Example connection pooling setup with PgBouncer:
```bash
kubectl create secret generic zitadel-db-credentials \
  --from-literal=dsn="postgresql://zitadel:<password>@<pgbouncer-service>:6432/zitadel?sslmode=disable"
```

Reference the secret from your values:
```yaml
zitadel:
  env:
    - name: ZITADEL_DATABASE_POSTGRES_DSN
      valueFrom:
        secretKeyRef:
          name: zitadel-db-credentials
          key: dsn
```
Cap Zitadel's own connection pool so the total across replicas stays within the pooler's limits:

```yaml
zitadel:
  configmapConfig:
    Database:
      Postgres:
        MaxOpenConns: 20
        MaxIdleConns: 10
```
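These pool settings only help if the worst case still fits the database: every replica can open up to `MaxOpenConns` connections, so PgBouncer's pool (and ultimately Postgres's `max_connections`) must cover replicas × `MaxOpenConns`. A back-of-envelope check using the `maxReplicas` and `MaxOpenConns` values from the examples above:

```shell
# Worst-case Postgres connection count when fully scaled out
# (maxReplicas and MaxOpenConns taken from the examples above)
max_replicas=10
max_open_conns=20
total_conns=$(( max_replicas * max_open_conns ))
echo "peak connections: $total_conns"   # peak connections: 200
```

If that figure exceeds what PgBouncer or Postgres is sized for, lower `MaxOpenConns` or raise the pooler's limits before scaling out.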