apps/docs/content/self-hosting/manage/metrics/prometheus.mdx
This page shows platform-agnostic ways to scrape ZITADEL metrics using:

- annotation-based discovery with vanilla Prometheus on Kubernetes,
- a `ServiceMonitor` with the Prometheus Operator (kube-prometheus-stack), and
- static `scrape_configs` for Prometheus running outside Kubernetes.
ZITADEL exposes metrics at the path `/debug/metrics` on its main API port (for example, `http://zitadel.zitadel.svc:8080/debug/metrics` in Kubernetes or `http://localhost:8080/debug/metrics` locally). If you changed ports or paths via your deployment tooling, adjust the examples accordingly.

Annotation-based discovery (Option A) is common when you don’t use the Prometheus Operator. Vanilla Prometheus can auto-discover scrape targets by reading standard annotations on Pods/Services:
```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/debug/metrics"
    prometheus.io/port: "8080"
```
When your Prometheus server is configured with Kubernetes service discovery and relabeling rules that honor these annotations, it will automatically discover and scrape ZITADEL without any per-target `scrape_configs`.
Your Prometheus configuration (often installed via Helm) should include jobs like the following. These are canonical examples that keep annotated Pods and map the annotated path/port to the actual metrics endpoint:
```yaml
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Use prometheus.io/path for the metrics path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Replace the address with <pod_ip>:<prometheus.io/port>
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```
Most Prometheus Helm charts already ship with similar discovery jobs and relabeling rules. If you installed Prometheus via Helm, you likely already have these in place.
If you deploy ZITADEL via Helm and the chart emits scrape annotations on the Deployment/Pods, no extra work is needed. Otherwise, add the annotations yourself (via values override or a strategic patch):
```yaml
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/debug/metrics"
        prometheus.io/port: "8080"
```
If you prefer annotating the Service and your Prometheus config uses role: service discovery, add the same keys to the Service metadata.
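For example, a Service carrying the same annotations might look like this (the name, selector labels, and port shown here are assumptions for illustration; match them to your actual ZITADEL deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zitadel
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/debug/metrics"
    prometheus.io/port: "8080"
spec:
  selector:
    app.kubernetes.io/name: zitadel # assumption: adjust to your Pod labels
  ports:
    - name: http-server
      port: 8080
      targetPort: 8080
```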
Prometheus must have permission to list/watch Pods/Endpoints in the target namespaces. Ensure its ServiceAccount has the standard ClusterRole/ClusterRoleBinding for discovery. Missing RBAC typically shows up as discovery errors in the Prometheus logs.
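A minimal RBAC sketch for such a setup (the ServiceAccount name and namespace are assumptions; adjust them to your Prometheus deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # Allow Prometheus to discover scrape targets cluster-wide
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus # assumption: your Prometheus ServiceAccount
    namespace: monitoring # assumption: your monitoring namespace
```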
If you run kube-prometheus-stack or the Prometheus Operator, use a ServiceMonitor (or PodMonitor). ZITADEL’s Helm chart provides out-of-the-box ServiceMonitor support that you can enable via values—no manual YAML is required.
```bash
helm repo add zitadel https://charts.zitadel.com
helm upgrade --install zitadel zitadel/zitadel \
  --namespace zitadel --create-namespace \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
```
Optional but common settings:

- Make the ServiceMonitor discoverable by your Prometheus (Operator) instance (many stacks match a label like `release=kube-prometheus-stack`):

  ```bash
  --set metrics.serviceMonitor.additionalLabels.release=kube-prometheus-stack
  ```

- Place the ServiceMonitor in a central monitoring namespace (default is the ZITADEL release namespace):

  ```bash
  --set metrics.serviceMonitor.namespace=monitoring
  ```

- Tune scraping:

  ```bash
  --set metrics.serviceMonitor.scrapeInterval=15s \
  --set metrics.serviceMonitor.scrapeTimeout=10s
  ```

- Network/TLS customization (use only if you need them):

  ```bash
  --set metrics.serviceMonitor.scheme=https \
  --set metrics.serviceMonitor.tlsConfig.insecureSkipVerify=true
  ```

- Relabeling:

  ```bash
  --set metrics.serviceMonitor.relabellings[0].action=replace \
  --set metrics.serviceMonitor.metricRelabellings[0].action=drop
  ```
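If you prefer a values file over `--set` flags, the same options might look like this (a sketch mirroring the flags above; keep only the keys you actually need):

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: kube-prometheus-stack
    namespace: monitoring
    scrapeInterval: 15s
    scrapeTimeout: 10s
```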
The chart renders a ServiceMonitor roughly equivalent to:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: <release-name> # {{ include "zitadel.fullname" . }}
  # namespace: <metrics.serviceMonitor.namespace> # only if you set it
  labels:
    # Standard chart labels + any you add:
    # {{- include "zitadel.start.labels" . | nindent 4 }}
    # {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }}
spec:
  jobLabel: <release-name> # {{ include "zitadel.fullname" . }}
  namespaceSelector:
    matchNames:
      - "<release-namespace>" # defaults to the Helm release namespace
  selector:
    matchLabels:
      # Matches the ZITADEL Service created by the chart
      # {{- include "zitadel.service.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: "<protocol>-server" # e.g., "http-server" or "https-server"
      path: /debug/metrics
      # Optional tunables below are included only if set:
      interval: <metrics.serviceMonitor.scrapeInterval>
      scrapeTimeout: <metrics.serviceMonitor.scrapeTimeout>
      scheme: <metrics.serviceMonitor.scheme> # http|https
      tlsConfig: # metrics.serviceMonitor.tlsConfig
        # ...
      proxyUrl: <metrics.serviceMonitor.proxyUrl>
      honorLabels: <metrics.serviceMonitor.honorLabels>
      honorTimestamps: <metrics.serviceMonitor.honorTimestamps>
      relabelings: # metrics.serviceMonitor.relabellings
        # ...
      metricRelabelings: # metrics.serviceMonitor.metricRelabellings
        # ...
```
Details that matter:
- **Port name:** The chart uses

  ```yaml
  port: {{ regexReplaceAll "\\W+" .Values.service.protocol "-" }}-server
  ```

  which resolves to `http-server` when `service.protocol=http` (default) or `https-server` when `service.protocol=https`. You do not need to edit this, just make sure you didn’t rename the ZITADEL Service port.

- **Namespace selection:** By default, the ServiceMonitor targets the ZITADEL release namespace via:

  ```yaml
  namespaceSelector:
    matchNames:
      - "<release-namespace>"
  ```

  Set `metrics.serviceMonitor.namespace` if you want the ServiceMonitor object itself to live elsewhere (e.g., `monitoring`). The `selector.matchLabels` still points to the ZITADEL Service labels.

- **Labels for discovery:** If your Prometheus (Operator) instance selects ServiceMonitors by label (common in kube-prometheus-stack), add those under `metrics.serviceMonitor.additionalLabels`, for example:

  ```yaml
  metrics:
    serviceMonitor:
      additionalLabels:
        release: kube-prometheus-stack
  ```

- **Path:** The chart fixes the metrics path to `/debug/metrics` (matching ZITADEL’s endpoint).
If you prefer to manage the ServiceMonitor yourself, keep it aligned with the chart’s conventions: match the Service’s selector labels, the port name (`http-server` or `https-server`), and the path (`/debug/metrics`).

Tip: If you already run the Prometheus Operator, prefer this ServiceMonitor approach. If you run vanilla Prometheus without the Operator, consider the annotation-based discovery method instead (Option A).
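A minimal self-managed ServiceMonitor following those conventions might look like this (the name, namespaces, and labels are assumptions; adjust them to your cluster):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zitadel
  namespace: monitoring # assumption: where your Prometheus picks up monitors
  labels:
    release: kube-prometheus-stack # assumption: match your Prometheus selector
spec:
  namespaceSelector:
    matchNames:
      - zitadel # assumption: the ZITADEL release namespace
  selector:
    matchLabels:
      app.kubernetes.io/name: zitadel # assumption: match your ZITADEL Service labels
  endpoints:
    - port: http-server # or https-server if service.protocol=https
      path: /debug/metrics
      interval: 15s
```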
If you run Prometheus outside of Kubernetes, add a static job pointing at ZITADEL’s metrics endpoint:
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "zitadel"
    metrics_path: "/debug/metrics"
    scheme: "http" # use https if TLS is enabled for ZITADEL
    static_configs:
      - targets: ["<ZITADEL_HOST>:8080"] # e.g., "localhost:8080", "zitadel.internal:8080" or "host.docker.internal:8080"
```
In this snippet, replace <ZITADEL_HOST>:8080 with the appropriate address. This could be localhost:8080 for local deployments, or a DNS name / IP of the server or Kubernetes service where ZITADEL is running. If ZITADEL is behind a reverse proxy or ingress, ensure that /debug/metrics is reachable (you might expose it internally only). The metrics_path is set to /debug/metrics to match ZITADEL’s endpoint. We use http scheme assuming an internal/non-TLS endpoint; if you have enabled TLS on ZITADEL, use https and the appropriate port (e.g., 443) and adjust any hostname (like zitadel.yourdomain.com).
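For a TLS-enabled ZITADEL, the job might instead look like this (the hostname is a placeholder; enable `insecure_skip_verify` only if you use a self-signed certificate internally):

```yaml
scrape_configs:
  - job_name: "zitadel"
    metrics_path: "/debug/metrics"
    scheme: "https"
    tls_config:
      insecure_skip_verify: true # assumption: self-signed internal certificate
    static_configs:
      - targets: ["zitadel.yourdomain.com:443"] # placeholder hostname
```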
When running Prometheus in Docker on your workstation:

- **macOS/Windows:** if ZITADEL runs on your host, use `host.docker.internal:8080`.
- **Linux:** either add `--add-host=host.docker.internal:host-gateway` to `docker run`, attach Prometheus to the same Docker network as ZITADEL and use the service name (e.g., `zitadel:8080`), or run Prometheus with `--network host`.
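As a sketch, a Compose service that joins an existing Docker network where ZITADEL runs (the network name `zitadel` and the config path are assumptions):

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      # assumption: your scrape config sits next to this compose file
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    networks:
      - zitadel

networks:
  zitadel:
    external: true # assumption: the network your ZITADEL container uses
```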
Use Prometheus’s built-in UI to confirm your target is up:

1. Open the Prometheus UI (e.g., `http://localhost:9090`).
2. Under **Status → Targets**, find the `zitadel` target. It should be **UP**.
3. In the Graph view, query `up{job="zitadel"}`; it should return `1`.
To explore ZITADEL metrics, type `zitadel` in the expression box and pick from auto-complete. Common families include standard Go runtime and process metrics (e.g., `go_goroutines`, `process_resident_memory_bytes`). Metric names and labels can change between releases. Use the **Graph → Insert metric at cursor** dropdown to discover what your instance exposes.
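For example, a few starter queries (assuming the static job name `zitadel` used elsewhere on this page; `go_goroutines` and `process_cpu_seconds_total` are standard Go runtime/process metrics):

```promql
# Is the ZITADEL target being scraped successfully?
up{job="zitadel"}

# Goroutines in the ZITADEL process (Go runtime metric)
go_goroutines{job="zitadel"}

# Average CPU usage over the last 5 minutes (standard process metric)
rate(process_cpu_seconds_total{job="zitadel"}[5m])
```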
Create a basic rule to alert when ZITADEL stops scraping:
# /etc/prometheus/alerts/zitadel.yml
groups:
- name: zitadel-basic
rules:
- alert: ZitadelTargetDown
expr: up{job="zitadel"} == 0
for: 5m
labels:
severity: critical
annotations:
summary: "ZITADEL metrics target is down"
description: "Prometheus has not scraped ZITADEL successfully for 5 minutes."
Reference the rule file in your Prometheus config:
```yaml
rule_files:
  - "/etc/prometheus/alerts/*.yml"
```
(Configure Alertmanager routing according to your environment.)
**Target is DOWN / connection refused**

- In Docker, `localhost` inside the Prometheus container is the container itself, not your host. Use `host.docker.internal:8080` (plus `--add-host=host.docker.internal:host-gateway` on Linux), join Prometheus to the same Docker network as ZITADEL and use the service name (e.g., `zitadel:8080`), or run with `--network host` on Linux.
- In Kubernetes, verify that the Service port name (`http-server` or `https-server`) and path (`/debug/metrics`) match your ServiceMonitor (or annotations). Check that Prometheus has RBAC to list Pods/Endpoints.

**Metrics path mismatch**

ZITADEL serves metrics at `/debug/metrics`. If you see 404s, confirm your Prometheus job or annotations aren’t still using `/metrics`.

**No targets discovered (Kubernetes)**

Verify that your Prometheus configuration honors the `prometheus.io/*` annotations and that the Pods/Service are annotated.

**Nothing shows up in the Graph dropdown**

Check whether `up{job="zitadel"}` returns `1`. If yes, metrics are being scraped; start typing generic prefixes like `go_` or `process_` to explore. ZITADEL’s exported metric set can evolve; check the raw output at `/debug/metrics` to see exactly what is exposed by your version.

While Prometheus is the most common choice, other collectors and services can ingest the same endpoint. If you already operate such a platform, you can point its agents/collectors at `/debug/metrics`, or use an OpenTelemetry Collector with a Prometheus receiver and the appropriate exporter.
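For example, a minimal OpenTelemetry Collector pipeline that scrapes `/debug/metrics` with the Prometheus receiver and forwards metrics via OTLP (the target address and exporter endpoint are placeholders for your environment):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: zitadel
          metrics_path: /debug/metrics
          static_configs:
            - targets: ["zitadel.internal:8080"] # placeholder ZITADEL address

exporters:
  otlp:
    endpoint: otel-backend.example.com:4317 # placeholder OTLP backend

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```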