docs/sources/alerting/set-up/configure-high-availability/_index.md
Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivery of notifications. In this model, alert rules are evaluated in the alert generator and notifications are delivered by the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.
{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width="750px" caption="High availability" >}}
When running multiple instances of Grafana, all alert rules are evaluated on all instances by default. You can think of the evaluation of alert rules as being duplicated by the number of running Grafana instances. This is how Grafana Alerting ensures that as long as at least one Grafana instance is working, alert rules are still evaluated and notifications for alerts are still sent.
If you want to reduce this duplication, you can enable single-node evaluation mode so that only one instance evaluates alert rules.
This duplication is visible in the state history, and checking for it is a good way to verify your high availability setup.
While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid duplicate notifications. The Alertmanagers use a gossip protocol to share information with each other and prevent sending duplicate notifications.
Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. This design favors duplicate or out-of-order notifications over missed notifications.
Alertmanagers also gossip silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically and during graceful shutdown.
Before you begin
Since gossiping of notifications and silences uses both TCP and UDP port 9094, ensure that each Grafana instance is able to accept incoming connections on these ports.
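For example, in a Docker Compose deployment each Grafana service needs both port mappings. The following is a minimal sketch; the service name and image tag are illustrative and not taken from the demo repository:

```yaml
services:
  grafana-1:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"     # Grafana HTTP
      - "9094:9094/tcp" # Alertmanager gossip over TCP
      - "9094:9094/udp" # Alertmanager gossip over UDP
```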
To enable high availability support:
1. In your Grafana configuration file (for example, custom.ini), go to the [unified_alerting] section.
2. Set [ha_peers] to the hosts for each Grafana instance in the cluster (using a format of host:port), for example, ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094. You must have at least one (1) Grafana instance added to ha_peers.
3. Set [ha_listen_address] to the instance IP address using a format of host:port (or the Pod's IP when using Kubernetes). By default, it is set to listen on all interfaces (0.0.0.0).
4. Set [ha_advertise_address] to the instance's hostname or IP address in the format "host:port". Use this setting when the instance is behind NAT (Network Address Translation), such as in Docker Swarm or a Kubernetes service, where external and internal addresses differ. This address helps other cluster instances communicate with it. The setting is optional.
5. Set [ha_peer_timeout] in the [unified_alerting] section to specify the time to wait for an instance to send a notification via the Alertmanager. The default value is 15s; increase it if Grafana servers are located in different geographic regions or if the network latency between them is high.

A configuration sketch follows this list. For a demo, see this example using Docker Compose.
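For instance, a minimal sketch of these settings in custom.ini, assuming a three-instance cluster at the example IP addresses above and the default gossip port:

```ini
[unified_alerting]
enabled = true
ha_listen_address = "0.0.0.0:9094"
ha_peers = "10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094"
# This instance's own address; optional unless the instance is behind NAT.
ha_advertise_address = "10.0.0.5:9094"
ha_peer_timeout = 15s
```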
As an alternative to Memberlist, you can configure Redis to enable high availability. Redis standalone, Redis Cluster and Redis Sentinel modes are supported.
{{< admonition type="note" >}}
Memberlist is the preferred option for high availability. Use Redis only in environments where direct communication between Grafana servers is not possible, such as when TCP or UDP ports are blocked.
{{< /admonition >}}
1. In your Grafana configuration file, go to the [unified_alerting] section.
2. Set ha_redis_address to the Redis server address or addresses Grafana should connect to. It can be a single Redis address if using Redis standalone, or a list of comma-separated addresses if using Redis Cluster or Sentinel.
3. Set ha_redis_cluster_mode_enabled to true if you are using Redis Cluster.
4. Set ha_redis_sentinel_mode_enabled to true if you are using Redis Sentinel. Also set ha_redis_sentinel_master_name to the Redis Sentinel master name.
5. If authentication is enabled on the Redis server, set ha_redis_username and ha_redis_password.
6. If authentication is enabled on Redis Sentinel, set ha_redis_sentinel_username and ha_redis_sentinel_password.
7. Set ha_redis_prefix to something unique if you plan to share the Redis server with multiple Grafana instances.
8. Set ha_redis_tls_enabled to true and configure the corresponding ha_redis_tls_* fields to secure communications between Grafana and Redis with Transport Layer Security (TLS).
9. Set [ha_advertise_address] to ha_advertise_address = "${POD_IP}:9094". This is required if the instance doesn't have an IP address that is part of RFC 6890 with a default route.

A configuration sketch follows this list. For a demo, see this example using Docker Compose.
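For instance, a minimal sketch of a Redis-backed configuration in grafana.ini, assuming a standalone Redis server reachable at redis:6379 (the address and prefix are placeholders):

```ini
[unified_alerting]
enabled = true
ha_redis_address = "redis:6379"
# Only needed if the Redis server is shared with other Grafana clusters.
ha_redis_prefix = "grafana-ha-"
```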
You can expose the Pod IP through an environment variable via the container definition.
```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
Add the port 9094 to the Grafana deployment:
```yaml
ports:
  - name: grafana
    containerPort: 3000
    protocol: TCP
  - name: gossip-tcp
    containerPort: 9094
    protocol: TCP
  - name: gossip-udp
    containerPort: 9094
    protocol: UDP
```
Add the environment variables to the Grafana deployment:
```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
Create a headless service that returns the Pod IP instead of the service IP, which is what ha_peers needs:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: 'None'
  ports:
    - port: 9094
  selector:
    app: grafana
```
Make sure your Grafana deployment has a label matching the selector, for example app: grafana, as in the sketch below.
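The following sketch shows the relevant part of a Grafana Deployment manifest; the name and namespace are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana # Must match the selector of the headless service above.
```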
Add the following to grafana.ini:
```ini
[unified_alerting]
enabled = true
ha_listen_address = "${POD_IP}:9094"
ha_peers = "grafana-alerting.grafana:9094"
ha_advertise_address = "${POD_IP}:9094"
ha_peer_timeout = 15s
ha_reconnect_timeout = 2m
```
{{< docs/public-preview product="Single-node evaluation mode" >}}
By default, all Grafana instances in a high-availability cluster evaluate all alert rules. This means query load on data sources is multiplied by the number of Grafana instances. Single-node evaluation mode changes this so that only one instance evaluates alert rules, reducing query load from N times to 1.
To enable single-node evaluation mode, add the following to your [unified_alerting] section:
```ini
[unified_alerting]
ha_single_node_evaluation = true
```
This setting requires high availability clustering to be configured (either Memberlist or Redis).
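For example, combining single-node evaluation with the Memberlist settings from earlier might look like the following sketch (the peer addresses are placeholders):

```ini
[unified_alerting]
enabled = true
ha_peers = "10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094"
ha_single_node_evaluation = true
```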
The Grafana cluster automatically chooses a primary instance that is responsible for evaluating all alert rules. Other instances skip evaluation entirely.
| Default HA (all instances evaluate) | Single-node evaluation mode |
|---|---|
| Redundant evaluation on all nodes | Only one node evaluates |
| Higher query load on data sources (N times) | Reduced query load (1 time) |
| No evaluation gap on instance failure | Brief evaluation gap during failure recovery |
You can verify that single-node evaluation mode is working correctly by monitoring the following metrics.
| Metric | Description |
|---|---|
| grafana_alertmanager_peer_position | The position of each instance in the cluster. The instance at position 0 is the primary and evaluates all alert rules. |
| grafana_alerting_alerts_received_total | Total number of alerts received by each instance. Non-primary instances should receive alerts through the cluster broadcast channel. |
| grafana_alerting_alertmanager_alerts{state="active"} | Number of active alerts on each instance. This value should be the same across all instances. |
The following metrics are specific to the HA backend you are using:
Memberlist (gossip)
| Metric | Description |
|---|---|
| grafana_alertmanager_oversized_gossip_message_dropped_total{key="alerts:broadcast"} | Number of broadcast messages dropped due to a full message queue. A non-zero value indicates message loss. |
| grafana_alertmanager_oversized_gossip_message_failure_total{key="alerts:broadcast"} | Number of broadcast messages that failed to send to a peer. |
| grafana_alertmanager_oversized_gossip_message_sent_total{key="alerts:broadcast"} | Number of broadcast messages sent to peers. |
| grafana_alertmanager_oversize_gossip_message_duration_seconds{key="alerts:broadcast"} | Duration of broadcast message sends. Useful for detecting network latency between peers. |
Redis
| Metric | Description |
|---|---|
| grafana_alertmanager_cluster_messages_publish_failures_total{msg_type="update",reason="buffer_overflow"} | Number of state sync messages dropped due to a full message queue. A non-zero value indicates message loss. These metrics are shared across all HA state channels (alerts, silences, notification log), not only alert broadcasts. |
| grafana_alertmanager_cluster_messages_publish_failures_total{msg_type="update",reason="redis_issue"} | Number of state sync messages that failed due to a Redis error. |
| grafana_alertmanager_cluster_messages_sent_total{msg_type="update"} | Total number of state sync messages sent to Redis. Includes all HA state channels, not only alert broadcasts. |
The primary instance uses a message queue to broadcast alerts to other instances. By default, the queue holds up to 200 messages. If you have a large number of alert rules, the queue may fill up, causing messages to be dropped. You can detect this by monitoring the drop metric for your HA backend (see metrics tables above).
To increase the queue size, add the following to your [unified_alerting] section:
```ini
[unified_alerting]
ha_single_evaluation_alert_broadcast_queue_size = 500
```
The default value is 200. This setting applies to both Memberlist and Redis HA backends.
When running multiple Grafana instances, all alert rules are evaluated on every instance by default. This multiple evaluation of alert rules is visible in the state history and provides a straightforward way to verify that your high availability configuration is working correctly.
{{< admonition type="note" >}}
If you use a mix of execute_alerts=false and execute_alerts=true on the HA nodes, the instances with execute_alerts=false do not show any alert status, because alert state is not shared amongst the Grafana instances.
The HA settings (ha_peers, etc.) apply only to communication between alertmanagers, synchronizing silences and attempting to avoid duplicate notifications, as described in the introduction.
{{< /admonition >}}
You can also confirm your high availability setup by monitoring Alertmanager metrics exposed by Grafana.
{{< admonition type="note" >}}
Starting in Grafana v12.4, these metrics are prefixed with grafana_ (for example, grafana_alertmanager_cluster_members). If you are upgrading from an earlier version, update your dashboards and alert rules accordingly.
{{< /admonition >}}
| Metric | Description |
|---|---|
| grafana_alertmanager_cluster_members | Current number of members in the cluster. |
| grafana_alertmanager_cluster_messages_received_total | Total number of cluster messages received. |
| grafana_alertmanager_cluster_messages_received_size_total | Total size of cluster messages received. |
| grafana_alertmanager_cluster_messages_sent_total | Total number of cluster messages sent. |
| grafana_alertmanager_cluster_messages_sent_size_total | Total size of cluster messages sent. |
| grafana_alertmanager_cluster_messages_publish_failures_total | Total number of messages that failed to be published. |
| grafana_alertmanager_cluster_pings_seconds | Histogram of latencies for ping messages. |
| grafana_alertmanager_cluster_pings_failures_total | Total number of failed pings. |
| grafana_alertmanager_peer_position | The position an Alertmanager instance believes it holds, which defines its role in the cluster. Peers should be numbered sequentially, starting from zero. |
You can confirm the number of Grafana instances in your alerting high availability setup by querying the grafana_alertmanager_cluster_members and grafana_alertmanager_peer_position metrics.
Note that these alerting high availability metrics are exposed via the /metrics endpoint in Grafana, and are not automatically collected or displayed. If you have a Prometheus instance connected to Grafana, add a scrape_config to scrape Grafana metrics and then query these metrics in Explore.
```yaml
- job_name: grafana
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - grafana:3000
```
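Once these metrics are scraped, a couple of example queries you can run in Explore; the metric names are the ones documented above and no extra labels are assumed:

```promql
# Should equal the number of Grafana instances in the alerting HA cluster.
grafana_alertmanager_cluster_members

# Each instance should report a distinct position, numbered sequentially from 0.
grafana_alertmanager_peer_position
```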
For more information on monitoring alerting metrics, refer to Alerting meta-monitoring. For a demo, see alerting high availability examples using Docker Compose.
In high-availability mode, each Grafana instance runs its own pre-configured alertmanager to handle alert notifications.
When multiple Grafana instances are running, all alert rules are evaluated on each instance by default. Each instance sends firing alerts to its respective Alertmanager. This results in notification handling being duplicated across all running Grafana instances.
Alertmanagers in HA mode communicate with each other to coordinate notification delivery. However, this setup can sometimes lead to duplicated or out-of-order notifications. By design, HA prioritizes sending duplicate notifications over the risk of missing notifications.
To avoid duplicate notifications, you can configure a shared alertmanager to manage notifications for all Grafana instances. For more information, refer to add an external alertmanager.