docs/reporting.md
Extract infrastructure insights for stakeholders, leadership, and external tools. Netdata provides four ways to create reports—from asking a simple question to exporting raw metrics into your existing business intelligence stack.
| Method | Effort | Best for |
|---|---|---|
| AI Insights | Easiest | Executive summaries, recurring reports, natural language queries |
| AI Assistants (MCP) | Easy | Ad-hoc analysis, deep investigation, developer workflows |
| Grafana | Medium | Custom dashboards, teams already using Grafana |
| Export to BI | Advanced | Power BI, Tableau, Looker, custom analytics pipelines |
Ask Netdata anything about your infrastructure in plain language and receive an executive-ready report. No configuration required—just describe what you need.
In Insights, open New Investigation, enter a custom prompt, and click Generate. Reports complete in 2–3 minutes and are saved in Insights. You receive an email when the report is ready.
Automate your reporting workflow with scheduled reports:
Click Schedule (next to Generate). Scheduled reports run automatically and deliver results to your email and the Insights tab.
Weekly infrastructure health:

```
Generate a weekly infrastructure summary for services A, B, and C.
Include major incidents, anomalies, capacity risks, and recommended follow-ups.
```

Cost optimization:

```
Identify underutilized nodes for cost savings. Monthly compute is ~$12K
with mixed workloads. Goal: save $2–3K/month without reliability impact.
```

SLO conformance:

```
Generate an SLO conformance report for 'user-auth' (99.9% uptime,
p95 latency <200ms) for the last 7 days. Include breaches, contributing
factors, and remediation recommendations.
```
Connect your AI assistant directly to Netdata using the Model Context Protocol (MCP). Ask questions in natural language and receive answers based on live infrastructure data.
MCP is available in two ways:

- `https://app.netdata.cloud/api/v1/mcp` — infrastructure-wide access to all your nodes (Business/Homelab plan)
- `http://YOUR_NETDATA_IP:19999/mcp` — direct access to a single Agent or Parent

AI assistants can query metrics, alerts, logs, and live system information across your entire infrastructure.
| Client | Description |
|---|---|
| Claude Desktop | Anthropic's desktop AI assistant |
| Claude Code | Anthropic's CLI for development workflows |
| Cursor | AI-powered code editor |
| VS Code | Visual Studio Code with MCP support |
| JetBrains IDEs | IntelliJ, PyCharm, WebStorm, and others |
| Gemini CLI | Google's Gemini CLI |
| OpenAI Codex CLI | OpenAI's development tools |
```bash
# Export your MCP key
export NETDATA_MCP_API_KEY="$(cat /var/lib/netdata/mcp_dev_preview_api_key)"

# Connect using mcp-remote
npx mcp-remote@latest --http http://YOUR_NETDATA_IP:19999/mcp \
  --allow-http \
  --header "Authorization: Bearer $NETDATA_MCP_API_KEY"
```
Once connected, you can ask natural language questions about your infrastructure.
See Netdata MCP for detailed setup instructions.
Connect Grafana to Netdata Cloud for infrastructure-wide dashboards. Use Grafana's visualization capabilities with Netdata's real-time metrics.
:::tip
Generate API tokens from Netdata Cloud under User Settings → API Tokens. See API Tokens for details.
:::
Export metrics from Netdata to external databases and business intelligence platforms. You can query data from individual Agents or use Netdata Cloud to aggregate metrics from your entire infrastructure.
Netdata integrates with popular business intelligence tools through several pathways:
| BI Platform | Integration Options |
|---|---|
| Power BI | Netdata Cloud API, Prometheus endpoint, or database export |
| Tableau | Netdata Cloud API, PostgreSQL, or Prometheus |
| Looker / Looker Studio | Netdata Cloud API, BigQuery, or Prometheus |
| Qlik | Netdata Cloud API, PostgreSQL, or InfluxDB |
| SAP Analytics Cloud | Netdata Cloud API or PostgreSQL |
| Metabase | Netdata Cloud API, PostgreSQL, or TimescaleDB |
| Apache Superset | Netdata Cloud API, PostgreSQL, or Prometheus |
| Domo | Netdata Cloud API or database connectors |
| ThoughtSpot | Netdata Cloud API or PostgreSQL |
The Netdata Cloud API lets you query metrics from all your nodes through a single endpoint. This is the simplest approach for multi-node infrastructure.
The endpoint is `https://app.netdata.cloud/api/v2/data`:

```bash
# Query CPU metrics from all nodes
curl -H 'Accept: application/json' \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  'https://app.netdata.cloud/api/v2/data?contexts=system.cpu&after=-3600'

# Get list of all nodes in your space
curl -H 'Accept: application/json' \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  'https://app.netdata.cloud/api/v2/nodes'
```
The Cloud API returns aggregated data from all nodes in your infrastructure, making it ideal for BI tools that need a unified view.
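In a BI pipeline you would typically issue these queries from a script rather than curl. A minimal Python sketch using only the standard library — it builds the same `/api/v2/data` query as the example above; the token is a placeholder, and `fetch` is only called once you have one:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://app.netdata.cloud/api/v2/data"

def build_query(contexts: str, after: int) -> str:
    """Build a Cloud API data query URL (same parameters as the curl example)."""
    params = urllib.parse.urlencode({"contexts": contexts, "after": after})
    return f"{BASE}?{params}"

def fetch(url: str, token: str) -> dict:
    """Fetch and decode a JSON response; requires a valid Cloud API token."""
    req = urllib.request.Request(url, headers={
        "Accept": "application/json",
        "Authorization": f"Bearer {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

url = build_query("system.cpu", -3600)  # last hour of CPU metrics, all nodes
```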
For single-node deployments or Prometheus-based workflows, query the Agent or Parent directly:

```
http://NODE_IP:19999/api/v3/allmetrics?format=prometheus
```

Replace `NODE_IP` with your Netdata Agent or Parent IP address. This endpoint is useful when you need metrics from a specific node or when your BI tool already integrates with Prometheus.
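If your pipeline consumes this endpoint directly instead of going through Prometheus, each line of the text exposition format is easy to parse. A Python sketch — the sample line below is illustrative, not actual Netdata output:

```python
import re

# Prometheus text exposition format: metric_name{label="value",...} value
LINE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)'
)
LABEL = re.compile(r'(\w+)="([^"]*)"')

def parse_line(line: str):
    """Parse one exposition line into (metric, labels, value).

    Returns None for blank lines and comments (# HELP / # TYPE).
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    m = LINE.match(line)
    if not m:
        return None
    labels = dict(LABEL.findall(m.group("labels") or ""))
    return m.group("name"), labels, float(m.group("value"))

# Illustrative line (metric name and labels assumed for the example):
sample = 'netdata_system_cpu_percentage_average{chart="system.cpu",dimension="user"} 12.5'
parsed = parse_line(sample)
```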
Query specific metrics from an Agent or Parent in JSON format. This is useful for BI tools that need to combine Netdata metrics with other business data.
Common BI use cases:
```bash
# Daily averages for last 30 days, grouped by node
curl 'http://NODE_IP:19999/api/v3/data?contexts=system.cpu&after=-2592000&points=30&time_group=avg&group_by=node'

# Weekly max values for capacity planning
curl 'http://NODE_IP:19999/api/v3/data?contexts=system.ram&after=-604800&points=4&time_group=max&group_by=node'

# Hourly sum for cost analysis
curl 'http://NODE_IP:19999/api/v3/data?contexts=system.cpu&after=-86400&points=24&time_group=sum'
```
Key parameters for BI workflows:
| Parameter | Description | Example |
|---|---|---|
| `contexts` | Metric context to query | `system.cpu`, `system.ram`, `disk.io` |
| `after` / `before` | Timeframe (seconds relative to now, or Unix timestamp) | `-2592000` = last 30 days |
| `points` | Number of output points | `30` = daily points for a monthly view |
| `time_group` | Aggregation function | `avg`, `sum`, `min`, `max` |
| `group_by` | How to group results | `node`, `context`, `label:LABEL_NAME` |
Power BI, Tableau, and similar tools can consume this JSON through their data transformation features (Power Query, etc.).
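The same transformation is straightforward in a script. A hedged Python sketch that flattens a labels-plus-rows payload into CSV for BI import — the payload shape and values here are assumed for illustration, so verify them against your Agent's actual JSON response before relying on this:

```python
import csv
import io

def rows_to_csv(labels, data):
    """Flatten a labels + data-rows payload into CSV text for BI import.

    Assumes column names arrive in `labels` and one list per point in
    `data`, timestamp first -- check this against your Agent's response.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(labels)
    writer.writerows(data)
    return buf.getvalue()

# Illustrative payload (shape and values assumed, not real API output):
labels = ["time", "user", "system"]
data = [[1700000000, 12.5, 3.1], [1700000060, 11.8, 2.9]]
csv_text = rows_to_csv(labels, data)
```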
For persistent storage and historical analysis, export metrics to a database:
| Database | Connector |
|---|---|
| PostgreSQL | Prometheus remote write adapter |
| TimescaleDB | Prometheus remote write or netdata-timescale-relay |
| InfluxDB | Graphite or Prometheus remote write |
| Elasticsearch | Graphite or Prometheus remote write |
| Google BigQuery | Prometheus remote write |
| AWS services | AWS Kinesis Data Streams |
| Azure services | Prometheus remote write |
See Export Metrics to External Time-Series Databases for full connector documentation.