src/plugins.d/FUNCTION_UI_REFERENCE.md
Note: This is the technical specification. For a practical guide to implementing functions, see Functions Developer Guide.
This document provides the complete technical reference for Netdata Functions protocol v3, combining all information about simple tables (has_history=false) and log explorers (has_history=true). This is the authoritative internal documentation for maintaining and extending Netdata functions.
Netdata Functions allow collectors/plugins to expose interactive data through a streaming protocol. The protocol has evolved from GET-based CLI parameters (legacy) to POST-based JSON payloads (modern v3).
Simple Table View (has_history: false)
Examples: processes, network-connections, block-devices
Log Explorer Format (has_history: true)
Examples: systemd-journal, windows-events
| Aspect | Simple Tables | Log Explorers |
|---|---|---|
| Data Processing | All data sent to frontend | Backend filters before sending |
| Facet Counts | Frontend counts occurrences in received data | Backend computes counts during query execution |
| Full-Text Search | Frontend substring search across visible data | Backend pattern matching with facets library |
| Histograms | Not supported - no time-based visualization | Optional - backend generates Netdata chart format |
| Performance | Limited by browser memory and processing | Scales to millions of records with sampling |
| Query Parameter | Ignored by backend (frontend only) | Processed by backend using simple patterns |
Info Request (Always GET)
GET /api/v3/function?function=systemd-journal info after:1234567890 before:1234567890
Info Response
{
"v": 3, // Indicates POST should be used for data requests
"accepted_params": [...],
"required_params": [...],
"status": 200,
"type": "table",
"has_history": false
}
Data Request (POST when v=3)
{
"query": "*error* !*debug*",
"selections": {
"priority": ["error", "warning"],
"unit": ["nginx.service"]
},
"after": 1234567890,
"before": 1234567890,
"last": 100
}
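To make the payload rules concrete, here is a minimal sketch of how a client could assemble the data-request body above. The helper name `buildDataRequest` is hypothetical; the filtering behavior (only advertised parameters are sent) mirrors the frontend behavior documented later in this reference.

```javascript
// Hypothetical helper: assembles a v3 POST payload like the example above.
// Only parameters the function advertised in accepted_params are included,
// so unknown keys never reach the backend.
function buildDataRequest({ acceptedParams, selections = {}, query, after, before, last }) {
  const payload = { after, before };
  if (acceptedParams.includes("last") && last !== undefined) payload.last = last;
  if (acceptedParams.includes("query") && query) payload.query = query;

  // Drop any selection whose field is not an accepted parameter.
  const allowed = Object.fromEntries(
    Object.entries(selections).filter(([id]) => acceptedParams.includes(id))
  );
  if (Object.keys(allowed).length > 0) payload.selections = allowed;
  return payload;
}

// Example: "unit" is accepted, "bogus" is silently dropped.
const req = buildDataRequest({
  acceptedParams: ["query", "last", "unit"],
  selections: { unit: ["nginx.service"], bogus: ["x"] },
  query: "*error* !*debug*",
  after: 1234567890,
  before: 1234567899,
  last: 100,
});
```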
Request Format:
GET /api/v3/function?function=systemd-journal info after:1234567890 before:1234567890
Required Response Fields:
{
"v": 3,
"status": 200,
"type": "table",
"has_history": false,
"accepted_params": ["info", "after", "before", "direction", "last"],
"required_params": [
{
"id": "priority",
"name": "Log Level",
"type": "select",
"options": [
{"id": "error", "name": "Error", "defaultSelected": true},
{"id": "warn", "name": "Warning"}
]
}
],
"help": "Function description"
}
Frontend Processing:
- accepted_params: drives the outgoing payload (only parameters in this list are sent). When facets are selected, filters are restricted to accepted_params plus required parameter IDs.
- required_params: generates the filter UI and prevents execution while a required parameter is missing.
- v: 3: enables POST requests with JSON payloads.
Parameter shapes (wire format):
- accepted_params: array of strings (parameter IDs). Examples: ["sockets"], ["group"].
- required_params: array of objects that define UI selectors:
  - id, name, type, options
  - Optional: help, unique_view
  - options[]: id, name, optional defaultSelected, disabled, sort
Cloud-frontend UI notes (verified):
- type: "select" renders as a single-select.
- Without defaultSelected, the UI selects the first option by default.
// Simplified from logs_query_status.h
if(payload) {
// POST request - parse JSON payload
facets_use_hashes_for_ids(facets, false); // Use plain field names
rq->fields_are_ids = false;
} else {
// GET request - parse CLI parameters (legacy)
facets_use_hashes_for_ids(facets, true); // Use hash IDs
rq->fields_are_ids = true;
}
| Aspect | GET (Legacy) | POST (Modern v3) |
|---|---|---|
| Version | v < 3 | v = 3 |
| Field IDs | 11-char hashes | Plain names |
| Parameters | URL encoded | JSON body |
| Facet filters | field_hash:value1,value2 | {"field": ["value1", "value2"]} |
| Full-text search | query:search terms | {"query": "search terms"} |
Core Parameters (All Functions):
"info" - Function info requests (always supported)"after" - Time range start (seconds epoch)"before" - Time range end (seconds epoch)Log Explorer Parameters (has_history=true):
"direction" - Query direction: "backward" | "forward""last" - Result limit (default: 200)"anchor" - Pagination cursor (timestamp or row identifier)"query" - Full-text search using Netdata simple patterns"facets" - Facet selection: "field1,field2""histogram" - Histogram field selection"if_modified_since" - Conditional updates (microseconds epoch)"data_only" - Skip metadata in response (boolean)"delta" - Incremental responses (boolean)"tail" - Streaming mode (boolean)"sampling" - Data sampling controlUI Feature Parameters:
"slice" - Enables "Full data queries" toggle in UIFrontend Behavior:
// Only accepted parameters are sent to functions
const allowedFilterIds = [...selectedFacets, ...requiredParamIds, ...acceptedParams]
filtersToSend = allowedFilterIds.reduce((acc, filterId) => {
if (filterId in filters) acc[filterId] = filters[filterId]
return acc
}, {})
The query parameter uses Netdata's simple pattern matching for full-text search across all fields:
Pattern Syntax:
- | - Pattern separator (OR logic between patterns)
- * - Wildcard matching any number of characters
- ! - Negates the pattern (exclude matches)
Matching Modes:
- pattern - Substring match (default) - finds "pattern" anywhere
- *pattern - Suffix match - finds strings ending with "pattern"
- pattern* - Prefix match - finds strings starting with "pattern"
- *pattern* - Substring match (explicit) - same as default
Examples:
{
"query": "error" // Finds "error" anywhere (substring)
"query": "error|warning" // Finds "error" OR "warning"
"query": "error|warning|critical" // Multiple OR patterns
"query": "!debug" // Exclude ALL rows containing "debug"
"query": "!*debugging*|*debug*" // Include "debug" but exclude "debugging"
"query": "connection failed" // Find exact phrase (spaces included)
"query": "*error" // Find strings ending with "error"
"query": "nginx*" // Find strings starting with "nginx"
}
Pattern Evaluation Rules:
Within a field: Left-to-right, first match wins
"!*debugging*|*debug*" - If text contains "debugging", it's negative match. Otherwise, if contains "debug", it's positive match.Across all fields: ALL fields are evaluated (no short-circuit)
Row decision (after all fields evaluated):
Example: Query "!*debugging*|*debug*"
Row 1: message="debug info", category="debugging tips"
→ message: positive match (debug)
→ category: negative match (debugging)
→ Result: EXCLUDED (has negative match)
Row 2: message="debug info", category="testing"
→ message: positive match (debug)
→ category: no match
→ Result: INCLUDED (positive match, no negative)
Row 3: message="error info", category="testing"
→ message: no match
→ category: no match
→ Result: EXCLUDED (no positive matches)
Key Point: The order fields are evaluated doesn't matter - the same counters are updated and the same decision is made regardless of whether positive or negative matches are found first.
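The evaluation rules above can be sketched as a small runnable model. This is an illustrative re-implementation of the documented behavior, not Netdata's actual C simple-pattern code; it supports only the syntax described in this section (| separators, * wildcards, ! negation, substring matching by default). The helper names `evalField` and `rowMatches` are hypothetical.

```javascript
// Evaluate one field against the pattern list: left-to-right, first match wins.
function evalField(tokens, text) {
  for (const t of tokens) {
    const negative = t.startsWith("!");
    const pat = negative ? t.slice(1) : t;
    const matched = pat.includes("*")
      ? new RegExp(
          "^" +
            pat.split("*")
              .map((s) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")) // escape regex chars
              .join(".*") +
            "$"
        ).test(text)
      : text.includes(pat); // default mode: substring match
    if (matched) return negative ? "negative" : "positive";
  }
  return "none";
}

// Row decision: ALL fields are evaluated (no short-circuit), then
// any negative match excludes, otherwise a positive match includes.
function rowMatches(query, fields) {
  const tokens = query.split("|");
  let positives = 0, negatives = 0;
  for (const text of fields) {
    const r = evalField(tokens, text);
    if (r === "positive") positives++;
    if (r === "negative") negatives++;
  }
  return negatives === 0 && positives > 0;
}
```

Running this against the three example rows above reproduces the documented EXCLUDED/INCLUDED/EXCLUDED outcomes.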
Common Use Cases:
"timeout|failed|refused" - Find various connection issues"!trace|!debug|nginx|apache" - Find web server logs, but exclude debug/trace"error code 500" - Find exact phrase with spaces"critical|fatal|emergency" - Find severe log levels"!*test*|!*debug*|*" - Include everything except test and debug contentImplementation Details:
simple_pattern_create(query, "|", SIMPLE_PATTERN_SUBSTRING, false)FACET_KEY_OPTION_FTS or when FACETS_OPTION_ALL_KEYS_FTS is set? single-character wildcards\ is supported for literal matchesImportant: Hash IDs (like priority_hash) are obsolete and only exist for backward compatibility with GET requests. All new functions should use v:3 with plain field names.
{
// Required fields
"status": 200,
"type": "table",
"has_history": false,
"data": [
[value1, value2, value3, ...], // Row 1
[value1, value2, value3, ...], // Row 2
// Special rowOptions as last element
[..., {"rowOptions": {"severity": "warning|error|notice|normal"}}]
],
"columns": {
"column_name": {
// Required
"index": 0, // Position in data array
"name": "Display Name", // Column header
"type": "string", // Field type
// Optional
"unique_key": false, // Row identifier (exactly one required)
"visible": true, // Default visibility
"sticky": false, // Pin when scrolling
"visualization": "value", // How to render
"value_options": { // Value formatting options
"units": "bytes",
"transform": "none",
"decimal_points": 2,
"default_value": ""
},
"max": 100, // For bar types
"pointer_to": "col_id", // Optional reference target
"sort": "descending", // Default sort
"sortable": true, // User can sort
"filter": "multiselect", // Filter type
"full_width": false, // Expand to fill
"wrap": false, // Text wrapping
"default_expanded_filter": false,
"summary": "sum", // Aggregation (backend only)
"dummy": false // True for hidden/internal columns
}
},
// Optional extensions
"help": "Function description",
"update_every": 1,
"expires": 1234567890,
"default_sort_column": "column_name",
"group_by": {
"aggregated": [{
"id": "group_id",
"name": "Group Name",
"column": "column_to_group_by"
}]
},
"charts": {
"chart_id": {
"name": "Chart Name",
"type": "stacked|bar",
"columns": ["col1", "col2"]
}
},
"accepted_params": ["param_id"],
"required_params": [
{
"id": "param_id",
"name": "Parameter Name",
"type": "select",
"unique_view": true,
"options": [
{"id": "opt1", "name": "Option 1", "defaultSelected": true},
{"id": "opt2", "name": "Option 2"}
]
}
]
}
Special last element in data array for row styling:
[..., {"rowOptions": {"severity": "error"}}] // Red background
[..., {"rowOptions": {"severity": "warning"}}] // Yellow background
[..., {"rowOptions": {"severity": "notice"}}] // Blue background
[..., {"rowOptions": {"severity": "normal"}}] // Default appearance
{
// Basic fields (same as simple table)
"status": 200,
"type": "table",
"has_history": true, // REQUIRED: Enables log explorer UI
"help": "System log explorer",
"update_every": 1,
// Table metadata
"table": {
"id": "logs",
"has_history": true,
"pin_alert": false
},
// Faceted filters (dynamic with counts)
"facets": [
{
"id": "priority", // Plain field name (not hash)
"name": "Priority",
"order": 1,
"options": [
{
"id": "ERROR",
"name": "ERROR",
"count": 45, // Real-time count
"order": 1
}
]
}
],
// Enhanced columns
"columns": {
"timestamp": {
"index": 0,
"id": "timestamp",
"name": "Time",
"type": "timestamp",
"transform": "datetime_usec",
"sort": "descending|fixed",
"sortable": false,
"sticky": true
},
"level": {
"index": 1,
"id": "priority", // Links to facet
"name": "Level",
"type": "string",
"visualization": "pill",
"filter": "facet", // Not multiselect!
"options": ["facet", "visible", "sticky"]
},
"message": {
"index": 3,
"id": "message",
"name": "Message",
"type": "string",
"full_width": true,
"options": [
"full_width",
"wrap",
"visible",
"main_text", // Primary content
"fts", // Full-text searchable
"rich_text" // May contain formatting
]
}
},
// Data with microsecond timestamps
"data": [
[
1697644320000000, // Microseconds
{"severity": "error"}, // rowOptions
"ERROR", // level
"nginx", // source
"Connection failed" // message
]
],
// Histogram configuration
"available_histograms": [
{"id": "priority", "name": "Priority", "order": 1},
{"id": "source", "name": "Source", "order": 2}
],
"histogram": {
"id": "priority",
"name": "Priority",
"chart": {
"summary":,
"result": {
"labels": ["time", "ERROR", "WARN", "INFO"],
"data": [
[1697644200, 5, 12, 234],
[1697644260, 3, 8, 198]
]
}
}
},
// Pagination metadata
"items": {
"evaluated": 50000, // Total scanned
"matched": 2520, // Match filters
"unsampled": 100, // Skipped (sampling)
"estimated": 0, // Statistical estimate
"returned": 100, // In this response
"max_to_return": 100,
"before": 0,
"after": 2420
},
// Navigation anchor
"anchor": {
"last_modified": 1697644320000000,
"direction": "backward" // or "forward"
},
// Request echo (optional)
"request": {
"query": "*error* *warning*",
"filters": ["priority:error,warning"],
"histogram": "priority"
},
// Additional metadata
"expires": 1697644920000,
"sampling": 10 // 1 in N sampling
}
| Feature | Simple Table | Log Explorer |
|---|---|---|
| Facet counts | Frontend computes from data | Backend computes in facets library |
| Full-text search | Frontend substring matching | Backend pattern matching |
| Histograms | Not supported | Optional (backend generated) |
| Filtering | Static multiselect | Dynamic facets with counts |
| Pagination | All data at once | Anchor-based infinite scroll |
| Time visualization | None | Histogram chart |
| Navigation | None | Bi-directional with timestamps |
| Performance | All data loaded | Sampling for large datasets |
| Option | UI Effect |
|---|---|
"facet" | Field is filterable via facets |
"fts" | Full-text searchable |
"main_text" | Primary content field |
"rich_text" | May contain formatting |
"pretty_xml" | Format as XML |
"hidden" | Hide by default |
Facet values in the sidebar display pills with counts that change based on the function's aggregation mode. The UI uses a smart component that displays different formats:
The pills are rendered using this logic:
{!!actualCount && <TextSmall>{actualCount} ⊃ </TextSmall>}
<TextSmall>{(pill || count).toString()}</TextSmall>
Symbol: ⊃ (superset symbol, looks like a rotated 'u')
Display Formats:
"42" (just the count)"15 ⊃ 42" (15 aggregated items containing 42 total)Functions enable aggregated counts by including an aggregated_view object in their response:
{
"aggregated_view": {
"column": "Count",
"results_label": "unique combinations",
"aggregated_label": "sockets"
}
}
Example from network-connections function:
// In network-viewer.c when aggregated=true
buffer_json_member_add_object(wb, "aggregated_view");
{
buffer_json_member_add_string(wb, "column", "Count");
buffer_json_member_add_string(wb, "results_label", "unique combinations");
buffer_json_member_add_string(wb, "aggregated_label", "sockets");
}
buffer_json_object_close(wb);
Functions can provide both standard charts (computed by frontend) and custom visualizations.
Configuration:
{
"charts": {
"cpu_usage": {
"name": "CPU Usage by Service",
"type": "stacked-bar",
"columns": ["user_cpu", "system_cpu"],
"groupBy": "column",
"aggregation": "sum"
}
},
"default_charts": [
["cpu_usage", "service_type"]
]
}
Supported Types:
"bar" - Basic bar chart"stacked-bar" - Multi-column stacked bars"doughnut" - Pie/doughnut chart"value" - Simple numeric displayGroupBy Options:
"column" (default) - Group by selected filter column"all" - Aggregate all data togetherFor specialized visualizations beyond standard chart types, some functions may use predefined custom chart types:
{
"customCharts": {
"network_topology": {
"type": "network-viewer",
"config": {
"layout": "force",
"showLabels": true
}
}
}
}
Available Custom Types:
"network-viewer" - Interactive network topology (for network-connections function)Charts are computed from table data:
Simple tables support grouping rows with backend-defined aggregation rules.
Backend Summary Types:
typedef enum {
RRDF_FIELD_SUMMARY_COUNT, // Count rows in group
RRDF_FIELD_SUMMARY_UNIQUECOUNT, // Count unique values
RRDF_FIELD_SUMMARY_SUM, // Sum numeric values
RRDF_FIELD_SUMMARY_MIN, // Minimum value
RRDF_FIELD_SUMMARY_MAX, // Maximum value
RRDF_FIELD_SUMMARY_MEAN, // Average value
RRDF_FIELD_SUMMARY_MEDIAN, // Median value
} RRDF_FIELD_SUMMARY;
Column Summary Configuration:
buffer_rrdf_table_add_field(
wb, field_id++, "cpu", "CPU Usage",
RRDF_FIELD_TYPE_INTEGER,
RRDF_FIELD_VISUAL_VALUE,
RRDF_FIELD_TRANSFORM_NUMBER,
2, "%", NAN, RRDF_FIELD_SORT_DESCENDING, NULL,
RRDF_FIELD_SUMMARY_SUM, // How to aggregate when grouping
RRDF_FIELD_FILTER_RANGE,
RRDF_FIELD_OPTS_VISIBLE, NULL
);
Group By Support:
{
"group_by": {
"aggregated": [{
"id": "by_status",
"name": "By Status",
"column": "status"
}]
}
}
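A minimal sketch of what a group_by view computes, assuming a numeric column aggregates with its declared summary type ("sum" here). The column layout, sample data, and `groupBy` helper are hypothetical; the real aggregation happens in the UI from the backend's column metadata.

```javascript
// Hypothetical column metadata and rows: "cpu" declares summary "sum".
const columns = { status: { index: 0 }, cpu: { index: 1, summary: "sum" } };
const data = [
  ["running", 12],
  ["running", 8],
  ["stopped", 3],
];

// Group rows by one column and aggregate another with the "sum" summary.
function groupBy(data, columns, groupCol, valueCol) {
  const gi = columns[groupCol].index;
  const vi = columns[valueCol].index;
  const out = new Map();
  for (const row of data) {
    out.set(row[gi], (out.get(row[gi]) || 0) + row[vi]);
  }
  return out;
}

const grouped = groupBy(data, columns, "status", "cpu");
```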
The frontend processes table data to generate facet counts:
const getFilterTableOptions = (data, { param, columns, aggregatedView } = {}) =>
Object.entries(
data.reduce((h, fn) => {
h[fn[param]] = {
count: (h[fn[param]]?.count || 0) + (fn.hidden ? 0 : 1),
...(aggregatedView && {
actualCount: (h[fn[param]]?.actualCount || 0) +
(fn.hidden ? 0 : fn[aggregatedView.column] || 1),
actualCountLabel: aggregatedView.aggregatedLabel,
countLabel: aggregatedView.resultsLabel,
}),
}
return h
}, {})
).map(([id, values]) => ({ id, ...values }))
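The counting logic above can be condensed into a small standalone sketch (the `facetCounts` name and object-shaped rows are assumptions for readability): each non-hidden row increments the count for its facet value.

```javascript
// Simplified facet counting: rows flagged hidden are excluded from counts.
function facetCounts(rows, param) {
  const acc = {};
  for (const row of rows) {
    const value = row[param];
    acc[value] = (acc[value] || 0) + (row.hidden ? 0 : 1);
  }
  return acc;
}

const rows = [
  { protocol: "TCP" },
  { protocol: "TCP" },
  { protocol: "UDP", hidden: true }, // not counted
  { protocol: "UDP" },
];
const counts = facetCounts(rows, "protocol");
```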
The aggregated count system uses an existing data column to track how many items were aggregated into each row:
Backend declares which column contains the aggregation count:
"aggregated_view": {
"column": "Count" // Use the "Count" column from data rows
}
Frontend calculates two values for each facet value:
- count: How many rows have this facet value
- actualCount: Sum of the aggregation column for all rows with this facet value
Example: Network connections aggregated by protocol
Data returned by backend:
Direction | Protocol | LocalPort | RemotePort | Count
---------|----------|-----------|------------|-------
Inbound | TCP | * | * | 5
Inbound | UDP | * | * | 3
Outbound | TCP | * | * | 7
Outbound | UDP | * | * | 2
Facet pills displayed in UI:
- Inbound: 8 ⊃ 2 (8 connections aggregated into 2 rows)
- Outbound: 9 ⊃ 2 (9 connections aggregated into 2 rows)
- TCP: 12 ⊃ 2 (12 connections aggregated into 2 rows)
- UDP: 5 ⊃ 2 (5 connections aggregated into 2 rows)
Reading the pills: 12 ⊃ 2 means "12 original items shown in 2 table rows"
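The pill numbers can be reproduced directly from the sample table: count is the number of rows carrying a facet value, actualCount is the sum of the "Count" column over those rows. The `pill` helper below is an illustrative sketch, not the frontend's actual component.

```javascript
// Sample rows mirroring the table above.
const rows = [
  { Direction: "Inbound",  Protocol: "TCP", Count: 5 },
  { Direction: "Inbound",  Protocol: "UDP", Count: 3 },
  { Direction: "Outbound", Protocol: "TCP", Count: 7 },
  { Direction: "Outbound", Protocol: "UDP", Count: 2 },
];

// Render "actualCount ⊃ count" for one facet value.
function pill(rows, facet, value, aggColumn) {
  const matching = rows.filter((r) => r[facet] === value);
  const count = matching.length;                                    // table rows
  const actualCount = matching.reduce((s, r) => s + r[aggColumn], 0); // original items
  return `${actualCount} ⊃ ${count}`;
}
```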
The tooltip provides human-readable context:
"42 results""15 sockets aggregated in 42 unique combinations"| Function | Aggregated Mode | Trigger | Count Meaning |
|---|---|---|---|
network-connections | sockets:aggregated | Parameter | Sockets → unique combinations |
| Other functions | N/A | None currently | Single count only |
To add aggregated count support to a function:
- Add an aggregated_view object to the response when in aggregated mode
- The frontend detects the aggregated_view presence and renders the dual-count pills
| Type | UI Component | Use Case | Notes |
|---|---|---|---|
| string | ValueCell | Text, names, categories | Left-aligned |
| integer | ValueCell | Numbers, counts, IDs | Right-aligned |
| bar-with-integer | BarCell | Percentages, metrics | Requires max |
| duration | BarCell | Time intervals | Auto-formats seconds |
| timestamp | DatetimeCell* | Date/time points | *UI maps to datetime |
| feedTemplate | FeedTemplateCell | Rich content | Auto full_width |
| Type | Behavior | Notes |
|---|---|---|
| boolean | ValueCell | No special boolean UI |
| float | ValueCell | Go functions may emit floats; UI falls back to default |
| detail-string | ValueCell | No expandable functionality |
| array | ValueCell | Works with pill visualization |
| none | ValueCell | Avoid using |
| Type | UI Component | Use Case |
|---|---|---|
| value | ValueCell | Standard text display (default) |
| bar | BarCell | Progress bar without text |
| pill | PillCell | Badge/tag display |
| richValue | RichValueCell | Enhanced value display |
| feedTemplate | FeedTemplateCell | Full-width template |
| rowOptions | null | Special row configuration |
Fallback Behavior: gauge is not a recognized visualization. If used, it is ignored, and the renderer falls back to using the field's type to select a component.
JSON path: columns[*].value_options.transform (and columns[*].value_options.decimal_points for numeric formatting).
| Type | Input | Output | Notes |
|---|---|---|---|
| none | Any | Unchanged | Default |
| number | Number | Formatted with decimals | Uses decimal_points |
| duration | Seconds | "Xd Yh Zm" | Human-readable |
| datetime | Epoch ms | Localized date/time | |
| datetime_usec | Epoch μs | Localized date/time | For logs |
| xml | XML string | Formatted XML | No specialized UI |
| text | Any | Unchanged | UI falls back to default |
Not all transform values are compatible with all type values. The backend enforces the following compatibility rules:
| Field Type (type) | Compatible Transforms (transform) |
|---|---|
| timestamp | datetime_ms, datetime_usec |
| duration | duration_s |
| integer, bar-with-integer | number |
| string, boolean, array | none, xml |
Using an incompatible transform will result in unexpected behavior or errors.
rowOptions Dummy Column
To add rowOptions for row-level severity styling, a special "dummy" column must be added to the columns definition. This column is not displayed in the UI but provides the necessary metadata. It must be created with this specific combination of values:
- type: none (RRDF_FIELD_TYPE_NONE)
- visualization: rowOptions (RRDF_FIELD_VISUAL_ROW_OPTIONS)
- options: dummy (RRDF_FIELD_OPTS_DUMMY)
Example C code:
buffer_rrdf_table_add_field(wb, field_id++, "row_options", "Row Options",
RRDF_FIELD_TYPE_NONE, RRDF_FIELD_VISUAL_ROW_OPTIONS, RRDF_FIELD_TRANSFORM_NONE,
0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL,
RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_NONE, RRDF_FIELD_OPTS_DUMMY, NULL);
| Type | UI Component | Use Case | Location |
|---|---|---|---|
| multiselect | Checkboxes | Column filtering (default) | Dynamic filters |
| range | RangeFilter | Numeric min/max | Dynamic filters |
| facet | Facets component | With counts (logs) | Sidebar |
| Option | Bit | UI Effect |
|---|---|---|
| unique_key | 0x01 | Row identifier (one required) |
| visible | 0x02 | Show by default |
| sticky | 0x04 | Pin column when scrolling |
| full_width | 0x08 | Expand to fill space |
| wrap | 0x10 | Enable text wrapping |
| dummy | 0x20 | Internal use only |
| default_expanded_filter | 0x40 | Expand filter by default |
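The options travel as a bitmask on the backend side; a small sketch shows how the bits from the table combine and decompose. The `FIELD_OPTS` map and the `pack`/`unpack` helpers are illustrative (names mirror RRDF_FIELD_OPTS_* but are not the real API).

```javascript
// Bit values taken from the table above.
const FIELD_OPTS = {
  unique_key: 0x01,
  visible: 0x02,
  sticky: 0x04,
  full_width: 0x08,
  wrap: 0x10,
  dummy: 0x20,
  default_expanded_filter: 0x40,
};

// Combine option names into a bitmask.
const pack = (names) => names.reduce((bits, n) => bits | FIELD_OPTS[n], 0);

// Recover option names from a bitmask.
const unpack = (bits) =>
  Object.keys(FIELD_OPTS).filter((n) => bits & FIELD_OPTS[n]);

const bits = pack(["visible", "sticky"]); // 0x02 | 0x04
```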
- ascending - Sort low to high
- descending - Sort high to low
Backend calculates but the UI doesn't display directly:
- count, sum, min, max, mean, median
- uniqueCount - Number of unique values
- extent - Range [min, max] (UI supported)
- unique - List of unique values (UI supported)
The UI automatically sizes columns based on metadata:
| Size | Pixels | Applied To |
|---|---|---|
| xxxs | 90px | unique_key fields, bar types |
| xxs | 110px | Default for most types |
| xs | 130px | Available but rarely used |
| sm | 160px | timestamp, datetime types |
| md-xl | 190-290px | Available but rarely used |
| xxl | 1000px | feedTemplate with full_width |
Algorithm:
- full_width: → xxl with expansion
- unique_key: → xxxs
// Field type → Component
componentByType = {
"bar": BarCell,
"bar-with-integer": BarCell,
"duration": BarCell,
"pill": PillCell,
"feedTemplate": FeedTemplateCell,
"datetime": DatetimeCell,
// Others → ValueCell
}
// Visualization → Component
componentByVisualization = {
"bar": BarCell,
"pill": PillCell,
"richValue": RichValueCell,
"feedTemplate": FeedTemplateCell,
"rowOptions": null, // Skip rendering
// Others → ValueCell
}
The frontend uses a modular architecture with:
// Add a field to the table
buffer_rrdf_table_add_field(
BUFFER *wb,
size_t field_id,
const char *key,
const char *name,
RRDF_FIELD_TYPE type,
RRDF_FIELD_VISUAL visual,
RRDF_FIELD_TRANSFORM transform,
size_t decimal_points,
const char *units,
NETDATA_DOUBLE max,
RRDF_FIELD_SORT sort,
const char *pointer_to_dim_in_rrdr,
RRDF_FIELD_SUMMARY summary,
RRDF_FIELD_FILTER filter,
RRDF_FIELD_OPTS options,
const char *default_value
);
// CPU usage with progress bar
buffer_rrdf_table_add_field(
wb, field_id++, "CPU", "CPU %",
RRDF_FIELD_TYPE_BAR_WITH_INTEGER,
RRDF_FIELD_VISUAL_BAR,
RRDF_FIELD_TRANSFORM_NUMBER,
2, "%", 100.0, RRDF_FIELD_SORT_DESCENDING, NULL,
RRDF_FIELD_SUMMARY_SUM,
RRDF_FIELD_FILTER_RANGE,
RRDF_FIELD_OPTS_VISIBLE, NULL
);
// Process name (unique key)
buffer_rrdf_table_add_field(
wb, field_id++, "Name", "Name",
RRDF_FIELD_TYPE_STRING,
RRDF_FIELD_VISUAL_VALUE,
RRDF_FIELD_TRANSFORM_NONE,
0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL,
RRDF_FIELD_SUMMARY_COUNT,
RRDF_FIELD_FILTER_MULTISELECT,
RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_STICKY | RRDF_FIELD_OPTS_UNIQUE_KEY,
NULL
);
// Row options in data
buffer_json_add_array_item_object(wb);
buffer_json_member_add_object(wb, "rowOptions");
buffer_json_member_add_string(wb, "severity", "error");
buffer_json_object_close(wb);
buffer_json_object_close(wb);
Key source files:
- netdata/src/libnetdata/buffer/functions_fields.h
- netdata/src/libnetdata/buffer/functions_fields.c
- netdata/src/libnetdata/facets/
- apps.plugin/apps_functions.c
- network-viewer.plugin/network-viewer.c
- systemd-journal.plugin/systemd-journal.c
Implementation guidelines:
- Set the unique_key option on exactly one column
- Provide a name for display and a type for the data
- Use "v": 3
- Use bar-with-integer for percentages, with a max value and units for clarity, plus a range filter
- Use the duration type for intervals and the timestamp type for points in time
- Send rowOptions as the last array element
- Use multiselect (default) for categories, range for numeric data, and facet only for has_history=true
Common column patterns:
- bar-with-integer + range filter
- string + multiselect filter
- duration + duration transform
- string + pill visualization
- rowOptions with severity
| Function | Plugin | Key Features |
|---|---|---|
| processes | apps.plugin | CPU/memory bars, grouping |
| socket | ebpf.plugin | Complex filters, charts |
| network-connections | network-viewer.plugin | Aggregated views, severity |
| systemd-list-units | systemd-units.plugin | Unit status, severity |
| ipmi-sensors | freeipmi.plugin | Hardware monitoring |
| block-devices | proc.plugin | I/O statistics, charts |
| network-interfaces | proc.plugin | Network stats, severity |
| mount-points | diskspace.plugin | Filesystem usage |
| cgroup-top | cgroups.plugin | Container metrics |
| systemd-top | cgroups.plugin | Service metrics |
| metrics-cardinality | web api | Dynamic columns |
| streaming | web api | Replication status |
| all-queries | web api | Monitors the progress of in-flight queries |
| Function | Plugin | Key Features |
|---|---|---|
| systemd-journal | systemd-journal.plugin | System logs, faceted search |
| windows-events | windows-events.plugin | Windows logs, faceted search |
"v": 3unique_key fieldupdate_everyThe protocol handles empty results gracefully:
{
"status": 200,
"type": "table",
"has_history": false,
"columns": {...}, // Full column definitions
"data": [] // Empty array is valid
}
{"status": 200, "type": "table", "columns": {}, "data": []}null values in data arrays are rendered as empty cellsnullNaN for numeric fields: For fields with transform: "number", NaN values will be displayed as the string "NaN". For timestamp fields with datetime or datetime_usec transforms, NaN epoch values will display as empty cells.NAN constant for missing numeric valuesThe protocol properly escapes:
", \, control chars)decimal_points field controls display precision, using JavaScript's toFixed() method.NETDATA_DOUBLE type for floating-point numbers.Intl.NumberFormat with a fixed locale (e.g., "en-US").toLocaleString() which respects the browser's locale.toFixed(), very large numbers will be displayed as a long string with the specified decimal places, not automatically in scientific notation.NaN values in numeric fields will be displayed as the string "NaN".Infinity and -Infinity values will be displayed as the strings "Infinity" and "-Infinity" respectively.timestamp fields with datetime or datetime_usec transforms, NaN epoch values will display as empty cells.Simple Tables: Use milliseconds with datetime transform
{
"type": "timestamp",
"transform": "datetime", // Expects milliseconds
"data": [1697644320000] // JavaScript Date format
}
Log Explorers: Use microseconds with datetime_usec transform
{
"type": "timestamp",
"transform": "datetime_usec", // Expects microseconds
"data": [1697644320000000] // Microsecond precision
}
Frontend Conversion:
// datetime_usec automatically converts to milliseconds
if (usec) {
epoch = epoch ? Math.floor(epoch / 1000) : epoch
}
API Parameters: after and before are automatically converted from milliseconds to seconds when sent to functions.
Display Format:
- Timezone: user-selected (e.g., utc) or system default
- Formatting via Intl.DateTimeFormat with the browser's locale
Sorting:
- The backend declares default sort direction with RRDF_FIELD_SORT_*
- sortable: true enables UI sorting (default)
- default_sort_column specifies the startup sort
Log explorer functions use anchor-based pagination for efficient navigation through large datasets.
{
"pagination": {
"enabled": true,
"column": "timestamp",
"key": "anchor",
"units": "timestamp_usec"
}
}
Anchor Management:
// Frontend calculates anchors from data boundaries
anchorBefore: latestData[latestData.length - 1][pagination.column],
anchorAfter: latestData[0][pagination.column],
anchorUnits: pagination.units
Infinite Scroll Navigation:
- Older pages load via anchorBefore
- Newer pages load via anchorAfter
- hasNextPage, hasPrevPage control load triggers
Required Parameters:
- anchor: {VALUE} - Pagination cursor value
- direction: "backward"|"forward" - Navigation direction
- last: NUMBER - Page size (default: 200)
When in PLAY mode (after < 0), pagination automatically coordinates with real-time updates:
{
direction: "forward",
merge: true,
tail: true,
delta: true,
anchor: anchorAfter
}
Simple tables load all data at once, but handle large sets efficiently:
Log explorer uses anchor-based pagination:
- Fixed page size per request (last parameter)
Delta mode enables efficient real-time updates by sending only changes since the last request.
{
if_modified_since: 1697644320000000, // Previous modification timestamp
direction: "forward",
merge: true,
tail: true,
delta: true,
data_only: true,
anchor: anchorAfter
}
Facets Delta:
{
"facetsDelta": [
{
"id": "priority",
"options": [
{"id": "ERROR", "count": 5}, // Incremental counts
{"id": "WARN", "count": 12}
]
}
]
}
Histogram Delta:
{
"histogramDelta": {
"chart": {
"result": {
"labels": ["time", "ERROR", "WARN"],
"data": [
[1697644320, 3, 8] // New data points only
]
}
}
}
}
Frontend merges delta responses with existing data:
- Facet counts: count = (existing || 0) + (delta || 0)
- New rows are appended according to direction
PLAY mode enables live data streaming with efficient polling and conditional updates.
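A sketch of the additive facet merge described above. `mergeFacets` is a hypothetical helper, not the frontend's actual function; it applies the count = existing + delta rule per facet option.

```javascript
// Merge a facetsDelta payload into previously received facets.
function mergeFacets(existing, delta) {
  const byId = new Map(existing.map((f) => [f.id, f]));
  for (const df of delta) {
    const facet = byId.get(df.id) || { id: df.id, options: [] };
    const opts = new Map(facet.options.map((o) => [o.id, o]));
    for (const d of df.options) {
      const prev = opts.get(d.id) || { id: d.id, count: 0 };
      // Incremental counts are added to whatever was already known.
      opts.set(d.id, { ...prev, count: (prev.count || 0) + (d.count || 0) });
    }
    facet.options = [...opts.values()];
    byId.set(df.id, facet);
  }
  return [...byId.values()];
}

const merged = mergeFacets(
  [{ id: "priority", options: [{ id: "ERROR", count: 40 }] }],
  [{ id: "priority", options: [{ id: "ERROR", count: 5 }, { id: "WARN", count: 12 }] }]
);
```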
- PLAY mode: after < 0 (relative time from now)
- Static mode: after > 0 (absolute timestamp)
When if_modified_since is present, the system automatically includes:
{
"if_modified_since": 1697644320000000,
"direction": "forward",
"merge": true,
"tail": true,
"delta": true,
"data_only": true,
"anchor": "anchorAfter"
}
304 Not Modified: Indicates no new data available
{
"status": 304
}
Frontend handles 304 responses gracefully without showing errors to users.
- Poll at the function's update_every value
- Use if_modified_since to avoid unnecessary data transfers
Minimum Required Fields:
{
"status": 200, // Required
"type": "table", // Required
"columns": {}, // Required (can be empty)
"data": [] // Required (can be empty)
}
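The minimum-fields contract above can be checked with a small sketch validator (the `validateDataResponse` name and error strings are illustrative, not part of the protocol).

```javascript
// Check the four required fields of a data response.
function validateDataResponse(res) {
  const errors = [];
  if (typeof res.status !== "number") errors.push("status is required");
  if (res.type !== "table") errors.push('type must be "table"');
  if (typeof res.columns !== "object" || res.columns === null)
    errors.push("columns is required (may be empty)");
  if (!Array.isArray(res.data)) errors.push("data is required (may be empty)");
  return errors;
}

// Empty columns/data are valid; missing ones are not.
const ok = validateDataResponse({ status: 200, type: "table", columns: {}, data: [] });
const bad = validateDataResponse({ status: 200, type: "table" });
```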
Common Optional Fields:
- has_history: Default false
- help: Documentation text
- update_every: Default 1
- expires: Cache control
- default_sort_column: Initial sort
- accepted_params, required_params: Some backends include these in data responses (e.g., go.d functions)
- Columns require index, name, type
- Info responses include v, type, has_history, accepted_params, required_params, help
- Data responses include type, columns, data
- errorMessage (camelCase) is used by the Functions UI
Note on casing: the cloud-frontend camelizes successful responses (info/data) before use, but does not camelize error payloads.
When a function encounters an error, the backend returns a JSON object. The primary error generator (rrd_call_function_error) produces the following minimal format:
{
"status": 400, // The HTTP status code (e.g., 400, 404, 500)
"errorMessage": "A descriptive error message"
}
Frontend Consumption and Interpretation: The frontend is designed to handle a more comprehensive error structure, allowing for richer error display and localization. When an error occurs, the frontend will attempt to extract information from the received error object using the following hierarchy:
- status: The HTTP status code, used for general error classification (e.g., 400 for bad request, 404 for not found).
- errorMessage: Primary detailed message used by the cloud-frontend Functions UI.
- error: A short, machine-readable error identifier (e.g., "MissingParameter"). While not consistently generated by rrd_call_function_error, other parts of the system or future backend implementations might provide this.
- message: A user-friendly message. The frontend often maps errorMessage or an internal errorMsgKey to this for display.
- help: Optional additional guidance for resolving the error. This field is not currently generated by rrd_call_function_error.
Compatibility note (cloud-frontend):
- The frontend reads errorMessage (camelCase) and does not camelize error payloads.
Example of Frontend Interpretation (Conceptual):
The frontend might internally map specific errorMessage strings to predefined errorMsgKey values to provide localized or more context-specific messages to the user. For instance, a backend errorMessage like "The 'time_range' parameter is required" might be mapped to an errorMsgKey of "ErrMissingTimeRange" in the frontend, which then displays a user-friendly message like "Please specify a time range for this function."
Therefore, while the backend currently provides status and errorMessage, developers should be aware that the frontend's error handling is capable of utilizing the more detailed fields (error, message, help) if they are provided by the backend in the future or by other API endpoints.
Common Error Codes:
| Code | Use Case | Example |
|---|---|---|
| 400 | Bad input | Missing/invalid parameters |
| 401 | Auth required | User not logged in |
| 403 | Forbidden | Insufficient permissions |
| 404 | Not found | No data matches query |
| 500 | Server error | Internal failures |
| 503 | Unavailable | Service overloaded |
Functions integrate with Netdata through the PLUGINSD protocol:
Registration:
FUNCTION "function_name" timeout "help text" "tags" "http_access" priority version
Execution Flow:
- The plugin sends FUNCTION to register the function
- The agent sends FUNCTION_CALL with a transaction ID
- The plugin replies with FUNCTION_RESULT_BEGIN transaction status format expires
- The result data follows, terminated by FUNCTION_RESULT_END
With Payload:
FUNCTION_PAYLOAD_BEGIN transaction timeout function access source content_type
<payload data>
FUNCTION_PAYLOAD_END
Cancellation:
FUNCTION_CANCEL transaction_id
Progress Updates:
FUNCTION_PROGRESS transaction_id done total
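The result framing above can be illustrated with a small serializer. This is a sketch of the wire shape only; the transaction ID, content type, and expiry values are hypothetical, and real plugins write this over their stdout pipe to the agent.

```javascript
// Frame a function result as a FUNCTION_RESULT_BEGIN ... FUNCTION_RESULT_END block.
// Header order follows the protocol text: transaction, status, format, expires.
function functionResult(transaction, status, contentType, expires, payload) {
  return [
    `FUNCTION_RESULT_BEGIN ${transaction} ${status} ${contentType} ${expires}`,
    payload,
    "FUNCTION_RESULT_END",
  ].join("\n");
}

const out = functionResult(
  "tr-123",                 // hypothetical transaction ID from FUNCTION_CALL
  200,
  "application/json",
  1697644920,               // hypothetical expiry
  JSON.stringify({ status: 200, type: "table", columns: {}, data: [] })
);
```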
Functions support streaming for real-time updates:
// Transaction management
dictionary_set(parser->inflight.functions, transaction_str, &function_data);
// Timeout handling
if (*pf->stop_monotonic_ut + RRDFUNCTIONS_TIMEOUT_EXTENSION_UT < now_ut) {
// Function timed out
}
// Progress callback
if(stream_has_capability(s, STREAM_CAP_PROGRESS)) {
// Enable progress updates
}
- pluginsd_functions.c: Function execution and management
- stream-sender-execute.c: Streaming function calls
- plugins.d/README.md: Protocol documentation
- plugins.d/functions-table.md: Table format specification
Protocol version markers: "v": 3 and has_history: true
This document represents the complete v3 protocol specification. Key areas for future development:
Protocol Extensions
UI Enhancements
Performance Optimizations
This section tracks the validation work completed and remaining tasks
- processes function implementation in apps.plugin
- network-connections function in network-viewer.plugin
This section documents the investigation process to avoid repeating work
processes function in apps.plugin
- /netdata/src/collectors/apps.plugin/apps_functions.c
- apps_plugin.c:752
network-connections function in network-viewer.plugin
- /netdata/src/collectors/network-viewer.plugin/network-viewer.c
- /netdata/src/libnetdata/buffer/functions_fields.h
- /netdata/src/libnetdata/buffer/functions_fields.c
- pluginsd_functions.c and stream-sender-execute.c
- {"status": 200, "type": "table", "columns": {}, "data": []}
- decimal_points field
- buffer_rrdf_table_add_field() for column definitions
- buffer_json_* functions to build JSON responses
- rowOptions field for row severity/status (5/12 functions)
- charts and default_charts definitions (most functions)
- group_by aggregation options (most functions)
- accepted_params and required_params (metrics-cardinality)
- default_sort_column for initial sorting
Detailed UI behaviors when has_history=true
When properly formatted, the log explorer UI provides:
Sidebar with Faceted Filters
Time-based Histogram
Advanced Table Features
Search and Navigation
Live Features
- Auto-refresh driven by update_every
Last Updated: Based on analysis of the Netdata codebase and cloud-frontend implementation