content/telegraf/v1/configuration.md
Telegraf uses a configuration file to define what plugins to enable and what settings to use when Telegraf starts. Each Telegraf plugin has its own set of configuration options. Telegraf also provides global options for configuring specific Telegraf settings.
[!Note] See Get started to quickly get up and running with Telegraf.
The telegraf config command lets you generate a configuration file using Telegraf's list of plugins.
To generate a configuration file with default input and output plugins enabled, enter the following command in your terminal:
{{< code-tabs-wrapper >}} {{% code-tabs %}} Linux and macOS Windows {{% /code-tabs %}} {{% code-tab-content %}}
telegraf config > telegraf.conf
{{% /code-tab-content %}} {{% code-tab-content %}}
.\telegraf.exe config > telegraf.conf
{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
The generated file contains settings for all available plugins--some are enabled and the rest are commented out.
To generate a configuration file that contains settings for only specific plugins,
use the --input-filter and --output-filter options to
specify input plugins
and output plugins.
Use a colon (:) to separate plugin names.
{{< code-tabs-wrapper >}} {{% code-tabs %}} Linux and macOS Windows {{% /code-tabs %}} {{% code-tab-content %}}
telegraf \
--input-filter <INPUT_PLUGIN_NAME>[:<INPUT_PLUGIN_NAME>] \
--output-filter <OUTPUT_PLUGIN_NAME>[:<OUTPUT_PLUGIN_NAME>] \
config > telegraf.conf
{{% /code-tab-content %}} {{% code-tab-content %}}
.\telegraf.exe `
--input-filter <INPUT_PLUGIN_NAME>[:<INPUT_PLUGIN_NAME>] `
--output-filter <OUTPUT_PLUGIN_NAME>[:<OUTPUT_PLUGIN_NAME>] `
config > telegraf.conf
{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
The following example shows how to include configuration sections for the
inputs.cpu,
inputs.http_listener_v2,
outputs.influxdb_v2, and
outputs.file plugins:
{{< code-tabs-wrapper >}} {{% code-tabs %}} Linux and macOS Windows {{% /code-tabs %}} {{% code-tab-content %}}
telegraf \
--input-filter cpu:http_listener_v2 \
--output-filter influxdb_v2:file \
config > telegraf.conf
{{% /code-tab-content %}} {{% code-tab-content %}}
.\telegraf.exe `
--input-filter cpu:http_listener_v2 `
--output-filter influxdb_v2:file `
config > telegraf.conf
{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
For more advanced configuration details, see the configuration documentation.
In PowerShell 5, the default encoding is UTF-16LE rather than UTF-8, but Telegraf expects a valid UTF-8 file. This is not an issue with PowerShell 6 or newer, nor with the Command Prompt or the Git Bash shell.
When using PowerShell 5 or earlier, specify the output encoding when generating a full configuration file:
telegraf.exe config | Out-File -Encoding utf8 telegraf.conf
This will generate a UTF-8 encoded file with a byte-order mark (BOM). However, Telegraf correctly handles the leading BOM.
When starting Telegraf, use the --config flag to specify the configuration file location:
--config /etc/default/telegraf
--config "http://remote-URL-endpoint"

Use the --config-directory flag to include files ending with .conf in the
specified directory in the Telegraf configuration.
On most systems, the default locations are /etc/telegraf/telegraf.conf for
the main configuration file and /etc/telegraf/telegraf.d (on Windows, C:\Program Files\Telegraf\telegraf.d) for the directory of
configuration files.
Telegraf processes each configuration file separately, and the effective configuration is the union of all the files. If any file isn't a valid configuration, Telegraf returns an error.
[!Warning]
Telegraf doesn't support partial configurations
Telegraf doesn't concatenate configuration files before processing them. Each configuration file that you provide must be a valid configuration.
If you want to use separate files to manage a configuration, you can use your own custom code to concatenate and pre-process the files, and then provide the complete configuration to Telegraf--for example:
1. Configure plugin sections and assign partial configs a file extension different from
   .conf to prevent Telegraf loading them--for example:

   ```toml
   # main.opcua: Main configuration file
   ...
   [[inputs.opcua_listener]]
     name = "PluginSection"
     endpoint = "opc.tcp://10.0.0.53:4840"
   ...
   ```

   ```toml
   # group_1.opcua
   [[inputs.opcua_listener.group]]
     name = "SubSection1"
   ...
   ```

   ```toml
   # group_2.opcua
   [[inputs.opcua_listener.group]]
     name = "SubSection2"
   ...
   ```

2. Before you start Telegraf, run your custom script to concatenate
   main.opcua, group_1.opcua, and group_2.opcua into a valid telegraf.conf.
3. Start Telegraf with the complete, valid telegraf.conf configuration.
Use environment variables anywhere in the configuration file by enclosing them in ${}.
For strings, variables must be in quotes (for example, "test_${STR_VAR}").
For numbers and booleans, variables must be unquoted (for example, ${INT_VAR},
${BOOL_VAR}).
When using double quotes, escape any backslashes (for example: "C:\\Program Files") or
other special characters.
If using an environment variable with a single backslash, enclose the variable
in single quotes to signify a string literal (for example:
'C:\Program Files').
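To illustrate the rules above, here is a hypothetical sketch; the plugin section inputs.example and the variables STR_VAR, INT_VAR, BOOL_VAR, and the option names are illustrative, not real:

```toml
## Illustrative only: "example" is not a real plugin, and STR_VAR, INT_VAR,
## and BOOL_VAR are assumed to be set in the environment.
[[inputs.example]]
  name_override = "test_${STR_VAR}"   # strings: variable inside quotes
  max_lines = ${INT_VAR}              # numbers: variable unquoted
  enabled = ${BOOL_VAR}               # booleans: variable unquoted
  # Double-quoted strings process escapes, so backslashes must be doubled:
  path = "C:\\Program Files\\Telegraf"
  # Single quotes create a literal string, so single backslashes are safe:
  alt_path = 'C:\Program Files\Telegraf'
```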
Telegraf also supports Shell parameter expansion for environment variables which allows the following:
- ${VARIABLE:-default}: evaluates to default if VARIABLE is unset or empty
  in the environment.
- ${VARIABLE-default}: evaluates to default only if VARIABLE is unset in
  the environment.

Similarly, the following syntax allows you to specify mandatory variables:

- ${VARIABLE:?err}: exits with an error message containing err if VARIABLE
  is unset or empty in the environment.
- ${VARIABLE?err}: exits with an error message containing err if VARIABLE
  is unset in the environment.

When using the .deb or .rpm packages, you can define environment variables
in the /etc/default/telegraf file.
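The expansion forms above can be sketched in a config fragment; the INFLUX_HOST and INFLUX_TOKEN variable names are illustrative:

```toml
[[outputs.influxdb_v2]]
  # Fall back to localhost when INFLUX_HOST is unset or empty:
  urls = ["${INFLUX_HOST:-http://localhost:8086}"]
  # Fail at startup with the given message when INFLUX_TOKEN is unset or empty:
  token = "${INFLUX_TOKEN:?INFLUX_TOKEN must be set}"
```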
You can also set environment variables using the Linux export command:
export password=mypassword
Note: Use a secret store or environment variables to store sensitive credentials.
Set environment variables in the Telegraf environment variables file
(/etc/default/telegraf).
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="passw0rd123"
INFLUX_HOST="http://localhost:8086"
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="your_username"
INFLUX_BUCKET="replace_with_your_bucket_name"
# For AWS West (Oregon)
INFLUX_HOST="https://us-west-2-1.aws.cloud2.influxdata.com"
# Other Cloud URLs at https://docs.influxdata.com/influxdb/cloud/reference/regions/
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="[email protected]"
INFLUX_BUCKET="replace_with_your_bucket_name"
In the Telegraf configuration file (/etc/telegraf/telegraf.conf), reference the variables--for example:
[global_tags]
user = "${USER}"
[[inputs.mem]]
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = ["${INFLUX_URL}"]
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
password = "${INFLUX_PASSWORD}"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
# For InfluxDB Cloud:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
When Telegraf runs, the effective configuration is the following:
[global_tags]
user = "alice"
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = ["http://localhost:8086"]
skip_database_creation = true
password = "passw0rd123"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "replace_with_your_token"
organization = "your_username"
bucket = "replace_with_your_bucket_name"
# For InfluxDB Cloud:
[[outputs.influxdb_v2]]
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
token = "replace_with_your_token"
organization = "[email protected]"
bucket = "replace_with_your_bucket_name"
Telegraf also supports secret stores for providing credentials and similar sensitive data. Configure one or more secret store plugins and then reference the secret in your plugin configurations.
Reference secrets using the following syntax:
@{<secret_store_id>:<secret_name>}
- secret_store_id: the unique ID you define for your secret store plugin.
- secret_name: the name of the secret to use.

[!Note]
Both secret_store_id and secret_name only support alphanumeric characters and underscores.
This example illustrates the use of secret stores in plugins:
[global_tags]
user = "alice"
[[secretstores.os]]
id = "local_secrets"
[[secretstores.jose]]
id = "cloud_secrets"
path = "/etc/telegraf/secrets"
# Optional reference to another secret store to unlock this one.
password = "@{local_secrets:cloud_store_passwd}"
[[inputs.http]]
urls = ["http://server.company.org/metrics"]
username = "@{local_secrets:company_server_http_metric_user}"
password = "@{local_secrets:company_server_http_metric_pass}"
[[outputs.influxdb_v2]]
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
token = "@{cloud_secrets:influxdb_token}"
organization = "[email protected]"
bucket = "replace_with_your_bucket_name"
Not all plugins support secrets.
When using plugins that support secrets, Telegraf locks the memory pages
containing the secrets.
Therefore, the locked-memory limit must be set to a suitable value.
Telegraf checks the limit and the number of used secrets at startup and warns
if your limit is too low. In that case, increase the limit with ulimit -l.
If you are running Telegraf in a jail you might need to allow locked pages in
that jail by setting allow.mlock = 1; in your config.
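For example, when Telegraf runs under systemd, one way to raise the locked-memory limit is a unit override; the 8M value is illustrative, so size it to your number of secrets:

```ini
# /etc/systemd/system/telegraf.service.d/memlock.conf
[Service]
LimitMEMLOCK=8M
```

After creating the override, run systemctl daemon-reload and restart Telegraf for the new limit to take effect.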
Global tags can be specified in the [global_tags] section of the configuration
file in key="value" format.
Telegraf applies the global tags to all metrics gathered on this host.
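For example, a configuration might open with global tags followed by an [agent] section; the values here are illustrative, and the options are described below:

```toml
[global_tags]
  dc = "us-east-1"          # applied to every metric gathered on this host

[agent]
  interval = "10s"          # collect from all inputs every 10 seconds
  round_interval = true     # collect on :00, :10, :20, ...
  metric_batch_size = 1000  # write to outputs in batches of up to 1000 metrics
  flush_interval = "10s"    # flush outputs every 10-15 seconds with the jitter below
  flush_jitter = "5s"
```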
The [agent] section contains the following configuration options:

- interval: Default data collection interval for all inputs.
- round_interval: Rounds the collection interval to interval.
  For example, if interval is set to 10s, then the agent collects on :00, :10, :20, etc.
- metric_batch_size: Telegraf sends metrics to outputs in batches of at most
  metric_batch_size metrics.
  This controls the size of writes that Telegraf sends to output plugins.
- metric_buffer_limit: Maximum number of unwritten metrics per output plugin.
- flush_interval: Default flushing interval for all outputs. The maximum flush time
  is flush_interval + flush_jitter.
- flush_jitter: Jitters the flush interval by a random amount.
  For example, a jitter of 5s and an interval of 10s means flushes happen
  every 10-15 seconds.
- logformat: Log format, one of text, structured or, on Windows, eventlog.
  The output file (if any) is determined by the logfile setting.
- structured_log_message_key: Message key for structured logs, overriding the
  default of msg.
  Ignored if logformat is not structured.
- logfile: Name of the file to log to, or stderr if unset or empty.
  This setting is ignored for the eventlog format.
- hostname: Overrides the default hostname; if empty, uses os.Hostname().
- snmp_translator: Method of translating SNMP objects: either "netsnmp", which
  translates by calling the external programs snmptranslate and snmptable, or "gosmi" which translates using the built-in
  gosmi library.
- always_include_local_tags: Ensures tags explicitly defined in a plugin pass
  tag-filtering via taginclude or tagexclude.
  This removes the need to specify local tags twice.
- always_include_global_tags: Ensures tags defined in the global_tags section will always pass
  tag-filtering via taginclude or tagexclude.
  This removes the need to specify those tags twice.
- buffer_strategy: The type of buffer to use for output plugins: either
  memory, the default and original buffer type, or disk,
  an experimental disk-backed buffer which serializes all metrics to disk as
  needed to improve data durability and reduce the chance for data loss.
  This is only supported at the agent level.
- buffer_directory: The directory to use in disk buffer mode.
  Each output plugin makes another subdirectory in this directory with the
  output plugin's ID.

The following config parameters are available for all inputs:

- interval: How often to gather this metric; overrides the agent interval setting
  on a per plugin basis.
  interval can be increased to reduce data-in rate limits.
- precision: Overrides the precision setting of the agent. Collected
  metrics are rounded to the precision specified as an interval. When this value is
  set on a service input (ex: statsd), multiple events occurring at the same
  timestamp may be merged by the output database.
- collection_jitter: Overrides the collection_jitter setting of the agent.

The following config parameters are available for all outputs:

- flush_interval: Overrides the agent flush_interval on a per plugin basis.
- flush_jitter: Overrides the agent flush_jitter on a per plugin basis.
- metric_batch_size: Overrides the agent metric_batch_size on a per plugin basis.
- metric_buffer_limit: Overrides the agent metric_buffer_limit on a per plugin basis.

Some output plugins support the data_format option, which specifies a serializer
to convert metrics before writing.
Common serializers include json, influx, prometheus, and csv.
Output plugins that support serializers may also offer use_batch_format, which
controls whether the serializer receives metrics individually or as a batch.
Batch mode enables more efficient encoding for formats like JSON arrays.
[[outputs.file]]
files = ["stdout"]
data_format = "json"
use_batch_format = true
For available serializers and configuration options, see output data formats.
The following config parameters are available for all aggregators:

- period: The period on which to flush and clear each aggregator.
- delay: The delay before each aggregator is flushed, to give inputs gathering
  on the same interval time to deliver their metrics first.
- grace: The duration for which metrics outside the aggregation period are
  still accepted by the aggregator.
- drop_original: If true, the original metric is dropped by the aggregator and
  not sent to the output plugins.
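For example, a minmax aggregator using common aggregator parameters might be configured as follows; the values are illustrative:

```toml
[[aggregators.minmax]]
  period = "30s"         # flush and clear the aggregate every 30 seconds
  grace = "10s"          # still accept metrics arriving up to 10 seconds late
  drop_original = false  # also pass the original metrics downstream
```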
For a demonstration of how to configure SNMP, MQTT, and PostGRE SQL plugins to get data into Telegraf, see the following video:
{{< youtube 6XJdZ_kdx14 >}}
The following config parameters are available for all processors:

- order: The order in which the processor is executed. If not specified,
  processor execution order is random.

The metric filtering parameters can be used to limit what metrics are handled by the processor. Excluded metrics are passed downstream to the next processor.
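For example, the order parameter (available to all processors) pins execution order when chaining processors; the rename and enum settings here are illustrative:

```toml
# Rename the measurement first, then map the status field,
# regardless of where the sections appear in the file.
[[processors.rename]]
  order = 1
  [[processors.rename.replace]]
    measurement = "net"
    dest = "network"

[[processors.enum]]
  order = 2
  [[processors.enum.mapping]]
    field = "status"
    [processors.enum.mapping.value_mappings]
      ok = 0
      failed = 1
```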
Filters can be configured per input, output, processor, or aggregator.
Filters fall under two categories:
Selector filters include or exclude entire metrics. When a metric is excluded from an input or output plugin, the metric is dropped. If a metric is excluded from a processor or aggregator plugin, it skips the plugin and is sent onwards to the next stage of processing.
namepass: An array of glob pattern strings.
Only metrics whose measurement name matches a pattern in this list are emitted.
Additionally, a custom list of separators can be specified using namepass_separator.
These separators are excluded from wildcard glob pattern matching.
namedrop: The inverse of namepass.
If a match is found the metric is discarded.
This is tested on metrics after they have passed the namepass test.
Additionally, a custom list of separators can be specified using namedrop_separator.
These separators are excluded from wildcard glob pattern matching.
tagpass: A table mapping tag keys to arrays of glob pattern strings.
Only metrics that contain a tag key in the table and a tag value matching one of its
patterns are emitted.
This can use either the explicit table syntax (for example, a subsection using a [...] header)
or the inline table syntax (for example, a JSON-like table with {...}).
See the notes below on specifying the table.
tagdrop: The inverse of tagpass.
If a match is found the metric is discarded.
This is tested on metrics after they have passed the tagpass test.
metricpass: A Common Expression Language (CEL) expression with boolean result where
true will allow the metric to pass, otherwise the metric is discarded.
This filter expression is more general compared to namepass and also
supports time-based filtering.
Further details, such as available functions and expressions, are provided in the
CEL language definition as well as in the extension documentation or the CEL language introduction.
[!Note]
Because CEL is an interpreted language, this type of filtering is much slower than
namepass, namedrop, and the other filters. In high-throughput scenarios, consider
using the more restrictive filter options where possible.
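A sketch of the selector filters above; the plugin choices and values are illustrative, and the metricpass expression assumes the metric's name, tags, and fields are exposed to CEL as documented:

```toml
# Explicit table syntax for tagpass:
[[inputs.disk]]
  [inputs.disk.tagpass]
    fstype = ["ext4", "xfs"]

# Equivalent inline table syntax:
[[inputs.diskio]]
  tagpass = { name = ["sda*"] }

# CEL-based filtering with metricpass:
[[processors.printer]]
  metricpass = "name == 'disk' && fields['used_percent'] > 85.0"
```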
Modifier filters remove tags and fields from a metric. If all fields are removed, the metric is removed and as a result not passed through to the following processors or any output plugin. Tags and fields are modified before a metric is passed to a processor, aggregator, or output plugin. When used with an input plugin the filter applies after the input runs.
- fieldinclude: An array of glob pattern strings. Only fields whose field key
  matches a pattern in this list are emitted.
- fieldexclude: The inverse of fieldinclude.
  Fields with a field key matching one of the patterns will be discarded from the metric.
  This is tested on metrics after they have passed the fieldinclude test.
- taginclude: An array of glob pattern strings. Only tags with a tag key matching
  one of the patterns are emitted. Unlike tagpass, which will pass an entire metric based on its tag,
  taginclude removes all non-matching tags from the metric.
  Any tag can be filtered including global tags and the agent host tag.
- tagexclude: The inverse of taginclude.
  Tags with a tag key matching one of the patterns will be discarded from the metric.
  Any tag can be filtered including global tags and the agent host tag.

[!Note]
Include tagpass and tagdrop at the end of your plugin definition.
Due to the way TOML is parsed, tagpass and tagdrop parameters must be defined
at the end of the plugin definition, otherwise subsequent plugin configuration
options are interpreted as part of the tagpass and tagdrop tables.
To learn more about metric filtering, watch the following video:
{{< youtube R3DnObs_OKA >}}
The following example configuration collects per-cpu data, drops any
fields that begin with time_, tags measurements with dc="denver-1", and then
outputs measurements at a 10 second interval to an InfluxDB database named
telegraf at the address 192.168.59.103:8086.
[global_tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[[outputs.influxdb]]
url = "http://192.168.59.103:8086" # required.
database = "telegraf" # required.
precision = "1s"
# INPUTS
[[inputs.cpu]]
percpu = true
totalcpu = false
# filter all fields beginning with 'time_'
fielddrop = ["time_*"]
tagpass and tagdrop

NOTE: tagpass and tagdrop parameters must be defined at the end of
the plugin definition, otherwise subsequent plugin configuration options are
interpreted as part of the tagpass and tagdrop tables.
[[inputs.cpu]]
percpu = true
totalcpu = false
fielddrop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
fieldpass and fielddrop

# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
fielddrop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
fieldpass = ["inodes*"]
namepass and namedrop

# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_*"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_*"]
namepass and namedrop with separators

# Pass all metrics of type 'A.C.B' and drop all others like 'A.C.D.B'
[[inputs.socket_listener]]
data_format = "graphite"
templates = ["measurement*"]
namepass = ["A.*.B"]
namepass_separator = "."
# Drop all metrics of type 'A.C.B' and pass all others like 'A.C.D.B'
[[inputs.socket_listener]]
data_format = "graphite"
templates = ["measurement*"]
namedrop = ["A.*.B"]
namedrop_separator = "."
taginclude and tagexclude

# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
percpu = true
totalcpu = true
taginclude = ["cpu"]
# Exclude the `fstype` tag from the measurements for the disk plugin.
[[inputs.disk]]
tagexclude = ["fstype"]
prefix, suffix, and override

The following example emits measurements with the name cpu_total:
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
The following example emits measurements with the name foobar:
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
The following example emits measurements with two additional tags: tag1=foo and
tag2=bar.
NOTE: Order matters; the [inputs.cpu.tags] table must be at the end of the
plugin definition.
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
Additional inputs (or outputs) of the same type can be specified by defining
these instances in the configuration file. To avoid measurement collisions, use
the name_override, name_prefix, or name_suffix configuration options:
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
fielddrop = ["cpu_time*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "1s"
# Drop all measurements that start with "aerospike"
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "1s"
# Only accept aerospike data:
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "1s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
This will collect and emit the min/max of the system load1 metric every 30s, dropping the originals.
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
[[outputs.file]]
files = ["stdout"]
This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the namepass parameter.
[[inputs.swap]]
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
namepass = ["swap"] # only "pass" swap metrics through the aggregator.
[[outputs.file]]
files = ["stdout"]
To learn more about configuring the Telegraf agent, watch the following video:
{{< youtube txUcAxMDBlQ >}}
You can control which plugin instances are enabled by adding labels to plugin configurations and passing one or more selectors on the command line.
Provide selectors with one or more --select flags when starting Telegraf.
Each --select value is a semicolon-separated list of key=value pairs:
<key>=<value>[;<key>=<value>]
- Key=value pairs within a single --select value are combined with logical AND (all must match).
- Multiple --select flags are combined with logical OR (a plugin is enabled if it matches any selector set).
- Selectors support simple glob patterns in values (for example, region=us-*).
Example:
telegraf --config config.conf --config-directory directory/ \
--select="app=payments;region=us-*" \
--select="env=prod" \
--watch-config --print-plugin-config-source=true
Add an optional labels table to a plugin, similar to tags.
Keys and values are plain strings.
Example:
[[inputs.cpu]]
[inputs.cpu.labels]
app = "payments"
region = "us-east"
env = "prod"
Telegraf matches the command-line selectors against a plugin's labels to decide whether that plugin instance should be enabled. For details on supported syntax and matching rules, see the labels selectors spec.
Many Telegraf plugins support TLS configuration for secure communication. Reference the detailed TLS documentation for configuration options and examples.
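As a sketch, the common TLS client options shared by many plugins look like this; the URLs and paths are illustrative:

```toml
[[outputs.influxdb_v2]]
  urls = ["https://influxdb.example.com:8086"]
  ## Optional TLS configuration
  tls_ca = "/etc/telegraf/ca.pem"
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain and host verification (not recommended)
  # insecure_skip_verify = false
```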