The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using line protocol, annotated CSV, or extended annotated CSV.
If you write CSV data, CSV annotations determine how the data translates into line protocol.
```sh
influx write [flags]
influx write [command]
```
{{% note %}}
To write data to InfluxDB, you must provide the following for each row:

- measurement
- field
- value

In line protocol, the structure of the line data determines the measurement, field, and value.

In annotated CSV, measurements, fields, and values are represented by the
`_measurement`, `_field`, and `_value` columns.
Their types are determined by CSV annotations.
To successfully write annotated CSV to InfluxDB, include all annotation rows.

In extended annotated CSV, measurements, fields, and values and their types are determined by CSV annotations.
{{% /note %}}
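As a quick illustration of the three required elements, the sketch below assembles a single line of line protocol from a measurement, a field key, and a value. The names `sensorData` and `temp` are placeholders, not part of this reference:

```sh
# Sketch: join the three required elements (measurement, field, value)
# into one line of line protocol. Names are hypothetical.
measurement="sensorData"
field="temp"
value="72.7"
printf '%s %s=%s\n' "$measurement" "$field" "$value"
```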
| Subcommand | Description |
|:-----------|:------------|
| `dryrun`   | Write to stdout instead of InfluxDB |
| Flag |   | Description | Input type | {{< cli/mapped >}} |
|:-----|:--|:------------|:----------:|:-------------------|
| `-c` | `--active-config` | CLI configuration to use for command | string | |
| `-b` | `--bucket` | Bucket name (mutually exclusive with `--bucket-id`) | string | `INFLUX_BUCKET_NAME` |
|      | `--bucket-id` | Bucket ID (mutually exclusive with `--bucket`) | string | `INFLUX_BUCKET_ID` |
|      | `--configs-path` | Path to `influx` CLI configurations (default `~/.influxdbv2/configs`) | string | `INFLUX_CONFIGS_PATH` |
|      | `--compression` | Input compression (`none` or `gzip`; default is `none` unless input file ends with `.gz`) | string | |
|      | `--debug` | Output errors to stderr | | |
|      | `--encoding` | Character encoding of input (default `UTF-8`) | string | |
|      | `--errors-file` | Path to a file used for recording rejected row errors | string | |
| `-f` | `--file` | File to import | stringArray | |
|      | `--format` | Input format (`lp` or `csv`, default `lp`) | string | |
|      | `--header` | Prepend header line to CSV input data | string | |
| `-h` | `--help` | Help for the `write` command | | |
|      | `--host` | HTTP address of InfluxDB (default `http://localhost:8086`) | string | `INFLUX_HOST` |
|      | `--max-line-length` | Maximum number of bytes that can be read for a single line (default `16000000`) | integer | |
| `-o` | `--org` | Organization name (mutually exclusive with `--org-id`) | string | `INFLUX_ORG` |
|      | `--org-id` | Organization ID (mutually exclusive with `--org`) | string | `INFLUX_ORG_ID` |
| `-p` | `--precision` | Precision of the timestamps (default `ns`) | string | `INFLUX_PRECISION` |
|      | `--rate-limit` | Throttle write rate (examples: `5MB/5min` or `1MB/s`) | string | |
|      | `--skip-verify` | Skip TLS certificate verification | | `INFLUX_SKIP_VERIFY` |
|      | `--skipHeader` | Skip the first *n* rows of input data | integer | |
|      | `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
| `-t` | `--token` | API token | string | `INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | stringArray | |
{{< cli/influx-creds-note >}}
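Flags with a value in the {{< cli/mapped >}} column can also be supplied through the corresponding environment variable instead of on the command line. A sketch, using placeholder values:

```sh
# Sketch: set mapped environment variables so subsequent influx write calls
# don't need --host, --org, or --bucket. All values below are placeholders.
export INFLUX_HOST="http://localhost:8086"
export INFLUX_ORG="example-org"
export INFLUX_BUCKET_NAME="example-bucket"
echo "$INFLUX_BUCKET_NAME"
```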
```sh
influx write --bucket example-bucket "
m,host=host1 field1=1.2,field2=5i 1640995200000000000
m,host=host2 field1=2.4,field2=3i 1640995200000000000
"
```
```sh
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol.txt
```

```sh
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol.txt \
  --skipHeader 8
```
```sh
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol-1.txt \
  --file path/to/line-protocol-2.txt
```

```sh
influx write \
  --bucket example-bucket \
  --url https://example.com/line-protocol.txt
```

```sh
influx write \
  --bucket example-bucket \
  --url https://example.com/line-protocol-1.txt \
  --url https://example.com/line-protocol-2.txt
```

```sh
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol-1.txt \
  --url https://example.com/line-protocol-2.txt
```
```sh
# The influx CLI assumes files with the .gz extension use gzip compression
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol.txt.gz
```

```sh
# Specify gzip compression for gzipped files without the .gz extension
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol.txt.comp \
  --compression gzip
```
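To produce input that matches the first case above, you can gzip a line protocol file yourself; a sketch with a hypothetical single-line file:

```sh
# Sketch: gzip a line protocol file. The resulting .gz extension lets
# influx write detect the compression automatically. Data is made up.
printf 'm,host=host1 field1=1.2 1640995200000000000\n' > line-protocol.txt
gzip -f line-protocol.txt      # replaces the file with line-protocol.txt.gz
gzip -t line-protocol.txt.gz && echo "gzip OK"
```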
```sh
influx write \
  --bucket example-bucket \
  --format csv \
  "#group,false,false,false,false,true,true
#datatype,string,long,dateTime:RFC3339,double,string,string
#default,_result,,,,,
,result,table,_time,_value,_field,_measurement
,,0,2020-12-18T18:16:11Z,72.7,temp,sensorData
,,0,2020-12-18T18:16:21Z,73.8,temp,sensorData
,,0,2020-12-18T18:16:31Z,72.7,temp,sensorData
,,0,2020-12-18T18:16:41Z,72.8,temp,sensorData
,,0,2020-12-18T18:16:51Z,73.1,temp,sensorData
"
```
```sh
influx write \
  --bucket example-bucket \
  --format csv \
  "#constant measurement,sensorData
#datatype dateTime:RFC3339,double
time,temperature
2020-12-18T18:16:11Z,72.7
2020-12-18T18:16:21Z,73.8
2020-12-18T18:16:31Z,72.7
2020-12-18T18:16:41Z,72.8
2020-12-18T18:16:51Z,73.1
"
```
```sh
influx write \
  --bucket example-bucket \
  --file path/to/data.csv
```

```sh
influx write \
  --bucket example-bucket \
  --file path/to/data-1.csv \
  --file path/to/data-2.csv
```

```sh
influx write \
  --bucket example-bucket \
  --url https://example.com/data.csv
```

```sh
influx write \
  --bucket example-bucket \
  --url https://example.com/data-1.csv \
  --url https://example.com/data-2.csv
```

```sh
influx write \
  --bucket example-bucket \
  --file path/to/data-1.csv \
  --url https://example.com/data-2.csv
```
```sh
influx write \
  --bucket example-bucket \
  --header "#constant measurement,birds" \
  --header "#datatype dateTime:2006-01-02,long,tag" \
  --file path/to/data.csv
```
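The `--header` flags above behave as if the annotation rows were prepended to the CSV input. A sketch of the equivalent concatenation, using a hypothetical one-row `data.csv` whose columns match the `dateTime`, `long`, and `tag` types:

```sh
# Sketch: --header is equivalent to prepending annotation rows to the input.
# The data row below is made up for illustration.
printf '2020-01-02,1,sparrow\n' > data.csv
{
  printf '#constant measurement,birds\n'
  printf '#datatype dateTime:2006-01-02,long,tag\n'
  cat data.csv
} > annotated.csv
cat annotated.csv
```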
```sh
# The influx CLI assumes files with the .gz extension use gzip compression
influx write \
  --bucket example-bucket \
  --file path/to/data.csv.gz
```

```sh
# Specify gzip compression for gzipped files without the .gz extension
influx write \
  --bucket example-bucket \
  --file path/to/data.csv.comp \
  --compression gzip
```
```sh
influx write \
  --bucket example-bucket \
  --file path/to/data.csv \
  --rate-limit 5MB/5min
```
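Both rate-limit formats express a byte budget per time window. Assuming `MB` means 1024×1024 bytes (an assumption; this reference does not define the unit), `5MB/5min` averages out to roughly 17 KB per second:

```sh
# Sketch: average throughput of a 5MB/5min rate limit, in bytes per second.
# Assumes MB = 1024*1024 bytes; the CLI's exact unit definition may differ.
echo $(( 5 * 1024 * 1024 / 300 ))
```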