import Prometheus from "../../assets/resources/_prometheus.mdx"
import CollectD from "../../assets/resources/_collectd.mdx"
import StatsD from "../../assets/resources/_statsd.mdx"
import Icinga2 from "../../assets/resources/_icinga2.mdx"
import TCollector from "../../assets/resources/_tcollector.mdx"
taosAdapter is a companion tool for TDengine, serving as a bridge and adapter between the TDengine cluster and applications. It provides an easy and efficient way to ingest data directly from data collection agents such as Telegraf, StatsD, and collectd. It also offers InfluxDB/OpenTSDB compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine. TDengine connectors in various languages communicate with TDengine through the WebSocket interface, so taosAdapter must be installed.
The architecture diagram is as follows:
The taosAdapter provides the following features:
Through the WebSocket interface of taosAdapter, connectors in various languages can achieve SQL execution, schemaless writing, parameter binding, and data subscription functionalities. Refer to the Development Guide for more details.
You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the Restful interface URL http://<fqdn>:6041/influxdb/v1/write.
Supported InfluxDB parameters are as follows:
- db: specifies the database name used by TDengine
- precision: the time precision used by TDengine
- u: TDengine username
- p: TDengine password
- ttl: the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable; it cannot be updated. For more information, please refer to the TTL parameter in the table creation document.
- table_name_key: the custom tag key for subtable names. If set, the subtable name will use the value of this tag key.

Note: Currently, InfluxDB's token authentication method is not supported; only Basic authentication and query parameter verification are supported.
Example:

```shell
curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
```
You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the Restful interface URL http://<fqdn>:6041/<APIEndPoint>. The available endpoints are as follows:
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
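As a hedged sketch (not part of taosAdapter itself), the following Python builds the JSON body and target URL for the /opentsdb/v1/put/json/<db> endpoint; the database name test, the metric name, and the values are illustrative assumptions:

```python
import json

# Build an OpenTSDB-style JSON put payload for the
# /opentsdb/v1/put/json/<db> endpoint. All concrete values
# (metric name, tags, database "test") are examples.
def build_opentsdb_put(metric, value, timestamp, tags):
    """Return the JSON body used by the OpenTSDB JSON put API."""
    return json.dumps({
        "metric": metric,
        "timestamp": timestamp,  # seconds (or milliseconds) since the epoch
        "value": value,
        "tags": tags,
    })

payload = build_opentsdb_put("sys.cpu.usage", 42.5, 1577836800, {"host": "host1"})
url = "http://127.0.0.1:6041/opentsdb/v1/put/json/test"
```

The payload can then be POSTed to the URL with any HTTP client, using the same Basic authentication as the curl examples in this document.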
OpenMetrics is an open standard supported by CNCF (Cloud Native Computing Foundation) that focuses on standardizing the collection and transmission of metric data. It serves as one of the core specifications for monitoring and observability systems in the cloud-native ecosystem.
Starting from version 3.3.7.0, taosAdapter supports OpenMetrics v1.0.0 data collection and writing, while maintaining compatibility with Prometheus 0.0.4 protocol to ensure seamless integration with the Prometheus ecosystem.
To enable OpenMetrics data collection and writing, follow these steps:
1. Set open_metrics.enable to true in the configuration file.

Starting with version 3.3.7.0, you can use the OpenMetrics plugin as a replacement for node_exporter to perform data collection and writing.
An exporter used by Prometheus that exposes hardware and operating system metrics from *NIX kernels
taosAdapter has supported writing JSON-formatted data to TDengine TSDB through the RESTful interface since version 3.4.0.0. You can use any HTTP-compatible client to send JSON-formatted data to TDengine TSDB via the POST RESTful endpoint at http://<fqdn>:6041/input_json/v1/{endpoint}.
The required JSON format is an array containing multiple rows of data, with each row being a JSON object. Each JSON object corresponds to a single data record. Data extraction can be defined through configuration files. If the input JSON format does not meet the requirements, it can be transformed using JSONata expressions (JSONata version 1.5.4 is supported).
A sample configuration is as follows (default configuration file path: /etc/taos/taosadapter.toml):
```toml
[input_json]
enable = true

[[input_json.rules]]
endpoint = "rule1"
dbKey = "db"
superTableKey = "stb"
subTableKey = "table"
timeKey = "time"
timeFormat = "datetime"
timezone = "UTC"
transformation = '''
$sort(
    (
        $ts := time;
        $each($, function($value, $key) {
            $key = "time" ? [] : (
                $each($value, function($groupValue, $groupKey) {
                    $each($groupValue, function($deviceValue, $deviceKey) {
                        {
                            "db": "test_input_json",
                            "time": $ts,
                            "location": $key,
                            "groupid": $number($split($groupKey, "_")[1]),
                            "stb": "meters",
                            "table": $deviceKey,
                            "current": $deviceValue.current,
                            "voltage": $deviceValue.voltage,
                            "phase": $deviceValue.phase
                        }
                    })[]
                })[]
            )
        })
    ).[*][*],
    function($l, $r) {
        $l.table > $r.table
    }
)
'''
fields = [
    {key = "current", optional = false},
    {key = "voltage", optional = false},
    {key = "phase", optional = false},
    {key = "location", optional = false},
    {key = "groupid", optional = false},
]
```
After modifying the configuration file, you need to restart the taosAdapter service for the changes to take effect.
Complete configuration parameter description:
- input_json.enable: Enable or disable the JSON data writing function (default value: false).
- input_json.rules: An array defining JSON data writing rules, allowing multiple rules to be configured.
- endpoint: Specifies the endpoint name for the RESTful interface. Only uppercase and lowercase letters, numbers, _, and - are allowed.
- db: Specifies the target database name for writing data. Backticks (`) are prohibited.
- dbKey: Specifies the key name in the JSON object used to represent the database name. Cannot be configured simultaneously with db.
- superTable: Specifies the target supertable name for writing data. Backticks (`) are prohibited.
- superTableKey: Specifies the key name in the JSON object used to represent the supertable name. Cannot be configured simultaneously with superTable.
- subTable: Specifies the target subtable name for writing data.
- subTableKey: Specifies the key name in the JSON object used to represent the subtable name. Cannot be configured simultaneously with subTable.
- timeKey: Specifies the key name in the JSON object used to represent the timestamp. Defaults to ts if not set.
- timeFormat: Specifies the format for time parsing. Effective when timeKey is set. See Time Parsing Format Description for supported formats.
- timezone: Specifies the timezone for the timestamp. Effective when timeKey is set. Uses IANA timezone format, defaulting to the timezone of the machine where taosAdapter is located.
- transformation: Uses JSONata expressions to transform the input JSON data to meet TDengine TSDB's data writing requirements. For specific syntax, refer to the JSONata documentation.
- fields: Defines the list of fields to be written, with each field containing the following attributes:
  - key: Specifies the key name in the JSON object used to represent the field value. Must match the database field name and cannot contain backticks (`).
  - optional: Specifies whether the field is optional. The default value is false, indicating the field is mandatory; an error will occur if the key does not exist. If set to true, the field is optional: no error is generated if the key is missing, and the column is excluded from the generated SQL.

Before writing data, ensure that the target database and supertable have been created. Assume the following database and supertable have been created:
```sql
create database test_input_json;
create table test_input_json.meters (ts timestamp, current float, voltage int, phase float) tags (location nchar(64), `groupid` int);
```
Request example:
```shell
curl -L 'http://localhost:6041/input_json/v1/rule1' \
-u root:taosdata \
-d '{"time":"2025-11-04 09:24:13.123","Los Angeles":{"group_1":{"d_001":{"current":10.5,"voltage":220,"phase":30},"d_002":{"current":15.2,"voltage":230,"phase":45},"d_003":{"current":8.7,"voltage":210,"phase":60}},"group_2":{"d_004":{"current":12.3,"voltage":225,"phase":15},"d_005":{"current":9.8,"voltage":215,"phase":75}}},"New York":{"group_1":{"d_006":{"current":11.0,"voltage":240,"phase":20},"d_007":{"current":14.5,"voltage":235,"phase":50}},"group_2":{"d_008":{"current":13.2,"voltage":245,"phase":10},"d_009":{"current":7.9,"voltage":220,"phase":80}}}}'
```
Response example:
```json
{
  "code": 0,
  "desc": "",
  "affected": 9
}
```
- code: Indicates the status code of the request. 0 indicates success, while non-0 indicates failure.
- desc: Provides a description of the request. If code is non-0, it includes error information.
- affected: Indicates the number of records successfully written.

Check the write result:
```text
taos> select tbname,* from test_input_json.meters order by tbname asc;
tbname | ts | current | voltage | phase | location | groupid |
======================================================================================================================================================================
d_001 | 2025-11-04 17:24:13.123 | 10.5 | 220 | 30 | Los Angeles | 1 |
d_002 | 2025-11-04 17:24:13.123 | 15.2 | 230 | 45 | Los Angeles | 1 |
d_003 | 2025-11-04 17:24:13.123 | 8.7 | 210 | 60 | Los Angeles | 1 |
d_004 | 2025-11-04 17:24:13.123 | 12.3 | 225 | 15 | Los Angeles | 2 |
d_005 | 2025-11-04 17:24:13.123 | 9.8 | 215 | 75 | Los Angeles | 2 |
d_006 | 2025-11-04 17:24:13.123 | 11 | 240 | 20 | New York | 1 |
d_007 | 2025-11-04 17:24:13.123 | 14.5 | 235 | 50 | New York | 1 |
d_008 | 2025-11-04 17:24:13.123 | 13.2 | 245 | 10 | New York | 2 |
d_009 | 2025-11-04 17:24:13.123 | 7.9 | 220 | 80 | New York | 2 |
```
The data has been successfully written to TDengine TSDB. Since TDengine is configured with the UTC+8 timezone, the time is displayed as 2025-11-04 17:24:13.123.
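To illustrate the timezone arithmetic above, this Python sketch parses the input timestamp as UTC (matching timezone = "UTC" in the rule) and renders it in a UTC+8 zone; Asia/Shanghai is used here only as an example of a UTC+8 timezone:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Parse the timestamp from the request body as UTC, as the rule's
# timezone setting dictates, then display it in a UTC+8 zone.
parsed = datetime.strptime("2025-11-04 09:24:13.123", "%Y-%m-%d %H:%M:%S.%f")
utc_time = parsed.replace(tzinfo=ZoneInfo("UTC"))
local_time = utc_time.astimezone(ZoneInfo("Asia/Shanghai"))
print(local_time.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3])  # 2025-11-04 17:24:13.123
```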
The following time format presets are available:
- unix: Timestamp as integer or floating-point number in seconds
- unix_ms: Timestamp as integer or floating-point number in milliseconds
- unix_us: Timestamp as integer or floating-point number in microseconds
- unix_ns: Timestamp as integer or floating-point number in nanoseconds
- ansic: Time format as Mon Jan _2 15:04:05 2006
- rubydate: Time format as Mon Jan 02 15:04:05 -0700 2006
- rfc822z: Time format as 02 Jan 06 15:04 -0700
- rfc1123z: Time format as Mon, 02 Jan 2006 15:04:05 -0700
- rfc3339: Time format as 2006-01-02T15:04:05Z07:00
- rfc3339nano: Time format as 2006-01-02T15:04:05.999999999Z07:00
- stamp: Time format as Jan _2 15:04:05
- stampmilli: Time format as Jan _2 15:04:05.000
- datetime: Time format as 2006-01-02 15:04:05.999999999

If these presets do not meet your requirements, you can extend the format using the strftime parsing method.
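As a sketch of what two of these presets mean in practice (Python is used here purely for illustration; the preset layouts above are Go-style reference layouts):

```python
from datetime import datetime, timedelta, timezone

# unix_ms: an integer (or float) count of milliseconds since the epoch.
ms = 1762248253123
t1 = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=ms)

# datetime preset ("2006-01-02 15:04:05.999999999" in Go layout) maps to
# strptime format "%Y-%m-%d %H:%M:%S.%f" for sub-second precision
# (Python parses up to microseconds).
t2 = datetime.strptime("2025-11-04 09:24:13.123", "%Y-%m-%d %H:%M:%S.%f")

# Both representations describe the same instant when t2 is read as UTC.
```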
For complex JSON data formats, you can use the transformation configuration with JSONata expressions to transform the input JSON data to meet TDengine TSDB's data writing requirements. You can use the JSONata online editor to debug and validate JSONata expressions.
Assume the input JSON data is as follows:
```json
{
  "time": "2025-11-04 09:24:13.123",
  "Los Angeles": {
    "group_1": {
      "d_001": { "current": 10.5, "voltage": 220, "phase": 30 },
      "d_002": { "current": 15.2, "voltage": 230, "phase": 45 },
      "d_003": { "current": 8.7, "voltage": 210, "phase": 60 }
    },
    "group_2": {
      "d_004": { "current": 12.3, "voltage": 225, "phase": 15 },
      "d_005": { "current": 9.8, "voltage": 215, "phase": 75 }
    }
  },
  "New York": {
    "group_1": {
      "d_006": { "current": 11.0, "voltage": 240, "phase": 20 },
      "d_007": { "current": 14.5, "voltage": 235, "phase": 50 }
    },
    "group_2": {
      "d_008": { "current": 13.2, "voltage": 245, "phase": 10 },
      "d_009": { "current": 7.9, "voltage": 220, "phase": 80 }
    }
  }
}
```
Using the configuration from the example transformation expression, the converted data is as follows:
```json
[
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "Los Angeles",
    "groupid": 1,
    "stb": "meters",
    "table": "d_001",
    "current": 10.5,
    "voltage": 220,
    "phase": 30
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "Los Angeles",
    "groupid": 1,
    "stb": "meters",
    "table": "d_002",
    "current": 15.2,
    "voltage": 230,
    "phase": 45
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "Los Angeles",
    "groupid": 1,
    "stb": "meters",
    "table": "d_003",
    "current": 8.7,
    "voltage": 210,
    "phase": 60
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "Los Angeles",
    "groupid": 2,
    "stb": "meters",
    "table": "d_004",
    "current": 12.3,
    "voltage": 225,
    "phase": 15
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "Los Angeles",
    "groupid": 2,
    "stb": "meters",
    "table": "d_005",
    "current": 9.8,
    "voltage": 215,
    "phase": 75
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "New York",
    "groupid": 1,
    "stb": "meters",
    "table": "d_006",
    "current": 11,
    "voltage": 240,
    "phase": 20
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "New York",
    "groupid": 1,
    "stb": "meters",
    "table": "d_007",
    "current": 14.5,
    "voltage": 235,
    "phase": 50
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "New York",
    "groupid": 2,
    "stb": "meters",
    "table": "d_008",
    "current": 13.2,
    "voltage": 245,
    "phase": 10
  },
  {
    "db": "test_input_json",
    "time": "2025-11-04 09:24:13.123",
    "location": "New York",
    "groupid": 2,
    "stb": "meters",
    "table": "d_009",
    "current": 7.9,
    "voltage": 220,
    "phase": 80
  }
]
```
It should be noted that the $each function in the transformation expression is used to iterate over the key-value pairs of a JSON Object. Although the documentation states that the return value of the $each function is an array, when there is only one key-value pair, the return value will be a single object instead of an array. Therefore, when using the $each function, it is necessary to wrap the result with [] to forcibly convert it into an array, ensuring consistency in subsequent processing.
For details, please refer to the JSONata documentation.
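The following Python function is an analog of what the JSONata transformation above computes, not the taosAdapter implementation: it flattens the nested location → group → device structure into one row per device. The helper name flatten and the inline sample are illustrative:

```python
# Python analog (illustrative only) of the JSONata transformation:
# flatten location -> group -> device into one row dict per device.
def flatten(payload):
    ts = payload["time"]
    rows = []
    for location, groups in payload.items():
        if location == "time":  # the timestamp key is not a location
            continue
        for group_key, devices in groups.items():
            groupid = int(group_key.split("_")[1])  # "group_1" -> 1
            for device, metrics in devices.items():
                rows.append({
                    "db": "test_input_json",
                    "time": ts,
                    "location": location,
                    "groupid": groupid,
                    "stb": "meters",
                    "table": device,
                    **metrics,  # current, voltage, phase
                })
    return sorted(rows, key=lambda r: r["table"])  # mirrors the $sort by table

sample = {
    "time": "2025-11-04 09:24:13.123",
    "Los Angeles": {"group_1": {"d_001": {"current": 10.5, "voltage": 220, "phase": 30}}},
}
rows = flatten(sample)
```

Note that Python dictionaries do not exhibit the single-element quirk of $each described above, which is why no extra array wrapping is needed here.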
Taking the reference configuration example, inputting the JSON data from the Transformation Example Description, the generated SQL is as follows:
```sql
insert into `test_input_json`.`meters`(`tbname`,`ts`,`current`,`voltage`,`phase`,`location`,`groupid`)values
('d_001','2025-11-04T09:24:13.123Z',10.5,220,30,'Los Angeles',1)
('d_002','2025-11-04T09:24:13.123Z',15.2,230,45,'Los Angeles',1)
('d_003','2025-11-04T09:24:13.123Z',8.7,210,60,'Los Angeles',1)
('d_004','2025-11-04T09:24:13.123Z',12.3,225,15,'Los Angeles',2)
('d_005','2025-11-04T09:24:13.123Z',9.8,215,75,'Los Angeles',2)
('d_006','2025-11-04T09:24:13.123Z',11,240,20,'New York',1)
('d_007','2025-11-04T09:24:13.123Z',14.5,235,50,'New York',1)
('d_008','2025-11-04T09:24:13.123Z',13.2,245,10,'New York',2)
('d_009','2025-11-04T09:24:13.123Z',7.9,220,80,'New York',2)
```
SQL generation description:
- The timestamp is parsed according to timeFormat and timezone, and is ultimately formatted in RFC3339nano format when concatenated into the SQL statement.
- The data is grouped by db, superTable, subTable, and the obtained fields (note that optional may be set to true, so the obtained data may not include all fields). After grouping, the data is sorted in ascending time order before the SQL statement is generated.

To facilitate debugging and validating the correctness of JSON configuration rules, taosAdapter provides a dry-run mode. This mode can be enabled by adding the query parameter dry_run=true to the write request. In dry-run mode, taosAdapter does not write data to TDengine TSDB but instead returns the converted JSON and generated SQL statements for user review and validation.
Request example:
```shell
curl -L 'http://localhost:6041/input_json/v1/rule1?dry_run=true' \
-u root:taosdata \
-d '{"time":"2025-11-04 09:24:13.123","Los Angeles":{"group_1":{"d_001":{"current":10.5,"voltage":220,"phase":30},"d_002":{"current":15.2,"voltage":230,"phase":45},"d_003":{"current":8.7,"voltage":210,"phase":60}},"group_2":{"d_004":{"current":12.3,"voltage":225,"phase":15},"d_005":{"current":9.8,"voltage":215,"phase":75}}},"New York":{"group_1":{"d_006":{"current":11.0,"voltage":240,"phase":20},"d_007":{"current":14.5,"voltage":235,"phase":50}},"group_2":{"d_008":{"current":13.2,"voltage":245,"phase":10},"d_009":{"current":7.9,"voltage":220,"phase":80}}}}'
```
Response example:
```json
{
  "code": 0,
  "desc": "",
  "json": "[{\"current\":10.5,\"db\":\"test_input_json\",\"groupid\":1,\"location\":\"Los Angeles\",\"phase\":30,\"stb\":\"meters\",\"table\":\"d_001\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":220},{\"current\":15.2,\"db\":\"test_input_json\",\"groupid\":1,\"location\":\"Los Angeles\",\"phase\":45,\"stb\":\"meters\",\"table\":\"d_002\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":230},{\"current\":8.7,\"db\":\"test_input_json\",\"groupid\":1,\"location\":\"Los Angeles\",\"phase\":60,\"stb\":\"meters\",\"table\":\"d_003\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":210},{\"current\":12.3,\"db\":\"test_input_json\",\"groupid\":2,\"location\":\"Los Angeles\",\"phase\":15,\"stb\":\"meters\",\"table\":\"d_004\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":225},{\"current\":9.8,\"db\":\"test_input_json\",\"groupid\":2,\"location\":\"Los Angeles\",\"phase\":75,\"stb\":\"meters\",\"table\":\"d_005\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":215},{\"current\":11,\"db\":\"test_input_json\",\"groupid\":1,\"location\":\"New York\",\"phase\":20,\"stb\":\"meters\",\"table\":\"d_006\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":240},{\"current\":14.5,\"db\":\"test_input_json\",\"groupid\":1,\"location\":\"New York\",\"phase\":50,\"stb\":\"meters\",\"table\":\"d_007\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":235},{\"current\":13.2,\"db\":\"test_input_json\",\"groupid\":2,\"location\":\"New York\",\"phase\":10,\"stb\":\"meters\",\"table\":\"d_008\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":245},{\"current\":7.9,\"db\":\"test_input_json\",\"groupid\":2,\"location\":\"New York\",\"phase\":80,\"stb\":\"meters\",\"table\":\"d_009\",\"time\":\"2025-11-04 09:24:13.123\",\"voltage\":220}]",
  "sql": [
    "insert into `test_input_json`.`meters`(`tbname`,`ts`,`current`,`voltage`,`phase`,`location`,`groupid`)values('d_001','2025-11-04T09:24:13.123Z',10.5,220,30,'Los Angeles',1)('d_002','2025-11-04T09:24:13.123Z',15.2,230,45,'Los Angeles',1)('d_003','2025-11-04T09:24:13.123Z',8.7,210,60,'Los Angeles',1)('d_004','2025-11-04T09:24:13.123Z',12.3,225,15,'Los Angeles',2)('d_005','2025-11-04T09:24:13.123Z',9.8,215,75,'Los Angeles',2)('d_006','2025-11-04T09:24:13.123Z',11,240,20,'New York',1)('d_007','2025-11-04T09:24:13.123Z',14.5,235,50,'New York',1)('d_008','2025-11-04T09:24:13.123Z',13.2,245,10,'New York',2)('d_009','2025-11-04T09:24:13.123Z',7.9,220,80,'New York',2)"
  ]
}
```
- code: Indicates the status code of the request. 0 indicates success, while non-0 indicates failure.
- desc: Provides a description of the request. If code is non-0, it includes error information.
- json: Represents the converted JSON data.
- sql: Represents the array of generated SQL statements.

You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL http://<fqdn>:6041/rest/sql. For details, please refer to the REST API documentation.
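As a minimal sketch of calling the RESTful interface with HTTP Basic authentication (the host, user, and password are the defaults used elsewhere in this document; no request is actually sent here):

```python
import base64

# Build the URL, Authorization header, and body for a request to the
# RESTful SQL endpoint. The credentials are the documented defaults.
url = "http://localhost:6041/rest/sql"
credentials = base64.b64encode(b"root:taosdata").decode()
headers = {"Authorization": f"Basic {credentials}"}
body = "select server_version()"
```

These pieces can be passed to any HTTP client (the SQL statement goes in the POST body as plain text); the same effect is achieved with curl's -u flag as shown in earlier examples.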
taosAdapter is part of the TDengine server software. If you are using the TDengine server, no additional steps are needed to install taosAdapter. If you need to deploy taosAdapter separately from the TDengine server, install the complete TDengine package on that server to obtain taosAdapter. To compile taosAdapter from source code, refer to the Build taosAdapter document.
After the installation is complete, you can start the taosAdapter service using the command systemctl start taosadapter.
taosAdapter supports configuration through command-line parameters, environment variables, and configuration files. The default configuration file is /etc/taos/taosadapter.toml, and you can specify the configuration file using the -c or --config command-line parameter.
Command-line parameters take precedence over environment variables, which take precedence over configuration files. The command-line usage is arg=val, such as taosadapter -p=30000 --debug=true.
See the example configuration file at example/config/taosadapter.toml.
The basic configuration parameters for taosAdapter are as follows:
debug: Whether to enable debug mode (pprof)
- true (default): Enables Go pprof debug mode, allowing access to debug information via http://<fqdn>:<port>/debug/pprof.
- false: Disables debug mode, preventing access to debug information.

instanceId: The instance ID of taosAdapter, used to distinguish logs from different instances. Default value: 32.
port: The port on which taosAdapter provides HTTP/WebSocket services. Default value: 6041.
taosConfigDir: The configuration file directory for TDengine. Default value: /etc/taos. The taos.cfg file in this directory will be loaded.
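As a hedged illustration, a minimal taosadapter.toml fragment setting the basic parameters above might look like this (the values shown are examples only, not recommendations):

```toml
# Example values only; each parameter is optional and defaults as described above.
debug = true
instanceId = 32
port = 6041
taosConfigDir = "/etc/taos"
```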
Starting from version 3.3.4.0, taosAdapter supports setting the number of concurrent calls for invoking C methods:
maxAsyncConcurrentLimit
Sets the maximum number of concurrent calls for C asynchronous methods (0 means using the number of CPU cores).
maxSyncConcurrentLimit
Sets the maximum number of concurrent calls for C synchronous methods (0 means using the number of CPU cores).
Starting from version 3.4.0.0, taosAdapter will register itself to the TDengine TSDB. It can be queried using the SQL statement select * from performance_schema.perf_instances where type = 'taosadapter'.
The registration configuration parameters are as follows:
- register.instance: The address of the taosAdapter instance, with a maximum length of 255 bytes. If not set or set to an empty string, the system will automatically generate it by concatenating the hostname and port number. If ssl.enable is true, an https protocol header will be prepended.
- register.description: The description of the taosAdapter instance, with a maximum length of 511 bytes. The default value is an empty string.
- register.duration: The registration interval for the taosAdapter instance, in seconds. The default value is 10 seconds. Every time this interval elapses, the instance re-registers to refresh its expiration time. This value must be greater than 0 and less than register.expire.
- register.expire: The expiration time for the taosAdapter instance registration, in seconds. The default value is 30 seconds. If no registration refresh request is received within this time, the registration information will be deleted. This value must be greater than register.duration.

When making API calls from the browser, please configure the following Cross-Origin Resource Sharing (CORS) parameters based on your actual situation:
- cors.allowAllOrigins: Whether to allow all origins to access; default is true.
- cors.allowOrigins: A comma-separated list of origins allowed to access. Multiple origins can be specified.
- cors.allowHeaders: A comma-separated list of request headers allowed for cross-origin access. Multiple headers can be specified.
- cors.exposeHeaders: A comma-separated list of response headers exposed for cross-origin access. Multiple headers can be specified.
- cors.allowCredentials: Whether to allow cross-origin requests to include user credentials, such as cookies, HTTP authentication information, or client SSL certificates.
- cors.allowWebSockets: Whether to allow WebSocket connections.

If you are not making API calls through a browser, you do not need to worry about these configurations.
The above configurations take effect for the following interfaces:
For details about the CORS protocol, please refer to: https://www.w3.org/wiki/CORS_Enabled or https://developer.mozilla.org/docs/Web/HTTP/CORS.
taosAdapter uses a connection pool to manage connections to TDengine, improving concurrency performance and resource utilization. The connection pool configuration applies to the following interfaces, and these interfaces share a single connection pool:
The configuration parameters for the connection pool are as follows:
- pool.maxConnect: The maximum number of connections allowed in the pool; default is twice the number of CPU cores. It is recommended to keep the default setting.
- pool.maxIdle: The maximum number of idle connections in the pool; default is the same as pool.maxConnect. It is recommended to keep the default setting.
- pool.idleTimeout: Connection idle timeout; by default connections never time out. It is recommended to keep the default setting.
- pool.waitTimeout: Timeout for obtaining a connection from the pool; default is 60 seconds. If a connection is not obtained within the timeout period, HTTP status code 503 is returned. This parameter is available starting from version 3.3.3.0.
- pool.maxWait: The maximum number of requests waiting for a connection in the pool; default is 0, which means no limit. When the number of queued requests exceeds this value, new requests return HTTP status code 503. This parameter is available starting from version 3.3.3.0.

taosAdapter uses the parameter httpCodeServerError to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, taosAdapter returns different HTTP status codes based on the error code returned by C. See HTTP Response Codes for details.
This configuration only affects the RESTful interface.
Parameter Description:
httpCodeServerError:
- true: Map the error code returned by the C interface to the corresponding HTTP status code.
- false: Regardless of the error returned by the C interface, always return the HTTP status code 200 (default value).

taosAdapter monitors memory usage during its operation and adjusts its behavior through two thresholds. The valid value range is an integer from 1 to 100, and the unit is the percentage of system physical memory.
This configuration only affects the following interfaces:
- pauseQueryMemoryThreshold: Default value is 70 (i.e. 70% of system physical memory). When memory usage exceeds this threshold, taosAdapter pauses processing query requests.
- pauseAllMemoryThreshold: Default value is 80 (i.e. 80% of system physical memory). When memory usage exceeds this threshold, taosAdapter pauses processing both query and write requests.

When memory usage falls below the thresholds, taosAdapter automatically resumes the corresponding function.

When pauseQueryMemoryThreshold is exceeded, HTTP returns status code 503 with the error "query memory exceeds threshold".

When pauseAllMemoryThreshold is exceeded, HTTP returns status code 503 with the error "memory exceeds threshold".

The memory status of taosAdapter can be checked through the interface http://<fqdn>:6041/-/ping:

- Under normal conditions, the interface returns code 200.
- When memory usage exceeds pauseAllMemoryThreshold, code 503 is returned.
- When memory usage exceeds pauseQueryMemoryThreshold and the request parameter contains action=query, code 503 is returned.

The related configuration parameters are:

- monitor.collectDuration: memory monitoring interval; default value is 3s; environment variable is TAOS_MONITOR_COLLECT_DURATION.
- monitor.incgroup: whether taosAdapter is running in a container (set to true when running in a container); default value is false; environment variable is TAOS_MONITOR_INCGROUP.
- monitor.pauseQueryMemoryThreshold: memory threshold (percentage) at which query requests are paused; default value is 70; environment variable is TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD.
- monitor.pauseAllMemoryThreshold: memory threshold (percentage) at which query and write requests are paused; default value is 80; environment variable is TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD.

You can make corresponding adjustments based on the specific project application scenario and operation strategy, and it is recommended to use operations monitoring software to monitor system memory status in a timely manner. A load balancer can also check the operation status of taosAdapter through this interface.
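As a hedged sketch, a health check consuming the /-/ping interface could map the documented status codes to states like this (the function name and state strings are illustrative, and no HTTP request is made here):

```python
# Map the documented /-/ping status codes to health states.
# 200 -> healthy; 503 on a plain ping -> all requests paused;
# 503 with action=query -> query requests paused.
def adapter_state(status_code, action_is_query=False):
    if status_code == 200:
        return "ok"
    if status_code == 503:
        return "query paused" if action_is_query else "paused"
    return "unknown"
```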
Starting from version 3.0.4.0, taosAdapter provides the parameter smlAutoCreateDB to control whether to automatically create a database (DB) when writing to the schemaless protocol.
The smlAutoCreateDB parameter only affects the following interfaces:
smlAutoCreateDB:
- true: When writing via the schemaless protocol, if the target database does not exist, taosAdapter automatically creates it.
- false: The user needs to manually create the database, otherwise the write fails (default value).

taosAdapter provides the parameter restfulRowLimit to control the number of results returned by the HTTP interface.
The restfulRowLimit parameter only affects the return results of the following interfaces:
restfulRowLimit:
- -1: The number of results returned by the interface is unlimited (default value).

The log can be configured with the following parameters:
log.path
Specifies the log storage path (Default: "/var/log/taos").
log.level
Sets the log level (Default: "info").
log.keepDays
Number of days to retain logs (Positive integer, Default: 30).
log.rotationCount
Number of log files to rotate (Default: 30).
log.rotationSize
Maximum size of a single log file (Supports KB/MB/GB units, Default: "1GB").
log.compress
Whether to compress old log files (Default: false).
log.rotationTime
Log rotation interval (Deprecated, fixed at 24-hour rotation).
log.reservedDiskSize
Disk space reserved for log directory (Supports KB/MB/GB units, Default: "1GB").
log.enableSqlToCsvLogging
Whether to record SQL to CSV files (Default: false). For details, see Recording SQL to CSV Files.
log.enableStmtToCsvLogging
Whether to record STMT to CSV files (Default: false). For details, see Recording STMT to CSV Files.
log.enableRecordHttpSql
This parameter is deprecated; use Recording SQL to CSV Files instead.
Whether to record HTTP SQL requests (Default: false).
log.sqlRotationCount
This parameter is deprecated; use Recording SQL to CSV Files instead.
Number of SQL log files to rotate (Default: 2).
log.sqlRotationSize
This parameter is deprecated; use Recording SQL to CSV Files instead.
Maximum size of a single SQL log file (Supports KB/MB/GB units, Default: "1GB").
log.sqlRotationTime
This parameter is deprecated; use Recording SQL to CSV Files instead.
SQL log rotation interval (Default: 24h).
You can set the taosAdapter log output detail level by setting the --log.level parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
collectd.enable
Enable/disable collectd protocol support (Default: false)
collectd.port
Collectd service listening port (Default: 6045)
collectd.db
Target database for collectd data (Default: "collectd")
collectd.user
Database username (Default: "root")
collectd.password
Database password (Default: "taosdata")
collectd.token
Database token (Default: ""). Available in TDengine Enterprise Edition 3.4.0.0 and above.
collectd.ttl
Data time-to-live (Default: 0 = no expiration)
collectd.worker
Number of write worker threads (Default: 10)
- **influxdb.enable**: Enable/disable InfluxDB protocol support (Default: true)
- **opentsdb.enable**: Enable the OpenTSDB HTTP protocol (Default: true)
- **opentsdb_telnet.enable**: Enable OpenTSDB Telnet (Warning: no authentication; Default: false)
- **opentsdb_telnet.ports**: Listening ports (Default: [6046,6047,6048,6049])
- **opentsdb_telnet.dbs**: Target databases (Default: ["opentsdb_telnet","collectd_tsdb","icinga2_tsdb","tcollector_tsdb"])
- **opentsdb_telnet.user**: Database username (Default: "root")
- **opentsdb_telnet.password**: Database password (Default: "taosdata")
- **opentsdb_telnet.ttl**: Data TTL (Default: 0)
- **opentsdb_telnet.token**: Database token (Default: ""). Available in TDengine Enterprise Edition 3.4.0.0 and above.
- **opentsdb_telnet.batchSize**: Batch write size (Default: 1)
- **opentsdb_telnet.flushInterval**: Flush interval (Default: 0s)
- **opentsdb_telnet.maxTCPConnections**: Max TCP connections (Default: 250)
- **opentsdb_telnet.tcpKeepAlive**: Enable TCP KeepAlive (Default: false)
- **statsd.enable**: Enable the StatsD protocol (Default: false)
- **statsd.port**: Listening port (Default: 6044)
- **statsd.protocol**: Transport protocol (Options: tcp/udp/tcp4/udp4; Default: "udp4")
- **statsd.db**: Target database (Default: "statsd")
- **statsd.user**: Database username (Default: "root")
- **statsd.password**: Database password (Default: "taosdata")
- **statsd.token**: Database token (Default: ""). Available in TDengine Enterprise Edition 3.4.0.0 and above.
- **statsd.ttl**: Data TTL (Default: 0)
- **statsd.gatherInterval**: Collection interval (Default: 5s)
- **statsd.worker**: Worker threads (Default: 10)
- **statsd.allowPendingMessages**: Max pending messages (Default: 50000)
- **statsd.maxTCPConnections**: Max TCP connections (Default: 250)
- **statsd.tcpKeepAlive**: Enable TCP KeepAlive (Default: false)
- **statsd.deleteCounters**: Clear the counter cache after collection (Default: true)
- **statsd.deleteGauges**: Clear the gauge cache after collection (Default: true)
- **statsd.deleteSets**: Clear the sets cache after collection (Default: true)
- **statsd.deleteTimings**: Clear the timings cache after collection (Default: true)
- **prometheus.enable**: Enable the Prometheus protocol (Default: true)
- **open_metrics.enable**: Enable/disable OpenMetrics data collection (Default: false)
- **open_metrics.user**: Username for the TDengine connection (Default: "root")
- **open_metrics.password**: Password for the TDengine connection (Default: "taosdata")
- **open_metrics.urls**: List of OpenMetrics data collection endpoints (Default: ["http://localhost:9100"]; /metrics is appended automatically if no route is specified)
- **open_metrics.dbs**: Target databases for data writing (Default: ["open_metrics"]; must match the number of collection URLs)
- **open_metrics.responseTimeoutSeconds**: Collection timeout in seconds (Default: [5]; must match the number of collection URLs)
- **open_metrics.httpUsernames**: Basic authentication usernames (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.httpPasswords**: Basic authentication passwords (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.httpBearerTokenStrings**: Bearer token authentication strings (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.caCertFiles**: Root certificate file paths (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.certFiles**: Client certificate file paths (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.keyFiles**: Client certificate key file paths (if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.insecureSkipVerify**: Skip HTTPS certificate verification (Default: true)
- **open_metrics.gatherDurationSeconds**: Collection interval in seconds (Default: [5]; must match the number of collection URLs)
- **open_metrics.token**: Database token (Default: ""). Available in TDengine Enterprise Edition 3.4.0.0 and above.
- **open_metrics.ttl**: Table time-to-live in seconds (0 means no expiration; if enabled, must match the number of collection URLs; Default: empty)
- **open_metrics.ignoreTimestamp**: Ignore timestamps in collected data and use the collection time instead (Default: false)
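Several of the open_metrics settings are parallel arrays: each array must have exactly one entry per collection URL. A hypothetical fragment of `taosadapter.toml` (key names are assumed to match the parameter names above; verify against the sample configuration shipped with your version):

```toml
# Hypothetical [open_metrics] section: every array-valued key below
# must have the same length as `urls`.
[open_metrics]
enable = true
urls = ["http://localhost:9100", "http://localhost:9200"]
dbs = ["open_metrics_a", "open_metrics_b"]   # one target database per URL
responseTimeoutSeconds = [5, 10]             # one timeout per URL
gatherDurationSeconds = [5, 5]               # one collection interval per URL
```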
- **node_exporter.enable**: Enable node_exporter data collection (Default: false)
- **node_exporter.db**: Target database name (Default: "node_exporter")
- **node_exporter.urls**: Service endpoints (Default: ["http://localhost:9100"])
- **node_exporter.gatherDuration**: Collection interval (Default: 5s)
- **node_exporter.responseTimeout**: Request timeout (Default: 5s)
- **node_exporter.user**: Database username (Default: "root")
- **node_exporter.password**: Database password (Default: "taosdata")
- **node_exporter.token**: Database token (Default: ""). Available in TDengine Enterprise Edition 3.4.0.0 and above.
- **node_exporter.ttl**: Data TTL (Default: 0)
- **node_exporter.httpUsername**: HTTP Basic Auth username (optional)
- **node_exporter.httpPassword**: HTTP Basic Auth password (optional)
- **node_exporter.httpBearerTokenString**: HTTP Bearer token (optional)
- **node_exporter.insecureSkipVerify**: Skip SSL verification (Default: true)
- **node_exporter.certFile**: Client certificate path (optional)
- **node_exporter.keyFile**: Client key path (optional)
- **node_exporter.caCertFile**: CA certificate path (optional)
taosAdapter reports metrics to taosKeeper with these parameters:
- **uploadKeeper.enable**: Enable metrics reporting (Default: true)
- **uploadKeeper.url**: taosKeeper endpoint (Default: `http://127.0.0.1:6043/adapter_report`)
- **uploadKeeper.interval**: Reporting interval (Default: 15s)
- **uploadKeeper.timeout**: Request timeout (Default: 5s)
- **uploadKeeper.retryTimes**: Max retries (Default: 3)
- **uploadKeeper.retryInterval**: Retry interval (Default: 5s)
Starting from version 3.3.6.29/3.3.8.3, taosAdapter supports configuring concurrency limits for query requests to prevent excessive concurrent queries from exhausting system resources. When this feature is enabled, taosAdapter controls the number of concurrent query requests being processed simultaneously based on the configured concurrency limit. Requests exceeding the limit will enter a waiting state until processing resources become available.
If the waiting time exceeds the configured timeout or the number of waiting requests exceeds the configured maximum waiting requests, taosAdapter will directly return an error response, indicating that there are too many requests.
RESTful requests will return HTTP status code 503, and WebSocket requests will return error code 0xFFFE.
This configuration affects the following interfaces:
Parameter Description
request.queryLimitEnable
true: Enables the query request concurrency limit feature.false: Disables the query request concurrency limit feature (default value).request.default.queryLimit
0, meaning no limit).request.default.queryWaitTimeout
900.request.default.queryMaxWait
0, meaning no limit.request.excludeQueryLimitSql
select (case-insensitive).request.excludeQueryLimitSqlRegex
Customizable per User
Configurable only via the configuration file:
request.users.<username>.queryLimit
request.users.<username>.queryWaitTimeout
request.users.<username>.queryMaxWait
Example
```toml
[request]
queryLimitEnable = true
excludeQueryLimitSql = ["select 1","select server_version()"]
excludeQueryLimitSqlRegex = ['(?i)^select\s+.*from\s+information_schema.*']

[request.default]
queryLimit = 200
queryWaitTimeout = 900
queryMaxWait = 0

[request.users.root]
queryLimit = 100
queryWaitTimeout = 200
queryMaxWait = 10
```
- queryLimitEnable = true enables the query request concurrency limit feature.
- excludeQueryLimitSql = ["select 1","select server_version()"] excludes two SQL statements commonly used for ping.
- excludeQueryLimitSqlRegex = ['(?i)^select\s+.*from\s+information_schema.*'] excludes all SQL queries against the information_schema database.
- request.default sets the default concurrency limit to 200, the wait timeout to 900 seconds, and the maximum number of waiting requests to 0 (unlimited).
- request.users.root sets the concurrency limit for user root to 100, the wait timeout to 200 seconds, and the maximum number of waiting requests to 10.

When user root initiates query requests, taosAdapter applies the limits above: once more than 100 queries are running, subsequent requests enter a waiting state until resources become available. If a request waits longer than 200 seconds, or more than 10 requests are waiting, taosAdapter directly returns an error response.
When other users initiate query requests, the default concurrency limit configuration will be used for processing.
Each user's configuration is independent and does not share the concurrency limit of request.default.
For example, when user user1 initiates 200 concurrent query requests, user user2 can also initiate 200 concurrent query requests simultaneously without blocking.
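The exclusion regex from the example can be sanity-checked offline. The sketch below uses Python's `re` module; for this pattern the `(?i)` inline flag behaves the same way as in Go's regular-expression syntax, which taosAdapter's configuration presumably follows:

```python
import re

# excludeQueryLimitSqlRegex pattern from the example configuration
pattern = re.compile(r'(?i)^select\s+.*from\s+information_schema.*')

# queries against information_schema are exempt from the concurrency limit
assert pattern.match("SELECT * FROM information_schema.ins_tables")

# ordinary queries remain subject to the limit
assert pattern.match("select * from t1") is None

print("exclusion pattern behaves as expected")
```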
Starting from version 3.3.6.34 / 3.4.0.0, taosAdapter supports rejecting specific query SQL statements through configuration, preventing the execution of unsafe or highly resource-consuming queries.
When this feature is enabled, taosAdapter checks each SQL statement that does not start with insert (case-insensitive). If the SQL matches any of the configured reject patterns, an error response is returned indicating that the query is forbidden.
When a rejected SQL query is matched, the RESTful API returns HTTP status code 403, and the WebSocket interface returns error code 0xFFFD.
Meanwhile, taosAdapter prints a warning log containing details such as the SQL source, for example:
```text
reject sql, client_ip:192.168.1.98, port:59912, user:root, app:test_app, reject_regex:(?i)^drop\s+table\s+.*, sql:DROP taBle testdb.stb
```
This configuration affects the following interfaces:
Parameter Description

- **rejectQuerySqlRegex**: List of regular expressions; a query SQL statement matching any of them is rejected (Default: empty, meaning no queries are rejected).
Example
```toml
rejectQuerySqlRegex = ['(?i)^drop\s+database\s+.*','(?i)^drop\s+table\s+.*','(?i)^alter\s+table\s+.*']
```
This configuration rejects all drop database, drop table, and alter table queries, ignoring case.
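These patterns can likewise be verified offline; note how the second pattern matches the mixed-case statement from the warning-log example above. A Python `re` sketch (the `(?i)` flag has the same semantics here as in Go's regexp engine for these patterns):

```python
import re

# rejectQuerySqlRegex patterns from the example configuration
patterns = [re.compile(p) for p in (
    r'(?i)^drop\s+database\s+.*',
    r'(?i)^drop\s+table\s+.*',
    r'(?i)^alter\s+table\s+.*',
)]

def is_rejected(sql: str) -> bool:
    # taosAdapter only checks statements that do not start with "insert";
    # this sketch reproduces just the pattern matching itself.
    return any(p.match(sql) for p in patterns)

assert is_rejected("DROP taBle testdb.stb")        # the warning-log example
assert is_rejected("alter table testdb.stb add column c2 int")
assert not is_rejected("select * from testdb.stb") # plain queries pass through
```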
Configuration Parameters and their corresponding environment variables:
<details>
<summary>Details</summary>

| Configuration Parameter | Environment Variable |
|---|---|
collectd.db | TAOS_ADAPTER_COLLECTD_DB |
collectd.enable | TAOS_ADAPTER_COLLECTD_ENABLE |
collectd.password | TAOS_ADAPTER_COLLECTD_PASSWORD |
collectd.port | TAOS_ADAPTER_COLLECTD_PORT |
collectd.token | TAOS_ADAPTER_COLLECTD_TOKEN |
collectd.ttl | TAOS_ADAPTER_COLLECTD_TTL |
collectd.user | TAOS_ADAPTER_COLLECTD_USER |
collectd.worker | TAOS_ADAPTER_COLLECTD_WORKER |
cors.allowAllOrigins | TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS |
cors.allowCredentials | TAOS_ADAPTER_CORS_ALLOW_Credentials |
cors.allowHeaders | TAOS_ADAPTER_ALLOW_HEADERS |
cors.allowOrigins | TAOS_ADAPTER_ALLOW_ORIGINS |
cors.allowWebSockets | TAOS_ADAPTER_CORS_ALLOW_WebSockets |
cors.exposeHeaders | TAOS_ADAPTER_Expose_Headers |
debug | TAOS_ADAPTER_DEBUG |
httpCodeServerError | TAOS_ADAPTER_HTTP_CODE_SERVER_ERROR |
influxdb.enable | TAOS_ADAPTER_INFLUXDB_ENABLE |
instanceId | TAOS_ADAPTER_INSTANCE_ID |
log.compress | TAOS_ADAPTER_LOG_COMPRESS |
log.enableRecordHttpSql | TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL |
log.enableSqlToCsvLogging | TAOS_ADAPTER_LOG_ENABLE_SQL_TO_CSV_LOGGING |
log.enableStmtToCsvLogging | TAOS_ADAPTER_LOG_ENABLE_STMT_TO_CSV_LOGGING |
log.keepDays | TAOS_ADAPTER_LOG_KEEP_DAYS |
log.level | TAOS_ADAPTER_LOG_LEVEL |
log.path | TAOS_ADAPTER_LOG_PATH |
log.reservedDiskSize | TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE |
log.rotationCount | TAOS_ADAPTER_LOG_ROTATION_COUNT |
log.rotationSize | TAOS_ADAPTER_LOG_ROTATION_SIZE |
log.rotationTime | TAOS_ADAPTER_LOG_ROTATION_TIME |
log.sqlRotationCount | TAOS_ADAPTER_LOG_SQL_ROTATION_COUNT |
log.sqlRotationSize | TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE |
log.sqlRotationTime | TAOS_ADAPTER_LOG_SQL_ROTATION_TIME |
logLevel | TAOS_ADAPTER_LOG_LEVEL |
maxAsyncConcurrentLimit | TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT |
maxSyncConcurrentLimit | TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT |
monitor.collectDuration | TAOS_ADAPTER_MONITOR_COLLECT_DURATION |
monitor.disable | TAOS_ADAPTER_MONITOR_DISABLE |
monitor.identity | TAOS_ADAPTER_MONITOR_IDENTITY |
monitor.incgroup | TAOS_ADAPTER_MONITOR_INCGROUP |
monitor.pauseAllMemoryThreshold | TAOS_ADAPTER_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD |
monitor.pauseQueryMemoryThreshold | TAOS_ADAPTER_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD |
node_exporter.caCertFile | TAOS_ADAPTER_NODE_EXPORTER_CA_CERT_FILE |
node_exporter.certFile | TAOS_ADAPTER_NODE_EXPORTER_CERT_FILE |
node_exporter.db | TAOS_ADAPTER_NODE_EXPORTER_DB |
node_exporter.enable | TAOS_ADAPTER_NODE_EXPORTER_ENABLE |
node_exporter.gatherDuration | TAOS_ADAPTER_NODE_EXPORTER_GATHER_DURATION |
node_exporter.httpBearerTokenString | TAOS_ADAPTER_NODE_EXPORTER_HTTP_BEARER_TOKEN_STRING |
node_exporter.httpPassword | TAOS_ADAPTER_NODE_EXPORTER_HTTP_PASSWORD |
node_exporter.httpUsername | TAOS_ADAPTER_NODE_EXPORTER_HTTP_USERNAME |
node_exporter.insecureSkipVerify | TAOS_ADAPTER_NODE_EXPORTER_INSECURE_SKIP_VERIFY |
node_exporter.keyFile | TAOS_ADAPTER_NODE_EXPORTER_KEY_FILE |
node_exporter.password | TAOS_ADAPTER_NODE_EXPORTER_PASSWORD |
node_exporter.responseTimeout | TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT |
node_exporter.token | TAOS_ADAPTER_NODE_EXPORTER_TOKEN |
node_exporter.ttl | TAOS_ADAPTER_NODE_EXPORTER_TTL |
node_exporter.urls | TAOS_ADAPTER_NODE_EXPORTER_URLS |
node_exporter.user | TAOS_ADAPTER_NODE_EXPORTER_USER |
open_metrics.enable | TAOS_ADAPTER_OPEN_METRICS_ENABLE |
open_metrics.user | TAOS_ADAPTER_OPEN_METRICS_USER |
open_metrics.password | TAOS_ADAPTER_OPEN_METRICS_PASSWORD |
open_metrics.urls | TAOS_ADAPTER_OPEN_METRICS_URLS |
open_metrics.dbs | TAOS_ADAPTER_OPEN_METRICS_DBS |
open_metrics.responseTimeoutSeconds | TAOS_ADAPTER_OPEN_METRICS_RESPONSE_TIMEOUT_SECONDS |
open_metrics.httpUsernames | TAOS_ADAPTER_OPEN_METRICS_HTTP_USERNAMES |
open_metrics.httpPasswords | TAOS_ADAPTER_OPEN_METRICS_HTTP_PASSWORDS |
open_metrics.httpBearerTokenStrings | TAOS_ADAPTER_OPEN_METRICS_HTTP_BEARER_TOKEN_STRINGS |
open_metrics.caCertFiles | TAOS_ADAPTER_OPEN_METRICS_CA_CERT_FILES |
open_metrics.certFiles | TAOS_ADAPTER_OPEN_METRICS_CERT_FILES |
open_metrics.keyFiles | TAOS_ADAPTER_OPEN_METRICS_KEY_FILES |
open_metrics.insecureSkipVerify | TAOS_ADAPTER_OPEN_METRICS_INSECURE_SKIP_VERIFY |
open_metrics.gatherDurationSeconds | TAOS_ADAPTER_OPEN_METRICS_GATHER_DURATION_SECONDS |
open_metrics.ignoreTimestamp | TAOS_ADAPTER_OPEN_METRICS_IGNORE_TIMESTAMP |
open_metrics.token | TAOS_ADAPTER_OPEN_METRICS_TOKEN |
open_metrics.ttl | TAOS_ADAPTER_OPEN_METRICS_TTL |
opentsdb.enable | TAOS_ADAPTER_OPENTSDB_ENABLE |
opentsdb_telnet.batchSize | TAOS_ADAPTER_OPENTSDB_TELNET_BATCH_SIZE |
opentsdb_telnet.dbs | TAOS_ADAPTER_OPENTSDB_TELNET_DBS |
opentsdb_telnet.enable | TAOS_ADAPTER_OPENTSDB_TELNET_ENABLE |
opentsdb_telnet.flushInterval | TAOS_ADAPTER_OPENTSDB_TELNET_FLUSH_INTERVAL |
opentsdb_telnet.maxTCPConnections | TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS |
opentsdb_telnet.password | TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD |
opentsdb_telnet.ports | TAOS_ADAPTER_OPENTSDB_TELNET_PORTS |
opentsdb_telnet.tcpKeepAlive | TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE |
opentsdb_telnet.token | TAOS_ADAPTER_OPENTSDB_TELNET_TOKEN |
opentsdb_telnet.ttl | TAOS_ADAPTER_OPENTSDB_TELNET_TTL |
opentsdb_telnet.user | TAOS_ADAPTER_OPENTSDB_TELNET_USER |
pool.idleTimeout | TAOS_ADAPTER_POOL_IDLE_TIMEOUT |
pool.maxConnect | TAOS_ADAPTER_POOL_MAX_CONNECT |
pool.maxIdle | TAOS_ADAPTER_POOL_MAX_IDLE |
pool.maxWait | TAOS_ADAPTER_POOL_MAX_WAIT |
pool.waitTimeout | TAOS_ADAPTER_POOL_WAIT_TIMEOUT |
P, port | TAOS_ADAPTER_PORT |
prometheus.enable | TAOS_ADAPTER_PROMETHEUS_ENABLE |
register.description | TAOS_ADAPTER_REGISTER_DESCRIPTION |
register.duration | TAOS_ADAPTER_REGISTER_DURATION |
register.expire | TAOS_ADAPTER_REGISTER_EXPIRE |
register.instance | TAOS_ADAPTER_REGISTER_INSTANCE |
request.default.queryLimit | TAOS_ADAPTER_REQUEST_DEFAULT_QUERY_LIMIT |
request.default.queryMaxWait | TAOS_ADAPTER_REQUEST_DEFAULT_QUERY_MAX_WAIT |
request.default.queryWaitTimeout | TAOS_ADAPTER_REQUEST_DEFAULT_QUERY_WAIT_TIMEOUT |
request.excludeQueryLimitSql | TAOS_ADAPTER_REQUEST_EXCLUDE_QUERY_LIMIT_SQL |
request.excludeQueryLimitSqlRegex | TAOS_ADAPTER_REQUEST_EXCLUDE_QUERY_LIMIT_SQL_REGEX |
request.queryLimitEnable | TAOS_ADAPTER_REQUEST_QUERY_LIMIT_ENABLE |
restfulRowLimit | TAOS_ADAPTER_RESTFUL_ROW_LIMIT |
smlAutoCreateDB | TAOS_ADAPTER_SML_AUTO_CREATE_DB |
statsd.allowPendingMessages | TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES |
statsd.db | TAOS_ADAPTER_STATSD_DB |
statsd.deleteCounters | TAOS_ADAPTER_STATSD_DELETE_COUNTERS |
statsd.deleteGauges | TAOS_ADAPTER_STATSD_DELETE_GAUGES |
statsd.deleteSets | TAOS_ADAPTER_STATSD_DELETE_SETS |
statsd.deleteTimings | TAOS_ADAPTER_STATSD_DELETE_TIMINGS |
statsd.enable | TAOS_ADAPTER_STATSD_ENABLE |
statsd.gatherInterval | TAOS_ADAPTER_STATSD_GATHER_INTERVAL |
statsd.maxTCPConnections | TAOS_ADAPTER_STATSD_MAX_TCP_CONNECTIONS |
statsd.password | TAOS_ADAPTER_STATSD_PASSWORD |
statsd.port | TAOS_ADAPTER_STATSD_PORT |
statsd.protocol | TAOS_ADAPTER_STATSD_PROTOCOL |
statsd.tcpKeepAlive | TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE |
statsd.token | TAOS_ADAPTER_STATSD_TOKEN |
statsd.ttl | TAOS_ADAPTER_STATSD_TTL |
statsd.user | TAOS_ADAPTER_STATSD_USER |
statsd.worker | TAOS_ADAPTER_STATSD_WORKER |
taosConfigDir | TAOS_ADAPTER_TAOS_CONFIG_FILE |
uploadKeeper.enable | TAOS_ADAPTER_UPLOAD_KEEPER_ENABLE |
uploadKeeper.interval | TAOS_ADAPTER_UPLOAD_KEEPER_INTERVAL |
uploadKeeper.retryInterval | TAOS_ADAPTER_UPLOAD_KEEPER_RETRY_INTERVAL |
uploadKeeper.retryTimes | TAOS_ADAPTER_UPLOAD_KEEPER_RETRY_TIMES |
uploadKeeper.timeout | TAOS_ADAPTER_UPLOAD_KEEPER_TIMEOUT |
uploadKeeper.url | TAOS_ADAPTER_UPLOAD_KEEPER_URL |
On Linux systems, the taosAdapter service is managed by systemd by default. Use the command systemctl start taosadapter to start the taosAdapter service and systemctl stop taosadapter to stop it.
taosAdapter must be the same version as the TDengine server, so upgrade taosAdapter by upgrading the TDengine server. If taosAdapter is deployed on a different host from taosd, upgrade it by upgrading the TDengine server installation on that host.
Use the command rmtaos to remove the TDengine server software, including taosAdapter.
Starting from version 3.3.5.0, taosAdapter supports dynamic modification of the log level through an HTTP interface. Users can adjust the log level by sending an HTTP PUT request to the /config endpoint. Authentication for this endpoint is the same as for the /rest/sql interface, and the configuration key-value pairs must be passed in the request body in JSON format.
The following is an example of setting the log level to debug through the curl command:
```shell
curl --location --request PUT 'http://127.0.0.1:6041/config' \
  -u root:taosdata \
  --data '{"log.level": "debug"}'
```
Starting from version 3.3.6.34/3.4.0.0, taosAdapter supports monitoring configuration file changes and automatically updates the following configurations:
- log.level: log level parameter
- rejectQuerySqlRegex: rejected query SQL list configuration parameter

Starting from version 3.3.6.13, taosAdapter supports IPv6. No additional configuration is required: taosAdapter automatically detects the system's IPv6 support and, when available, enables IPv6 and listens on both IPv4 and IPv6 addresses.
taosAdapter supports recording SQL requests to CSV files. Users can enable this feature through the configuration parameter log.enableSqlToCsvLogging or dynamically enable/disable it via HTTP requests.
New configuration item log.enableSqlToCsvLogging (boolean, default: false) determines whether SQL logging is enabled.
When set to true, SQL records will be saved to CSV files.
The recording start time is the service startup time, and the end time is 2300-01-01 00:00:00.
File naming follows the same rules as logs: taosadapterSql_{instanceId}_{yyyyMMdd}.csv[.index]
- instanceId: taosAdapter instance ID, configurable via the instanceId parameter.
- yyyyMMdd: Date in year-month-day format.
- index: If multiple files exist, a numeric suffix is appended to the filename.

Existing log parameters are used for space retention, file splitting, and storage path:

- log.path: Storage path
- log.keepDays: Retention period in days
- log.rotationCount: Maximum number of retained files
- log.rotationSize: Maximum size per file
- log.compress: Whether compression is enabled
- log.reservedDiskSize: Reserved disk space size

Send an HTTP POST request to the /record_sql endpoint to dynamically enable recording. Authentication is the same as for /rest/sql. Example:
```shell
curl --location --request POST 'http://127.0.0.1:6041/record_sql' \
  -u root:taosdata \
  --data '{"start_time":"2026-01-13 17:00:00","end_time":"2026-01-13 18:00:00","location":"Asia/Shanghai"}'
```
Supported parameters:
- start_time: [Optional] Start time for recording, formatted as yyyy-MM-dd HH:mm:ss. Defaults to the current time if not specified.
- end_time: [Optional] End time for recording, formatted as yyyy-MM-dd HH:mm:ss. Defaults to 2300-01-01 00:00:00 if not specified.
- location: [Optional] Timezone for parsing the start and end times, in IANA format (e.g., Asia/Shanghai). Defaults to the server's timezone.

If all parameters use default values, the data field can be omitted. Example:
```shell
curl --location --request POST 'http://127.0.0.1:6041/record_sql' \
  -u root:taosdata
```
Successful response: HTTP code 200 with the following structure:
```json
{"code":0,"desc":""}
```
Failed response: Non-200 HTTP code with the following JSON structure (non-zero code and error description in desc):
```json
{"code":65535,"desc":"unmarshal json error"}
```
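The start_time/end_time format and the IANA location can be validated client-side before issuing the request. A sketch using Python's `zoneinfo` (the helper name is illustrative, not part of taosAdapter):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

def parse_record_time(value: str, location: str = "Asia/Shanghai") -> datetime:
    """Parse a record_sql start_time/end_time string in an IANA timezone."""
    naive = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    return naive.replace(tzinfo=ZoneInfo(location))

start = parse_record_time("2026-01-13 17:00:00")
end = parse_record_time("2026-01-13 18:00:00")
assert end - start == timedelta(hours=1)
assert start.utcoffset() == timedelta(hours=8)  # Asia/Shanghai is UTC+8 year-round
```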
Send an HTTP DELETE request to the /record_sql endpoint to disable recording. Authentication is the same as for /rest/sql. Example:
```shell
curl --location --request DELETE 'http://127.0.0.1:6041/record_sql' \
  -u root:taosdata
```
Successful response: HTTP code 200.
If a task exists, the response is:
```json
{
  "code": 0,
  "message": "",
  "start_time": "2025-07-23 17:00:00",
  "end_time": "2025-07-23 18:00:00"
}
```
- start_time: Configured start time of the canceled task (timezone: server's timezone).
- end_time: Configured end time of the canceled task (timezone: server's timezone).

If no task exists, the response is:
```json
{
  "code": 0,
  "message": ""
}
```
Send an HTTP GET request to the /record_sql endpoint to query the task status. Authentication is the same as for /rest/sql. Example:
```shell
curl --location 'http://127.0.0.1:6041/record_sql' \
  -u root:taosdata
```
Successful response: HTTP code 200 with the following structure:
```json
{
  "code": 0,
  "desc": "",
  "exists": true,
  "running": true,
  "start_time": "2025-07-16 17:00:00",
  "end_time": "2025-07-16 18:00:00",
  "current_concurrent": 100
}
```
- code: Error code (0 for success).
- desc: Error message (empty string for success).
- exists: Whether the task exists.
- running: Whether the task is active.
- start_time: Start time (timezone: server's timezone).
- end_time: End time (timezone: server's timezone).
- current_concurrent: Current SQL recording concurrency.

Records are written before taos_free_result is executed, or when the task ends (reaching the end time or being manually stopped).
Records are stored in CSV format without headers. Each line includes the following fields:
TS: Log timestamp (format: yyyy-MM-dd HH:mm:ss.SSSSSS, timezone: server's timezone).
SQL: Executed SQL statement. Line breaks inside the SQL are preserved per CSV conventions, and fields containing special characters (\n, \r, ") are wrapped in double quotes with embedded quotes doubled.
SQL containing special characters therefore cannot be copied directly for reuse. Example:
Original SQL:
```sql
select * from t1
where c1 = "ab"
```
CSV record:
```csv
"select * from t1
where c1 = ""ab"""
```
IP: Client IP.
User: Username executing the SQL.
ConnType: Connection type (HTTP, WS).
QID: Request ID (saved as hexadecimal).
ReceiveTime: Time when the SQL was received (format: yyyy-MM-dd HH:mm:ss.SSSSSS, timezone: server's timezone).
FreeTime: Time when the SQL was released (format: yyyy-MM-dd HH:mm:ss.SSSSSS, timezone: server's timezone).
QueryDuration(us): Time consumed from taos_query_a to callback completion (microseconds).
FetchDuration(us): Cumulative time consumed by multiple taos_fetch_raw_block_a executions until callback completion (microseconds).
GetConnDuration(us): Time consumed to obtain a connection from the HTTP connection pool (microseconds).
TotalDuration(us): Total SQL request completion time (microseconds). For completed SQL: FreeTime - ReceiveTime. For incomplete SQL when the task ends: CurrentTime - ReceiveTime.
SourcePort: Client port. (Added in version 3.3.6.26 and above / 3.3.8.0 and above).
AppName: Client application name. (Added in version 3.3.6.26 and above / 3.3.8.0 and above).
Example:
```csv
2025-07-23 17:10:08.724775,show databases,127.0.0.1,root,http,0x2000000000000008,2025-07-23 17:10:08.707741,2025-07-23 17:10:08.724775,14191,965,1706,17034,53600,jdbc_test_app
```
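A standard CSV reader handles these records, including the quoting rules described above. The sketch below parses the example line and checks the TotalDuration(us) relation (FreeTime minus ReceiveTime) against the recorded value:

```python
import csv
import io
from datetime import datetime, timedelta

# The example record from above, as a single CSV line.
line = ('2025-07-23 17:10:08.724775,show databases,127.0.0.1,root,http,'
        '0x2000000000000008,2025-07-23 17:10:08.707741,'
        '2025-07-23 17:10:08.724775,14191,965,1706,17034,53600,jdbc_test_app')

row = next(csv.reader(io.StringIO(line)))
assert len(row) == 14               # TS through AppName
assert row[1] == "show databases"   # the SQL field

fmt = "%Y-%m-%d %H:%M:%S.%f"
receive = datetime.strptime(row[6], fmt)   # ReceiveTime
free = datetime.strptime(row[7], fmt)      # FreeTime
total_us = (free - receive) // timedelta(microseconds=1)
assert total_us == int(row[11])     # TotalDuration(us) = FreeTime - ReceiveTime
```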
taosAdapter supports recording STMT requests to CSV files starting from version v3.4.0.1. Users can enable this feature through the configuration parameter log.enableStmtToCsvLogging or dynamically enable/disable it via HTTP requests.
:::warning
Enabling this feature leads to severe performance degradation.
:::
New configuration item log.enableStmtToCsvLogging (boolean, default: false) determines whether STMT logging is enabled.
When set to true, STMT records will be saved to CSV files.
The recording start time is the service startup time, and the end time is 2300-01-01 00:00:00.
File naming follows the same rules as logs: taosadapterStmt_{instanceId}_{yyyyMMdd}.csv[.index]
- instanceId: taosAdapter instance ID, configurable via the instanceId parameter.
- yyyyMMdd: Date in year-month-day format.
- index: If multiple files exist, a numeric suffix is appended to the filename.

Existing log parameters are used for space retention, file splitting, and storage path:

- log.path: Storage path
- log.keepDays: Retention period in days
- log.rotationCount: Maximum number of retained files
- log.rotationSize: Maximum size per file
- log.compress: Whether compression is enabled
- log.reservedDiskSize: Reserved disk space size

Send an HTTP POST request to the /record_stmt endpoint to dynamically enable recording. Authentication is the same as for /rest/sql. Example:
```shell
curl --location --request POST 'http://127.0.0.1:6041/record_stmt' \
  -u root:taosdata \
  --data '{"start_time":"2026-01-13 17:00:00","end_time":"2026-01-13 18:00:00","location":"Asia/Shanghai"}'
```
Supported parameters:
- start_time: [Optional] Start time for recording, formatted as yyyy-MM-dd HH:mm:ss. Defaults to the current time if not specified.
- end_time: [Optional] End time for recording, formatted as yyyy-MM-dd HH:mm:ss. Defaults to the current time plus one hour if not specified.
- location: [Optional] Timezone for parsing the start and end times, in IANA format (e.g., Asia/Shanghai). Defaults to the server's timezone.

If all parameters use default values, the data field can be omitted. Example:
```shell
curl --location --request POST 'http://127.0.0.1:6041/record_stmt' \
  -u root:taosdata
```
Successful response: HTTP code 200 with the following structure:
```json
{"code":0,"desc":""}
```
Failed response: Non-200 HTTP code with the following JSON structure (non-zero code and error description in desc):
```json
{"code":65535,"desc":"unmarshal json error"}
```
Send an HTTP DELETE request to the /record_stmt endpoint to disable recording. Authentication is the same as for /rest/sql. Example:
```shell
curl --location --request DELETE 'http://127.0.0.1:6041/record_stmt' \
  -u root:taosdata
```
Successful response: HTTP code 200.
If a task exists, the response is:
```json
{
  "code": 0,
  "message": "",
  "start_time": "2026-01-13 17:00:00",
  "end_time": "2026-01-13 18:00:00"
}
```
- start_time: Configured start time of the canceled task (timezone: server's timezone).
- end_time: Configured end time of the canceled task (timezone: server's timezone).

If no task exists, the response is:
```json
{
  "code": 0,
  "message": ""
}
```
Send an HTTP GET request to the /record_stmt endpoint to query the task status. Authentication is the same as for /rest/sql. Example:
```shell
curl --location 'http://127.0.0.1:6041/record_stmt' \
  -u root:taosdata
```
Successful response: HTTP code 200 with the following structure:
```json
{
  "code": 0,
  "desc": "",
  "exists": true,
  "running": true,
  "start_time": "2026-01-13 17:00:00",
  "end_time": "2026-01-13 18:00:00",
  "current_concurrent": 100
}
```
- code: Error code (0 for success).
- desc: Error message (empty string for success).
- exists: Whether the task exists.
- running: Whether the task is active.
- start_time: Start time (timezone: server's timezone).
- end_time: End time (timezone: server's timezone).
- current_concurrent: Current STMT recording concurrency.

Records are written after prepare, bind, and exec operations, or when the task ends (reaching the end time or being manually stopped).
Records are stored in CSV format without headers. Each line includes the following fields:
- TS: Log timestamp (format: yyyy-MM-dd HH:mm:ss.SSSSSS, timezone: server's timezone).
- IP: Client IP.
- SourcePort: Client port.
- AppName: Client application name.
- User: Username of the current connection.
- ConnType: Connection type (ws).
- QID: Request ID, saved as hexadecimal.
- StartTime: Start processing time, formatted as yyyy-MM-dd HH:mm:ss.SSSSSS, in the timezone of the server where taosAdapter is located.
- STMT2: stmt2 memory address, saved as hexadecimal.
- Action: Operation (prepare, bind, exec).
- Code: Operation result; 0 represents success, other values represent error codes.
- Duration(us): Execution time in microseconds, -1 if not completed.

The Data field content for a bind operation is in JSON format and contains the following fields:
| Field Name | Type | Description |
|---|---|---|
| count | int | Number of tables with bound parameters |
| table_names | [count]string | List of table names with bound parameters |
| tags | [count][tag_count]col_info | List of tag column information for bound parameters |
| cols | [count][col_count]col_info | List of ordinary column information for bound parameters |
Where the col_info structure contains the following fields:
| Field Name | Type | Description |
|---|---|---|
| type | int | Column type |
| data | []any | One-dimensional array, each element represents a row of data |
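The two tables above imply simple length invariants: count must equal the lengths of table_names, tags, and cols, and within one table every column's data array holds the same number of rows. A minimal, hypothetical validation sketch:

```python
def validate_bind_data(data: dict) -> None:
    """Check the length invariants of a bind-operation Data record."""
    count = data["count"]
    assert len(data["table_names"]) == count
    assert len(data["tags"]) == count   # one tag-column list per table
    assert len(data["cols"]) == count   # one data-column list per table
    for col_list in data["cols"]:
        # every col_info carries a type id and a row-wise data array
        row_counts = {len(col["data"]) for col in col_list}
        assert len(row_counts) <= 1, "columns of one table must have equal row counts"

# A tiny two-row record for illustration (not from the document's sample).
record = {
    "count": 1,
    "table_names": ["test1"],
    "tags": [[{"type": 9, "data": [1726803356466]}]],
    "cols": [[
        {"type": 9, "data": [1726803356466, 1726803357466]},
        {"type": 1, "data": [True, None]},   # null marks a NULL value
    ]],
}
validate_bind_data(record)  # passes: all lengths are consistent
```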
The parsing rules for each type of data in the data field are as follows:
Example:
```json
{
  "count": 1,
  "table_names": ["test1"],
  "tags": [
    [{
      "type": 9,
      "data": [1726803356466]
    }, {
      "type": 1,
      "data": [true]
    }, {
      "type": 2,
      "data": [1]
    }, {
      "type": 3,
      "data": [2]
    }, {
      "type": 4,
      "data": [3]
    }, {
      "type": 5,
      "data": [4]
    }, {
      "type": 6,
      "data": [5.5]
    }, {
      "type": 7,
      "data": [6.6]
    }, {
      "type": 11,
      "data": [7]
    }, {
      "type": 12,
      "data": [8]
    }, {
      "type": 13,
      "data": [9]
    }, {
      "type": 14,
      "data": [10]
    }, {
      "type": 8,
      "data": ["binary"]
    }, {
      "type": 10,
      "data": ["nchar"]
    }, {
      "type": 20,
      "data": ["010100000000000000000059400000000000005940"]
    }, {
      "type": 16,
      "data": ["76617262696e617279"]
    }, {
      "type": 17,
      "data": ["12345.6789"]
    }, {
      "type": 21,
      "data": ["98765.4321"]
    }, {
      "type": 18,
      "data": ["7468697320697320626c6f622064617461"]
    }]
  ],
  "cols": [
    [{
      "type": 9,
      "data": [1726803356466, 1726803357466, 1726803358466]
    }, {
      "type": 1,
      "data": [true, null, false]
    }, {
      "type": 2,
      "data": [11, null, 12]
    }, {
      "type": 3,
      "data": [11, null, 12]
    }, {
      "type": 4,
      "data": [11, null, 12]
    }, {
      "type": 5,
      "data": [11, null, 12]
    }, {
      "type": 6,
      "data": [11.2, null, 12.2]
    }, {
      "type": 7,
      "data": [11.2, null, 12.2]
    }, {
      "type": 11,
      "data": [11, null, 12]
    }, {
      "type": 12,
      "data": [11, null, 12]
    }, {
      "type": 13,
      "data": [11, null, 12]
    }, {
      "type": 14,
      "data": [11, null, 12]
    }, {
      "type": 8,
      "data": ["binary1", null, "binary2"]
    }, {
      "type": 10,
      "data": ["nchar1", null, "nchar2"]
    }, {
      "type": 20,
      "data": ["010100000000000000000059400000000000005940", null, "010100000000000000000059400000000000005940"]
    }, {
      "type": 16,
      "data": ["76617262696e61727931", null, "76617262696e61727932"]
    }, {
      "type": 17,
      "data": ["12345.6789", null, "22345.6789"]
    }, {
      "type": 21,
      "data": ["98765.4321", null, "88765.4321"]
    }, {
      "type": 18,
      "data": ["7468697320697320626c6f622064617461", null, "7468697320697320616e6f7468657220626c6f622064617461"]
    }]
  ]
}
```
Currently, taosAdapter only collects monitoring indicators for RESTful/WebSocket related requests. There are no monitoring indicators for other interfaces.
taosAdapter reports monitoring indicators to taosKeeper, which will be written to the monitoring database by taosKeeper. The default is the log database, which can be modified in the taoskeeper configuration file. The following is a detailed introduction to these monitoring indicators.
The adapter_requests table records taosAdapter monitoring data:
| field | type | is_tag | comment |
|---|---|---|---|
| ts | TIMESTAMP | | data collection timestamp |
| total | INT UNSIGNED | | total number of requests |
| query | INT UNSIGNED | | number of query requests |
| write | INT UNSIGNED | | number of write requests |
| other | INT UNSIGNED | | number of other requests |
| in_process | INT UNSIGNED | | number of requests in process |
| success | INT UNSIGNED | | number of successful requests |
| fail | INT UNSIGNED | | number of failed requests |
| query_success | INT UNSIGNED | | number of successful query requests |
| query_fail | INT UNSIGNED | | number of failed query requests |
| write_success | INT UNSIGNED | | number of successful write requests |
| write_fail | INT UNSIGNED | | number of failed write requests |
| other_success | INT UNSIGNED | | number of successful other requests |
| other_fail | INT UNSIGNED | | number of failed other requests |
| query_in_process | INT UNSIGNED | | number of query requests in process |
| write_in_process | INT UNSIGNED | | number of write requests in process |
| endpoint | VARCHAR | TAG | request endpoint |
| req_type | TINYINT UNSIGNED | TAG | request type: 0 for REST, 1 for WebSocket |
The adapter_status table records the status data of taosAdapter:
| field | type | is_tag | comment |
|---|---|---|---|
| _ts | TIMESTAMP | | data collection timestamp |
| go_heap_sys | DOUBLE | | heap memory allocated by Go runtime (bytes) |
| go_heap_inuse | DOUBLE | | heap memory in use by Go runtime (bytes) |
| go_stack_sys | DOUBLE | | stack memory allocated by Go runtime (bytes) |
| go_stack_inuse | DOUBLE | | stack memory in use by Go runtime (bytes) |
| rss | DOUBLE | | actual physical memory occupied by the process (bytes) |
| ws_query_conn | DOUBLE | | current WebSocket connections on the /rest/ws endpoint |
| ws_stmt_conn | DOUBLE | | current WebSocket connections on the /rest/stmt endpoint |
| ws_sml_conn | DOUBLE | | current WebSocket connections on the /rest/schemaless endpoint |
| ws_ws_conn | DOUBLE | | current WebSocket connections on the /ws endpoint |
| ws_tmq_conn | DOUBLE | | current WebSocket connections on the /rest/tmq endpoint |
| async_c_limit | DOUBLE | | total concurrency limit for the C asynchronous interface |
| async_c_inflight | DOUBLE | | current concurrency of the C asynchronous interface |
| sync_c_limit | DOUBLE | | total concurrency limit for the C synchronous interface |
| sync_c_inflight | DOUBLE | | current concurrency of the C synchronous interface |
| ws_query_conn_inc | DOUBLE | | new connections on the /rest/ws endpoint (available since v3.3.6.10) |
| ws_query_conn_dec | DOUBLE | | closed connections on the /rest/ws endpoint (available since v3.3.6.10) |
| ws_stmt_conn_inc | DOUBLE | | new connections on the /rest/stmt endpoint (available since v3.3.6.10) |
| ws_stmt_conn_dec | DOUBLE | | closed connections on the /rest/stmt endpoint (available since v3.3.6.10) |
| ws_sml_conn_inc | DOUBLE | | new connections on the /rest/schemaless endpoint (available since v3.3.6.10) |
| ws_sml_conn_dec | DOUBLE | | closed connections on the /rest/schemaless endpoint (available since v3.3.6.10) |
| ws_ws_conn_inc | DOUBLE | | new connections on the /ws endpoint (available since v3.3.6.10) |
| ws_ws_conn_dec | DOUBLE | | closed connections on the /ws endpoint (available since v3.3.6.10) |
| ws_tmq_conn_inc | DOUBLE | | new connections on the /rest/tmq endpoint (available since v3.3.6.10) |
| ws_tmq_conn_dec | DOUBLE | | closed connections on the /rest/tmq endpoint (available since v3.3.6.10) |
| ws_query_sql_result_count | DOUBLE | | current SQL query results held by the /rest/ws endpoint (available since v3.3.6.10) |
| ws_stmt_stmt_count | DOUBLE | | current stmt objects held by the /rest/stmt endpoint (available since v3.3.6.10) |
| ws_ws_sql_result_count | DOUBLE | | current SQL query results held by the /ws endpoint (available since v3.3.6.10) |
| ws_ws_stmt_count | DOUBLE | | current stmt objects held by the /ws endpoint (available since v3.3.6.10) |
| ws_ws_stmt2_count | DOUBLE | | current stmt2 objects held by the /ws endpoint (available since v3.3.6.10) |
| cpu_percent | DOUBLE | | CPU usage percentage of taosAdapter (available since v3.3.6.24/v3.3.7.7) |
| endpoint | NCHAR | TAG | request endpoint |
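These gauges make it easy to watch for memory growth or connection leaks per taosAdapter instance. A sketch, again assuming the default `log` monitoring database:

```sql
-- Latest resident memory (in MiB) and open WebSocket connections
-- per taosAdapter instance (assumes the default `log` database).
SELECT
  endpoint,
  LAST(rss) / 1048576.0 AS rss_mb,
  LAST(ws_query_conn) + LAST(ws_ws_conn) + LAST(ws_tmq_conn) AS ws_conns
FROM log.adapter_status
GROUP BY endpoint;
```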
The adapter_conn_pool table records the connection pool monitoring data of taosAdapter:
| field | type | is_tag | comment |
|---|---|---|---|
| _ts | TIMESTAMP | | data collection timestamp |
| conn_pool_total | DOUBLE | | maximum connection limit of the connection pool |
| conn_pool_in_use | DOUBLE | | current number of connections in use in the connection pool |
| endpoint | NCHAR | TAG | request endpoint |
| user | NCHAR | TAG | username to which the connection pool belongs |
Starting from version 3.3.6.10, the adapter_c_interface table has been added to record taosAdapter C interface call metrics:
| field | type | is_tag | comment |
|---|---|---|---|
| _ts | TIMESTAMP | | Data collection timestamp |
| taos_connect_total | DOUBLE | | Count of total connection attempts |
| taos_connect_success | DOUBLE | | Count of successful connections |
| taos_connect_fail | DOUBLE | | Count of failed connections |
| taos_close_total | DOUBLE | | Count of total close attempts |
| taos_close_success | DOUBLE | | Count of successful closes |
| taos_schemaless_insert_total | DOUBLE | | Count of schemaless insert operations |
| taos_schemaless_insert_success | DOUBLE | | Count of successful schemaless inserts |
| taos_schemaless_insert_fail | DOUBLE | | Count of failed schemaless inserts |
| taos_schemaless_free_result_total | DOUBLE | | Count of schemaless result set releases |
| taos_schemaless_free_result_success | DOUBLE | | Count of successful schemaless result set releases |
| taos_query_total | DOUBLE | | Count of synchronous SQL executions |
| taos_query_success | DOUBLE | | Count of successful synchronous SQL executions |
| taos_query_fail | DOUBLE | | Count of failed synchronous SQL executions |
| taos_query_free_result_total | DOUBLE | | Count of synchronous SQL result set releases |
| taos_query_free_result_success | DOUBLE | | Count of successful synchronous SQL result set releases |
| taos_query_a_with_reqid_total | DOUBLE | | Count of async SQL with request ID |
| taos_query_a_with_reqid_success | DOUBLE | | Count of successful async SQL with request ID |
| taos_query_a_with_reqid_callback_total | DOUBLE | | Count of async SQL callbacks with request ID |
| taos_query_a_with_reqid_callback_success | DOUBLE | | Count of successful async SQL callbacks with request ID |
| taos_query_a_with_reqid_callback_fail | DOUBLE | | Count of failed async SQL callbacks with request ID |
| taos_query_a_free_result_total | DOUBLE | | Count of async SQL result set releases |
| taos_query_a_free_result_success | DOUBLE | | Count of successful async SQL result set releases |
| tmq_consumer_poll_result_total | DOUBLE | | Count of consumer polls with data |
| tmq_free_result_total | DOUBLE | | Count of TMQ data releases |
| tmq_free_result_success | DOUBLE | | Count of successful TMQ data releases |
| taos_stmt2_init_total | DOUBLE | | Count of stmt2 initializations |
| taos_stmt2_init_success | DOUBLE | | Count of successful stmt2 initializations |
| taos_stmt2_init_fail | DOUBLE | | Count of failed stmt2 initializations |
| taos_stmt2_close_total | DOUBLE | | Count of stmt2 closes |
| taos_stmt2_close_success | DOUBLE | | Count of successful stmt2 closes |
| taos_stmt2_close_fail | DOUBLE | | Count of failed stmt2 closes |
| taos_stmt2_get_fields_total | DOUBLE | | Count of stmt2 field fetches |
| taos_stmt2_get_fields_success | DOUBLE | | Count of successful stmt2 field fetches |
| taos_stmt2_get_fields_fail | DOUBLE | | Count of failed stmt2 field fetches |
| taos_stmt2_free_fields_total | DOUBLE | | Count of stmt2 field releases |
| taos_stmt2_free_fields_success | DOUBLE | | Count of successful stmt2 field releases |
| taos_stmt_init_with_reqid_total | DOUBLE | | Count of stmt initializations with request ID |
| taos_stmt_init_with_reqid_success | DOUBLE | | Count of successful stmt initializations with request ID |
| taos_stmt_init_with_reqid_fail | DOUBLE | | Count of failed stmt initializations with request ID |
| taos_stmt_close_total | DOUBLE | | Count of stmt closes |
| taos_stmt_close_success | DOUBLE | | Count of successful stmt closes |
| taos_stmt_close_fail | DOUBLE | | Count of failed stmt closes |
| taos_stmt_get_tag_fields_total | DOUBLE | | Count of stmt tag field fetches |
| taos_stmt_get_tag_fields_success | DOUBLE | | Count of successful stmt tag field fetches |
| taos_stmt_get_tag_fields_fail | DOUBLE | | Count of failed stmt tag field fetches |
| taos_stmt_get_col_fields_total | DOUBLE | | Count of stmt column field fetches |
| taos_stmt_get_col_fields_success | DOUBLE | | Count of successful stmt column field fetches |
| taos_stmt_get_col_fields_fail | DOUBLE | | Count of failed stmt column field fetches |
| taos_stmt_reclaim_fields_total | DOUBLE | | Count of stmt field releases |
| taos_stmt_reclaim_fields_success | DOUBLE | | Count of successful stmt field releases |
| tmq_get_json_meta_total | DOUBLE | | Count of TMQ JSON metadata fetches |
| tmq_get_json_meta_success | DOUBLE | | Count of successful TMQ JSON metadata fetches |
| tmq_free_json_meta_total | DOUBLE | | Count of TMQ JSON metadata releases |
| tmq_free_json_meta_success | DOUBLE | | Count of successful TMQ JSON metadata releases |
| taos_fetch_whitelist_a_total | DOUBLE | | Count of async whitelist fetches |
| taos_fetch_whitelist_a_success | DOUBLE | | Count of successful async whitelist fetches |
| taos_fetch_whitelist_a_callback_total | DOUBLE | | Count of async whitelist callbacks |
| taos_fetch_whitelist_a_callback_success | DOUBLE | | Count of successful async whitelist callbacks |
| taos_fetch_whitelist_a_callback_fail | DOUBLE | | Count of failed async whitelist callbacks |
| taos_fetch_rows_a_total | DOUBLE | | Count of async row fetches |
| taos_fetch_rows_a_success | DOUBLE | | Count of successful async row fetches |
| taos_fetch_rows_a_callback_total | DOUBLE | | Count of async row callbacks |
| taos_fetch_rows_a_callback_success | DOUBLE | | Count of successful async row callbacks |
| taos_fetch_rows_a_callback_fail | DOUBLE | | Count of failed async row callbacks |
| taos_fetch_raw_block_a_total | DOUBLE | | Count of async raw block fetches |
| taos_fetch_raw_block_a_success | DOUBLE | | Count of successful async raw block fetches |
| taos_fetch_raw_block_a_callback_total | DOUBLE | | Count of async raw block callbacks |
| taos_fetch_raw_block_a_callback_success | DOUBLE | | Count of successful async raw block callbacks |
| taos_fetch_raw_block_a_callback_fail | DOUBLE | | Count of failed async raw block callbacks |
| tmq_get_raw_total | DOUBLE | | Count of raw data fetches |
| tmq_get_raw_success | DOUBLE | | Count of successful raw data fetches |
| tmq_get_raw_fail | DOUBLE | | Count of failed raw data fetches |
| tmq_free_raw_total | DOUBLE | | Count of raw data releases |
| tmq_free_raw_success | DOUBLE | | Count of successful raw data releases |
| tmq_consumer_new_total | DOUBLE | | Count of new consumer creations |
| tmq_consumer_new_success | DOUBLE | | Count of successful new consumer creations |
| tmq_consumer_new_fail | DOUBLE | | Count of failed new consumer creations |
| tmq_consumer_close_total | DOUBLE | | Count of consumer closes |
| tmq_consumer_close_success | DOUBLE | | Count of successful consumer closes |
| tmq_consumer_close_fail | DOUBLE | | Count of failed consumer closes |
| tmq_subscribe_total | DOUBLE | | Count of topic subscriptions |
| tmq_subscribe_success | DOUBLE | | Count of successful topic subscriptions |
| tmq_subscribe_fail | DOUBLE | | Count of failed topic subscriptions |
| tmq_unsubscribe_total | DOUBLE | | Count of unsubscriptions |
| tmq_unsubscribe_success | DOUBLE | | Count of successful unsubscriptions |
| tmq_unsubscribe_fail | DOUBLE | | Count of failed unsubscriptions |
| tmq_list_new_total | DOUBLE | | Count of new topic list creations |
| tmq_list_new_success | DOUBLE | | Count of successful new topic list creations |
| tmq_list_new_fail | DOUBLE | | Count of failed new topic list creations |
| tmq_list_destroy_total | DOUBLE | | Count of topic list destructions |
| tmq_list_destroy_success | DOUBLE | | Count of successful topic list destructions |
| tmq_conf_new_total | DOUBLE | | Count of TMQ new config creations |
| tmq_conf_new_success | DOUBLE | | Count of successful TMQ new config creations |
| tmq_conf_new_fail | DOUBLE | | Count of failed TMQ new config creations |
| tmq_conf_destroy_total | DOUBLE | | Count of TMQ config destructions |
| tmq_conf_destroy_success | DOUBLE | | Count of successful TMQ config destructions |
| taos_stmt2_prepare_total | DOUBLE | | Count of stmt2 prepares |
| taos_stmt2_prepare_success | DOUBLE | | Count of successful stmt2 prepares |
| taos_stmt2_prepare_fail | DOUBLE | | Count of failed stmt2 prepares |
| taos_stmt2_is_insert_total | DOUBLE | | Count of insert checks |
| taos_stmt2_is_insert_success | DOUBLE | | Count of successful insert checks |
| taos_stmt2_is_insert_fail | DOUBLE | | Count of failed insert checks |
| taos_stmt2_bind_param_total | DOUBLE | | Count of stmt2 parameter bindings |
| taos_stmt2_bind_param_success | DOUBLE | | Count of successful stmt2 parameter bindings |
| taos_stmt2_bind_param_fail | DOUBLE | | Count of failed stmt2 parameter bindings |
| taos_stmt2_exec_total | DOUBLE | | Count of stmt2 executions |
| taos_stmt2_exec_success | DOUBLE | | Count of successful stmt2 executions |
| taos_stmt2_exec_fail | DOUBLE | | Count of failed stmt2 executions |
| taos_stmt2_error_total | DOUBLE | | Count of stmt2 error checks |
| taos_stmt2_error_success | DOUBLE | | Count of successful stmt2 error checks |
| taos_fetch_row_total | DOUBLE | | Count of sync row fetches |
| taos_fetch_row_success | DOUBLE | | Count of successful sync row fetches |
| taos_is_update_query_total | DOUBLE | | Count of update statement checks |
| taos_is_update_query_success | DOUBLE | | Count of successful update statement checks |
| taos_affected_rows_total | DOUBLE | | Count of SQL affected rows fetches |
| taos_affected_rows_success | DOUBLE | | Count of successful SQL affected rows fetches |
| taos_num_fields_total | DOUBLE | | Count of field count fetches |
| taos_num_fields_success | DOUBLE | | Count of successful field count fetches |
| taos_fetch_fields_e_total | DOUBLE | | Count of extended field info fetches |
| taos_fetch_fields_e_success | DOUBLE | | Count of successful extended field info fetches |
| taos_fetch_fields_e_fail | DOUBLE | | Count of failed extended field info fetches |
| taos_result_precision_total | DOUBLE | | Count of precision fetches |
| taos_result_precision_success | DOUBLE | | Count of successful precision fetches |
| taos_get_raw_block_total | DOUBLE | | Count of raw block fetches |
| taos_get_raw_block_success | DOUBLE | | Count of successful raw block fetches |
| taos_fetch_raw_block_total | DOUBLE | | Count of raw block pulls |
| taos_fetch_raw_block_success | DOUBLE | | Count of successful raw block pulls |
| taos_fetch_raw_block_fail | DOUBLE | | Count of failed raw block pulls |
| taos_fetch_lengths_total | DOUBLE | | Count of field length fetches |
| taos_fetch_lengths_success | DOUBLE | | Count of successful field length fetches |
| taos_write_raw_block_with_reqid_total | DOUBLE | | Count of request ID raw block writes |
| taos_write_raw_block_with_reqid_success | DOUBLE | | Count of successful request ID raw block writes |
| taos_write_raw_block_with_reqid_fail | DOUBLE | | Count of failed request ID raw block writes |
| taos_write_raw_block_with_fields_with_reqid_total | DOUBLE | | Count of request ID field raw block writes |
| taos_write_raw_block_with_fields_with_reqid_success | DOUBLE | | Count of successful request ID field raw block writes |
| taos_write_raw_block_with_fields_with_reqid_fail | DOUBLE | | Count of failed request ID field raw block writes |
| tmq_write_raw_total | DOUBLE | | Count of TMQ raw data writes |
| tmq_write_raw_success | DOUBLE | | Count of successful TMQ raw data writes |
| tmq_write_raw_fail | DOUBLE | | Count of failed TMQ raw data writes |
| taos_stmt_prepare_total | DOUBLE | | Count of stmt prepares |
| taos_stmt_prepare_success | DOUBLE | | Count of successful stmt prepares |
| taos_stmt_prepare_fail | DOUBLE | | Count of failed stmt prepares |
| taos_stmt_is_insert_total | DOUBLE | | Count of stmt insert checks |
| taos_stmt_is_insert_success | DOUBLE | | Count of successful stmt insert checks |
| taos_stmt_is_insert_fail | DOUBLE | | Count of failed stmt insert checks |
| taos_stmt_set_tbname_total | DOUBLE | | Count of stmt table name sets |
| taos_stmt_set_tbname_success | DOUBLE | | Count of successful stmt table name sets |
| taos_stmt_set_tbname_fail | DOUBLE | | Count of failed stmt table name sets |
| taos_stmt_set_tags_total | DOUBLE | | Count of stmt tag sets |
| taos_stmt_set_tags_success | DOUBLE | | Count of successful stmt tag sets |
| taos_stmt_set_tags_fail | DOUBLE | | Count of failed stmt tag sets |
| taos_stmt_bind_param_batch_total | DOUBLE | | Count of stmt batch parameter bindings |
| taos_stmt_bind_param_batch_success | DOUBLE | | Count of successful stmt batch parameter bindings |
| taos_stmt_bind_param_batch_fail | DOUBLE | | Count of failed stmt batch parameter bindings |
| taos_stmt_add_batch_total | DOUBLE | | Count of stmt batch additions |
| taos_stmt_add_batch_success | DOUBLE | | Count of successful stmt batch additions |
| taos_stmt_add_batch_fail | DOUBLE | | Count of failed stmt batch additions |
| taos_stmt_execute_total | DOUBLE | | Count of stmt executions |
| taos_stmt_execute_success | DOUBLE | | Count of successful stmt executions |
| taos_stmt_execute_fail | DOUBLE | | Count of failed stmt executions |
| taos_stmt_num_params_total | DOUBLE | | Count of stmt parameter count fetches |
| taos_stmt_num_params_success | DOUBLE | | Count of successful stmt parameter count fetches |
| taos_stmt_num_params_fail | DOUBLE | | Count of failed stmt parameter count fetches |
| taos_stmt_get_param_total | DOUBLE | | Count of stmt parameter fetches |
| taos_stmt_get_param_success | DOUBLE | | Count of successful stmt parameter fetches |
| taos_stmt_get_param_fail | DOUBLE | | Count of failed stmt parameter fetches |
| taos_stmt_errstr_total | DOUBLE | | Count of stmt error info fetches |
| taos_stmt_errstr_success | DOUBLE | | Count of successful stmt error info fetches |
| taos_stmt_affected_rows_once_total | DOUBLE | | Count of stmt affected rows fetches |
| taos_stmt_affected_rows_once_success | DOUBLE | | Count of successful stmt affected rows fetches |
| taos_stmt_use_result_total | DOUBLE | | Count of stmt result set uses |
| taos_stmt_use_result_success | DOUBLE | | Count of successful stmt result set uses |
| taos_stmt_use_result_fail | DOUBLE | | Count of failed stmt result set uses |
| taos_select_db_total | DOUBLE | | Count of database selections |
| taos_select_db_success | DOUBLE | | Count of successful database selections |
| taos_select_db_fail | DOUBLE | | Count of failed database selections |
| taos_get_tables_vgId_total | DOUBLE | | Count of table vgroup ID fetches |
| taos_get_tables_vgId_success | DOUBLE | | Count of successful table vgroup ID fetches |
| taos_get_tables_vgId_fail | DOUBLE | | Count of failed table vgroup ID fetches |
| taos_options_connection_total | DOUBLE | | Count of connection option sets |
| taos_options_connection_success | DOUBLE | | Count of successful connection option sets |
| taos_options_connection_fail | DOUBLE | | Count of failed connection option sets |
| taos_validate_sql_total | DOUBLE | | Count of SQL validations |
| taos_validate_sql_success | DOUBLE | | Count of successful SQL validations |
| taos_validate_sql_fail | DOUBLE | | Count of failed SQL validations |
| taos_check_server_status_total | DOUBLE | | Count of server status checks |
| taos_check_server_status_success | DOUBLE | | Count of successful server status checks |
| taos_get_current_db_total | DOUBLE | | Count of current database fetches |
| taos_get_current_db_success | DOUBLE | | Count of successful current database fetches |
| taos_get_current_db_fail | DOUBLE | | Count of failed current database fetches |
| taos_get_server_info_total | DOUBLE | | Count of server info fetches |
| taos_get_server_info_success | DOUBLE | | Count of successful server info fetches |
| taos_options_total | DOUBLE | | Count of option sets |
| taos_options_success | DOUBLE | | Count of successful option sets |
| taos_options_fail | DOUBLE | | Count of failed option sets |
| taos_set_conn_mode_total | DOUBLE | | Count of connection mode sets |
| taos_set_conn_mode_success | DOUBLE | | Count of successful connection mode sets |
| taos_set_conn_mode_fail | DOUBLE | | Count of failed connection mode sets |
| taos_reset_current_db_total | DOUBLE | | Count of current database resets |
| taos_reset_current_db_success | DOUBLE | | Count of successful current database resets |
| taos_set_notify_cb_total | DOUBLE | | Count of notification callback sets |
| taos_set_notify_cb_success | DOUBLE | | Count of successful notification callback sets |
| taos_set_notify_cb_fail | DOUBLE | | Count of failed notification callback sets |
| taos_errno_total | DOUBLE | | Count of error code fetches |
| taos_errno_success | DOUBLE | | Count of successful error code fetches |
| taos_errstr_total | DOUBLE | | Count of error message fetches |
| taos_errstr_success | DOUBLE | | Count of successful error message fetches |
| tmq_consumer_poll_total | DOUBLE | | Count of TMQ consumer polls |
| tmq_consumer_poll_success | DOUBLE | | Count of successful TMQ consumer polls |
| tmq_consumer_poll_fail | DOUBLE | | Count of failed TMQ consumer polls |
| tmq_subscription_total | DOUBLE | | Count of TMQ subscription info fetches |
| tmq_subscription_success | DOUBLE | | Count of successful TMQ subscription info fetches |
| tmq_subscription_fail | DOUBLE | | Count of failed TMQ subscription info fetches |
| tmq_list_append_total | DOUBLE | | Count of TMQ list appends |
| tmq_list_append_success | DOUBLE | | Count of successful TMQ list appends |
| tmq_list_append_fail | DOUBLE | | Count of failed TMQ list appends |
| tmq_list_get_size_total | DOUBLE | | Count of TMQ list size fetches |
| tmq_list_get_size_success | DOUBLE | | Count of successful TMQ list size fetches |
| tmq_err2str_total | DOUBLE | | Count of TMQ error code to string conversions |
| tmq_err2str_success | DOUBLE | | Count of successful TMQ error code to string conversions |
| tmq_conf_set_total | DOUBLE | | Count of TMQ config sets |
| tmq_conf_set_success | DOUBLE | | Count of successful TMQ config sets |
| tmq_conf_set_fail | DOUBLE | | Count of failed TMQ config sets |
| tmq_get_res_type_total | DOUBLE | | Count of TMQ resource type fetches |
| tmq_get_res_type_success | DOUBLE | | Count of successful TMQ resource type fetches |
| tmq_get_topic_name_total | DOUBLE | | Count of TMQ topic name fetches |
| tmq_get_topic_name_success | DOUBLE | | Count of successful TMQ topic name fetches |
| tmq_get_vgroup_id_total | DOUBLE | | Count of TMQ vgroup ID fetches |
| tmq_get_vgroup_id_success | DOUBLE | | Count of successful TMQ vgroup ID fetches |
| tmq_get_vgroup_offset_total | DOUBLE | | Count of TMQ vgroup offset fetches |
| tmq_get_vgroup_offset_success | DOUBLE | | Count of successful TMQ vgroup offset fetches |
| tmq_get_db_name_total | DOUBLE | | Count of TMQ database name fetches |
| tmq_get_db_name_success | DOUBLE | | Count of successful TMQ database name fetches |
| tmq_get_table_name_total | DOUBLE | | Count of TMQ table name fetches |
| tmq_get_table_name_success | DOUBLE | | Count of successful TMQ table name fetches |
| tmq_get_connect_total | DOUBLE | | Count of TMQ connection fetches |
| tmq_get_connect_success | DOUBLE | | Count of successful TMQ connection fetches |
| tmq_commit_sync_total | DOUBLE | | Count of TMQ sync commits |
| tmq_commit_sync_success | DOUBLE | | Count of successful TMQ sync commits |
| tmq_commit_sync_fail | DOUBLE | | Count of failed TMQ sync commits |
| tmq_fetch_raw_block_total | DOUBLE | | Count of TMQ raw block fetches |
| tmq_fetch_raw_block_success | DOUBLE | | Count of successful TMQ raw block fetches |
| tmq_fetch_raw_block_fail | DOUBLE | | Count of failed TMQ raw block fetches |
| tmq_get_topic_assignment_total | DOUBLE | | Count of TMQ topic assignment fetches |
| tmq_get_topic_assignment_success | DOUBLE | | Count of successful TMQ topic assignment fetches |
| tmq_get_topic_assignment_fail | DOUBLE | | Count of failed TMQ topic assignment fetches |
| tmq_offset_seek_total | DOUBLE | | Count of TMQ offset seeks |
| tmq_offset_seek_success | DOUBLE | | Count of successful TMQ offset seeks |
| tmq_offset_seek_fail | DOUBLE | | Count of failed TMQ offset seeks |
| tmq_committed_total | DOUBLE | | Count of TMQ committed offset fetches |
| tmq_committed_success | DOUBLE | | Count of successful TMQ committed offset fetches |
| tmq_commit_offset_sync_fail | DOUBLE | | Count of failed TMQ sync offset commits |
| tmq_position_total | DOUBLE | | Count of TMQ current position fetches |
| tmq_position_success | DOUBLE | | Count of successful TMQ current position fetches |
| tmq_commit_offset_sync_total | DOUBLE | | Count of TMQ sync offset commits |
| tmq_commit_offset_sync_success | DOUBLE | | Count of successful TMQ sync offset commits |
| taos_connect_totp_total | DOUBLE | | Count of total TOTP authentication attempts (Available since v3.4.0.0) |
| taos_connect_totp_success | DOUBLE | | Count of successful TOTP authentication attempts (Available since v3.4.0.0) |
| taos_connect_totp_fail | DOUBLE | | Count of failed TOTP authentication attempts (Available since v3.4.0.0) |
| taos_connect_token_total | DOUBLE | | Count of total token authentication attempts (Available since v3.4.0.0) |
| taos_connect_token_success | DOUBLE | | Count of successful token authentication attempts (Available since v3.4.0.0) |
| taos_connect_token_fail | DOUBLE | | Count of failed token authentication attempts (Available since v3.4.0.0) |
| taos_get_connection_info_total | DOUBLE | | Count of get_connection_info calls (Available since v3.4.0.0) |
| taos_get_connection_info_success | DOUBLE | | Count of successful get_connection_info calls (Available since v3.4.0.0) |
| taos_get_connection_info_fail | DOUBLE | | Count of failed get_connection_info calls (Available since v3.4.0.0) |
| endpoint | NCHAR | TAG | Request endpoint |
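Because each C API counter comes in `_total`/`_success` (and often `_fail`) triples, failure counts for any call can be read off directly. A sketch, assuming the default `log` monitoring database and that the values are reported as per-collection-cycle counts (if your deployment reports cumulative values, use differences instead of `SUM`):

```sql
-- Connection, query, and schemaless-insert failures reported by the
-- C interface over the last day (assumes the default `log` database).
SELECT
  endpoint,
  SUM(taos_connect_fail)           AS connect_failures,
  SUM(taos_query_fail)             AS query_failures,
  SUM(taos_schemaless_insert_fail) AS sml_failures
FROM log.adapter_c_interface
WHERE _ts > NOW - 1d
GROUP BY endpoint;
```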
Starting from version 3.3.6.29/3.3.8.3, the adapter_request_limit table has been added to record taosAdapter query request throttling metrics:
| field | type | is_tag | comment |
|---|---|---|---|
| _ts | TIMESTAMP | | Data collection timestamp |
| query_limit | DOUBLE | | Maximum number of query requests allowed to execute concurrently |
| query_max_wait | DOUBLE | | Maximum number of queries allowed to wait in the queue for execution |
| query_inflight | DOUBLE | | Current number of executing queries subject to the concurrency limit |
| query_wait_count | DOUBLE | | Current number of queries waiting in the queue for execution |
| query_count | DOUBLE | | Total number of rate-limited query requests received in this collection cycle |
| query_wait_fail_count | DOUBLE | | Number of query requests in this collection cycle that failed because they timed out while waiting or the waiting queue was full |
| endpoint | NCHAR | TAG | Request endpoint |
| user | NCHAR | TAG | Authenticated username that initiated the query request |
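When query throttling is enabled, this table shows whether requests are merely being queued or actually rejected. A sketch, assuming the default `log` monitoring database:

```sql
-- Queries rejected by throttling, and peak queue length, per user
-- over the last hour (assumes the default `log` database).
SELECT
  user,
  SUM(query_wait_fail_count) AS rejected,
  MAX(query_wait_count)      AS peak_queue_len
FROM log.adapter_request_limit
WHERE _ts > NOW - 1h
GROUP BY user;
```

A nonzero `rejected` count suggests either raising `query_max_wait`/`query_limit` or spreading the query load.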
Starting from version 3.4.0.0, the adapter_input_json table has been added to record taosAdapter JSON write input metrics:
| field | type | is_tag | comment |
|---|---|---|---|
| _ts | TIMESTAMP | | Data collection timestamp |
| total_rows | DOUBLE | | Total number of rows in JSON writes |
| success_rows | DOUBLE | | Number of successfully written rows |
| fail_rows | DOUBLE | | Number of rows that failed in JSON writes |
| inflight_rows | DOUBLE | | Number of rows currently being written |
| affected_rows | DOUBLE | | Number of rows affected by JSON writes |
| url_endpoint | NCHAR | TAG | URL endpoint of the JSON write request |
| endpoint | NCHAR | TAG | Request taosAdapter endpoint |
In TDengine server 2.2.x.x and earlier, the taosd process included an embedded HTTP service (httpd). As mentioned earlier, taosAdapter is a standalone component managed by systemd and runs in its own process. There are also some differences in configuration parameters and behavior between the two, as shown in the table below:
| # | embedded httpd | taosAdapter | comment |
|---|---|---|---|
| 1 | httpEnableRecordSql | --logLevel=debug | |
| 2 | httpMaxThreads | n/a | taosAdapter manages its thread pool automatically; this parameter is not needed |
| 3 | telegrafUseFieldNum | n/a | please refer to the taosAdapter Telegraf configuration method |
| 4 | restfulRowLimit | restfulRowLimit | The embedded httpd outputs 10240 rows by default, with a maximum allowed value of 102400. taosAdapter also provides restfulRowLimit but imposes no limit by default; configure it according to your actual scenario. |
| 5 | httpDebugFlag | n/a | httpDebugFlag does not apply to taosAdapter |
| 6 | httpDBNameMandatory | n/a | taosAdapter requires the database name to be specified in the URL |
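For example, when porting an application that relied on the embedded httpd's default row cap, an equivalent limit can be set in taosAdapter's configuration file. A sketch of the relevant fragment of `taosadapter.toml` (the key name is taken from the comparison above; verify it against the sample configuration shipped with your installed version):

```toml
# Cap the number of rows returned by a single RESTful query.
# -1 (the taosAdapter default) means no limit; 10240 matches the
# embedded httpd's default behavior.
restfulRowLimit = 10240
```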