content/kapacitor/v1/troubleshooting/frequently-asked-questions.md
This page addresses frequent sources of confusion and important things to know about Kapacitor. Where applicable, it links to outstanding issues on GitHub.
- Administration
- TICKscript
- Performance
Kapacitor remembers the last level of an alert, but other stateful data, such as data buffered in a window, is lost.
There are a few ways to determine whether Kapacitor is receiving data from InfluxDB.
The `kapacitor stats ingress` command outputs the InfluxDB measurements stored in the Kapacitor database, along with the number of data points that pass through the Kapacitor server:
```sh
$ kapacitor stats ingress
Database   Retention Policy   Measurement     Points Received
_internal  monitor            cq                         5274
_internal  monitor            database                  52740
_internal  monitor            httpd                      5274
_internal  monitor            queryExecutor              5274
_internal  monitor            runtime                    5274
_internal  monitor            shard                    300976
# ...
```
You can also use Kapacitor's `/debug/vars` API endpoint to view and monitor ingest rates.
Using this endpoint with Telegraf's Kapacitor input plugin, you can create visualizations of Kapacitor ingest rates.
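For example, assuming Kapacitor's default HTTP bind address, you can query the endpoint directly with `curl`:

```sh
# Query Kapacitor's debug/vars endpoint on the default port (9092)
curl -s http://localhost:9092/kapacitor/v1/debug/vars
```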
Below are example queries that use Kapacitor data written to InfluxDB by Telegraf's Kapacitor input plugin:

**Kapacitor ingest rate (points/sec)**

```sql
SELECT sum(points_received_rate) FROM (SELECT non_negative_derivative(first("points_received"),1s) as points_received_rate FROM "_kapacitor"."autogen"."ingress" WHERE time > :dashboardTime: GROUP BY "database", "retention_policy", "measurement", time(1m)) WHERE time > :dashboardTime: GROUP BY time(1m)
```

**Kapacitor ingest by task (points/sec)**

```sql
SELECT non_negative_derivative("collected",1s) FROM "_kapacitor"."autogen"."edges" WHERE time > now() - 15m AND ("parent"='stream' OR "parent"='batch') GROUP BY task
```
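These queries assume Telegraf's Kapacitor input plugin is collecting Kapacitor's `/debug/vars` stats and writing them to InfluxDB. A minimal plugin configuration sketch (default local Kapacitor URL assumed):

```toml
# Telegraf configuration: poll Kapacitor's debug/vars endpoint
[[inputs.kapacitor]]
  ## List of Kapacitor instances to poll
  urls = ["http://localhost:9092/kapacitor/v1/debug/vars"]
```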
Make sure port 9092 is open to inbound connections.
Stream data is pushed to Kapacitor on port 9092, so the port must be allowed through the firewall.
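For example, on a Linux host using `ufw` (a sketch; substitute your own firewall tooling and hostnames):

```sh
# Allow inbound TCP connections on Kapacitor's default port
sudo ufw allow 9092/tcp

# From the InfluxDB host, verify the port is reachable
# (kapacitor.example.com is a hypothetical hostname)
nc -vz kapacitor.example.com 9092
```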
There is no software limit; the practical limit is determined by available server resources.
If data is ingested at irregular intervals and you see unexpected results that share the same timestamp, use the `log` node in your TICKscript to debug issues as data is ingested. This surfaces problems such as duplicate data points otherwise hidden by `httpOut`.
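For example, a minimal sketch that logs each point as it arrives (the measurement name is hypothetical):

```js
stream
    |from()
        .measurement('cpu')
    // Log every point before it reaches downstream nodes so duplicate
    // or irregular timestamps are visible in the Kapacitor log.
    |log()
    |window()
        .period(1m)
        .every(1m)
    |httpOut('debug')
```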
Taken to the extreme, the best case is a single task that consumes all the data and does all the work, since managing multiple tasks adds overhead. However, significant effort has gone into reducing the overhead of each task. Use tasks in a way that makes logical sense for your project and organization. If you run into performance issues with multiple tasks, let us know. As a last resort, merge tasks into fewer, more complex tasks.
Templates are an ease-of-use tool and make no difference with regard to performance.
If Kapacitor is unable to ingest and process incoming data before it receives new data, Kapacitor queues incoming data in memory and processes it when able. Memory requirements of queued data depend on the ingest rate and shape of the incoming data. Once Kapacitor is able to process all queued data, it slowly releases memory as the internal garbage collector reclaims memory.
Extended periods of high data ingestion can overwhelm available system resources,
forcing the operating system to stop the `kapacitord` process.
The primary means of avoiding this issue are:

- Ensure your hardware provides enough system resources to handle anticipated spikes in data ingestion.
- Optimize your Kapacitor tasks to process data more efficiently.
{{% note %}} As Kapacitor processes queued data, it may consume other system resources, such as CPU, disk, and network I/O, which can affect the overall performance of your Kapacitor server. {{% /note %}}
As you optimize Kapacitor tasks, consider the following:
- `batch` queries data from InfluxDB in batches.
  As long as Kapacitor can process each batch before the next batch is queried,
  it doesn't need to queue anything.
- `stream` mirrors all InfluxDB writes to Kapacitor in real time and is more
  prone to queueing. If using `stream`, segment incoming data into time-based
  batches using `window` (see the sketch below).
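A minimal TICKscript sketch of the `window` pattern (the measurement, field, and output names are hypothetical):

```js
stream
    |from()
        .measurement('cpu')
    // Buffer points into 10-second windows and emit each window once,
    // so downstream nodes work on batches instead of individual points.
    |window()
        .period(10s)
        .every(10s)
    |mean('usage_idle')
        .as('mean_usage_idle')
    |influxDBOut()
        .database('telegraf_downsampled')
        .measurement('cpu_mean')
```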