> [!Warning]
> Python 2-based UDFs are deprecated as of Kapacitor 1.7.7 and are removed in this release. If you are using Python 2 with your User-Defined Functions (UDFs), upgrade them to be Python 3-compatible before installing this version of Kapacitor. This required change aligns with modern security practices and ensures your custom functions will continue to work after upgrading.
- Upgrade the JWT library to 4.5.2.

> [!Warning]
> **Python 2 UDFs deprecated**
>
> **Python 2-based UDFs are deprecated** as of Kapacitor 1.7.7 and will be removed in Kapacitor 1.8.0.
> In preparation for Kapacitor 1.8.0, update your User-Defined Functions (UDFs) to be Python 3-compatible. This required change aligns with modern security practices and ensures your custom functions will continue to work after upgrading.
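For example, one of the most common Python 2-to-3 changes in a UDF body is the print statement (a minimal illustration only, not a full migration guide; review the Python 3 porting documentation for your UDF code):

```python
# Python 2 (deprecated):
#   print "processing batch"

# Python 3: print is a function.
print("processing batch")

# Integer division also changes in Python 3:
# 5 / 2 == 2 in Python 2 but 2.5 in Python 3; use // for floor division.
print(5 // 2)
```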
- Upgrade `aws-sdk-go` to 1.51.12.
- Upgrade `golang.org/x/net` from 0.17.0 to 0.23.0.
- Upgrade `google.golang.org/protobuf` to 1.33.0.
- Upgrade `github.com/docker/docker` to 24.0.9.
- Upgrade `google.golang.org/grpc` to 1.56.3.
- Upgrade `github.com/docker/docker` to 24.0.7.
- Add the `[auth] meta-internal-shared-secret` configuration parameter.
- `WritePointsPrivileged` support.
- TICKScript lambda updates.
- Update InfluxQL for v1.9.x compatibility.
- Update the Kafka client to fix a bug regarding write latency.
- Add SASL support to Kafka alerts.
- Update Flux injected dependencies so that large data sets can be downloaded without issue.
- Fix the `attributes` field in the Alerta event handler.
- Add `host` and `attribute` options to the BigPanda event handler:
  - `host`: Identifies the main object that caused the alert.
  - `attribute`: Adds additional attribute(s) to the alert payload.
- Add the `auto-attributes` configuration option to the BigPanda node.
- Env var config fixes.
- Topic queue length is now configurable. This allows you to set a `topic-buffer-length` parameter in the Kapacitor config file in the `alert` section. The default is 5000. Minimum length is 1000.
is 1000.address template to email alert. Email addresses no longer need to be hardcoded; can be derived directly from data.missing flux data. This error is generated when issues occur when running a Flux query within a batch TICKscript.template-id property to the GET /kapacitor/v1/tasks request response. Adding this property helps to identify tasks that were created from a template.json (terminated by a new line character) by compacting json in templates. To do this, replace {{ json . }} with {{ jsonCompact . }} in your templates. (This change also compacts Big Panda alert details to avoid Panda service error.)expvar string json encoding to correctly handle special characters in measurement strings, thanks @prashanthjbabu!disable-subscriptions is set to true in the InfluxDB section of the Kapacitor configuration file. If InfluxDB is not available, Kapacitor does not start.jwt dependencies and switch to github.com/golang-jwt/jwt to remediate the CVE-2020-26160 vulnerability.exec alert handler on a shared machine).DeleteGroupMessage with GroupInfo interface.{{% warn %}} Kapacitor 1.6.0 includes a defect that could result in a memory leak and expose sensitive information. If you installed this release, upgrade to Kapacitor v1.6.1. {{% /warn %}}
Kapacitor 1.6 introduces Flux task support. Use Flux tasks to schedule and run Flux tasks against InfluxDB 1.x databases or to offload Flux query load from InfluxDB (1.x, 2.x, and Cloud). For more information, see Use Flux tasks.
User authentication and authorization (previously only supported in Kapacitor Enterprise) is now available in Kapacitor 1.6. Require user authentication for interactions with the Kapacitor API.
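A sketch of enabling authentication in `kapacitor.conf` (the `[auth]` section and option names here follow the kapacitor.conf reference but are assumptions; confirm them against your version):

```toml
[auth]
  # Require authenticated requests to the Kapacitor HTTP API.
  enabled = true
  # How long to cache authentication credentials (assumed option name).
  cache-expiration = "10m"
```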
{{% warn %}}
Kapacitor 1.6+ no longer supports 32-bit operating systems. If you are using a 32-bit operating system, continue using Kapacitor 1.5.x. {{% /warn %}}
- Support user authentication in the `kapacitor` CLI.
- Add the `correlate` option in the Alerta event handler, thanks @nermolaev!
- Add an optional `details` option to the OpsGenie v2 event handler; set this option to `true` to use the Kapacitor alert details as the OpsGenie description text, thanks @JamesClonk!
- Update `SideloadNode` configuration, thanks @jregovic!
- Add a `subscription-path` configuration option to allow Kapacitor to run behind a reverse proxy, thanks @aspring!
- Support TLS configuration between Kapacitor and InfluxDB over the HTTP API. For more information, see the example in Kapacitor to InfluxDB TLS configuration over HTTP API.
- Compress InfluxDB writes with gzip by default. Although this default configuration does not appear in the Kapacitor configuration file, you can add `compression = "none"` to the InfluxDB section of your Kapacitor configuration file.
- Optimize `GroupIDs` to increase performance by reducing allocations.
- Fix alert handling when `eventAction` is `resolve`, thanks @asvinours!
- Ensure points written with `influx` gzip are completely written to InfluxDB.
- Update the ServiceNow handler to camelCase.
- Fix issues in `JoinNode` and `UnionNode`.
- Fix updating the Kapacitor configuration file (`kapacitor.conf`) via HTTP.
- Remove `darwin/386` builds (Go no longer supports them).
- Rename `duration()` to `alertDuration()` to avoid a name collision with the type conversion function of the same name.

{{% warn %}}
If you've installed this release, please roll back to v1.5.7 as soon as possible. This release introduced a defect wherein large batch tasks will not completely write all points back to InfluxDB. This primarily affects downsampling tasks where information is written to another retention policy. If the source retention policy is short, there is the potential for the source data to age out and the downsample to never have been fully written.
{{% /warn %}}
- Add the `.recoveryaction()` method to support overriding the OpsGenieV2 alert recovery action in a TICKscript, thanks @zabullet!
- Support templates in the `httpPost` node and `alert` node. To set up a template for the `alert` node, see alert templates; for the `httpPost` node, see row templates.
- Update `github.com/gorhill/cronexpr`, thanks @wuguanyu!
- Add `.Details` to the alert template.
- Update the `scraper_test` package to fix discovery service lost configuration (`discovery.Config`), thanks @flisky!
- Use systemd for Amazon Linux 2.
- Fix the `go vet` invocation in the `.hooks/pre-commit` file that caused the hook to fail, thanks @mattnotmitt!
- Update `build.py` to support arm64, thanks @povlhp!
- Fix the `pushover().userKey('')` TICKScript operation.
- Fix `go vet` issues.

{{% warn %}}
If using Kapacitor v1.5.3 or newer and InfluxDB with authentication enabled,
set the [http].shared-secret option in your kapacitor.conf to the shared secret of your InfluxDB instances.
```toml
# ...
[http]
  # ...
  shared-secret = "youramazingsharedsecret"
```
If this option is not set, is set to an empty string, or does not match InfluxDB's shared secret, the integration with InfluxDB will fail and Kapacitor will not start. Kapacitor will output an error similar to the following:
```
kapacitord[4313]: run: open server: open service *influxdb.Service: failed to link subscription on startup: signature is invalid
```
{{% /warn %}}
- `pagerduty2` should use `routingKey` rather than `serviceKey`.
- If your `opsgenie` configuration uses the `recovery_url` option, for `opsgenie2` you will need to change it to the `recovery_action` option. This is because the new v2 API is not structured with static URLs, so only the action can be defined, not the entire URL. A configuration sketch follows this list.
- Add `.quiet` to all nodes to silence any errors reported by the node.
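As referenced above, a sketch of an `opsgenie2` configuration using the new option (the `recovery_action` name comes from this release note; the other option names and values are assumptions for illustration):

```toml
[opsgenie2]
  enabled = true
  # Assumed option name for the OpsGenie v2 API key.
  api-key = "my-api-key"
  # Replaces the opsgenie recovery_url option: specify the recovery
  # action to take rather than a static URL.
  recovery_action = "notes"
```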
Kapacitor v1.4.0 adds many new features, highlighted here:

- Load tasks, templates, and topic handlers from a directory, `dir`.
- The Combine and Flatten nodes previously operated (erroneously) across batch boundaries; this has been fixed.
- `dbrp` expressions were added to TICKscript.
- Add `BarrierNode` to emit `BarrierMessage` periodically and clear previous state.
- Add `alert.post` and `https_post` timeouts to ensure cleanup of hung connections.
- `QueryNode` updates.
- Add `bools` field types to UDFs.
- Add a `now()` function to get the current local time.
- Add logfmt support and refactor logging.
- Add `{{ .Duration }}` on the Alert Message property.
- Add `Sideload`, which allows loading data from files into the stream of data. Data can be loaded using a hierarchy.
- Change `WARN` level logs to `INFO` level.
- MQTT fixes.
- Fix `toml` configuration generation.
- Support `.yml` file extensions in `define-topic-handler`.
- Do not run Kapacitor as `root`.

This release has two major features.
Here is a quick example of how to configure Kapacitor to scrape discovered targets. First, configure a discoverer; here, we use the file-discovery discoverer. Next, configure a scraper to use that discoverer.
```toml
# Configure file discoverer
[[file-discovery]]
  enabled = true
  id = "discover_files"
  refresh-interval = "10s"
  ##### This will look for prometheus json files
  ##### File format is here https://prometheus.io/docs/operating/configuration/#%3Cfile_sd_config%3E
  files = ["/tmp/prom/*.json"]

# Configure scraper
[[scraper]]
  enabled = true
  name = "node_exporter"
  discoverer-id = "discover_files"
  discoverer-service = "file-discovery"
  db = "prometheus"
  rp = "autogen"
  type = "prometheus"
  scheme = "http"
  metrics-path = "/metrics"
  scrape-interval = "2s"
  scrape-timeout = "10s"
```
Add the above snippet to your kapacitor.conf file.
Create the below snippet as the file /tmp/prom/localhost.json:
```json
[{
  "targets": ["localhost:9100"]
}]
```
Start the Prometheus node_exporter locally.
Now, start up Kapacitor; it will discover the localhost:9100 node_exporter target and begin scraping it for metrics.
For more details on the scraping and discovery systems, see the full documentation here.
The second major feature in this release is a set of changes to the alert topic system. The previous release introduced this system as a technical preview; with this release, the alerting service has been simplified. Alert handlers now have only a single action and belong to a single topic.
The handler definition has been simplified as a result. Here are some example alert handlers using the new structure:
```yaml
id: my_handler
kind: pagerDuty
options:
  serviceKey: XXX
```

```yaml
id: aggregate_by_1m
kind: aggregate
options:
  interval: 1m
  topic: aggregated
```

```yaml
id: publish_to_system
kind: publish
options:
  topics: [ system ]
```
To define a handler, you must now specify which topic the handler belongs to. For example, to define the above aggregate handler on the system topic, use this command:
```bash
kapacitor define-handler system aggregate_by_1m.yaml
```
For more details on the alerting system, see the full documentation here.
The Alerta, Log, OpsGenie, PagerDuty, Post, and VictorOps alert handlers allow extra opaque data to be attached to alert notifications.
That opaque data was inconsistent, and this change fixes that. Depending on how that data was consumed, this could result in a breaking change; since the original behavior was inconsistent, we decided it would be best to fix the issue now and make it consistent for all future builds. Specifically, in the JSON result data, the old key `Series` is now always `series`, and the old key `Err` is now always `error`, instead of only for some of the outputs.
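For example, result data that previously used `Series` or `Err` in some outputs now always uses the lowercase keys (an illustrative, abridged payload; the values shown are hypothetical):

```json
{
  "series": [
    {
      "name": "cpu",
      "columns": ["time", "value"],
      "values": [["2017-04-13T00:00:00Z", 91.06]]
    }
  ],
  "error": ""
}
```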
The change is completely breaking for the technical preview alerting service, a.k.a. the new alert topic
handler features. The change boils down to simplifying how you define and interact with topics.
Alert handlers now only ever have a single action and belong to a single topic.
An automatic migration from old to new handler definitions will be performed during startup.
See the updated API docs.
Renamed `query_errors` to `errors` in batch node.
Renamed `eval_errors` to `errors` in eval node.
These changes make the agent package self-contained.
The behavior of the node changes slightly in order to provide a consistent fix for the bug.
The breaking change is that the times of the returned points are now taken from the right-hand (current) point instead of the left-hand (previous) point. For example, when points at 10s and 20s are combined, the emitted point is now timestamped 20s rather than 10s.
- Add an `isPresent` operator for verifying whether a value is present (part of #1284).
- Fix `groupBy` exclude and add `dropOriginalFieldName` to `flatten`.
- Add a `working_cardinality` stat to each node type that tracks the number of groups per node.
- Fix the `parseMode` value.

A new system for working with alerts has been introduced. This alerting system allows you to configure topics for alert events and then configure handlers for various topics. This way, alert generation is decoupled from alert handling.
Existing TICKscripts will continue to work without modification.
To use this new alerting system, remove any explicit alert handlers from your TICKscript and specify a topic. Then configure the handlers for the topic.
```js
stream
    |from()
        .measurement('cpu')
        .groupBy('host')
    |alert()
        // Specify the topic for the alert
        .topic('cpu')
        .info(lambda: "value" > 60)
        .warn(lambda: "value" > 70)
        .crit(lambda: "value" > 80)
        // No handlers are configured in the script; they are instead defined on the topic via the API.
```
The API exposes endpoints to query the state of each alert and endpoints for configuring alert handlers. See the API docs for more details. The kapacitor CLI has been updated with commands for defining alert handlers.
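For example, you can inspect topic states over the API (a sketch; the `/kapacitor/v1preview` base path is the technical-preview prefix described in the API docs, and the port assumes the default 9092):

```bash
# List alerting topics and their current states.
curl -s http://localhost:9092/kapacitor/v1preview/alerts/topics
```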
This release introduces a new feature that lets you window based on the number of points instead of their time. For example:
```js
stream
    |from()
        .measurement('my-measurement')
    // Emit window for every 10 points with 100 points per window.
    |window()
        .periodCount(100)
        .everyCount(10)
    |mean('value')
    |alert()
        .crit(lambda: "mean" > 100)
        .slack()
        .channel('#alerts')
```
With this change, alert nodes have an anonymous topic created for them. This topic is managed like all other topics, preserving state, etc., across restarts. As a result, existing alert nodes will now remember the state of alerts after restarts and after disabling/enabling a task.
NOTE: The new alerting features are being released under technical preview. This means breaking changes may be made in later releases until the feature is considered complete. See the API docs on technical preview for specifics of how this affects the API.
No changes to Kapacitor; this release only upgrades to Go 1.7.4 for security patches.
New K8sAutoscale node that allows you to automatically scale Kubernetes deployments driven by any metrics Kapacitor consumes.
For example, to scale a deployment `myapp` based on requests per second:
```js
// The target requests per second per host
var target = 100.0

stream
    |from()
        .measurement('requests')
        .where(lambda: "deployment" == 'myapp')
    // Compute the moving average of the last 5 minutes
    |movingAverage('requests', 5*60)
        .as('mean_requests_per_second')
    |k8sAutoscale()
        .resourceName('app')
        .kind('deployments')
        .min(4)
        .max(100)
        // Compute the desired number of replicas based on target.
        .replicas(lambda: int(ceil("mean_requests_per_second" / target)))
```
New API endpoints have been added to be able to configure InfluxDB clusters and alert handlers dynamically without needing to restart the Kapacitor daemon. Along with the ability to dynamically configure a service, API endpoints have been added to test the configurable services. See the API docs for more details.
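A sketch of what this looks like in practice (the `/kapacitor/v1/config` and `/kapacitor/v1/service-tests` endpoints are documented in the API reference; the SMTP payload shapes shown here are assumptions):

```bash
# Override part of the [smtp] configuration at runtime.
curl -s -X POST http://localhost:9092/kapacitor/v1/config/smtp/ \
  -d '{"set": {"enabled": true}}'

# Exercise the configured service with a test action.
curl -s -X POST http://localhost:9092/kapacitor/v1/service-tests/smtp \
  -d '{"to": ["ops@example.com"]}'
```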
NOTE: The `connect_errors` stat from the query node was removed since the client changed; all errors are now counted in the `query_errors` stat.
- Add a `.create` property to the `InfluxDBOut` node, which, when set, will create the database and retention policy on task start.

First release of Kapacitor v1.0.0.