Use the Processing Engine in {{% product-name %}} to extend your database with custom Python code. Trigger your code on write, on a schedule, or on demand to automate workflows, transform data, and create API endpoints.
The Processing Engine is an embedded Python virtual machine that runs inside your {{% product-name %}} database. You configure triggers to run your Python plugin code in response to database writes, scheduled events, or HTTP requests.
You can use the Processing Engine's in-memory cache to manage state between executions and build stateful applications directly in your database.
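For example, a scheduled plugin could keep a run counter in the cache. The `cache.get`/`cache.put` calls below reflect the shared API's general shape, but treat the exact method signatures as an assumption and check the Processing Engine API reference; the `FakeCache`/`FakeLocal` classes are stand-ins so the sketch runs anywhere:

```python
# Minimal stand-in for the Processing Engine cache (illustration only).
class FakeCache:
    def __init__(self):
        self._store = {}

    def get(self, key, default=None):
        return self._store.get(key, default)

    def put(self, key, value):
        self._store[key] = value


def process_scheduled_call(influxdb3_local, call_time, args=None):
    # Read the previous run count from the cache, then update it,
    # so state survives across plugin executions.
    runs = influxdb3_local.cache.get("run_count", default=0) + 1
    influxdb3_local.cache.put("run_count", runs)
    return runs


class FakeLocal:
    cache = FakeCache()


local = FakeLocal()
print(process_scheduled_call(local, None))  # 1
print(process_scheduled_call(local, None))  # 2
```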
This guide walks you through setting up the Processing Engine, creating your first plugin, and configuring triggers that execute your code on specific events.
Once you have the prerequisites in place, follow these steps to implement the Processing Engine for your data automation needs.
The Processing Engine activates when --plugin-dir or INFLUXDB3_PLUGIN_DIR is configured.
{{% show-in "enterprise" %}}
In a cluster, this automatically adds process mode to the node.
{{% /show-in %}}
| Deployment | Default state | Configuration |
|---|---|---|
| Docker images | Enabled | INFLUXDB3_PLUGIN_DIR=/plugins |
| DEB/RPM packages | Enabled | plugin-dir="/var/lib/influxdb3/plugins" |
| Binary/source | Disabled | No plugin-dir configured |
If you installed {{% product-name %}} using Docker or a DEB/RPM package, the Processing Engine is already enabled—skip to Add a Processing Engine plugin. To disable the Processing Engine, see Enable and disable the Processing Engine.
To activate the Processing Engine when running from a binary or source build, start your {{% product-name %}} server with the --plugin-dir flag. This flag tells InfluxDB where to load your plugin files.
[!Important]
Keep the influxdb3 binary with its python directory

The `influxdb3` binary requires the adjacent `python/` directory to function. If you manually extract from `tar.gz`, keep them in the same parent directory:

```
your-install-location/
├── influxdb3
└── python/
```

Add the parent directory to your `PATH`; do not move the binary out of this directory.
influxdb3 serve \
--node-id NODE_ID \
--object-store OBJECT_STORE_TYPE \
--plugin-dir PLUGIN_DIR
In the example above, replace the following:
- {{% code-placeholder-key %}}NODE_ID{{% /code-placeholder-key %}}: Unique identifier for your instance
- {{% code-placeholder-key %}}OBJECT_STORE_TYPE{{% /code-placeholder-key %}}: Type of object store (for example, `file` or `s3`)
- {{% code-placeholder-key %}}PLUGIN_DIR{{% /code-placeholder-key %}}: Absolute path to the directory where plugin files are stored. Store all plugin files in this directory or its subdirectories.

[!Note]
Use custom plugin repositories

By default, plugins referenced with the `gh:` prefix are fetched from the official influxdata/influxdb3_plugins repository. To use a custom repository, add the `--plugin-repo` flag when starting the server. See Use a custom plugin repository for details.
When running {{% product-name %}} in a distributed setup, follow these steps to configure the Processing Engine:
[!Note]
Provide plugins to nodes that run them
Configure your plugin directory on the same system as the nodes that run the triggers and plugins.
{{% show-in "enterprise" %}} For more information about configuring distributed environments, see the Distributed cluster considerations section. {{% /show-in %}}
A plugin is a Python script that defines a function with a signature compatible with a trigger specification (trigger spec). When the specified event occurs, InfluxDB runs the plugin.
You have two main options for adding plugins to your InfluxDB instance:
InfluxData maintains a repository of official and community plugins that you can use immediately in your Processing Engine setup.
Browse the plugin library to find examples and official InfluxData plugins.
For community contributions, see the influxdb3_plugins repository on GitHub.
You have two options for using plugins from the repository:
Clone the influxdata/influxdb3_plugins repository and copy plugins to your configured plugin directory:
# Clone the repository
git clone https://github.com/influxdata/influxdb3_plugins.git
# Copy a plugin to your configured plugin directory
cp influxdb3_plugins/influxdata/system_metrics/system_metrics.py /path/to/plugins/
Skip downloading plugins by referencing them directly from GitHub using the gh: prefix:
# Create a trigger using a plugin from GitHub
influxdb3 create trigger \
--trigger-spec "every:1m" \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--database my_database \
system_metrics
This approach lets you use plugins without downloading or copying files to the server.
For organizations that maintain their own plugin repositories or need to use private/internal plugins, configure a custom plugin repository URL:
# Start the server with a custom plugin repository
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins \
--plugin-repo "https://internal.company.com/influxdb-plugins/"
Then reference plugins from your custom repository using the gh: prefix:
# Fetches from: https://internal.company.com/influxdb-plugins/myorg/custom_plugin.py
influxdb3 create trigger \
--trigger-spec "every:5m" \
--path "gh:myorg/custom_plugin.py" \
--database my_database \
custom_trigger
Custom repositories are useful for hosting private or internal plugins and for mirroring plugins in restricted environments.
The --plugin-repo option accepts any HTTP/HTTPS URL that serves raw plugin files.
See the plugin-repo configuration option for more details.
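Conceptually, the server maps a `gh:`-prefixed path onto the repository base URL, as the fetch comment in the example above shows. The sketch below illustrates that mapping; the default GitHub base URL shown and the exact resolution logic are assumptions, not the server's implementation:

```python
# Assumed default base URL for the official plugin repository (illustrative).
DEFAULT_REPO = "https://raw.githubusercontent.com/influxdata/influxdb3_plugins/main/"


def resolve_plugin_url(path: str, plugin_repo: str = DEFAULT_REPO) -> str:
    # Strip the gh: prefix and join the remainder with the repository base URL.
    if path.startswith("gh:"):
        return plugin_repo.rstrip("/") + "/" + path[len("gh:"):]
    return path


print(resolve_plugin_url("gh:myorg/custom_plugin.py",
                         "https://internal.company.com/influxdb-plugins/"))
# https://internal.company.com/influxdb-plugins/myorg/custom_plugin.py
```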
Plugins have access to:

- Arguments (`args`) passed from trigger arguments configurations
- The `influxdb3_local` shared API to write data, query data, and manage state between executions

For more information about available functions, arguments, and how plugins interact with InfluxDB, see how to Extend plugins.
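Because plugins are plain Python functions, you can exercise them outside the server with a stand-in for the shared API. The `FakeLocal` harness below is hypothetical and mimics only the methods this sketch uses (`info` and `write`); it is not part of InfluxDB:

```python
# Hypothetical test harness mimicking the influxdb3_local shared API surface.
class FakeLocal:
    def __init__(self):
        self.logs = []
        self.writes = []

    def info(self, message):
        self.logs.append(message)

    def write(self, line):
        self.writes.append(line)


def process_writes(influxdb3_local, table_batches, args=None):
    # Log one summary line per table batch.
    for table_batch in table_batches:
        influxdb3_local.info(
            f"{len(table_batch['rows'])} rows from {table_batch['table_name']}"
        )


fake = FakeLocal()
process_writes(fake, [{"table_name": "cpu", "rows": [{"usage": 0.5}, {"usage": 0.7}]}])
print(fake.logs)  # ['2 rows from cpu']
```

A harness like this makes plugin logic easy to iterate on before deploying it to a server.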
To build custom functionality, you can create your own Processing Engine plugin.
Before you begin, make sure:
- The Processing Engine is enabled (your server runs with `--plugin-dir` where plugin files are stored).

Choose a plugin type based on your automation goals:
| Plugin Type | Best For |
|---|---|
| Data write | Processing data as it arrives |
| Scheduled | Running code at specific intervals or times |
| HTTP request | Running code on demand via API endpoints |
Plugins now support both single-file and multifile architectures:
Single-file plugins:

- A single `.py` file in your plugins directory

Multifile plugins:

- A directory that contains an `__init__.py` file as the entry point (required)
- Supporting modules in additional `.py` files

```
my_plugin/
├── __init__.py     # Required - entry point with trigger function
├── utils.py        # Supporting module
├── processors.py   # Data processing functions
└── config.py       # Configuration helpers
```
The __init__.py file must contain your trigger function:
# my_plugin/__init__.py
from .processors import process_data
from .config import get_settings
def process_writes(influxdb3_local, table_batches, args=None):
settings = get_settings()
for table_batch in table_batches:
process_data(influxdb3_local, table_batch, settings)
Supporting modules can contain helper functions:
# my_plugin/processors.py
def process_data(influxdb3_local, table_batch, settings):
# Processing logic here
pass
After writing your plugin, create a trigger to connect it to a database event and define when it runs.
Use a data write plugin to process data as it's written to the database. These plugins use `table:<TABLE_NAME>` or `all_tables` trigger specifications. Ideal use cases include transforming incoming data and writing derived values back to the database.
def process_writes(influxdb3_local, table_batches, args=None):
# Process data as it's written to the database
for table_batch in table_batches:
table_name = table_batch["table_name"]
rows = table_batch["rows"]
# Log information about the write
influxdb3_local.info(f"Processing {len(rows)} rows from {table_name}")
# Write derived data back to the database
line = LineBuilder("processed_data")
line.tag("source_table", table_name)
line.int64_field("row_count", len(rows))
influxdb3_local.write(line)
Scheduled plugins run at defined intervals using `every:<DURATION>` or `cron:<EXPRESSION>` trigger specifications. Use them for periodic tasks such as querying recent data and generating reports.
def process_scheduled_call(influxdb3_local, call_time, args=None):
# Run code on a schedule
# Query recent data
results = influxdb3_local.query("SELECT * FROM metrics WHERE time > now() - INTERVAL '1 hour'")
# Process the results
if results:
influxdb3_local.info(f"Found {len(results)} recent metrics")
else:
influxdb3_local.warn("No recent metrics found")
HTTP request plugins respond to API calls using `request:<REQUEST_PATH>` trigger specifications. Use them for webhooks and custom API endpoints.
def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None):
# Handle HTTP requests to a custom endpoint
# Log the request parameters
influxdb3_local.info(f"Received request with parameters: {query_parameters}")
# Process the request body
if request_body:
import json
data = json.loads(request_body)
influxdb3_local.info(f"Request data: {data}")
# Return a response (automatically converted to JSON)
return {"status": "success", "message": "Request processed"}
After writing your plugin, add it to your plugin directory (or upload it) and create a trigger to run it.
For local development and testing, you can upload plugin files directly from your machine when creating triggers. This eliminates the need to manually copy files to the server's plugin directory.
Use the --upload flag with --path to transfer local files or directories:
# Upload single-file plugin
influxdb3 create trigger \
--trigger-spec "every:10s" \
--path "/local/path/to/plugin.py" \
--upload \
--database metrics \
my_trigger
# Upload multifile plugin directory
influxdb3 create trigger \
--trigger-spec "every:30s" \
--path "/local/path/to/plugin-dir" \
--upload \
--database metrics \
complex_trigger
For more information, see the influxdb3 create trigger CLI reference.
To upload a plugin file using the HTTP API, send a PUT request to the /api/v3/plugins/files endpoint:
{{% api-endpoint method="PUT" endpoint="{{< influxdb/host >}}/api/v3/plugins/files" api-ref="/influxdb3/version/api/v3/#operation/PutPluginFile" %}}
Include the following in your request:
- Headers:
  - `Authorization: Bearer` with your admin token
  - `Content-Type: application/octet-stream`
- Query parameters:
  - `path` (string, required): Path to the plugin file relative to the plugin directory

# Upload a single-file plugin
curl -X PUT "{{< influxdb/host >}}/api/v3/plugins/files?path=plugin.py" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@/local/path/to/plugin.py"
Replace the following:

- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "admin" "admin" %}}
[!Important]
Admin privileges required
Plugin uploads require an admin token. This security measure prevents unauthorized code execution on the server.
Plugin upload is most useful for local development and rapid iteration against a running server.
Modify plugin code for running triggers without recreating them. This allows you to iterate on plugin development while preserving trigger configuration and history.
Use the influxdb3 update trigger command:
# Update single-file plugin
influxdb3 update trigger \
--database metrics \
--trigger-name my_trigger \
--path "/path/to/updated/plugin.py"
# Update multifile plugin
influxdb3 update trigger \
--database metrics \
--trigger-name complex_trigger \
--path "/path/to/updated/plugin-dir"
For complete reference, see influxdb3 update trigger.
To update a plugin file using the HTTP API, send a PUT request to the /api/v3/plugins/files endpoint:
{{% api-endpoint method="PUT" endpoint="{{< influxdb/host >}}/api/v3/plugins/files" api-ref="/influxdb3/version/api/v3/#operation/PutPluginFile" %}}
Include the following in your request:
- Headers:
  - `Authorization: Bearer` with your admin token
  - `Content-Type: application/octet-stream`
- Query parameters:
  - `path` (string, required): Path to the plugin file relative to the plugin directory

# Update a plugin file
curl -X PUT "{{< influxdb/host >}}/api/v3/plugins/files?path=plugin.py" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@/path/to/updated/plugin.py"
Replace the following:

- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "admin" "admin" %}}
The update operation replaces the plugin file on the server while preserving the trigger's configuration and execution history.
Monitor which plugins are loaded in your system for operational visibility and troubleshooting.
Option 1: Use the CLI command
# List all plugins
influxdb3 show plugins --token $ADMIN_TOKEN
# JSON format for programmatic access
influxdb3 show plugins --format json --token $ADMIN_TOKEN
Option 2: Query the system table
The system.plugin_files table in the _internal database provides detailed plugin file information:
influxdb3 query \
-d _internal \
"SELECT * FROM system.plugin_files ORDER BY plugin_name" \
--token $ADMIN_TOKEN
Available columns:
- `plugin_name` (String): Trigger name
- `file_name` (String): Plugin file name
- `file_path` (String): Full server path
- `size_bytes` (Int64): File size
- `last_modified` (Int64): Modification timestamp (milliseconds)

Example queries:
-- Find plugins by name
SELECT * FROM system.plugin_files WHERE plugin_name = 'my_trigger';
-- Find large plugins
SELECT plugin_name, size_bytes
FROM system.plugin_files
WHERE size_bytes > 10000;
-- Check modification times
SELECT plugin_name, file_name, last_modified
FROM system.plugin_files
ORDER BY last_modified DESC;
For more information, see the influxdb3 show plugins reference and Query system data.
A trigger connects your plugin code to database events. When the specified event occurs, the Processing Engine executes your plugin.
| Plugin Type | Trigger Specification | When Plugin Runs |
|---|---|---|
| Data write | table:<TABLE_NAME> or all_tables | When data is written to tables |
| Scheduled | every:<DURATION> or cron:<EXPRESSION> | At specified time intervals |
| HTTP request | request:<REQUEST_PATH> | When HTTP requests are received |
Use the influxdb3 create trigger command with the appropriate trigger specification:
influxdb3 create trigger \
--trigger-spec SPECIFICATION \
--path PLUGIN_FILE \
--database DATABASE_NAME \
TRIGGER_NAME
In the example above, replace the following:
- {{% code-placeholder-key %}}SPECIFICATION{{% /code-placeholder-key %}}: Trigger specification
- {{% code-placeholder-key %}}PLUGIN_FILE{{% /code-placeholder-key %}}: Plugin filename relative to your configured plugin directory
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: Name of the database
- {{% code-placeholder-key %}}TRIGGER_NAME{{% /code-placeholder-key %}}: Name of the new trigger

[!Note]
Plugin paths

- For single-file plugins, provide just the `.py` filename to `--path` (for example, `test_plugin.py`).
- For multi-file plugins, provide the directory name containing `__init__.py`.

When not using `--upload`, the server resolves paths relative to the configured `--plugin-dir`. For details about multi-file plugin structure, see Create your plugin file.
For complete reference, see influxdb3 create trigger.
To create a trigger using the HTTP API, send a POST request to the /api/v3/configure/processing_engine_trigger endpoint:
{{% api-endpoint method="POST" endpoint="{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" api-ref="/influxdb3/version/api/v3/#operation/PostConfigureProcessingEngineTrigger" %}}
Include the following in your request:
- Headers:
  - `Authorization: Bearer` with your authentication token
  - `Content-Type: application/json`
- Request body:
  - `db` (string, required): Database name
  - `trigger_name` (string, required): Trigger name
  - `plugin_filename` (string, required): Plugin filename relative to the plugin directory
  - `trigger_specification` (string, required): When the plugin runs (see trigger types)
  - `trigger_settings` (object, required): Configuration for error handling and execution
    - `run_async` (boolean): Whether to run asynchronously (default: `false`)
    - `error_behavior` (string): How to handle errors: `Log`, `Retry`, or `Disable` (default: `Log`)
  - `disabled` (boolean, required): Whether the trigger is disabled
  - `trigger_arguments` (object, optional): Arguments passed to the plugin

# Create a basic trigger
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "TRIGGER_NAME",
"plugin_filename": "PLUGIN_FILE",
"trigger_specification": "TRIGGER_SPEC",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
In the example above, replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: Name of the database
- {{% code-placeholder-key %}}TRIGGER_NAME{{% /code-placeholder-key %}}: Name of the new trigger
- {{% code-placeholder-key %}}PLUGIN_FILE{{% /code-placeholder-key %}}: Plugin filename relative to your configured plugin directory
- {{% code-placeholder-key %}}TRIGGER_SPEC{{% /code-placeholder-key %}}: Trigger specification (see examples)
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

The following examples demonstrate how to create triggers for different event types.
{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
# Trigger on writes to a specific table
# The plugin file must be in your configured plugin directory
influxdb3 create trigger \
--trigger-spec "table:sensor_data" \
--path "process_sensors.py" \
--database my_database \
sensor_processor
# Trigger on writes to all tables
influxdb3 create trigger \
--trigger-spec "all_tables" \
--path "process_all_data.py" \
--database my_database \
all_data_processor
{{% /code-tab-content %}} {{% code-tab-content %}}
# Trigger on writes to a specific table
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "sensor_processor",
"plugin_filename": "process_sensors.py",
"trigger_specification": "table:sensor_data",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
# Trigger on writes to all tables
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "all_data_processor",
"plugin_filename": "process_all_data.py",
"trigger_specification": "all_tables",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
The trigger runs when the database flushes ingested data for the specified tables to the Write-Ahead Log (WAL) in the Object store (default is every second).
The plugin receives the written data and table information.
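Each entry in `table_batches` pairs a table name with its written rows. The sketch below assumes rows are plain dictionaries of column name to value (verify against the plugin API reference); the `summarize_batch` helper is illustrative, not part of any API:

```python
# Sketch of summarizing one table batch from a data write trigger.
# The row representation (dicts of column name to value) is an assumption.
def summarize_batch(table_batch):
    rows = table_batch["rows"]
    return {
        "table": table_batch["table_name"],
        "row_count": len(rows),
    }


batch = {
    "table_name": "sensor_data",
    "rows": [
        {"sensor": "a1", "temp": 21.5},
        {"sensor": "a2", "temp": 19.8},
    ],
}
print(summarize_batch(batch))  # {'table': 'sensor_data', 'row_count': 2}
```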
If you want to use a single trigger for all tables but exclude specific tables, use trigger arguments and your plugin code to filter out unwanted tables. For example:
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--path processor.py \
--trigger-spec "all_tables" \
--trigger-arguments "exclude_tables=temp_data,debug_info,system_logs" \
data_processor
Replace the following:

- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}
Then, in your plugin:
# processor.py
def process_writes(influxdb3_local, table_batches, args=None):
    # Get excluded tables from trigger arguments
    excluded_tables = set((args or {}).get("exclude_tables", "").split(","))
    for table_batch in table_batches:
        if table_batch["table_name"] in excluded_tables:
            continue
        # Process rows from allowed tables
        influxdb3_local.info(f"Processing {len(table_batch['rows'])} rows from {table_batch['table_name']}")
{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
# Run every 5 minutes
influxdb3 create trigger \
--trigger-spec "every:5m" \
--path "periodic_check.py" \
--database my_database \
regular_check
# Run on a cron schedule (8am daily)
# Supports extended cron format with seconds
influxdb3 create trigger \
--trigger-spec "cron:0 0 8 * * *" \
--path "daily_report.py" \
--database my_database \
daily_report
{{% /code-tab-content %}} {{% code-tab-content %}}
# Run every 5 minutes
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "regular_check",
"plugin_filename": "periodic_check.py",
"trigger_specification": "every:5m",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
# Run on a cron schedule (8am daily)
# Supports extended cron format with seconds
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "daily_report",
"plugin_filename": "daily_report.py",
"trigger_specification": "cron:0 0 8 * * *",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
The plugin receives the scheduled call time.
{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
# Create an endpoint at /api/v3/engine/webhook
influxdb3 create trigger \
--trigger-spec "request:webhook" \
--path "webhook_handler.py" \
--database my_database \
webhook_processor
{{% /code-tab-content %}} {{% code-tab-content %}}
# Create an endpoint at /api/v3/engine/webhook
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "webhook_processor",
"plugin_filename": "webhook_handler.py",
"trigger_specification": "request:webhook",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
Access your endpoint at /api/v3/engine/{REQUEST_PATH} (in this example, /api/v3/engine/webhook).
The trigger is enabled by default and runs when an HTTP request is received at the specified path.
To run the plugin, send a GET or POST request to the endpoint. For example:
curl "{{< influxdb/host >}}/api/v3/engine/webhook"
The plugin receives the HTTP request object with methods, headers, and body.
To view triggers associated with a database, use the influxdb3 show summary command:
influxdb3 show summary --database my_database --token AUTH_TOKEN
Use trigger arguments to pass configuration from a trigger to the plugin it runs, such as thresholds or notification settings.
{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
influxdb3 create trigger \
--trigger-spec "every:1h" \
--path "threshold_check.py" \
--trigger-arguments threshold=90,[email protected] \
--database my_database \
threshold_monitor
{{% /code-tab-content %}} {{% code-tab-content %}}
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "threshold_monitor",
"plugin_filename": "threshold_check.py",
"trigger_specification": "every:1h",
"trigger_settings": {
"run_async": false,
"error_behavior": "Log"
},
"trigger_arguments": {
"threshold": "90",
"notify_email": "[email protected]"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
The arguments are passed to the plugin as a Dict[str, str] where the key is the argument name and the value is the argument value:
def process_scheduled_call(influxdb3_local, call_time, args=None):
if args and "threshold" in args:
threshold = float(args["threshold"])
email = args.get("notify_email", "[email protected]")
# Use the arguments in your logic
influxdb3_local.info(f"Checking threshold {threshold}, will notify {email}")
By default, triggers run synchronously—each instance waits for previous instances to complete before executing.
To allow multiple instances of the same trigger to run simultaneously, configure triggers to run asynchronously:
{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
# Allow multiple trigger instances to run simultaneously
influxdb3 create trigger \
--trigger-spec "table:metrics" \
--path "heavy_process.py" \
--run-asynchronous \
--database my_database \
async_processor
{{% /code-tab-content %}} {{% code-tab-content %}}
# Allow multiple trigger instances to run simultaneously
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "async_processor",
"plugin_filename": "heavy_process.py",
"trigger_specification": "table:metrics",
"trigger_settings": {
"run_async": true,
"error_behavior": "Log"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
To configure error handling behavior for a trigger, specify one of the following values:
- `log` (default): Log all plugin errors to stdout and the `system.processing_engine_logs` table in the trigger's database.
- `retry`: Attempt to run the plugin again immediately after an error.
- `disable`: Automatically disable the plugin when an error occurs (can be re-enabled later).

For more information, see how to Query trigger logs.

{{< code-tabs-wrapper >}} {{% code-tabs %}} influxdb3 CLI HTTP API {{% /code-tabs %}} {{% code-tab-content %}}
# Automatically retry on error
influxdb3 create trigger \
--trigger-spec "table:important_data" \
--path "critical_process.py" \
--error-behavior retry \
--database my_database \
critical_processor
# Disable the trigger on error
influxdb3 create trigger \
--trigger-spec "request:webhook" \
--path "webhook_handler.py" \
--error-behavior disable \
--database my_database \
auto_disable_processor
{{% /code-tab-content %}} {{% code-tab-content %}}
# Automatically retry on error
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "critical_processor",
"plugin_filename": "critical_process.py",
"trigger_specification": "table:important_data",
"trigger_settings": {
"run_async": false,
"error_behavior": "Retry"
},
"disabled": false
}'
# Disable the trigger on error
curl -X POST "{{< influxdb/host >}}/api/v3/configure/processing_engine_trigger" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"db": "DATABASE_NAME",
"trigger_name": "auto_disable_processor",
"plugin_filename": "webhook_handler.py",
"trigger_specification": "request:webhook",
"trigger_settings": {
"run_async": false,
"error_behavior": "Disable"
},
"disabled": false
}'
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

{{% /code-tab-content %}} {{< /code-tabs-wrapper >}}
Use the influxdb3 install package command to add third-party libraries (like pandas, requests, or influxdb3-python) to your plugin environment.
This installs packages into the Processing Engine’s embedded Python environment to ensure compatibility with your InfluxDB instance.
{{< code-tabs-wrapper >}}
{{% code-tabs %}} influxdb3 CLI Docker HTTP API {{% /code-tabs %}}
{{% code-tab-content %}}
# Use the CLI to install a Python package
influxdb3 install package pandas
{{% /code-tab-content %}}
{{% code-tab-content %}}
# Use the CLI to install a Python package in a Docker container
docker exec -it CONTAINER_NAME influxdb3 install package pandas
{{% /code-tab-content %}}
{{% code-tab-content %}}
# Use the HTTP API to install Python packages
curl -X POST "{{< influxdb/host >}}/api/v3/configure/plugin_environment/install_packages" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"packages": ["pandas", "requests", "numpy"]
}'
Replace the following:

- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "admin" "admin" %}}
For complete reference, see Install plugin packages.
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
These examples install the specified Python packages (for example, pandas) into the Processing Engine's embedded virtual environment.
[!Important]
Use bundled Python for plugins

When you start the server with the `--plugin-dir` option, InfluxDB 3 creates a Python virtual environment (`<PLUGIN_DIR>/venv`) for your plugins. If you need to create a custom virtual environment, use the Python interpreter bundled with InfluxDB 3. Don't use the system Python. Creating a virtual environment with the system Python (for example, using `python -m venv`) can lead to runtime errors and plugin failures. For more information, see the processing engine README.
InfluxDB creates a Python virtual environment in your plugins directory with the specified packages installed.
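Once a package is installed in the plugin environment, plugins import it like any other module. The sketch below uses pandas when it's available and falls back to the standard library otherwise; the `mean_temperature` helper is illustrative, not part of any API:

```python
# Sketch: use pandas when installed in the plugin environment,
# falling back to the standard library otherwise.
import statistics

try:
    import pandas as pd
except ImportError:
    pd = None  # Package not installed on this node


def mean_temperature(rows):
    temps = [row["temp"] for row in rows]
    if pd is not None:
        return float(pd.Series(temps).mean())
    return statistics.fmean(temps)


rows = [{"temp": 20.0}, {"temp": 22.0}]
print(mean_temperature(rows))  # 21.0
```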
For air-gapped deployments or environments with strict security requirements, you can disable Python package installation while maintaining Processing Engine functionality.
Start the server with --package-manager disabled:
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins \
--package-manager disabled
When package installation is disabled, plugins can only use packages that are already installed in the plugin environment.
Pre-install required dependencies:
Before disabling the package manager, install all required Python packages:
# Install packages first
influxdb3 install package pandas requests numpy
# Then start with disabled package manager
influxdb3 serve \
--plugin-dir ~/.plugins \
--package-manager disabled
Disabling package management is useful for air-gapped deployments and environments with strict security requirements.
For more configuration options, see --package-manager.
The Processing Engine includes security features to protect your {{% product-name %}} instance from unauthorized code execution and file system attacks.
All plugin file paths are validated to prevent directory traversal attacks. The system blocks:
- Parent directory references (`../`, `../../`)
- Absolute paths (`/etc/passwd`, `/usr/bin/script.py`)

When creating or updating triggers, plugin paths must resolve within the configured `--plugin-dir`.
Example of blocked paths:
# These will be rejected
influxdb3 create trigger \
--path "../../../etc/passwd" \ # Blocked: parent directory traversal
...
influxdb3 create trigger \
--path "/tmp/malicious.py" \ # Blocked: absolute path
...
Valid plugin paths:
# These are allowed
influxdb3 create trigger \
--path "myapp/plugin.py" \ # Relative to plugin-dir
...
influxdb3 create trigger \
--path "transforms/data.py" \ # Subdirectory in plugin-dir
...
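The containment rule can be sketched in Python. This is a conceptual illustration of the check, not the server's actual validation code:

```python
from pathlib import Path


def is_allowed(plugin_dir: str, requested: str) -> bool:
    # Reject absolute paths outright, then reject anything that escapes
    # the plugin directory after path resolution.
    if Path(requested).is_absolute():
        return False
    root = Path(plugin_dir).resolve()
    candidate = (root / requested).resolve()
    return candidate.is_relative_to(root)


print(is_allowed("/var/lib/influxdb3/plugins", "myapp/plugin.py"))      # True
print(is_allowed("/var/lib/influxdb3/plugins", "../../../etc/passwd"))  # False
print(is_allowed("/var/lib/influxdb3/plugins", "/tmp/malicious.py"))    # False
```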
Plugin upload and update operations require admin tokens to prevent unauthorized code deployment:
- The `--upload` flag requires admin privileges
- The `update trigger` command requires an admin token

This security model ensures only administrators can introduce or modify executable code in your database.
For development:

- Use the `--upload` flag to deploy plugins during development

For production:

- Disable package installation (`--package-manager disabled`) in locked-down environments
- Monitor loaded plugins with the `system.plugin_files` table

For more security configuration options, see Configuration options.
{{% show-in "enterprise" %}}
When you deploy {{% product-name %}} in a multi-node environment, configure each node based on its role and the plugins it runs.
Each plugin must run on a node that supports its trigger type:
| Plugin type | Trigger spec | Runs on |
|---|---|---|
| WAL rows | table: or all_tables | Ingester nodes |
| Scheduled | every: or cron: | Any node with scheduler |
| HTTP request | request: | Nodes that serve API traffic |
For example, `table:` and `all_tables` triggers must run on nodes with ingest mode, and `request:` triggers must run on nodes that serve API traffic.
Place all plugin files in the --plugin-dir directory configured for each node.
[!Note] Triggers fail if the plugin file isn’t available on the node where it runs.
External tools—such as Grafana, custom dashboards, or REST clients—must connect to querier nodes in your InfluxDB Enterprise deployment.
- Example querier URL: `https://querier.example.com:8086`
- Requests to `POST /api/v3/query/sql` or similar endpoints must target a querier node.

{{% /show-in %}}