Features
Changes / Improvements
For breaking changes and deprecation notices, see the upgrade guide.
KurrentDB 26.0 adds user-defined secondary indexes, which advance the secondary indexes added in 25.1. Users can now define custom secondary indexes from record content for fast, field-based reads, subscriptions, and UI queries (e.g. “orders-by-country”). Indexes follow the log and store their data separately on each node, so you get targeted access without increasing log size.
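As a conceptual sketch (not the actual KurrentDB API or storage format), an index such as “orders-by-country” can be modeled as a mapping from a field value to log positions, built by following the log and kept separate from it:

```python
# Conceptual model (not KurrentDB internals): a secondary index derived
# from record content. The index follows the log and stores its data
# separately, so field-based reads do not require scanning the log.
log = [
    {"stream": "order-1", "type": "OrderPlaced", "data": {"country": "DE"}},
    {"stream": "order-2", "type": "OrderPlaced", "data": {"country": "FR"}},
    {"stream": "order-3", "type": "OrderPlaced", "data": {"country": "DE"}},
]

# Build an "orders-by-country" index: field value -> log positions.
index = {}
for position, record in enumerate(log):
    index.setdefault(record["data"]["country"], []).append(position)

# A field-based read resolves positions via the index, then fetches records.
def read_by_country(country):
    return [log[p] for p in index.get(country, [])]

print([r["stream"] for r in read_by_country("DE")])  # ['order-1', 'order-3']
```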
Previously, only Amazon S3 could be used as the blob storage for archived chunks. Now Azure and GCP can be used.
Previously, read requests were distributed round-robin across multiple reader queues, with the number of queues controlled by the ReaderThreadsCount setting. It was possible for a fast request to be delayed behind a slow one if both landed in the same queue, even while other queues were idle.
Now all read requests go through a single virtual queue, and ReaderThreadsCount controls the number of reads that can be executed concurrently. A fast request can no longer be delayed behind a slow one unless the maximum number of reads that can be executed concurrently are already in progress.
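The new behavior can be modeled conceptually (an illustration, not KurrentDB internals) with a semaphore standing in for the concurrency limit set by ReaderThreadsCount:

```python
# Conceptual model: one virtual queue of reads, with a semaphore limiting
# how many execute concurrently (standing in for ReaderThreadsCount).
import threading
import time

READER_THREADS_COUNT = 2
slots = threading.Semaphore(READER_THREADS_COUNT)
completed = []

def execute_read(name, duration):
    with slots:  # wait for a free concurrent-read slot
        time.sleep(duration)
        completed.append(name)

# A slow read occupies only one slot, so fast reads are no longer stuck
# behind it, as they could be with fixed round-robin queue assignment.
reads = [threading.Thread(target=execute_read, args=a)
         for a in [("slow", 0.2), ("fast-1", 0.01), ("fast-2", 0.01)]]
for t in reads:
    t.start()
for t in reads:
    t.join()

print(completed)  # the fast reads finish before the slow one
```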
The WorkerThreads setting is now deprecated and has no effect. Similarly to the readers, it was possible for a fast work item to be delayed behind a slow one even while another worker was idle. The work done by the workers is now executed without a concurrency limit. Work items for the same TCP connection are still executed in order.
The length of the thread pool queue in seconds is now included as a metric, similarly to the other queues.
The length of the thread pool queue in items is now included in the stats files, similarly to the other queues.
The primary yaml config file now takes precedence over the json files (logconfig.json, kestrelsettings.json, metricsconfig.json and any custom json in the configuration directory).
This allows the main config file to override the defaults provided in those .json files that ship with the product, without having to edit those files.
KurrentDB logs when message handling takes more than an expected threshold. Historically, these thresholds have been somewhat arbitrary, resulting in unnecessary noise on some deployments.
The thresholds can now be configured per queue or bus in metricsconfig.json (which is now populated with the previous defaults), or can be overridden in the main yaml config file as follows:
```yaml
Metrics:
  SlowMessageMilliseconds:
    MainBus: 48
    PersistentSubscriptionsBus: 50
    ProjectionManagerInputBus: 48
    ProjectionManagerOutputBus: 48
    ProjectionWorkerInputBus: 48
    ProjectionWorkerOutputBus: 48
    StorageReaderBus: 200
    StorageWriterBus: 500
    SubscriptionsBus: 50
    WorkerBus: 200
```
Use -1 to indicate that slow messages should not be logged.
Under the hood, KurrentDB 26.0 uses the latest dotnet runtime: .NET 10.
The Kafka Source Connector consumes messages from Kafka topics and appends them to KurrentDB streams, enabling seamless integration between the two platforms.
The connector supports consuming from multiple partitions concurrently and offers flexible routing options to control which KurrentDB streams receive the messages.
Refer to the documentation for instructions on setting up a Kafka source connector.
The SQL Sink Connector writes events from KurrentDB to SQL databases (Microsoft SQL Server and PostgreSQL) by executing configurable SQL statements.
You can define custom SQL statement templates with parameter placeholders and JavaScript functions to extract values from event data and map them to SQL parameters.
Refer to the documentation for instructions on setting up a SQL sink connector.
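To illustrate the idea only (using Python and SQLite as stand-ins, not the connector's real configuration schema or its JavaScript functions), a statement template with named placeholders can be combined with a function that extracts parameter values from event data:

```python
# Conceptual sketch (not the connector's configuration): a SQL statement
# template with parameter placeholders, plus an extraction function that
# maps event data to SQL parameters -- the role the connector's
# JavaScript functions play.
import json
import sqlite3

sql_template = "INSERT INTO orders (order_id, total) VALUES (:order_id, :total)"

def extract_params(event):
    data = json.loads(event["data"])
    return {"order_id": data["orderId"], "total": data["amount"]}

# Apply the template to an event against an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id TEXT, total REAL)")
event = {"type": "OrderPlaced", "data": json.dumps({"orderId": "o-1", "amount": 9.5})}
db.execute(sql_template, extract_params(event))

print(db.execute("SELECT order_id, total FROM orders").fetchall())  # [('o-1', 9.5)]
```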
The position tracker in connectors has been updated to correctly handle duplicate track calls. Previously, duplicate calls to track the same position would cause an IndexOutOfRangeException. This fix ensures that duplicate track calls are safely ignored, improving connector reliability.
Improved connector performance through a new position tracker and optimized asynchronous operations. Memory allocations and overhead are reduced, particularly when operations complete synchronously.
These are the new features and important changes in KurrentDB 25.1:
Features
Changes / Improvements
For breaking changes and deprecation notices, see the upgrade guide.
In addition to the default index, KurrentDB v25.1 introduces default secondary indexes for categories and event types. These indexes provide functionality that is similar to the existing $by-category and $by-event-type system projections, but with significant performance and storage efficiency improvements.
Future releases will add support for custom secondary indexes, aiming to reduce the need for custom projections that produce link records.
Learn more about secondary indexes.
The server now supports appending to multiple streams atomically in one write request.
Events e1, ..., eN can be appended to stream s1, events f1, ..., fN to stream s2, and so on for other streams, all in one atomic operation.
An optimistic concurrency check can be provided for each stream, and the write operation will only succeed if all the checks are successful.
Using this feature requires the latest client libraries that support it.
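Conceptually (hypothetical code, not the client API), the operation evaluates every stream's concurrency check first and writes nothing unless all checks pass:

```python
# Conceptual model of a multi-stream atomic append (not the client API).
streams = {}

def multi_append(appends):
    """appends: list of (stream, events, expected_count or None)."""
    # Phase 1: every optimistic concurrency check must pass.
    for stream, _, expected in appends:
        current = len(streams.get(stream, []))
        if expected is not None and expected != current:
            raise RuntimeError(f"wrong expected revision for {stream}")
    # Phase 2: only now are the events appended, all together.
    for stream, events, _ in appends:
        streams.setdefault(stream, []).extend(events)

multi_append([("s1", ["e1", "e2"], 0), ("s2", ["f1", "f2"], 0)])
print(streams)  # {'s1': ['e1', 'e2'], 's2': ['f1', 'f2']}
```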
Historically, events have been appended with optional bytes for event metadata. The server now supports receiving this data in a structured way. The primary goal is to allow adding and retrieving event properties without serialization and deserialization overhead. In KurrentDB, log record (event) properties are stored as key-value pairs, where keys are strings and values can be of various types (string, int, bool, etc.). This makes properties similar to the headers concept known from HTTP, messaging systems, and other similar technologies.
In client libraries, log record properties are surfaced as a dictionary-like structure that allows adding, retrieving, and removing properties by key. Using this feature requires the latest client libraries that support it.
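In dictionary terms (an illustration of the surfaced shape, not any specific client's API), properties behave like this:

```python
# Illustration: properties are string-keyed with values of various types,
# similar to headers in HTTP or messaging systems.
properties = {"content-type": "application/json", "retry-count": 3, "is-replay": False}

properties["correlation-id"] = "abc-123"  # add a property by key
retry = properties.get("retry-count")     # retrieve a property by key
del properties["is-replay"]               # remove a property by key
```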
The embedded Web UI now includes a Database Stats page showing detailed statistics about database content, such as number of streams, events, etc. This feature only works with secondary indexes enabled.
The embedded Web UI now includes a Queries page allowing you to run ad-hoc SQL queries against event data stored in KurrentDB. This feature only works with secondary indexes enabled. Learn more about the Queries UI.
KurrentDB can now be run as a Windows Service. See the documentation for more information.
The OpenTelemetry Integration can now be used to export logs as well as metrics.
The .NET runtime Server Garbage Collection is now enabled by default, increasing the performance of the server. See the documentation for more information.
StreamInfoCacheCapacity is now 100,000 by default rather than 0 (dynamically sized).
StreamInfoCache dynamic sizing was introduced in v21.10 and enabled by default. It allows the StreamInfoCache to grow much larger, according to the amount of free memory, which is reevaluated periodically. This is desirable for some workloads, but it comes with the tradeoff of significantly increased managed memory usage, which in turn causes additional GC pressure and can lead to more frequent elections. On balance, we have decided that a default of 100,000 (the value used before v21.10) is a better default, favouring predictability and stability.
Users wishing to keep dynamic sizing can enable it by setting StreamInfoCacheCapacity to 0. Additional information can be found in the StreamInfoCache documentation.
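For example, in the main YAML config file (the option name is as documented above; where the file lives depends on your installation):

```yaml
# Keep the new fixed default:
StreamInfoCacheCapacity: 100000

# Or set to 0 to re-enable dynamic sizing:
# StreamInfoCacheCapacity: 0
```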
The Apache Pulsar sink connector writes events from your KurrentDB stream to a specified Pulsar topic.
Refer to the documentation for instructions on setting up a Pulsar sink.
Since connectors now run only on the leader node, leases are no longer needed and have been disabled, reducing the number of events written to the database.
Header keys now retain their original casing when delivered to connector destinations. The default headers esdb-record-partition-key and esdb-record-is-transformed are no longer added to outgoing messages. You can also choose whether to include system headers in sink metadata.
The headers of remote chunks are now only read on demand and not on startup, improving startup times when there are a large number of remote chunks.
Several new metrics have been added to track important properties of projections.
kurrentdb_projection_state_size contains the state size of projections and their state partitions that are over 50% of the state size limit (MaxProjectionStateSize).
This helps to show if any projections are in danger of reaching the limit.
kurrentdb_projection_state_size_bound contains the projection state size LIMIT (driven by MaxProjectionStateSize) and the THRESHOLD for displaying a projection or partition in kurrentdb_projection_state_size (50% of the limit).
This makes it easy to graph what the limit is and how close projections are to it.
kurrentdb_projection_state_serialization_duration_max_seconds contains the recent maximum time that each custom projection has taken to serialize its state.
kurrentdb_projection_execution_duration_max_seconds contains the recent maximum time that each custom projection has taken to execute an event.
kurrentdb_projection_execution_duration_seconds_bucket creates a histogram for each (Projection x Function) pair showing, for example, the distribution of how long each custom projection takes to process each event type.
This creates a lot of timeseries and is off by default. It can be enabled by setting ProjectionExecutionByFunction to true in metricsconfig.json.
Typically this would only be enabled in development environments.
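For example, in metricsconfig.json (shown as a minimal fragment; the shipped file contains other settings alongside this one):

```json
{
  "ProjectionExecutionByFunction": true
}
```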
See the documentation for more information.
kurrentdb_persistent_sub_parked_message_replays counts the number of messages that have been parked by stream, group, and reason. Reason can be client-nak, meaning that the client negatively acknowledged the message, or max-retries, meaning that the server retried sending it to the clients until the maximum number of attempts was reached.
kurrentdb_persistent_sub_park_message_requests counts the number of requests to replay parked messages by stream and group.
See the documentation for more information.
Added server configuration option for TCP read expiry
The option is TcpReadTimeoutMs and it defaults to 10000 (10s, which matches the previous behavior).
It applies to reads received via the TCP client API. When a read has been in the server queue for longer than this, it will be discarded without being executed. If your TCP clients are configured to timeout after X milliseconds, it is advisable to set this server option to be the same, so that the server will not execute reads that the client is no longer waiting for.
For gRPC clients, the server-side discarding is already driven by the deadline on the read itself without requiring server configuration.
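For example, if your TCP clients are configured with a 5-second read timeout, the matching server setting in the main YAML config file would be:

```yaml
# Discard queued TCP reads older than the clients' own timeout (default 10000 ms).
TcpReadTimeoutMs: 5000
```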
Log output to Seq
You can now configure a Seq log output by adding the following to logconfig.json:
```json
"Serilog": {
  "WriteTo": [
    {
      "Name": "Seq",
      "Args": {
        "serverUrl": "http://localhost:5341",
        "restrictedToMinimumLevel": "Information"
      }
    }
  ]
}
```
Added logging for significant garbage collections
This makes it clear from the logs if slow messages or leader elections are attributable to Garbage Collection (GC).
Execution engine (EE) suspensions longer than 48ms are logged as Information. Execution engine suspensions longer than 600ms are logged as Warnings. Full compacting GC start/end are logged as Information.
Note that the Start/End log messages may both be logged AFTER the execution engine pause has completed.
These will be logged even if the node shortly goes offline for truncation, which would likely prevent the EE suspension from appearing in the metrics.
If GC is determined to be the cause of a leader election, a sensible course of action could be to reduce the StreamInfoCacheCapacity (say, to the traditional value of 100,000) and/or consider enabling Server GC.
Example logs:
```
[34144,13,11:03:05.307,INF] Start of full blocking garbage collection at 06/06/2025 10:02:49. GC: #210548. Generation: 2. Reason: LargeObjectHeapAllocation. Type: BlockingOutsideBackgroundGC.
[34144,13,11:03:05.307,INF] End of full blocking garbage collection at 06/06/2025 10:03:05. GC: #210548. Took: 15,727ms
[34144,13,11:03:05.307,WRN] Garbage collection: Very long Execution Engine Suspension. Reason: GarbageCollection. Took: 15,727ms
```
Lowered Scavenge API GET call logging to Verbose
The auto-scavenge checks on the status of in-progress scavenges frequently, which was producing unnecessary logs.
Added extra logging when UnwrapEnvelopeMessage is slow
When UnwrapEnvelopeMessage triggers a SLOW QUEUE MESSAGE log, it now includes the name of the action it was unwrapping.
These are the new features in KurrentDB 25.0:
KurrentDB 25.0 introduces the initial release of Archiving: a new major feature to reduce costs and increase scalability of a KurrentDB cluster.
With the new Archiving feature, data is uploaded to cheaper storage such as Amazon S3 and then can be removed from the volumes attached to the cluster nodes. The volumes can be correspondingly smaller and cheaper. The nodes are all able to read the archive, and when a read request from a client requires data that is stored in the archive, the node retrieves that data from the archive transparently to the client.
Refer to the documentation for more information about archiving and instructions on how to set it up.
The Elasticsearch sink pulls messages from a KurrentDB stream and stores them in an Elasticsearch index. The records will be serialized into JSON documents, compatible with Elasticsearch's document structure.
We've introduced a comprehensive data protection system to enhance the security of your sensitive connector configurations.
All connectors now use envelope encryption to automatically protect sensitive data such as passwords and tokens using industry-standard encryption techniques. This ensures your credentials remain secure during transmission.
Setup is straightforward with token-based protection requiring minimal configuration. You can provide tokens directly in your configuration or via separate files for enhanced security in production environments.
We've integrated a native Surge key vault that stores encryption keys directly within KurrentDB system streams, with support for more key vaults planned in future releases.
See the Data Protection documentation for complete setup instructions.
Event Store – the company and the product – are rebranding as Kurrent.
As part of this rebrand, EventStoreDB has been renamed to KurrentDB, with the first release of KurrentDB being version 25.0.
Read more about the rebrand in the rebrand FAQ.
The KurrentDB packages are still hosted on Cloudsmith. Refer to the upgrade guide to see what's changed between EventStoreDB and KurrentDB, or the installation guide for updated installation instructions.
In the new embedded Web UI you can see at a glance:
We are changing the version scheme with the first official release of KurrentDB.
As before, there will be two categories of release:
The version number will now reflect whether a release is an LTS or feature release, rather than being based on the year and month. LTS releases will have even major numbers, and STS releases will have odd major numbers.
The new scheme is Major.Minor.Patch where:

- Major: even for LTS releases, odd for STS (feature) releases.
- Minor: 0 for LTS releases, but may be incremented in rare cases.
- Patch: incremented for bug fixes.

The release schedule will be changing with the versioning scheme, given that the version numbers are no longer tied to the year and month:
Packages for KurrentDB will still be published to Cloudsmith, into the following repositories:
These are the new features that were added in EventStoreDB 24.10:
We have improved and expanded on the Connectors preview introduced in 24.2.0.
The Connectors feature is enabled by default. You can use the HTTP sink without a license, but a license is required for all other connectors.
Refer to the documentation for instructions on setting up and configuring connectors and sinks.
The Elasticsearch sink pulls messages from a KurrentDB stream and stores them in an Elasticsearch index. The records will be serialized into JSON documents, compatible with Elasticsearch's document structure.
Refer to the documentation for instructions on setting up an Elasticsearch sink.
The Kafka sink writes events from EventStoreDB to a Kafka topic.
It can extract the partition key from the record based on specific sources such as the stream ID, headers, or record key and also supports basic authentication and resilience features to handle transient errors.
Refer to the documentation for instructions on setting up a Kafka sink.
The MongoDB sink pulls messages from an EventStoreDB stream and stores them in a MongoDB collection.
It supports data transformation for modifying event data or metadata and the inclusion of additional headers before sending messages to the MongoDB collection. It also supports at-least-once delivery and resilience features to handle transient errors.
Refer to the documentation for instructions on setting up a MongoDB sink.
The RabbitMQ sink pulls messages from EventStoreDB and sends the messages to a RabbitMQ exchange using a specified routing key.
It efficiently handles message delivery by abstracting the complexities of RabbitMQ's exchange and queue management, ensuring that messages are routed to the appropriate destinations based on the provided routing key.
This sink is designed for high reliability and supports graceful error handling and recovery mechanisms to ensure consistent message delivery in a production environment.
Refer to the documentation for instructions on setting up a RabbitMQ sink.
The HTTP sink allows for integration between EventStoreDB and external APIs over HTTP or HTTPS. This connector consumes events from an EventStoreDB stream and converts each event's data into JSON format before sending it in the request body to a specified URL. Events are sent individually as they are consumed from the stream, without batching. The event data is transmitted as the request body, and metadata can be included as HTTP headers. The connector also supports Basic Authentication and Bearer Token Authentication.
Refer to the documentation for instructions on setting up an HTTP sink.
The Serilog sink logs detailed messages about the connector and record details.
Refer to the documentation for instructions on setting up a Serilog sink.
We've introduced a comprehensive data protection system to enhance the security of your sensitive connector configurations.
All connectors now use envelope encryption to automatically protect sensitive data such as passwords and tokens using industry-standard encryption techniques. This ensures your credentials remain secure during transmission.
Setup is straightforward with token-based protection requiring minimal configuration. You can provide tokens directly in your configuration or via separate files for enhanced security in production environments.
We've integrated a native Surge key vault that stores encryption keys directly within KurrentDB system streams, with support for more key vaults planned in future releases.
See the Data Protection documentation for complete setup instructions.
The auto-scavenge feature automatically schedules cluster scavenges, which are composed of multiple node scavenges; only one node scavenge can be executed at a time in the cluster.
The auto-scavenge feature requires a license to use. EventStoreDB will only start auto-scavenging once an administrator has set up a schedule for running cluster scavenges.
Refer to the documentation for instructions on enabling and using this feature.
Define stream access policies in one place based on stream prefixes rather than using stream ACLs.
Stream access policies can be created to grant users or groups read, write, delete, or metadata access. These policies can be applied to streams based on their prefix or to system or user streams.
The Stream Policy feature requires a license to use. Refer to the documentation for more information about using and configuring this feature.
Encrypt EventStoreDB chunks to secure them against attackers with file access to the database.
This feature aims to protect against an attacker who obtains access to the physical disk. In contrast to volume or filesystem encryption, file-level encryption also provides some protection against attacks on the live system and remote exploits, as the plaintext data is not directly readable.
The Encryption-at-rest feature requires a license to use and is disabled by default. If Encryption-at-rest is enabled, it is impossible to roll back to an unencrypted database after a new chunk has been created or if a chunk has been scavenged.
Refer to the documentation for more information about using and configuring this feature.
Customers can unlock the enterprise features of EventStoreDB with a license key. This applies to the previous commercial plugins and several of the new features in this release.
You will need to provide a license key if you want to enable or use the following features: