This page is automatically generated by `./risedev generate-example-config`.

## batch

| Config | Description | Default |
|---|---|---|
| distributed_query_limit | The maximum number of queries per SQL session. | "" |
| enable_barrier_read |  | false |
| enable_spill | Enable the spill out to disk feature for batch queries. | true |
| frontend_compute_runtime_worker_threads | The number of worker threads for the frontend compute runtime. | "" |
| mask_worker_temporary_secs | The number of seconds for which a temporarily unavailable worker is masked. | 30 |
| max_batch_queries_per_frontend_node | The maximum number of batch queries per frontend node. | "" |
| redact_sql_option_keywords | Keywords on which SQL option redaction is based in the query log. A SQL option with a name containing any of these keywords will be redacted. | ["credential", "key", "password", "private", "secret", "token"] |
| statement_timeout_in_sec | Timeout for a batch query in seconds. | 3600 |
| worker_threads_num | The thread number of the batch task runtime in the compute node. The default value is decided by tokio. | "" |
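These batch options are typically overridden in the node's TOML configuration file. A minimal sketch, assuming the standard risingwave.toml layout where this table corresponds to a `[batch]` section (values are just the defaults from the table):

```toml
[batch]
enable_barrier_read = false
enable_spill = true                # spill batch queries to disk
statement_timeout_in_sec = 3600    # batch query timeout in seconds
mask_worker_temporary_secs = 30    # mask a temporarily unavailable worker
```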

## batch.developer

| Config | Description | Default |
|---|---|---|
| chunk_size | The size of a chunk produced by RowSeqScanExecutor. | 1024 |
| compute_client_config |  |  |
| connector_message_buffer_size | The capacity of the chunks in the channel that connects between ConnectorSource and SourceExecutor. | 16 |
| exchange_connection_pool_size | The number of the connections for batch remote exchange between two nodes. If not specified, the value of server.connection_pool_size will be used. | "" |
| frontend_client_config |  |  |
| local_execute_buffer_size |  | 64 |
| output_channel_size | The size of the channel used for output to exchange/shuffle. | 64 |
| receiver_channel_size |  | 1000 |
| root_stage_channel_size |  | 100 |

## frontend

| Config | Description | Default |
|---|---|---|
| hba_config | Host-based authentication configuration | |
| max_single_query_size_bytes | A query of size exceeding this threshold will always be rejected due to memory constraints. | 1073741824 |
| max_total_query_size_bytes | Total memory constraints for running queries. | 1073741824 |
| min_single_query_size_bytes | A query of size under this threshold will never be rejected due to memory constraints. | 1048576 |
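The frontend memory thresholds form a simple admission window: queries under min_single_query_size_bytes are always admitted, queries over max_single_query_size_bytes are always rejected, and anything in between is subject to the total budget. A sketch of the corresponding overrides, assuming these map to a `[frontend]` section and using the defaults above:

```toml
[frontend]
min_single_query_size_bytes = 1048576     # 1 MiB: never rejected below this
max_single_query_size_bytes = 1073741824  # 1 GiB: always rejected above this
max_total_query_size_bytes = 1073741824   # 1 GiB total budget for running queries
```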

## meta

| Config | Description | Default |
|---|---|---|
| backend |  | "Mem" |
| cdc_table_split_init_insert_batch_size | The batch size that the CDC table splits initialization should use when persisting to meta store. | 100 |
| cdc_table_split_init_sleep_duration_millis | The duration that the CDC table splits initialization should yield to avoid overloading upstream system. | 500 |
| cdc_table_split_init_sleep_interval_splits | The interval that the CDC table splits initialization should yield to avoid overloading upstream system. | 1000 |
| compact_task_table_size_partition_threshold_high | The threshold of table size in one compact task to decide whether to partition one table into partition_vnode_count parts, which belong to the default group and the materialized view group. Set it to the maximum value of a 64-bit integer to disable this feature. | 536870912 |
| compact_task_table_size_partition_threshold_low | The threshold of table size in one compact task to decide whether to partition one table into hybrid_partition_vnode_count parts, which belong to the default group and the materialized view group. Set it to the maximum value of a 64-bit integer to disable this feature. | 134217728 |
| compaction_group_merge_dimension_threshold | The threshold of each dimension of the compaction group after merging. When the dimension * compaction_group_merge_dimension_threshold >= limit, the merging job will be rejected. | 1.2 |
| compaction_task_max_heartbeat_interval_secs |  | 30 |
| compaction_task_max_progress_interval_secs |  | 600 |
| cut_table_size_limit |  | 1073741824 |
| dangerous_max_idle_secs | After the specified seconds of idle time (no mview or flush), the process will exit. It is mainly useful for playgrounds. | "" |
| default_parallelism | The default global parallelism for all streaming jobs, used when the user doesn't specify a parallelism. FULL means use all available parallelism units; otherwise it's a number. | "Full" |
| disable_automatic_parallelism_control | Whether to disable adaptive-scaling feature. | false |
| disable_recovery | Whether to enable fail-on-recovery. Should only be used in e2e tests. | false |
| do_not_config_object_storage_lifecycle | Whether config object storage bucket lifecycle to purge stale data. | false |
| enable_committed_sst_sanity_check | Enable sanity check when SSTs are committed. | false |
| enable_compaction_deterministic | Whether to enable deterministic compaction scheduling, which will disable all auto scheduling of compaction tasks. Should only be used in e2e tests. | false |
| enable_compaction_group_normalize | Whether to normalize overlapping compaction groups before the regular split/merge scheduling. | false |
| enable_dropped_column_reclaim | Whether compactor should rewrite row to remove dropped column. | false |
| enable_hummock_data_archive | If enabled, SSTable object files and version deltas will be retained. SSTable object files need to be deleted via full GC; version deltas need to be deleted manually. | false |
| enable_legacy_table_migration | Whether to automatically migrate legacy table fragments when meta starts. | true |
| event_log_channel_max_size | Keeps the latest N events per channel. | 10 |
| event_log_enabled |  | true |
| full_gc_interval_sec | Interval of automatic hummock full GC. | 3600 |
| full_gc_object_limit | The maximum number of objects a full GC job can fetch. | 100000 |
| gc_history_retention_time_sec | Duration in seconds to retain garbage collection history data. | 21600 |
| hummock_time_travel_snapshot_interval | The interval at which a Hummock version snapshot is taken for time travel. Larger value indicates less storage overhead but worse query performance. | 100 |
| hummock_version_checkpoint_interval_sec | Interval of hummock version checkpoint. | 30 |
| hybrid_partition_vnode_count | Count of partitions of tables in default group and materialized view group. The meta node will decide according to some strategy whether to cut the boundaries of the file according to the vnode alignment. Each partition contains aligned data of vnode_count / hybrid_partition_vnode_count consecutive virtual-nodes of one state table. Set it to zero to disable this feature. | 4 |
| iceberg_gc_interval_sec | Interval of invoking iceberg garbage collection, to expire old snapshots. | 3600 |
| max_heartbeat_interval_secs | Maximum allowed heartbeat interval in seconds. | 60 |
| max_inflight_time_travel_query | Max number of inflight time travel queries. | 1000 |
| max_normalize_splits_per_round | The maximum number of normalize splits in one scheduler round. Must be greater than 0. | 4 |
| meta_leader_lease_secs |  | 30 |
| min_delta_log_num_for_hummock_version_checkpoint | The minimum delta log number a new checkpoint should compact, otherwise the checkpoint attempt is rejected. | 10 |
| min_sst_retention_time_sec | Objects within min_sst_retention_time_sec won't be deleted by hummock full GC, even if they are dangling. | 21600 |
| move_table_size_limit |  | 10737418240 |
| node_num_monitor_interval_sec |  | 10 |
| parallelism_control_batch_size | The number of streaming jobs per scaling operation. | 10 |
| parallelism_control_trigger_first_delay_sec | The first delay of parallelism control. | 30 |
| parallelism_control_trigger_period_sec | The period of parallelism control trigger. | 10 |
| partition_vnode_count | Count of partitions in a split group. Meta will assign this value to every new group when it is automatically split from the default group. Each partition contains aligned data of vnode_count / partition_vnode_count consecutive virtual-nodes of one state table. | 16 |
| pause_on_next_bootstrap_offline | Whether meta should request pausing all data sources on the next bootstrap. This allows us to pause the cluster on next bootstrap in an offline way. It's important for standalone or single node deployments. In those cases, meta node, frontend and compute may all be co-located. If the compute node enters an inconsistent state and continuously crash-loops, we may not be able to connect to the cluster to run alter system set pause_on_next_bootstrap = true;. By providing it in the static config, we have an offline way to trigger the pause on bootstrap. | false |
| periodic_compaction_interval_sec | Schedule Dynamic compaction for all compaction groups with this interval. Groups in cooldown (recently found to have no compaction work) are skipped. | 300 |
| periodic_scheduling_compaction_group_merge_interval_sec | The interval of the periodic scheduling compaction group merge job. | 600 |
| periodic_scheduling_compaction_group_split_interval_sec | The interval of the regular periodic compaction group split job. This does not disable normalize-triggered splits when enable_compaction_group_normalize is enabled. | 10 |
| periodic_space_reclaim_compaction_interval_sec | Schedule space_reclaim compaction for all compaction groups with this interval. | 3600 |
| periodic_tombstone_reclaim_compaction_interval_sec |  | 600 |
| periodic_ttl_reclaim_compaction_interval_sec | Schedule ttl_reclaim compaction for all compaction groups with this interval. | 1800 |
| protect_drop_table_with_incoming_sink | Whether to protect dropping a table with incoming sink. | false |
| split_group_size_limit |  | 68719476736 |
| split_group_size_ratio | Whether to split the compaction group when the size of the group exceeds the compaction_group_config.max_estimated_group_size() * split_group_size_ratio. | 0.9 |
| table_high_write_throughput_threshold | The threshold of write throughput to trigger a group split. | 16777216 |
| table_low_write_throughput_threshold | The threshold of write throughput to trigger a group merge. | 4194304 |
| table_stat_high_write_throughput_ratio_for_split | Split the compaction group when the high-throughput statistics of the group exceed this threshold. | 0.5 |
| table_stat_low_write_throughput_ratio_for_merge | Merge the compaction group when the low-throughput statistics of the group exceed this threshold. | 0.7 |
| table_stat_throuput_window_seconds_for_merge | The window, in seconds, of table throughput statistics history used for compaction group merge. | 240 |
| table_stat_throuput_window_seconds_for_split | The window, in seconds, of table throughput statistics history used for compaction group split. | 60 |
| vacuum_interval_sec | Interval of invoking a vacuum job, to remove stale metadata from meta store and objects from object store. | 30 |
| vacuum_spin_interval_ms | The spin interval inside a vacuum job. It avoids the vacuum job monopolizing resources of meta node. | 100 |
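As a sketch, a few of the meta options above could be tuned together in a `[meta]` section (section name assumed from the standard risingwave.toml layout; values are the table defaults):

```toml
[meta]
max_heartbeat_interval_secs = 60              # worker heartbeat upper bound
hummock_version_checkpoint_interval_sec = 30  # hummock version checkpoint cadence
full_gc_interval_sec = 3600                   # automatic hummock full GC
default_parallelism = "Full"                  # or a concrete number
```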

## meta.compaction_config

| Config | Description | Default |
|---|---|---|
| compaction_filter_mask |  | 6 |
| disable_auto_group_scheduling |  | false |
| emergency_level0_sst_file_count |  | 2000 |
| emergency_level0_sub_level_partition |  | 256 |
| enable_emergency_picker |  | true |
| enable_optimize_l0_interval_selection |  | true |
| level0_max_compact_file_number |  | 100 |
| level0_overlapping_sub_level_compact_level_count |  | 12 |
| level0_stop_write_threshold_max_size |  | 322122547200 |
| level0_stop_write_threshold_max_sst_count |  | 5000 |
| level0_stop_write_threshold_sub_level_number |  | 128 |
| level0_sub_level_compact_level_count |  | 3 |
| level0_tier_compact_file_number |  | 12 |
| max_bytes_for_level_base |  | 536870912 |
| max_bytes_for_level_multiplier |  | 10 |
| max_compaction_bytes |  | 2147483648 |
| max_kv_count_for_xor16 |  | 262144 |
| max_l0_compact_level_count |  | 42 |
| max_level |  | 6 |
| max_overlapping_level_size |  | 268435456 |
| max_space_reclaim_bytes |  | 536870912 |
| max_sub_compaction |  | 4 |
| sst_allowed_trivial_move_max_count |  | 256 |
| sst_allowed_trivial_move_min_size |  | 4194304 |
| sub_level_max_compaction_bytes |  | 134217728 |
| target_file_size_base |  | 33554432 |
| tombstone_reclaim_ratio |  | 40 |
| vnode_aligned_level_size_threshold |  | "" |

## meta.developer

| Config | Description | Default |
|---|---|---|
| actor_cnt_per_worker_parallelism_hard_limit | Max number of actors allowed per parallelism (default = 400). CREATE MV/Table will be rejected when the number of actors exceeds this limit. | 400 |
| actor_cnt_per_worker_parallelism_soft_limit | Max number of actors allowed per parallelism (default = 100). A notice will be raised for CREATE MV/Table when the number of actors exceeds this limit. | 100 |
| cached_traces_memory_limit_bytes | The maximum memory usage in bytes for the tracing collector embedded in the meta node. | 134217728 |
| cached_traces_num | The number of traces to be cached in-memory by the tracing collector embedded in the meta node. | 256 |
| compute_client_config |  |  |
| enable_check_task_level_overlap |  | false |
| enable_trivial_move | Whether to enable trivial move in the compaction picker. | true |
| frontend_client_config |  |  |
| hummock_gc_history_insert_batch_size |  | 1000 |
| hummock_time_travel_delta_fetch_batch_size | Max number of version deltas fetched from meta store per SELECT, during time travel metadata vacuum. | 100 |
| hummock_time_travel_epoch_version_insert_batch_size | Max number of epoch-to-version inserted into meta store per INSERT, during time travel metadata writing. | 1000 |
| hummock_time_travel_filter_out_objects_batch_size |  | 1000 |
| hummock_time_travel_filter_out_objects_list_delta_batch_size |  | 1000 |
| hummock_time_travel_filter_out_objects_list_version_batch_size |  | 10 |
| hummock_time_travel_filter_out_objects_v1 |  | false |
| hummock_time_travel_sst_info_fetch_batch_size | Max number of SSTs fetched from meta store per SELECT, during time travel Hummock version replay. | 10000 |
| hummock_time_travel_sst_info_insert_batch_size | Max number of SSTs inserted into meta store per INSERT, during time travel metadata writing. | 100 |
| max_get_task_probe_times |  | 5 |
| max_trivial_move_task_count_per_loop |  | 256 |
| stream_client_config |  |  |
| time_travel_vacuum_interval_sec |  | 30 |
| time_travel_vacuum_max_version_count |  | 100000 |

## meta.meta_store

| Config | Description | Default |
|---|---|---|
| acquire_timeout_sec | Acquire timeout in seconds for a meta store connection. | 30 |
| connection_timeout_sec | Connection timeout in seconds for a meta store connection. | 10 |
| idle_timeout_sec | Idle timeout in seconds for a meta store connection. | 30 |
| max_connections | Maximum number of connections for the meta store connection pool. | 10 |
| min_connections | Minimum number of connections for the meta store connection pool. | 1 |

## server

| Config | Description | Default |
|---|---|---|
| connection_pool_size | The default number of the connections when connecting to a gRPC server. For the connections used in streaming or batch exchange, please refer to the entries in [stream.developer] and [batch.developer] sections. This value will be used if they are not specified. | 16 |
| grpc_max_reset_stream |  | 200 |
| heap_profiling | Enable heap profile dump when memory usage is high. |  |
| heartbeat_interval_ms | The interval for periodic heartbeat from worker to the meta service. | 1000 |
| metrics_level | Controls the metrics level, similar to log level. | "Info" |
| telemetry_enabled |  | true |
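A corresponding `[server]` fragment, using the defaults listed above (section name assumed from the server.connection_pool_size references elsewhere in this page):

```toml
[server]
heartbeat_interval_ms = 1000   # worker-to-meta heartbeat
connection_pool_size = 16      # default gRPC connection pool
metrics_level = "Info"
telemetry_enabled = true
```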

## storage

| Config | Description | Default |
|---|---|---|
| block_cache_capacity_mb | DEPRECATED: This config will be deprecated in the future version, use storage.cache.block_cache_capacity_mb instead. | "" |
| check_compaction_result |  | false |
| compact_iter_recreate_timeout_ms |  | 600000 |
| compactor_concurrent_uploading_sst_count | The number of SSTables that the builder uploads concurrently. | "" |
| compactor_fast_max_compact_delete_ratio |  | 40 |
| compactor_fast_max_compact_task_size |  | 2147483648 |
| compactor_iter_max_io_retry_times |  | 8 |
| compactor_max_overlap_sst_count |  | 64 |
| compactor_max_preload_meta_file_count | The maximum number of meta files that can be preloaded. If the number of meta files exceeds this value, the compactor will try to compute parallelism only through SstableInfo, no longer preloading SstableMeta. This is to prevent the compactor from consuming too much memory, but it may cause the compactor to be less efficient. | 32 |
| compactor_max_sst_key_count |  | 2097152 |
| compactor_max_sst_size |  | 536870912 |
| compactor_max_task_multiplier | Compactor calculates the maximum number of tasks that can be executed on the node based on worker_num and compactor_max_task_multiplier. max_pull_task_count = worker_num * compactor_max_task_multiplier | 3.0 |
| compactor_memory_available_proportion | The percentage of memory available when compactor is deployed separately. non_reserved_memory_bytes = system_memory_available_bytes * compactor_memory_available_proportion | 0.8 |
| compactor_memory_limit_mb |  | "" |
| disable_remote_compactor |  | false |
| enable_fast_compaction |  | true |
| high_priority_ratio_in_percent | DEPRECATED: This config will be deprecated in the future version, use storage.cache.block_cache_eviction.high_priority_ratio_in_percent with storage.cache.block_cache_eviction.algorithm = "Lru" instead. | "" |
| iceberg_compaction_enable_dynamic_size_estimation | Whether to enable dynamic size estimation for iceberg compaction. | true |
| iceberg_compaction_enable_heuristic_output_parallelism | Whether to enable heuristic output parallelism in iceberg compaction. | false |
| iceberg_compaction_enable_validate |  | false |
| iceberg_compaction_max_concurrent_closes | Maximum number of concurrent file close operations. | 8 |
| iceberg_compaction_max_file_count_per_partition |  | 32 |
| iceberg_compaction_max_record_batch_rows |  | 1024 |
| iceberg_compaction_min_group_file_count |  | "" |
| iceberg_compaction_min_group_size_mb |  | "" |
| iceberg_compaction_min_size_per_partition_mb |  | 1024 |
| iceberg_compaction_pending_parallelism_budget_multiplier | Multiplier for pending waiting parallelism budget for iceberg compaction task queue. Effective pending budget = ceil(max_task_parallelism * multiplier). Default 4.0. Set < 1.0 to reduce buffering (may increase PullTask RPC frequency); set higher to batch more tasks. | 4.0 |
| iceberg_compaction_size_estimation_smoothing_factor | The smoothing factor for size estimation in iceberg compaction. | 0.3 |
| iceberg_compaction_target_binpack_group_size_mb |  | 102400 |
| iceberg_compaction_task_parallelism_ratio | The ratio of iceberg compaction max parallelism to the number of CPU cores | 4.0 |
| iceberg_compaction_write_parquet_max_row_group_rows | DEPRECATED: This config will be deprecated in the future version. Use sink config compaction.write_parquet_max_row_group_rows instead. | 102400 |
| imm_merge_threshold | The threshold for the number of immutable memtables to merge to a new imm. | 0 |
| max_cached_recent_versions_number |  | 60 |
| max_concurrent_compaction_task_number |  | 16 |
| max_prefetch_block_number | The maximum number of blocks to prefetch. | 16 |
| max_preload_io_retry_times |  | 3 |
| max_preload_wait_time_mill |  | 0 |
| max_version_pinning_duration_sec |  | 10800 |
| mem_table_spill_threshold | The spill threshold for mem table. | 4194304 |
| meta_cache_capacity_mb | DEPRECATED: This config will be deprecated in the future version, use storage.cache.meta_cache_capacity_mb instead. | "" |
| min_sst_size_for_streaming_upload | The minimum SSTable size for enabling streaming upload. | 33554432 |
| min_sstable_size_mb |  | 32 |
| object_store | Object storage configuration: 1. general configuration; 2. backend-specific configuration; 3. retry and timeout configuration. |  |
| prefetch_buffer_capacity_mb | The maximum memory usage for large queries. | "" |
| share_buffer_compaction_worker_threads_number | The number of worker threads of the dedicated tokio runtime for share buffer compaction. 0 means use tokio's default value (number of CPU cores). | 4 |
| share_buffer_upload_concurrency | Number of tasks shared buffer can upload in parallel. | 8 |
| share_buffers_sync_parallelism | Parallelism for syncing shared buffers into L0 SST. Should NOT be 0. | 1 |
| shared_buffer_capacity_mb | Configure the maximum shared buffer size in MB explicitly. Writes attempting to exceed the capacity will stall until there is enough space. The overridden value will only be effective if: 1. block_cache_capacity_mb and meta_cache_capacity_mb are also configured explicitly. 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| shared_buffer_flush_ratio | The shared buffer starts flushing data to the object store when the ratio of memory usage to the shared buffer capacity exceeds this ratio. | 0.8 |
| shared_buffer_min_batch_flush_size_mb | The minimum total flush size of a shared buffer spill. When a shared buffer spill is triggered, the total flush size across multiple epochs should be at least this size. | 800 |
| shorten_block_meta_key_threshold | If set, block metadata keys will be shortened when their length exceeds this threshold. This reduces SSTable metadata size by storing only the minimal distinguishing prefix. - None: Disabled (default) - Some(n): Only shorten keys with length >= n bytes | "" |
| sst_skip_bloom_filter_in_serde | SST serde happens when an SST meta is written to the meta disk cache. Excluding the bloom filter from serde reduces the meta disk cache entry size and the disk I/O throughput, at the cost of making the bloom filter useless. | false |
| sstable_id_remote_fetch_number | The number of SST ids fetched from meta per RPC. | 10 |
| table_info_statistic_history_times | Deprecated: The window size of table info statistic history. | 240 |
| time_travel_version_cache_capacity |  | 10 |
| vector_file_block_size_kb |  | 1024 |
| write_conflict_detection_enabled | Whether to enable write conflict detection | true |
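Note the coupling spelled out in the shared_buffer_capacity_mb entry: an explicit shared buffer size only takes effect when the block and meta cache capacities are also set explicitly, and the three together must stay within 0.3 * non-reserved memory. A sketch (section names assumed; the cache entries live under storage.cache per the deprecation notes above):

```toml
[storage]
shared_buffer_capacity_mb = 2048   # only effective with the two caches below set

[storage.cache]
block_cache_capacity_mb = 4096
meta_cache_capacity_mb = 1024
```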

## storage.cache

| Config | Description | Default |
|---|---|---|
| block_cache_capacity_mb | Configure the capacity of the block cache in MB explicitly. The overridden value will only be effective if: 1. meta_cache_capacity_mb and shared_buffer_capacity_mb are also configured explicitly. 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| block_cache_shard_num | Configure the number of shards in the block cache explicitly. If not set, the shard number will be determined automatically based on cache capacity. | "" |
| meta_cache_capacity_mb | Configure the capacity of the meta cache in MB explicitly. The overridden value will only be effective if: 1. block_cache_capacity_mb and shared_buffer_capacity_mb are also configured explicitly. 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| meta_cache_shard_num | Configure the number of shards in the meta cache explicitly. If not set, the shard number will be determined automatically based on cache capacity. | "" |
| vector_block_cache_capacity_mb |  | 16 |
| vector_block_cache_shard_num |  | 16 |
| vector_meta_cache_capacity_mb |  | 16 |
| vector_meta_cache_shard_num |  | 16 |

## storage.cache.refill

| Config | Description | Default |
|---|---|---|
| concurrency | Inflight data cache refill tasks. | 10 |
| data_refill_levels | SSTable levels to refill. | [] |
| meta_refill_concurrency | Inflight meta cache refill tasks limit. 0 for unlimited. | 0 |
| recent_filter_layers | Recent filter layer count. | 6 |
| recent_filter_rotate_interval_ms | Recent filter layer rotate interval. | 10000 |
| recent_filter_shards | Recent filter layer shards. | 16 |
| skip_recent_filter | Skip checking the recent filter on data refill. This option is suitable for a single compute node or for debugging. | false |
| threshold | Data cache refill unit admission ratio. Only unit whose blocks are admitted above the ratio will be refilled. | 0.5 |
| timeout_ms | Cache refill maximum timeout to apply version delta. | 6000 |
| unit | Block count that a data cache refill request fetches. | 64 |

## storage.data_file_cache

| Config | Description | Default |
|---|---|---|
| blob_index_size_kb | Set the blob index size for each blob. A larger blob index size can hold more blob entries, but it will also increase the I/O size of each blob part write. NOTE: - The size will be aligned up to a multiple of 4 KiB. - Modifying this configuration will invalidate all existing file cache data. Default: 16 KiB. | 16 |
| capacity_mb |  | 1024 |
| compression |  | "None" |
| dir |  | "" |
| fifo_probation_ratio |  | 0.1 |
| file_capacity_mb |  | 64 |
| flush_buffer_threshold_mb |  | "" |
| flushers |  | 4 |
| indexer_shards |  | 64 |
| insert_rate_limit_mb | Deprecated soon. Please use throttle to do I/O throttling instead. | 0 |
| reclaimers |  | 4 |
| recover_concurrency |  | 8 |
| recover_mode | Recover mode. Options: - "None": Do not recover disk cache. - "Quiet": Recover disk cache and skip errors. - "Strict": Recover disk cache and panic on errors. For more details, see [RecoverMode::None], [RecoverMode::Quiet] and [RecoverMode::Strict]. | "Quiet" |
| runtime_config |  |  |
| throttle |  |  |
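For example, the data file cache is configured by pointing dir at a local disk and sizing it; a sketch with a hypothetical mount point (all other values are the defaults above):

```toml
[storage.data_file_cache]
dir = "/mnt/risingwave/data-cache"  # hypothetical path; an empty dir leaves the cache disabled
capacity_mb = 1024
file_capacity_mb = 64
recover_mode = "Quiet"              # skip errors when recovering the cache
```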

## storage.meta_file_cache

| Config | Description | Default |
|---|---|---|
| blob_index_size_kb | Set the blob index size for each blob. A larger blob index size can hold more blob entries, but it will also increase the I/O size of each blob part write. NOTE: - The size will be aligned up to a multiple of 4 KiB. - Modifying this configuration will invalidate all existing file cache data. Default: 16 KiB. | 16 |
| capacity_mb |  | 1024 |
| compression |  | "None" |
| dir |  | "" |
| fifo_probation_ratio |  | 0.1 |
| file_capacity_mb |  | 64 |
| flush_buffer_threshold_mb |  | "" |
| flushers |  | 4 |
| indexer_shards |  | 64 |
| insert_rate_limit_mb | Deprecated soon. Please use throttle to do I/O throttling instead. | 0 |
| reclaimers |  | 4 |
| recover_concurrency |  | 8 |
| recover_mode | Recover mode. Options: - "None": Do not recover disk cache. - "Quiet": Recover disk cache and skip errors. - "Strict": Recover disk cache and panic on errors. For more details, see [RecoverMode::None], [RecoverMode::Quiet] and [RecoverMode::Strict]. | "Quiet" |
| runtime_config |  |  |
| throttle |  |  |

## streaming

| Config | Description | Default |
|---|---|---|
| actor_runtime_worker_threads_num | The thread number of the streaming actor runtime in the compute node. The default value is decided by tokio. | "" |
| async_stack_trace | Enable async stack tracing through await-tree for risectl. | "ReleaseVerbose" |
| in_flight_barrier_nums | The maximum number of barriers in-flight in the compute nodes. | 10000 |
| unique_user_stream_errors | Max unique user stream errors per actor | 10 |
| unsafe_disable_strict_consistency | Disable strict stream consistency checks. | false |
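A `[streaming]` fragment with the defaults above, for reference (section name assumed from the layout of this page):

```toml
[streaming]
in_flight_barrier_nums = 10000             # max in-flight barriers per compute node
async_stack_trace = "ReleaseVerbose"       # await-tree tracing for risectl
unsafe_disable_strict_consistency = false  # keep strict consistency checks on
```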

## streaming.developer

| Config | Description | Default |
|---|---|---|
| aggressive_noop_update_elimination | Eliminate unnecessary updates aggressively, even if it impacts performance. Enable this only if it's confirmed that no-op updates are causing significant streaming amplification. | false |
| chunk_size | The maximum size of the chunk produced by executor at a time. | 256 |
| compute_client_config |  |  |
| connector_message_buffer_size | The capacity of the chunks in the channel that connects between ConnectorSource and SourceExecutor. | 16 |
| default_enable_mem_preload_state_table | Whether to enable, by default, preloading all rows in memory for state tables. If true, all capable state tables will preload their state into memory. | false |
| dml_channel_initial_permits | The initial permits for a dml channel, i.e., the maximum row count that can be buffered in the channel. | 32768 |
| enable_actor_tokio_metrics | Actor tokio metrics is enabled if enable_actor_tokio_metrics is set or metrics level >= Debug. | true |
| enable_arrangement_backfill | Enable arrangement backfill. If false, arrangement backfill is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_arrangement_backfill (default: true). | true |
| enable_auto_schema_change | A flag to allow disabling automatic schema change handling. | true |
| enable_executor_row_count | Set to true to enable per-executor row count metrics. This will produce a lot of timeseries and might affect the prometheus performance. If you only need actor input and output rows data, see stream_actor_in_record_cnt and stream_actor_out_record_cnt instead. | false |
| enable_explain_analyze_stats | Enable / disable profiling stats used by EXPLAIN ANALYZE. | true |
| enable_shared_source | Enable shared source. If false, shared source is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_shared_source (default: true). | true |
| enable_snapshot_backfill | Enable snapshot backfill. If false, snapshot backfill is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_snapshot_backfill (default: true). | true |
| enable_state_table_vnode_stats_pruning | When enabled, vnode stats pruning is applied in production. When disabled, vnode stats pruning is in dry-run mode: we still maintain vnode stats and verify that pruning would be correct, but we don't actually use the pruning results — we still use cache and storage to fulfill the read. This is useful for validating the correctness of vnode stats pruning before enabling it in production. | false |
| enable_vnode_key_stats_for_materialize | Whether MaterializeExecutor enables vnode key stats for its state table. | false |
| exchange_batched_permits | The permits that are batched to add back, for reducing the backward AddPermits messages in remote exchange. | 256 |
| exchange_concurrent_barriers | The maximum number of concurrent barriers in an exchange channel. | 1 |
| exchange_concurrent_dispatchers | The concurrency for dispatching messages to different downstream jobs. - 1 means no concurrency, i.e., dispatch messages to downstream jobs one by one. - 0 means unlimited concurrency. | 0 |
| exchange_connection_pool_size | The number of the connections for streaming remote exchange between two nodes. If not specified, the value of server.connection_pool_size will be used. | 1 |
| exchange_initial_permits | The initial permits that a channel holds, i.e., the maximum row count can be buffered in the channel. | 2048 |
| hash_agg_max_dirty_groups_heap_size | The max heap size of dirty groups of HashAggExecutor. | 67108864 |
| hash_join_entry_state_max_rows | Configure the system-wide cache row cardinality of hash join. For example, if this is set to 1000, it means we can have at most 1000 rows in cache. | 30000 |
| high_join_amplification_threshold | If number of hash join matches exceeds this threshold number, it will be logged. | 2048 |
| iceberg_fetch_batch_size | IcebergFetchExecutor: The number of files the executor will fetch concurrently in a batch. | 1024 |
| iceberg_list_interval_sec | IcebergListExecutor: The interval in seconds for Iceberg source to list new files. | 10 |
| iceberg_sink_positional_delete_cache_size | IcebergSink: The size of the cache for positional delete in the sink. | 1024 |
| iceberg_sink_write_parquet_max_row_group_rows | IcebergSink: The maximum number of rows in a row group when writing Parquet files. | 100000 |
| join_encoding_type | Determine which encoding will be used to encode join rows in operator cache. | "memory_optimized" |
| materialize_force_overwrite_on_no_check | When enabled, materialized views using default NoCheck conflict behavior will be forced to use Overwrite. Useful to avoid propagating inconsistent changelog downstream. | false |
| max_barrier_batch_size | The maximum number of consecutive barriers allowed in a message when sent between actors. | 1024 |
| max_concurrent_kv_log_store_historical_read | The maximum number of kv log store readers that can concurrently read historical data (i.e., from the state store) during initialization. A reader is considered "initializing" until it has read at least one row from the historical stream or the stream returns empty. Set to 0 to disable the limit (unlimited concurrency). | 0 |
| mem_preload_state_table_ids_blacklist | The list of state table ids to disable preloading all rows in memory for state table. Only takes effect when default_enable_mem_preload_state_table is true. | [] |
| mem_preload_state_table_ids_whitelist | The list of state table ids to enable preloading all rows in memory for state table. Only takes effect when default_enable_mem_preload_state_table is false. | [] |
| memory_controller_eviction_factor_aggressive |  | 2.0 |
| memory_controller_eviction_factor_graceful |  | 1.5 |
| memory_controller_eviction_factor_stable |  | 1.0 |
| memory_controller_sequence_tls_lag |  | 32 |
| memory_controller_sequence_tls_step |  | 128 |
| memory_controller_threshold_aggressive |  | 0.9 |
| memory_controller_threshold_graceful |  | 0.81 |
| memory_controller_threshold_stable |  | 0.72 |
| memory_controller_update_interval_ms |  | 100 |
| now_progress_ratio |  | "" |
| over_window_cache_policy | Cache policy for partition cache in streaming over window. Can be full, recent, recent_first_n or recent_last_n. | "full" |
| refresh_scheduler_interval_sec | The interval in seconds for the refresh scheduler to check and trigger scheduled refreshes. | 60 |
| snapshot_iter_rebuild_interval_secs | The interval in seconds to rebuild snapshot iterators during snapshot backfill. | 600 |
| switch_jdbc_pg_to_native | When true, all jdbc sinks with connector='jdbc' and jdbc.url="jdbc:postgresql://..." will be switched from jdbc postgresql sinks to rust native (connector='postgres') sinks. | false |
| sync_log_store_buffer_size | The max buffer size for sync logstore, before we start flushing. | 2048 |
| sync_log_store_pause_duration_ms | The timeout for reading from the buffer of the sync log store on barrier. Every epoch we will attempt to read the full buffer of the sync log store. If we hit the timeout, we will stop reading and continue. | 64 |
| topn_cache_min_capacity | Minimum cache size for TopN cache per group key. | 10 |
| unsafe_extreme_cache_size | Limit number of the cached entries in an extreme aggregation call. | 10 |

## system

| Config | Description | Default |
|---|---|---|
| adaptive_parallelism_strategy | The strategy for Adaptive Parallelism. | "AUTO" |
| backup_storage_directory | Remote directory for storing snapshots. | "" |
| backup_storage_url | Remote storage url for storing snapshots. | "" |
| barrier_interval_ms | The interval of periodic barrier. | 1000 |
| block_size_kb | The size, in KB, of each block in an SST. | 64 |
| bloom_false_positive | False positive probability of bloom filter. | 0.001 |
| checkpoint_frequency | There will be a checkpoint for every n barriers. | 1 |
| data_directory | Remote directory for storing data and metadata objects. | "" |
| enable_tracing | Whether to enable distributed tracing. | false |
| enforce_secret | Whether to enforce secret on cloud. | false |
| license_key | The license key to activate enterprise features. | "" |
| max_concurrent_creating_streaming_jobs | Max number of concurrent creating streaming jobs. | 1 |
| parallel_compact_size_mb | The size of parallel task for one compact/flush job. | 512 |
| pause_on_next_bootstrap | Whether to pause all data sources on next bootstrap. | false |
| per_database_isolation | Whether per-database isolation is enabled. | true |
| sstable_size_mb | Target size of the Sstable. | 256 |
| state_store | URL for the state store. | "" |
| time_travel_retention_ms | The data retention period for time travel. | 600000 |
| use_new_object_prefix_strategy | Whether to split object prefix. | "" |
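These system parameters take their initial values from the configuration file, assumed here to be a `[system]` section; the pause_on_next_bootstrap_offline entry above shows they can later be changed at runtime with alter system set. A sketch with hypothetical storage values:

```toml
[system]
barrier_interval_ms = 1000
checkpoint_frequency = 1                # checkpoint every barrier
state_store = "hummock+s3://my-bucket"  # hypothetical state store URL
data_directory = "hummock_001"          # hypothetical object-store prefix
```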

## udf

| Config | Description | Default |
|---|---|---|
| enable_embedded_javascript_udf | Allow embedded JS UDFs to be created. | true |
| enable_embedded_python_udf | Allow embedded Python UDFs to be created. | false |
| enable_embedded_wasm_udf | Allow embedded WASM UDFs to be created. | true |
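For instance, to additionally allow embedded Python UDFs (disabled by default) while keeping the other defaults, assuming a `[udf]` section:

```toml
[udf]
enable_embedded_javascript_udf = true
enable_embedded_python_udf = true   # opt in; false by default
enable_embedded_wasm_udf = true
```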