# RisingWave System Configurations

Source: `src/config/docs.md`

This page is automatically generated by `./risedev generate-example-config`.

## batch

| Config | Description | Default |
|--------|-------------|---------|
| distributed_query_limit | The max number of queries per SQL session. | "" |
| enable_barrier_read |  | false |
| enable_spill | Enable the spill-to-disk feature for batch queries. | true |
| frontend_compute_runtime_worker_threads | Frontend compute runtime worker threads. | "" |
| mask_worker_temporary_secs | The number of seconds for which a temporarily unavailable worker is masked. | 30 |
| max_batch_queries_per_frontend_node | The max number of batch queries per frontend node. | "" |
| redact_sql_option_keywords | Keywords on which SQL option redaction is based in the query log. A SQL option with a name containing any of these keywords will be redacted. | ["credential", "key", "password", "private", "secret", "token"] |
| statement_timeout_in_sec | Timeout for a batch query in seconds. | 3600 |
| worker_threads_num | The thread number of the batch task runtime in the compute node. The default value is decided by tokio. | "" |
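
The options above live in the `[batch]` section of a node's TOML configuration file. A minimal sketch, assuming the conventional `risingwave.toml` layout; the values shown are simply the documented defaults, not tuning recommendations:

```toml
# Illustrative [batch] overrides (values are the documented defaults).
[batch]
enable_spill = true                 # spill batch queries to disk under memory pressure
statement_timeout_in_sec = 3600     # abort batch queries running longer than 1 hour
mask_worker_temporary_secs = 30     # how long a temporarily unavailable worker stays masked
redact_sql_option_keywords = ["credential", "key", "password", "private", "secret", "token"]
```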

## batch.developer

| Config | Description | Default |
|--------|-------------|---------|
| chunk_size | The size of a chunk produced by RowSeqScanExecutor. | 1024 |
| compute_client_config |  |  |
| connector_message_buffer_size | The capacity of the chunks in the channel that connects ConnectorSource and SourceExecutor. | 16 |
| exchange_connection_pool_size | The number of connections for batch remote exchange between two nodes. If not specified, the value of server.connection_pool_size will be used. | "" |
| frontend_client_config |  |  |
| local_execute_buffer_size |  | 64 |
| output_channel_size | The size of the channel used for output to exchange/shuffle. | 64 |
| receiver_channel_size |  | 1000 |
| root_stage_channel_size |  | 100 |

## frontend

| Config | Description | Default |
|--------|-------------|---------|
| hba_config | Host-based authentication configuration. |  |
| max_single_query_size_bytes | A query whose size exceeds this threshold will always be rejected due to memory constraints. | 1073741824 |
| max_total_query_size_bytes | Total memory budget for running queries. | 1073741824 |
| min_single_query_size_bytes | A query whose size is under this threshold will never be rejected due to memory constraints. | 1048576 |
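
The three memory-constraint knobs interact: a query smaller than min_single_query_size_bytes is always admitted, one larger than max_single_query_size_bytes is always rejected, and admission in between depends on the total budget. A hedged sketch of the corresponding `[frontend]` section (values are the documented defaults):

```toml
[frontend]
min_single_query_size_bytes = 1048576     # 1 MiB: never rejected below this
max_single_query_size_bytes = 1073741824  # 1 GiB: always rejected above this
max_total_query_size_bytes = 1073741824   # 1 GiB total budget for running queries
```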

## meta

| Config | Description | Default |
|--------|-------------|---------|
| backend |  | "Mem" |
| cdc_table_split_init_insert_batch_size | The batch size used when persisting CDC table split initialization to the meta store. | 100 |
| cdc_table_split_init_sleep_duration_millis | The duration (in milliseconds) that CDC table split initialization yields to avoid overloading the upstream system. | 500 |
| cdc_table_split_init_sleep_interval_splits | The number of splits after which CDC table split initialization yields to avoid overloading the upstream system. | 1000 |
| compact_task_table_size_partition_threshold_high | The table-size threshold in one compact task that decides whether to partition a table into partition_vnode_count parts (applies to the default group and the materialized view group). Set it to the max value of a 64-bit number to disable this feature. | 536870912 |
| compact_task_table_size_partition_threshold_low | The table-size threshold in one compact task that decides whether to partition a table into hybrid_partition_vnode_count parts (applies to the default group and the materialized view group). Set it to the max value of a 64-bit number to disable this feature. | 134217728 |
| compaction_group_merge_dimension_threshold | The threshold for each dimension of a compaction group after merging. When dimension * compaction_group_merge_dimension_threshold >= limit, the merging job will be rejected. | 1.2 |
| compaction_task_max_heartbeat_interval_secs |  | 30 |
| compaction_task_max_progress_interval_secs |  | 600 |
| cut_table_size_limit |  | 1073741824 |
| dangerous_max_idle_secs | After the specified seconds of idle (no mview or flush), the process will exit. Mainly useful for playgrounds. | "" |
| default_parallelism | The default global parallelism for all streaming jobs when the user doesn't specify one. "Full" means use all available parallelism units; otherwise it's a number. | "Full" |
| disable_automatic_parallelism_control | Whether to disable the adaptive-scaling feature. | false |
| disable_recovery | Whether to enable fail-on-recovery. Should only be used in e2e tests. | false |
| do_not_config_object_storage_lifecycle | Whether to skip configuring the object storage bucket lifecycle that purges stale data. | false |
| enable_committed_sst_sanity_check | Enable sanity checks when SSTs are committed. | false |
| enable_compaction_deterministic | Whether to enable deterministic compaction scheduling, which disables all auto scheduling of compaction tasks. Should only be used in e2e tests. | false |
| enable_compaction_group_normalize | Whether to normalize overlapping compaction groups before regular split/merge scheduling. | false |
| enable_dropped_column_reclaim | Whether the compactor should rewrite rows to remove dropped columns. | false |
| enable_hummock_data_archive | If enabled, SSTable object files and version deltas will be retained. SSTable object files need to be deleted via full GC; version deltas need to be deleted manually. | false |
| enable_legacy_table_migration | Whether to automatically migrate legacy table fragments when meta starts. | true |
| event_log_channel_max_size | Keeps the latest N events per channel. | 10 |
| event_log_enabled |  | true |
| full_gc_interval_sec | Interval of automatic Hummock full GC. | 3600 |
| full_gc_object_limit | Max number of objects a full GC job can fetch. | 100000 |
| gc_history_retention_time_sec | Duration in seconds to retain garbage collection history data. | 21600 |
| hummock_time_travel_snapshot_interval | The interval at which a Hummock version snapshot is taken for time travel. A larger value means less storage overhead but worse query performance. | 100 |
| hummock_version_checkpoint_interval_sec | Interval of Hummock version checkpoint. | 30 |
| hybrid_partition_vnode_count | Count of partitions of tables in the default group and materialized view group. The meta node decides, according to some strategy, whether to cut file boundaries according to vnode alignment. Each partition contains aligned data of vnode_count / hybrid_partition_vnode_count consecutive virtual nodes of one state table. Set it to zero to disable this feature. | 4 |
| iceberg_gc_interval_sec | Interval of invoking Iceberg garbage collection to expire old snapshots. | 3600 |
| max_heartbeat_interval_secs | Maximum allowed heartbeat interval in seconds. | 60 |
| max_inflight_time_travel_query | Max number of inflight time travel queries. | 1000 |
| max_normalize_splits_per_round | The maximum number of normalize splits in one scheduler round. Must be greater than 0. | 4 |
| meta_leader_lease_secs |  | 30 |
| min_delta_log_num_for_hummock_version_checkpoint | The minimum number of delta logs a new checkpoint should compact; otherwise the checkpoint attempt is rejected. | 10 |
| min_sst_retention_time_sec | Objects within min_sst_retention_time_sec won't be deleted by Hummock full GC, even if they are dangling. | 21600 |
| move_table_size_limit |  | 10737418240 |
| node_num_monitor_interval_sec |  | 10 |
| parallelism_control_batch_size | The number of streaming jobs per scaling operation. | 10 |
| parallelism_control_trigger_first_delay_sec | The first delay of the parallelism control trigger. | 30 |
| parallelism_control_trigger_period_sec | The period of the parallelism control trigger. | 10 |
| partition_vnode_count | Count of partitions in a split group. Meta automatically assigns this value to every new group when it splits from the default group. Each partition contains aligned data of vnode_count / partition_vnode_count consecutive virtual nodes of one state table. | 16 |
| pause_on_next_bootstrap_offline | Whether meta should request pausing all data sources on the next bootstrap. This allows pausing the cluster on the next bootstrap in an offline way, which is important for standalone or single-node deployments where the meta node, frontend and compute may all be co-located. If the compute node enters an inconsistent state and continuously crash-loops, we may not be able to connect to the cluster to run `ALTER SYSTEM SET pause_on_next_bootstrap = true;`. Providing it in the static config gives an offline way to trigger the pause on bootstrap. | false |
| periodic_compaction_interval_sec | Schedule dynamic compaction for all compaction groups at this interval. Groups in cooldown (recently found to have no compaction work) are skipped. | 300 |
| periodic_scheduling_compaction_group_merge_interval_sec | The interval of the periodic compaction group merge job. | 600 |
| periodic_scheduling_compaction_group_split_interval_sec | The interval of the regular periodic compaction group split job. This does not disable normalize-triggered splits when enable_compaction_group_normalize is enabled. | 10 |
| periodic_space_reclaim_compaction_interval_sec | Schedule space_reclaim compaction for all compaction groups at this interval. | 3600 |
| periodic_tombstone_reclaim_compaction_interval_sec |  | 600 |
| periodic_ttl_reclaim_compaction_interval_sec | Schedule ttl_reclaim compaction for all compaction groups at this interval. | 1800 |
| protect_drop_table_with_incoming_sink | Whether to protect against dropping a table with an incoming sink. | false |
| split_group_size_limit |  | 68719476736 |
| split_group_size_ratio | Split the compaction group when its size exceeds compaction_group_config.max_estimated_group_size() * split_group_size_ratio. | 0.9 |
| table_high_write_throughput_threshold | The write-throughput threshold that triggers a group split. | 16777216 |
| table_low_write_throughput_threshold | The write-throughput threshold that triggers a group merge. | 4194304 |
| table_stat_high_write_throughput_ratio_for_split | Split the compaction group when the ratio of high-throughput statistics of the group exceeds this threshold. | 0.5 |
| table_stat_low_write_throughput_ratio_for_merge | Merge the compaction group when the ratio of low-throughput statistics of the group exceeds this threshold. | 0.7 |
| table_stat_throuput_window_seconds_for_merge | The window (in seconds) of table throughput statistic history used for merging compaction groups. | 240 |
| table_stat_throuput_window_seconds_for_split | The window (in seconds) of table throughput statistic history used for splitting compaction groups. | 60 |
| vacuum_interval_sec | Interval of invoking a vacuum job to remove stale metadata from the meta store and objects from the object store. | 30 |
| vacuum_spin_interval_ms | The spin interval inside a vacuum job. It prevents the vacuum job from monopolizing meta node resources. | 100 |
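
For example, the offline pause path described for pause_on_next_bootstrap_offline is exercised through the static config rather than SQL. A hedged sketch of a `[meta]` section (values are illustrative; the GC values are the documented defaults):

```toml
[meta]
pause_on_next_bootstrap_offline = true  # pause all sources on the next bootstrap, set offline
full_gc_interval_sec = 3600             # hourly Hummock full GC
min_sst_retention_time_sec = 21600      # keep even dangling objects for at least 6 hours
```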

## meta.compaction_config

| Config | Description | Default |
|--------|-------------|---------|
| compaction_filter_mask |  | 6 |
| disable_auto_group_scheduling |  | false |
| emergency_level0_sst_file_count |  | 2000 |
| emergency_level0_sub_level_partition |  | 256 |
| enable_emergency_picker |  | true |
| enable_optimize_l0_interval_selection |  | true |
| level0_max_compact_file_number |  | 100 |
| level0_overlapping_sub_level_compact_level_count |  | 12 |
| level0_stop_write_threshold_max_size |  | 322122547200 |
| level0_stop_write_threshold_max_sst_count |  | 5000 |
| level0_stop_write_threshold_sub_level_number |  | 128 |
| level0_sub_level_compact_level_count |  | 3 |
| level0_tier_compact_file_number |  | 12 |
| max_bytes_for_level_base |  | 536870912 |
| max_bytes_for_level_multiplier |  | 10 |
| max_compaction_bytes |  | 2147483648 |
| max_kv_count_for_xor16 |  | 262144 |
| max_l0_compact_level_count |  | 42 |
| max_level |  | 6 |
| max_overlapping_level_size |  | 268435456 |
| max_space_reclaim_bytes |  | 536870912 |
| max_sub_compaction |  | 4 |
| sst_allowed_trivial_move_max_count |  | 256 |
| sst_allowed_trivial_move_min_size |  | 4194304 |
| sub_level_max_compaction_bytes |  | 134217728 |
| target_file_size_base |  | 33554432 |
| tombstone_reclaim_ratio |  | 40 |
| vnode_aligned_level_size_threshold |  | "" |

## meta.developer

| Config | Description | Default |
|--------|-------------|---------|
| actor_cnt_per_worker_parallelism_hard_limit | Max number of actors allowed per parallelism (default = 400). CREATE MV/Table will be rejected when the number of actors exceeds this limit. | 400 |
| actor_cnt_per_worker_parallelism_soft_limit | Max number of actors allowed per parallelism (default = 100). CREATE MV/Table will receive a notice when the number of actors exceeds this limit. | 100 |
| cached_traces_memory_limit_bytes | The maximum memory usage in bytes for the tracing collector embedded in the meta node. | 134217728 |
| cached_traces_num | The number of traces to be cached in memory by the tracing collector embedded in the meta node. | 256 |
| compute_client_config |  |  |
| enable_check_task_level_overlap |  | false |
| enable_trivial_move | Compaction picker config. | true |
| frontend_client_config |  |  |
| hummock_gc_history_insert_batch_size |  | 1000 |
| hummock_time_travel_delta_fetch_batch_size | Max number of version deltas fetched from the meta store per SELECT during time travel metadata vacuum. | 100 |
| hummock_time_travel_epoch_version_insert_batch_size | Max number of epoch-to-version mappings inserted into the meta store per INSERT during time travel metadata writing. | 1000 |
| hummock_time_travel_filter_out_objects_batch_size |  | 1000 |
| hummock_time_travel_filter_out_objects_list_delta_batch_size |  | 1000 |
| hummock_time_travel_filter_out_objects_list_version_batch_size |  | 10 |
| hummock_time_travel_filter_out_objects_v1 |  | false |
| hummock_time_travel_sst_info_fetch_batch_size | Max number of SSTs fetched from the meta store per SELECT during time travel Hummock version replay. | 10000 |
| hummock_time_travel_sst_info_insert_batch_size | Max number of SSTs inserted into the meta store per INSERT during time travel metadata writing. | 100 |
| max_get_task_probe_times |  | 5 |
| max_trivial_move_task_count_per_loop |  | 256 |
| stream_client_config |  |  |
| time_travel_vacuum_interval_sec |  | 30 |
| time_travel_vacuum_max_version_count |  | 100000 |

## meta.meta_store_config

| Config | Description | Default |
|--------|-------------|---------|
| acquire_timeout_sec | Acquire timeout in seconds for a meta store connection. | 30 |
| connection_timeout_sec | Connection timeout in seconds for a meta store connection. | 10 |
| idle_timeout_sec | Idle timeout in seconds for a meta store connection. | 30 |
| max_connections | Maximum number of connections for the meta store connection pool. | 10 |
| min_connections | Minimum number of connections for the meta store connection pool. | 1 |
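
A sketch of tuning the meta store connection pool, using the section name as listed above (values are the documented defaults):

```toml
[meta.meta_store_config]
max_connections = 10      # upper bound of the connection pool
min_connections = 1       # connections kept warm
acquire_timeout_sec = 30  # fail if no connection is free within 30 s
idle_timeout_sec = 30     # close connections idle for more than 30 s
```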

## server

| Config | Description | Default |
|--------|-------------|---------|
| connection_pool_size | The default number of connections when connecting to a gRPC server. For the connections used in streaming or batch exchange, please refer to the entries in the [stream.developer] and [batch.developer] sections; this value will be used if they are not specified. | 16 |
| grpc_max_reset_stream |  | 200 |
| heap_profiling | Enable heap profile dump when memory usage is high. |  |
| heartbeat_interval_ms | The interval for periodic heartbeats from workers to the meta service. | 1000 |
| metrics_level | Controls the metrics level, similar to log level. | "Info" |
| telemetry_enabled |  | true |
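
As described above, connection_pool_size is the fallback for the exchange-specific pool sizes in [batch.developer] and [stream.developer]. A hedged `[server]` sketch (values are the documented defaults):

```toml
[server]
connection_pool_size = 16     # default gRPC connection count per peer; exchange sections fall back to this
heartbeat_interval_ms = 1000  # worker-to-meta heartbeat period
metrics_level = "Info"
```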

## storage

| Config | Description | Default |
|--------|-------------|---------|
| block_cache_capacity_mb | DEPRECATED: use storage.cache.block_cache_capacity_mb instead. | "" |
| check_compaction_result |  | false |
| compact_iter_recreate_timeout_ms |  | 600000 |
| compactor_concurrent_uploading_sst_count | The number of SSTables the builder uploads concurrently. | "" |
| compactor_fast_max_compact_delete_ratio |  | 40 |
| compactor_fast_max_compact_task_size |  | 2147483648 |
| compactor_iter_max_io_retry_times |  | 8 |
| compactor_max_overlap_sst_count |  | 64 |
| compactor_max_preload_meta_file_count | The maximum number of meta files that can be preloaded. If the number of meta files exceeds this value, the compactor computes parallelism only through SstableInfo and no longer preloads SstableMeta. This prevents the compactor from consuming too much memory, but may make it less efficient. | 32 |
| compactor_max_sst_key_count |  | 2097152 |
| compactor_max_sst_size |  | 536870912 |
| compactor_max_task_multiplier | The compactor calculates the maximum number of tasks executable on the node from worker_num and compactor_max_task_multiplier: max_pull_task_count = worker_num * compactor_max_task_multiplier. | 3.0 |
| compactor_memory_available_proportion | The proportion of memory available when the compactor is deployed separately: non_reserved_memory_bytes = system_memory_available_bytes * compactor_memory_available_proportion. | 0.8 |
| compactor_memory_limit_mb |  | "" |
| disable_remote_compactor |  | false |
| enable_fast_compaction |  | true |
| high_priority_ratio_in_percent | DEPRECATED: use storage.cache.block_cache_eviction.high_priority_ratio_in_percent with storage.cache.block_cache_eviction.algorithm = "Lru" instead. | "" |
| iceberg_compaction_enable_dynamic_size_estimation | Whether to enable dynamic size estimation for Iceberg compaction. | true |
| iceberg_compaction_enable_heuristic_output_parallelism | Whether to enable heuristic output parallelism in Iceberg compaction. | false |
| iceberg_compaction_enable_validate |  | false |
| iceberg_compaction_max_concurrent_closes | Maximum number of concurrent file close operations. | 8 |
| iceberg_compaction_max_file_count_per_partition |  | 32 |
| iceberg_compaction_max_record_batch_rows |  | 1024 |
| iceberg_compaction_min_group_file_count |  | "" |
| iceberg_compaction_min_group_size_mb |  | "" |
| iceberg_compaction_min_size_per_partition_mb |  | 1024 |
| iceberg_compaction_pending_parallelism_budget_multiplier | Multiplier for the pending parallelism budget of the Iceberg compaction task queue. Effective pending budget = ceil(max_task_parallelism * multiplier). Set it below 1.0 to reduce buffering (may increase PullTask RPC frequency); set it higher to batch more tasks. | 4.0 |
| iceberg_compaction_size_estimation_smoothing_factor | The smoothing factor for size estimation in Iceberg compaction. | 0.3 |
| iceberg_compaction_target_binpack_group_size_mb |  | 102400 |
| iceberg_compaction_task_parallelism_ratio | The ratio of Iceberg compaction max parallelism to the number of CPU cores. | 4.0 |
| iceberg_compaction_write_parquet_max_row_group_rows | DEPRECATED: use the sink config compaction.write_parquet_max_row_group_rows instead. | 102400 |
| imm_merge_threshold | The threshold for the number of immutable memtables to merge into a new imm. | 0 |
| max_cached_recent_versions_number |  | 60 |
| max_concurrent_compaction_task_number |  | 16 |
| max_prefetch_block_number | Max prefetch block number. | 16 |
| max_preload_io_retry_times |  | 3 |
| max_preload_wait_time_mill |  | 0 |
| max_version_pinning_duration_sec |  | 10800 |
| mem_table_spill_threshold | The spill threshold for the mem table. | 4194304 |
| meta_cache_capacity_mb | DEPRECATED: use storage.cache.meta_cache_capacity_mb instead. | "" |
| min_sst_size_for_streaming_upload | The minimum SSTable size for enabling streaming upload. | 33554432 |
| min_sstable_size_mb |  | 32 |
| object_store | Object storage configuration: 1. general configuration; 2. backend-specific configuration; 3. retry and timeout configuration. |  |
| prefetch_buffer_capacity_mb | Max memory usage for large queries. | "" |
| share_buffer_compaction_worker_threads_number | Worker thread count of the dedicated tokio runtime for shared buffer compaction. 0 means use tokio's default (number of CPU cores). | 4 |
| share_buffer_upload_concurrency | Number of tasks the shared buffer can upload in parallel. | 8 |
| share_buffers_sync_parallelism | Parallelism while syncing shared buffers into L0 SSTs. Should NOT be 0. | 1 |
| shared_buffer_capacity_mb | Configure the maximum shared buffer size in MB explicitly. Writes attempting to exceed the capacity will stall until there is enough space. The overridden value is only effective if: 1. block_cache_capacity_mb and meta_cache_capacity_mb are also configured explicitly; 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| shared_buffer_flush_ratio | The shared buffer starts flushing data to the object store when the ratio of memory usage to shared buffer capacity exceeds this ratio. | 0.8 |
| shared_buffer_min_batch_flush_size_mb | The minimum total flush size of a shared buffer spill. When a shared buffer spill is triggered, the total flush size across multiple epochs should be at least this size. | 800 |
| shorten_block_meta_key_threshold | If set, block metadata keys are shortened when their length exceeds this threshold. This reduces SSTable metadata size by storing only the minimal distinguishing prefix. None: disabled (default); Some(n): only shorten keys with length >= n bytes. | "" |
| sst_skip_bloom_filter_in_serde | SST serde happens when an SST meta is written to the meta disk cache. Excluding the bloom filter from serde reduces the meta disk cache entry size and disk I/O throughput, at the cost of making the bloom filter useless. | false |
| sstable_id_remote_fetch_number | Number of SST ids fetched from meta per RPC. | 10 |
| table_info_statistic_history_times | Deprecated: the window size of table info statistic history. | 240 |
| time_travel_version_cache_capacity |  | 10 |
| vector_file_block_size_kb |  | 1024 |
| write_conflict_detection_enabled | Whether to enable write conflict detection. | true |
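
Since block_cache_capacity_mb and meta_cache_capacity_mb are deprecated at this level, a hedged sketch of sizing the shared buffer the non-deprecated way; the sizes are illustrative, and per the rule above the override only takes effect when all three values are set explicitly and their sum stays under 0.3 * non-reserved memory:

```toml
[storage]
shared_buffer_capacity_mb = 1024  # illustrative; effective only with both cache sizes below set

[storage.cache]
block_cache_capacity_mb = 1024    # illustrative
meta_cache_capacity_mb = 256      # illustrative
```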

## storage.cache

| Config | Description | Default |
|--------|-------------|---------|
| block_cache_capacity_mb | Configure the capacity of the block cache in MB explicitly. The overridden value is only effective if: 1. meta_cache_capacity_mb and shared_buffer_capacity_mb are also configured explicitly; 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| block_cache_shard_num | Configure the number of shards in the block cache explicitly. If not set, the shard number is determined automatically based on cache capacity. | "" |
| meta_cache_capacity_mb | Configure the capacity of the meta cache in MB explicitly. The overridden value is only effective if: 1. block_cache_capacity_mb and shared_buffer_capacity_mb are also configured explicitly; 2. block_cache_capacity_mb + meta_cache_capacity_mb + shared_buffer_capacity_mb doesn't exceed 0.3 * non-reserved memory. | "" |
| meta_cache_shard_num | Configure the number of shards in the meta cache explicitly. If not set, the shard number is determined automatically based on cache capacity. | "" |
| vector_block_cache_capacity_mb |  | 16 |
| vector_block_cache_shard_num |  | 16 |
| vector_meta_cache_capacity_mb |  | 16 |
| vector_meta_cache_shard_num |  | 16 |

## storage.cache_refill

| Config | Description | Default |
|--------|-------------|---------|
| concurrency | Inflight data cache refill tasks. | 10 |
| data_refill_levels | SSTable levels to refill. | [] |
| meta_refill_concurrency | Inflight meta cache refill task limit. 0 for unlimited. | 0 |
| recent_filter_layers | Recent filter layer count. | 6 |
| recent_filter_rotate_interval_ms | Recent filter layer rotate interval. | 10000 |
| recent_filter_shards | Recent filter layer shards. | 16 |
| skip_recent_filter | Skip checking the recent filter on data refill. Suitable for a single compute node or for debugging. | false |
| threshold | Data cache refill unit admission ratio. Only units whose blocks are admitted above this ratio will be refilled. | 0.5 |
| timeout_ms | Maximum timeout for cache refill to apply a version delta. | 6000 |
| unit | Block count that a data cache refill request fetches. | 64 |
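
A hedged sketch of a cache refill setup, using the section name as listed above; the level list is illustrative, the other values are the documented defaults:

```toml
[storage.cache_refill]
data_refill_levels = [0, 1]  # illustrative: refill only blocks from L0/L1 SSTables
concurrency = 10             # inflight data refill tasks
timeout_ms = 6000            # give up applying a version delta after 6 s
threshold = 0.5              # refill a unit only if >= 50% of its blocks are admitted
```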

## storage.data_file_cache

| Config | Description | Default |
|--------|-------------|---------|
| blob_index_size_kb | The blob index size for each blob. A larger blob index can hold more blob entries, but also increases the I/O size of each blob part write. NOTE: the size is aligned up to a multiple of 4K; modifying this configuration invalidates all existing file cache data. | 16 |
| capacity_mb |  | 1024 |
| compression |  | "None" |
| dir |  | "" |
| fifo_probation_ratio |  | 0.1 |
| file_capacity_mb |  | 64 |
| flush_buffer_threshold_mb |  | "" |
| flushers |  | 4 |
| indexer_shards |  | 64 |
| insert_rate_limit_mb | Deprecated soon: use throttle for I/O throttling instead. | 0 |
| reclaimers |  | 4 |
| recover_concurrency |  | 8 |
| recover_mode | Recover mode. Options: "None": do not recover the disk cache; "Quiet": recover the disk cache and skip errors; "Strict": recover the disk cache and panic on errors. For details, see [RecoverMode::None], [RecoverMode::Quiet] and [RecoverMode::Strict]. | "Quiet" |
| runtime_config |  |  |
| throttle |  |  |
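
The file cache stays disabled while dir is empty. A hedged sketch enabling it; the path is hypothetical and the sizes are the documented defaults, not recommendations:

```toml
[storage.data_file_cache]
dir = "/mnt/nvme/rw-file-cache"  # hypothetical path; empty string disables the cache
capacity_mb = 1024               # total on-disk cache size
file_capacity_mb = 64            # size of each cache file
recover_mode = "Quiet"           # recover cache state on restart, skipping errors
```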

## storage.meta_file_cache

| Config | Description | Default |
|--------|-------------|---------|
| blob_index_size_kb | The blob index size for each blob. A larger blob index can hold more blob entries, but also increases the I/O size of each blob part write. NOTE: the size is aligned up to a multiple of 4K; modifying this configuration invalidates all existing file cache data. | 16 |
| capacity_mb |  | 1024 |
| compression |  | "None" |
| dir |  | "" |
| fifo_probation_ratio |  | 0.1 |
| file_capacity_mb |  | 64 |
| flush_buffer_threshold_mb |  | "" |
| flushers |  | 4 |
| indexer_shards |  | 64 |
| insert_rate_limit_mb | Deprecated soon: use throttle for I/O throttling instead. | 0 |
| reclaimers |  | 4 |
| recover_concurrency |  | 8 |
| recover_mode | Recover mode. Options: "None": do not recover the disk cache; "Quiet": recover the disk cache and skip errors; "Strict": recover the disk cache and panic on errors. For details, see [RecoverMode::None], [RecoverMode::Quiet] and [RecoverMode::Strict]. | "Quiet" |
| runtime_config |  |  |
| throttle |  |  |

## streaming

| Config | Description | Default |
|--------|-------------|---------|
| actor_runtime_worker_threads_num | The thread number of the streaming actor runtime in the compute node. The default value is decided by tokio. | "" |
| async_stack_trace | Enable async stack tracing through await-tree for risectl. | "ReleaseVerbose" |
| in_flight_barrier_nums | The maximum number of barriers in-flight in the compute nodes. | 10000 |
| unique_user_stream_errors | Max unique user stream errors per actor. | 10 |
| unsafe_disable_strict_consistency | Disable strict stream consistency checks. | false |
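
A hedged `[streaming]` sketch (values are the documented defaults):

```toml
[streaming]
in_flight_barrier_nums = 10000        # max in-flight barriers per compute node
async_stack_trace = "ReleaseVerbose"  # await-tree async stack tracing for risectl
unique_user_stream_errors = 10        # max unique user stream errors kept per actor
```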

## streaming.developer

| Config | Description | Default |
|--------|-------------|---------|
| aggressive_noop_update_elimination | Eliminate unnecessary updates aggressively, even if it impacts performance. Enable this only if it's confirmed that no-op updates are causing significant streaming amplification. | false |
| chunk_size | The maximum size of a chunk produced by an executor at a time. | 256 |
| compute_client_config |  |  |
| connector_message_buffer_size | The capacity of the chunks in the channel that connects ConnectorSource and SourceExecutor. | 16 |
| default_enable_mem_preload_state_table | Whether to enable preloading all rows in memory for state tables by default. If true, all capable state tables will preload their state into memory. | false |
| dml_channel_initial_permits | The initial permits of a DML channel, i.e., the maximum row count that can be buffered in the channel. | 32768 |
| enable_actor_tokio_metrics | Actor tokio metrics are enabled if enable_actor_tokio_metrics is set or the metrics level is >= Debug. | true |
| enable_arrangement_backfill | Enable arrangement backfill. If false, arrangement backfill is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_arrangement_backfill (default true). | true |
| enable_auto_schema_change | A flag to allow disabling auto schema change handling. | true |
| enable_executor_row_count | Set to true to enable per-executor row count metrics. This produces a lot of timeseries and might affect Prometheus performance. If you only need actor input and output row data, see stream_actor_in_record_cnt and stream_actor_out_record_cnt instead. | false |
| enable_explain_analyze_stats | Enable/disable profiling stats used by EXPLAIN ANALYZE. | true |
| enable_shared_source | Enable shared source. If false, shared source is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_shared_source (default true). | true |
| enable_snapshot_backfill | Enable snapshot backfill. If false, snapshot backfill is disabled even if the session variable is set. If true, it's decided by the session variable streaming_use_snapshot_backfill (default true). | true |
| enable_state_table_vnode_stats_pruning | When enabled, vnode stats pruning is applied in production. When disabled, vnode stats pruning runs in dry-run mode: vnode stats are still maintained and pruning correctness is verified, but the pruning results are not used; cache and storage still fulfill the read. This is useful for validating the correctness of vnode stats pruning before enabling it in production. | false |
| enable_vnode_key_stats_for_materialize | Whether MaterializeExecutor enables vnode key stats for its state table. | false |
| exchange_batched_permits | The permits that are batched before being added back, to reduce backward AddPermits messages in remote exchange. | 256 |
| exchange_concurrent_barriers | The maximum number of concurrent barriers in an exchange channel. | 1 |
| exchange_concurrent_dispatchers | The concurrency for dispatching messages to different downstream jobs. 1 means no concurrency, i.e., dispatch messages to downstream jobs one by one; 0 means unlimited concurrency. | 0 |
| exchange_connection_pool_size | The number of connections for streaming remote exchange between two nodes. If not specified, the value of server.connection_pool_size will be used. | 1 |
| exchange_initial_permits | The initial permits that a channel holds, i.e., the maximum row count that can be buffered in the channel. | 2048 |
| hash_agg_max_dirty_groups_heap_size | The max heap size of dirty groups of HashAggExecutor. | 67108864 |
| hash_join_entry_state_max_rows | Configure the system-wide cache row cardinality of hash join. For example, if this is set to 1000, we can have at most 1000 rows in cache. | 30000 |
| high_join_amplification_threshold | If the number of hash join matches exceeds this threshold, it will be logged. | 2048 |
| iceberg_fetch_batch_size | IcebergFetchExecutor: the number of files the executor fetches concurrently in a batch. | 1024 |
| iceberg_list_interval_sec | IcebergListExecutor: the interval in seconds for the Iceberg source to list new files. | 10 |
| iceberg_sink_positional_delete_cache_size | IcebergSink: the size of the cache for positional deletes in the sink. | 1024 |
| iceberg_sink_write_parquet_max_row_group_rows | IcebergSink: the maximum number of rows in a row group when writing Parquet files. | 100000 |
| join_encoding_type | Determines which encoding is used to encode join rows in the operator cache. | "memory_optimized" |
| materialize_force_overwrite_on_no_check | When enabled, materialized views using the default NoCheck conflict behavior are forced to use Overwrite. Useful to avoid propagating inconsistent changelogs downstream. | false |
| max_barrier_batch_size | The maximum number of consecutive barriers allowed in a message sent between actors. | 1024 |
| max_concurrent_kv_log_store_historical_read | The maximum number of kv log store readers that can concurrently read historical data (i.e., from the state store) during initialization. A reader is considered "initializing" until it has read at least one row from the historical stream or the stream returns empty. Set to 0 to disable the limit (unlimited concurrency). | 0 |
| mem_preload_state_table_ids_blacklist | The list of state table ids for which preloading all rows in memory is disabled. Only takes effect when default_enable_mem_preload_state_table is true. | [] |
| mem_preload_state_table_ids_whitelist | The list of state table ids for which preloading all rows in memory is enabled. Only takes effect when default_enable_mem_preload_state_table is false. | [] |
| memory_controller_eviction_factor_aggressive |  | 2.0 |
| memory_controller_eviction_factor_graceful |  | 1.5 |
| memory_controller_eviction_factor_stable |  | 1.0 |
| memory_controller_sequence_tls_lag |  | 32 |
| memory_controller_sequence_tls_step |  | 128 |
| memory_controller_threshold_aggressive |  | 0.9 |
| memory_controller_threshold_graceful |  | 0.81 |
| memory_controller_threshold_stable |  | 0.72 |
| memory_controller_update_interval_ms |  | 100 |
| now_progress_ratio |  | "" |
| over_window_cache_policy | Cache policy for the partition cache in streaming over window. Can be "full", "recent", "recent_first_n" or "recent_last_n". | "full" |
| refresh_scheduler_interval_sec | The interval in seconds for the refresh scheduler to check and trigger scheduled refreshes. | 60 |
| snapshot_iter_rebuild_interval_secs | The interval in seconds to rebuild snapshot iterators during snapshot backfill. | 600 |
| switch_jdbc_pg_to_native | When true, all JDBC sinks with connector='jdbc' and jdbc.url="jdbc:postgresql://..." will be switched from JDBC PostgreSQL sinks to Rust-native (connector='postgres') sinks. | false |
| sync_log_store_buffer_size | The max buffer size for the sync log store before flushing starts. | 2048 |
| sync_log_store_pause_duration_ms | The timeout for reading from the sync log store buffer on barrier. Every epoch, the full buffer of the sync log store is read; if the timeout is hit, reading stops and processing continues. | 64 |
| topn_cache_min_capacity | Minimum cache size for the TopN cache per group key. | 10 |
| unsafe_extreme_cache_size | Limit the number of cached entries in an extreme aggregation call. | 10 |

## system

| Config | Description | Default |
|--------|-------------|---------|
| adaptive_parallelism_strategy | The strategy for adaptive parallelism. | "AUTO" |
| backup_storage_directory | Remote directory for storing snapshots. | "" |
| backup_storage_url | Remote storage URL for storing snapshots. | "" |
| barrier_interval_ms | The interval of periodic barriers. | 1000 |
| block_size_kb | Size of each block in bytes in SST. | 64 |
| bloom_false_positive | False positive probability of the bloom filter. | 0.001 |
| checkpoint_frequency | There will be a checkpoint for every n barriers. | 1 |
| data_directory | Remote directory for storing data and metadata objects. | "" |
| enable_tracing | Whether to enable distributed tracing. | false |
| enforce_secret | Whether to enforce secrets on cloud. | false |
| license_key | The license key to activate enterprise features. | "" |
| max_concurrent_creating_streaming_jobs | Max number of concurrently creating streaming jobs. | 1 |
| parallel_compact_size_mb | The size of a parallel task for one compact/flush job. | 512 |
| pause_on_next_bootstrap | Whether to pause all data sources on the next bootstrap. | false |
| per_database_isolation | Whether per-database isolation is enabled. | true |
| sstable_size_mb | Target size of the SSTable. | 256 |
| state_store | URL for the state store. | "" |
| time_travel_retention_ms | The data retention period for time travel. | 600000 |
| use_new_object_prefix_strategy | Whether to split the object prefix. | "" |
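
System parameters can be given initial values in a `[system]` section of the TOML config; as the pause_on_next_bootstrap_offline entry in the meta section notes, they can also be changed at runtime with `ALTER SYSTEM SET`. A hedged sketch; the state store URL and data directory are illustrative placeholders:

```toml
[system]
barrier_interval_ms = 1000               # one barrier per second
checkpoint_frequency = 1                 # checkpoint on every barrier
state_store = "hummock+s3://my-bucket"   # illustrative state store URL
data_directory = "hummock_001"           # illustrative object-store prefix
```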

## udf

| Config | Description | Default |
|--------|-------------|---------|
| enable_embedded_javascript_udf | Allow embedded JavaScript UDFs to be created. | true |
| enable_embedded_python_udf | Allow embedded Python UDFs to be created. | false |
| enable_embedded_wasm_udf | Allow embedded WASM UDFs to be created. | true |
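
A hedged sketch of a `[udf]` section (values are the documented defaults; note that embedded Python UDFs ship disabled):

```toml
[udf]
enable_embedded_javascript_udf = true
enable_embedded_python_udf = false  # embedded Python UDFs stay disabled by default
enable_embedded_wasm_udf = true
```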