docs/en/administration/management/BE_parameters/stats_storage.md
import BEConfigMethod from '../../../_assets/commonMarkdown/BE_config_method.mdx'
import CNConfigMethod from '../../../_assets/commonMarkdown/CN_config_method.mdx'
import PostBEConfig from '../../../_assets/commonMarkdown/BE_dynamic_note.mdx'
import StaticBEConfigNote from '../../../_assets/commonMarkdown/StaticBE_config_note.mdx'
import EditionSpecificBEItem from '../../../_assets/commonMarkdown/Edition_Specific_BE_Item.mdx'
You can view the BE configuration items using the following command:

```SQL
SELECT * FROM information_schema.be_configs [WHERE NAME LIKE "%<name_pattern>%"]
```
This topic introduces the following BE configuration items:
- …`StarRocksMetrics::instance()->metrics()->trigger_hook()` and compute derived/system metrics (e.g., push/query bytes per second, maximum disk I/O utilization, maximum network send/receive rates), log memory breakdowns, and run table metrics cleanup. When `false`, those hooks are executed synchronously inside `MetricRegistry::collect` at metric collection time, which can increase metric-scrape latency. Requires a process restart to take effect.
- …`enable_metric_calculator`, and JVM metrics initialization is controlled by `enable_jvm_metrics`. Changing this value requires a restart.
- When set to `true`, `TabletUpdates::compaction()` uses the random compaction strategy (`compaction_random`) intended for chaos engineering tests. This flag forces compaction to follow a nondeterministic, random policy instead of the normal strategies (e.g., size-tiered compaction), and it takes precedence during compaction selection for the tablet. It is intended only for controlled testing: enabling it can produce unpredictable compaction order, increased I/O and CPU usage, and test flakiness. Do not enable it in production; use it only for fault-injection or chaos-test scenarios.
- The effective compaction memory limit is min(`compaction_max_memory_limit`, process_mem_limit × `compaction_max_memory_limit_percent` / 100). If `compaction_max_memory_limit` is negative (default `-1`), it falls back to the BE process memory limit derived from `mem_limit`. The percent value is clamped to [0, 100]. If the process memory limit is not set (negative), compaction memory remains unlimited (`-1`). This computed value is used to initialize the `_compaction_mem_tracker`. See also `compaction_max_memory_limit_percent` and `compaction_memory_limit_per_worker`.
- The effective limit is the smaller of `compaction_max_memory_limit` and (process memory limit × this percent / 100). If this value is < 0 or > 100, it is treated as 100. If `compaction_max_memory_limit` < 0, the process memory limit is used instead. The calculation also considers the BE process memory derived from `mem_limit`. Combined with `compaction_memory_limit_per_worker` (the per-worker cap), this setting controls the total compaction memory available and therefore affects compaction concurrency and OOM risk.
- …`ExecEnv::agent_server()->get_thread_pool(TTaskType::CREATE)->update_max_threads(...)`. Increase this value to raise concurrent tablet creation throughput (useful during bulk loads or partition creation); decreasing it throttles concurrent create operations. Raising the value increases CPU, memory, and I/O concurrency and may cause contention; the thread pool enforces at least one thread, so values less than 1 have no practical effect.
- …`dictionary_encoding_ratio` and scans the chunk's distinct key count; if the distinct count exceeds `max_card`, the writer chooses `PLAIN_ENCODING`. The check is performed only when the chunk size passes `dictionary_speculate_min_chunk_size` (and when `row_count` > `dictionary_min_rowcount`). A higher value favors dictionary encoding (tolerating more distinct keys); a lower value causes earlier fallback to plain encoding. A value of `1.0` effectively forces dictionary encoding, because the distinct count can never exceed `row_count`.
- …`dictionary_min_rowcount`, chooses `DICT_ENCODING` only if `distinct_count` ≤ `max_card`; otherwise it falls back to `BIT_SHUFFLE`. A value of `0` (default) disables non-string dictionary encoding. This parameter is analogous to `dictionary_encoding_ratio` but applies to non-string columns. Use values in (0, 1]: smaller values restrict dictionary encoding to lower-cardinality columns and reduce dictionary memory and I/O overhead.
- …`PageBuilderOptions::dict_page_size` in the BE rowset code and controls how many dictionary entries can be stored in a single dictionary page. Increasing this value can improve the compression ratio of dictionary-encoded columns by allowing larger dictionaries, but larger pages consume more memory during write/encode and can increase I/O and latency when reading or materializing pages.
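The compaction memory-limit arithmetic described above (the min(), the percent clamp, and the negative-value fallbacks) can be sketched in Python. This is an illustrative model of the documented rules only, not the BE's actual C++ implementation, and the function name is made up for the example:

```python
def effective_compaction_mem_limit(compaction_max_memory_limit: int,
                                   compaction_max_memory_limit_percent: int,
                                   process_mem_limit: int) -> int:
    """Illustrative model of the documented compaction memory-limit rules.

    Returns -1 to mean "unlimited", mirroring the description above.
    """
    # A percent value outside [0, 100] is treated as 100.
    percent = compaction_max_memory_limit_percent
    if percent < 0 or percent > 100:
        percent = 100

    # If the process memory limit is not set (negative),
    # compaction memory remains unlimited.
    if process_mem_limit < 0:
        return -1

    percent_limit = process_mem_limit * percent // 100

    # A negative compaction_max_memory_limit falls back to
    # the BE process memory limit.
    if compaction_max_memory_limit < 0:
        return min(process_mem_limit, percent_limit)

    return min(compaction_max_memory_limit, percent_limit)
```

For example, with a 1000-byte process limit and a 50 percent cap, the default `compaction_max_memory_limit = -1` yields an effective limit of 500 bytes.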
- Set this parameter conservatively for large-memory, write-heavy workloads, and avoid excessively large values to prevent runtime performance degradation.
- The download task fails if the download speed is lower than this value within the time span specified by `download_low_speed_time`.
- The download task fails if the download speed is lower than `download_low_speed_limit_kbps` within the time span specified in this configuration item.
- A value of `0` indicates setting the value to the number of CPU cores on the machine where the BE resides.
- A value of `0` indicates half of the CPU cores in the node.
- `true` indicates that the Event-based Compaction Framework is enabled, and `false` indicates that it is disabled. Enabling the Event-based Compaction Framework can greatly reduce the overhead of compaction in scenarios where there are many tablets or where a single tablet has a large amount of data.
- `true` indicates that new loading processes will be allowed, and `false` indicates that they will be rejected.
- `true` indicates that the Size-tiered Compaction strategy is enabled, and `false` indicates that it is disabled.
- `true` indicates that the Size-tiered Compaction strategy is enabled, and `false` indicates that it is disabled.
- When set to `false` (default), the BE treats any broken entry in `storage_root_path` or `spill_local_storage_dir` as fatal and aborts startup. When `true`, StarRocks skips (logs a warning and removes) any storage path that fails `check_datapath_rw` or fails parsing, so the BE can continue starting with the remaining healthy paths. Note: if all configured paths are removed, the BE still exits. Enabling this can mask misconfigured or failed disks and cause data on ignored paths to be unavailable; monitor logs and disk health accordingly.
- If `enable_new_load_on_memory_limit_exceeded` is set to `false` and the memory consumption of all loading processes exceeds `load_process_max_memory_limit_percent` × `load_process_max_memory_hard_limit_ratio`, new loading processes are rejected.
- …> 1.0, the strategy records positive feedback. Increasing this value raises the expected compression ratio (making the condition harder to satisfy), while lowering it makes it easier for observed compression to be considered satisfactory. Tune it to match typical data compressibility. Valid range: MIN = 1, MAX = 65537.
- …> 1.0 increments the positive counter (alpha); otherwise the negative counter (beta) is incremented. This influences whether future data will be compressed. Tune this value to reflect typical LZ4 throughput on your hardware: raising it makes it harder for the policy to classify a run as "good" (requiring a higher observed speed), while lowering it makes classification easier. Must be a positive finite number.
- `-1` indicates that no limit is imposed on the concurrency. `0` indicates disabling compaction. This parameter is mutable when the Event-based Compaction Framework is enabled.
- When the number of queueing memtables on a tablet exceeds `max_queueing_memtable_per_tablet`, writers in `LocalTabletsChannel` and `LakeTabletsChannel` block (sleep and retry) before submitting more write work. This reduces simultaneous memtable flush concurrency and peak memory use at the cost of increased latency or RPC timeouts under heavy load. Set it higher to allow more concurrent memtables (more memory and I/O burst); set it lower to limit memory pressure and increase write throttling.
- …`compaction_memory_limit_per_worker`.
- …`start_bg_threads()` spawns `_path_scan_thread_callback` (which calls `DataDir::perform_path_scan` and `perform_tmp_path_scan`) and `_path_gc_thread_callback` (which calls `DataDir::perform_path_gc_by_tablet`, `DataDir::perform_path_gc_by_rowsetid`, `DataDir::perform_delta_column_files_gc`, and `DataDir::perform_crm_gc`). The scan and GC intervals are controlled by `path_scan_interval_second` and `path_gc_check_interval_second`; CRM file cleanup uses `unused_crm_file_threshold_second`. Disable this to prevent automatic path-level cleanup (you must then manage orphaned and temporary files manually). Changing this flag requires restarting the process.
- …`unused_crm_file_threshold_second`). If set to a non-positive value, the code forces the interval to 1800 seconds (half an hour) and emits a warning.
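The positive/negative feedback rule described above can be modeled roughly as follows. This is a hedged Python sketch of the alpha/beta counting behavior (an observation whose value relative to the configured threshold exceeds 1.0 counts as positive feedback); the class and method names are assumptions for illustration, not the BE's actual API:

```python
import math


class AdaptiveCompressionFeedback:
    """Illustrative model of the described feedback rule: each observation
    increments either the positive counter (alpha) or the negative counter
    (beta), which influences whether future data will be compressed."""

    def __init__(self, threshold: float):
        # The description says the threshold must be a positive finite number.
        if not (math.isfinite(threshold) and threshold > 0.0):
            raise ValueError("threshold must be a positive finite number")
        self.threshold = threshold
        self.alpha = 0  # positive feedback count
        self.beta = 0   # negative feedback count

    def record(self, observed: float) -> None:
        # Relative value > 1.0 counts as positive feedback.
        if observed / self.threshold > 1.0:
            self.alpha += 1
        else:
            self.beta += 1

    def keep_compressing(self) -> bool:
        # Compress future data while positive feedback dominates
        # (the decision rule here is a simplification for illustration).
        return self.alpha >= self.beta
```

Raising the threshold makes `record()` harder to satisfy, so fewer runs count as "good", which is the tuning behavior the description calls out.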
- Tune this to control how frequently on-disk temporary or downloaded files are scanned and removed.
- …N × `pk_index_compaction_score_ratio`.
- …`PkIndexShard` of this size and maps a tablet ID to a shard via a bitmask. Increasing this value reduces lock contention among tablets that would otherwise share the same shard, at the cost of more mutex objects and slightly higher memory usage. The value must be a power of two because the code relies on bitmask indexing. For sizing guidance, see the `tablet_map_shard_size` heuristic: total_num_of_tablets_in_BE / 512.
- A value of `0` means the value is automatically set to half of the number of CPU cores.
- …`pk_index_memtable_flush_threadpool_max_threads`. Increasing this value permits more memtable flush tasks to be buffered before execution, which can reduce immediate backpressure but increases the memory consumed by queued task objects. Decreasing it limits buffered tasks and can cause earlier backpressure or task rejections, depending on thread-pool behavior. Tune it according to available memory and the expected concurrent flush workload.
- A value of `0` means the value is automatically set to half of the number of CPU cores.
- …`pk_index_parallel_compaction_threadpool_max_threads`; increase this value to avoid task rejections when you expect many concurrent compaction tasks, but be aware that larger queues can increase memory usage and latency for queued work.
- A value of `0` means the value is automatically set to half of the number of CPU cores.
- When `enable_pk_index_eager_build` is set to `true`, the system eagerly builds PK index files only if the data generated during import or compaction exceeds this threshold. Default: 100 MB.
- …`replication_min_speed_limit_kbps` exceeds this value.
- A value of `0` indicates setting the thread number to four times the BE CPU core count.
- …≤ `size_tiered_max_compaction_level`). The value is inclusive and counts the number of distinct size tiers merged (the top level is counted as 1). Effective only when the PK size-tiered compaction strategy is enabled; raising it lets a compaction task include more levels (larger, more I/O- and CPU-intensive merges with potentially higher write amplification), while lowering it restricts merges and reduces task size and resource usage.
- …`small_dictionary_page_size`, the decoder pre-parses all string entries into an in-memory vector (`_parsed_datas`) to accelerate random access and batch reads. Raising this value causes more pages to be pre-parsed (which can reduce per-access decoding overhead and may increase effective compression for larger dictionaries), but it increases memory usage and the CPU spent on parsing; excessively large values can degrade overall performance. Tune it only after measuring the memory and access-latency trade-offs.
- …`stale_memtable_flush_time_sec` seconds will be flushed to reduce memory pressure. This behavior is only considered when memory limits are approaching (`limit_exceeded_by_ratio(70)` or higher). In `LocalTabletsChannel`, an additional path at very high memory usage (`limit_exceeded_by_ratio(95)`) may flush memtables whose size exceeds `write_buffer_size / 4`. A value of `0` disables this age-based stale-memtable flushing (immutable-partition memtables still flush immediately when idle or under high memory pressure).
- …`storage_flood_stage_usage_percent`, Load and Restore jobs are rejected. You need to set this item together with the FE configuration item `storage_usage_hard_limit_reserve_bytes` to allow the configurations to take effect.
- …`storage_flood_stage_left_capacity_bytes`, Load and Restore jobs are rejected. You need to set this item together with the FE configuration item `storage_usage_hard_limit_percent` to allow the configurations to take effect.
- …`disk_usage(0)` and computes the average usage.
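The power-of-two shard mapping mentioned above reduces to a single bitmask operation. A minimal Python sketch (illustrative only; the function name is an assumption, not the BE's code):

```python
def shard_index(tablet_id: int, shard_size: int) -> int:
    """Map a tablet ID to a shard via a bitmask.

    shard_size must be a power of two so that (shard_size - 1) is an
    all-ones mask and every shard index in [0, shard_size) is reachable.
    """
    assert shard_size > 0 and (shard_size & (shard_size - 1)) == 0, \
        "shard size must be a power of two"
    return tablet_id & (shard_size - 1)
```

With a shard size of 4096, tablet IDs that differ by a multiple of 4096 land in the same shard, which is why a larger (still power-of-two) value lowers the chance that two hot tablets contend on the same mutex.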
- Any disk whose usage is greater than (average usage + `storage_high_usage_disk_protect_ratio`) is excluded from the preferential selection pool (it does not participate in the randomized, preferential shuffle and is therefore deferred from being chosen initially). Set this to `0` to disable the protection. Values are fractional (typical range 0.0–1.0); larger values make the scheduler more tolerant of disks with higher-than-average usage.
- …`${STARROCKS_HOME}/storage`. Example: `/data1,medium:hdd;/data2,medium:ssd`.
- Separate multiple paths with semicolons (`;`).
- …add `,medium:ssd` at the end of the directory.
- …add `,medium:hdd` at the end of the directory.
- `true` indicates enabling synchronization, and `false` indicates disabling it.
- …`get_tablet_stats` RPC overhead. When disabled, StarRocks uses the approximate `num_dels` value in rowset metadata to avoid remote I/O, which may slightly overcount rows that were deleted but not yet compacted.
- …`tablet_id`, version, rowset count, accurate mode, and elapsed time.
- …`get_tablet_stats`, `get_tablet_metadatas`).
- …the smaller of `tablet_writer_open_rpc_timeout_sec` and half of the overall load timeout (that is, min(`tablet_writer_open_rpc_timeout_sec`, load_timeout_sec / 2)). Set this to balance timely failure detection (too small a value may cause premature open failures) against giving BEs enough time to initialize writers (too large a value delays error handling).
- A value > 0 sets a fixed maximum thread count; `0` (the default) makes the pool size equal to the number of CPU cores. The configured value is applied at startup (`UpdateManager::init`) and can be changed at runtime via the update-config HTTP action, which updates the pool's maximum threads. Tune this to increase apply concurrency (throughput) or to limit CPU and memory contention; the minimum thread count and idle timeout are governed by `transaction_apply_thread_pool_num_min` and `transaction_apply_worker_idle_time_ms`, respectively.
- A value of `0` indicates setting the value to the number of CPU cores on the machine where the BE resides.
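The high-usage disk protection described above can be sketched as follows. This is an illustrative Python model of the documented rule (exclude disks above average usage plus the ratio; `0` disables the protection); the function name and data shapes are assumptions, not the BE's actual API:

```python
def select_candidate_disks(usages: dict, protect_ratio: float) -> list:
    """Return the disks eligible for the preferential selection pool.

    usages maps a disk path to its fractional usage (0.0-1.0). Disks whose
    usage exceeds (average usage + protect_ratio) are excluded; a
    protect_ratio of 0 disables the protection entirely.
    """
    if not usages:
        return []
    if protect_ratio <= 0:
        return list(usages)  # protection disabled: every disk is eligible
    avg = sum(usages.values()) / len(usages)
    return [path for path, usage in usages.items()
            if usage <= avg + protect_ratio]
```

For example, with disks at 90%, 50%, and 40% usage (average 60%) and a ratio of 0.1, the 90% disk exceeds 0.7 and is deferred from initial selection.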
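The effective writer-open timeout rule above reduces to a one-line min(). A minimal sketch of the documented formula (the function name is made up for the example):

```python
def writer_open_timeout_sec(tablet_writer_open_rpc_timeout_sec: int,
                            load_timeout_sec: int) -> int:
    """Effective open timeout: the smaller of the configured RPC timeout
    and half of the overall load timeout, as described above."""
    return min(tablet_writer_open_rpc_timeout_sec, load_timeout_sec // 2)
```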