docs/en/administration/management/FE_parameters/log_server_meta.md
import FEConfigMethod from '../../../_assets/commonMarkdown/FE_config_method.mdx'
import AdminSetFrontendNote from '../../../_assets/commonMarkdown/FE_config_note.mdx'
import StaticFEConfigNote from '../../../_assets/commonMarkdown/StaticFE_config_note.mdx'
import EditionSpecificFEItem from '../../../_assets/commonMarkdown/Edition_Specific_FE_Item.mdx'
<FEConfigMethod />

After your FE is started, you can run the ADMIN SHOW FRONTEND CONFIG command on your MySQL client to check the parameter configurations. If you want to query the configuration of a specific parameter, run the following command:
```sql
ADMIN SHOW FRONTEND CONFIG [LIKE "pattern"];
```

For a detailed description of the returned fields, see ADMIN SHOW CONFIG.
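For example, the LIKE pattern can be used to narrow the output to a group of related parameters. A minimal sketch (the keyword is illustrative; any substring of a parameter name works):

```sql
-- List all audit-log-related FE parameters.
ADMIN SHOW FRONTEND CONFIG LIKE "%audit_log%";
```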
:::note

You must have administrator privileges to run cluster administration-related commands.

:::
You can configure or modify the settings of FE dynamic parameters using ADMIN SET FRONTEND CONFIG.

```sql
ADMIN SET FRONTEND CONFIG ("key" = "value");
```
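As a sketch, assuming qe_slow_log_ms is modifiable at runtime in your StarRocks version, lowering the slow-query threshold looks like this (the value 3000 is illustrative):

```sql
-- Record queries slower than 3 seconds in the slow-query audit log.
ADMIN SET FRONTEND CONFIG ("qe_slow_log_ms" = "3000");
```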
This topic describes the FE configuration items related to logging, server settings, and metadata and cluster management.
## Logging

### audit_log_delete_age

Default: `30d`. The retention period of audit log files. The default value `30d` specifies that each audit log file can be retained for 30 days. StarRocks checks each audit log file and deletes those that were generated 30 days ago.

### audit_log_dir

Default: `StarRocksFE.STARROCKS_HOME_DIR + "/log"`. The directory that stores audit log files.

### audit_log_enable_compress

Whether to compress rotated audit log files. Compression works together with the other audit log settings (audit_log_dir, audit_log_roll_interval, audit_roll_maxsize, audit_log_roll_num).

### audit_log_json_format

Whether to write audit log entries in JSON format rather than as plain strings.

### audit_log_modules

Default: `slow_query, query`. The modules whose audit logs StarRocks generates. By default, StarRocks generates audit logs for the slow_query module and the query module. The connection module is supported from v3.0. Separate the module names with a comma (,) and a space.

### audit_log_roll_interval

The time interval at which audit log files are rotated. Valid values: DAY and HOUR. If set to DAY, a suffix in the yyyyMMdd format is added to the names of audit log files. If set to HOUR, a suffix in the yyyyMMddHH format is added to the names of audit log files.

### audit_log_roll_num

The maximum number of audit log files that can be retained within each retention period specified by the audit_log_roll_interval parameter.

### bdbje_log_level

The log level applied to the com.sleepycat.je package and to the BDB JE environment file logging level (EnvironmentConfig.FILE_LOGGING_LEVEL). Accepts standard java.util.logging.Level names such as SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, ALL, OFF. Setting it to ALL enables all log messages. Increasing verbosity raises log volume and may impact disk I/O and performance; the value is read when the BDB environment is initialized, so it takes effect only after environment (re)initialization.

### big_query_log_delete_age

How long rotated big query log files (fe.big_query.log.*) are retained before automatic deletion. The value is passed to Log4j's deletion policy as the IfLastModified age: any rotated big query log whose last-modified time is older than this value will be removed. Supported suffixes include d (day), h (hour), m (minute), and s (second). Examples: 7d (7 days), 10h (10 hours), 60m (60 minutes), and 120s (120 seconds). This item works together with big_query_log_roll_interval and big_query_log_roll_num to determine which files are kept or purged.

### big_query_log_dir

Default: `Config.STARROCKS_HOME_DIR + "/log"`. The directory that stores big query log files (fe.big_query.log.*). The Log4j configuration uses this path to create a RollingFile appender for fe.big_query.log and its rotated files. Rotation and retention are governed by big_query_log_roll_interval (time-based suffix), log_roll_size_mb (size trigger), big_query_log_roll_num (maximum files), and big_query_log_delete_age (age-based deletion). Big query records are logged for queries that exceed user-defined thresholds such as big_query_log_cpu_second_threshold, big_query_log_scan_rows_threshold, or big_query_log_scan_bytes_threshold. Use big_query_log_modules to control which modules log to this file.

### big_query_log_modules

Default: `{"query"}`. The modules for which big query records are logged. The default module query produces the logger big_query.query.

### big_query_log_roll_interval

Default: `"DAY"`. The time-based rollover interval of the big_query log appender. Valid values (case-insensitive) are DAY (default) and HOUR. DAY produces a daily pattern (`"%d{yyyyMMdd}"`) and HOUR produces an hourly pattern (`"%d{yyyyMMddHH}"`). The value is combined with size-based rollover (big_query_roll_maxsize) and index-based rollover (big_query_log_roll_num) to form the RollingFile filePattern. An invalid value causes log configuration generation to fail (IOException) and may prevent log initialization or reconfiguration. Use alongside big_query_log_dir, big_query_roll_maxsize, big_query_log_roll_num, and big_query_log_delete_age.

### big_query_log_roll_num

The maximum number of big query log files retained within each period specified by big_query_log_roll_interval. This value is bound to the RollingFile appender's DefaultRolloverStrategy max attribute for fe.big_query.log; when logs roll (by time or by log_roll_size_mb), StarRocks keeps up to big_query_log_roll_num indexed files (the filePattern uses a time suffix plus index). Files beyond this count may be removed by rollover, and big_query_log_delete_age can additionally delete files by last-modified age.

### dump_log_delete_age

Default: `7d`. The retention period of dump log files. The default value `7d` specifies that each dump log file can be retained for 7 days. StarRocks checks each dump log file and deletes those that were generated 7 days ago.

### dump_log_dir

Default: `StarRocksFE.STARROCKS_HOME_DIR + "/log"`. The directory that stores dump log files.

### dump_log_modules

The modules whose dump logs StarRocks generates.

### dump_log_roll_interval

The time interval at which dump log files are rotated. Valid values: DAY and HOUR. If set to DAY, a suffix in the yyyyMMdd format is added to the names of dump log files. If set to HOUR, a suffix in the yyyyMMddHH format is added to the names of dump log files.

### dump_log_roll_num

The maximum number of dump log files that can be retained within each retention period specified by the dump_log_roll_interval parameter.

### edit_log_write_slow_log_threshold_ms

The threshold, in milliseconds, above which an edit log write is reported as slow (independent of edit_log_roll_num and commit-related settings). Metric updates still occur regardless of this threshold.

### enable_audit_sql

When set to true, the FE audit subsystem records the SQL text of statements into FE audit logs (fe.audit.log) processed by ConnectProcessor. The stored statement respects other controls: encrypted statements are redacted (AuditEncryptionChecker), sensitive credentials may be redacted or desensitized if enable_sql_desensitize_in_log is set, and digest recording is controlled by enable_sql_digest. When it is set to false, ConnectProcessor replaces the statement text with "?" in audit events; other audit fields (user, host, duration, status, slow-query detection via qe_slow_log_ms, and metrics) are still recorded. Enabling SQL audit increases forensic and troubleshooting visibility but may expose sensitive SQL content and increase log volume and I/O; disabling it improves privacy at the cost of losing full-statement visibility in audit logs.

### enable_profile_log

Whether to write query profiles (the queryDetail JSON produced by ProfileManager) to the profile log sink. This logging is performed only if enable_collect_query_detail_info is also enabled; when enable_profile_log_compress is enabled, the JSON may be gzipped before logging. Profile log files are managed by profile_log_dir, profile_log_roll_num, and profile_log_roll_interval, and are rotated/deleted according to profile_log_delete_age (supports formats like 7d, 10h, 60m, 120s). Disabling this feature stops writing profile logs (reducing disk I/O, compression CPU, and storage usage). Which queries are logged can be further filtered by profile_log_latency_threshold_ms.

### enable_qe_slow_log

Whether to write queries that exceed qe_slow_log_ms into the slow-query audit log (AuditLog.getSlowAudit).
If disabled, those slow-query entries are suppressed (regular query and connection audit logs are unaffected). The slow-audit entries follow the global audit_log_json_format setting (JSON vs. plain string). Use this flag to control the volume of slow-query audit logging independently of regular audit logging; turning it off may reduce log I/O when qe_slow_log_ms is low or workloads produce many long-running queries.

### enable_sql_desensitize_in_log

When set to true, the system replaces or hides sensitive SQL content before it is written to logs and query-detail records. Code paths that honor this configuration include ConnectProcessor.formatStmt (audit logs), StmtExecutor.addRunningQueryDetail (query details), and SimpleExecutor.formatSQL (internal executor logs). With the feature enabled, invalid SQLs may be replaced with a fixed desensitized message, credentials (user/password) are hidden, and the SQL formatter is required to produce a sanitized representation (it can also enable digest-style output). This reduces leakage of sensitive literals and credentials in audit/internal logs, but it also means logs and query details no longer contain the original full SQL text (which can affect replay or debugging).

### internal_log_delete_age

The retention period of internal log files (stored in internal_log_dir). The value is a duration string. Supported suffixes: d (day), h (hour), m (minute), s (second). Examples: 7d (7 days), 10h (10 hours), 60m (60 minutes), 120s (120 seconds). This item is substituted into the Log4j configuration as the `<IfLastModified age="..."/>` predicate used by the RollingFile Delete policy. Files whose last-modified time is earlier than this duration are removed during log rollover. Decrease this value to free disk space sooner, or increase it to retain internal materialized view or statistics logs longer.

### internal_log_dir

Default: `Config.STARROCKS_HOME_DIR + "/log"`. The directory that stores internal log files (fe.internal.log). This configuration is substituted into the Log4j configuration and determines where the InternalFile appender writes internal/materialized view/statistics logs and where per-module loggers under `internal.<module>` place their files. Ensure the directory exists, is writable, and has sufficient disk space. Log rotation and retention for files in this directory are controlled by log_roll_size_mb, internal_log_roll_num, internal_log_delete_age, and internal_log_roll_interval. If sys_log_to_console is enabled, internal logs may be written to the console instead of this directory.

### internal_log_json_format

When set to true, internal statistic/audit entries are written as compact JSON objects to the statistic audit logger. The JSON contains the keys "executeType" (InternalType: QUERY or DML), "queryId", "sql", and "time" (elapsed milliseconds). When it is set to false, the same information is logged as a single formatted text line ("statistic execute: ... | QueryId: [...] | SQL: ..."). Enabling JSON improves machine parsing and integration with log processors, but it also causes raw SQL text to be included in logs, which may expose sensitive information and increase log size.

### internal_log_modules

Default: `{"base", "statistic"}`. The modules for which internal loggers are created. Each configured module X creates a logger named `internal.<X>` with level INFO and additivity="false". Those loggers are routed to the internal appender (written to fe.internal.log) or to the console when sys_log_to_console is enabled. Use short names or package fragments as needed; the exact logger name becomes `internal.` plus the configured string. Internal log file rotation and retention follow internal_log_dir, internal_log_roll_num, internal_log_delete_age, internal_log_roll_interval, and log_roll_size_mb. Adding a module separates its runtime messages into the internal logger stream for easier debugging and audit.

### internal_log_roll_interval

The time-based rollover interval for internal log files. Valid values: HOUR and DAY. HOUR produces an hourly file pattern (`"%d{yyyyMMddHH}"`) and DAY produces a daily file pattern (`"%d{yyyyMMdd}"`), which are used by the RollingFile TimeBasedTriggeringPolicy to name rotated fe.internal.log files. An invalid value causes initialization to fail (an IOException is thrown when building the active Log4j configuration). Roll behavior also depends on related settings such as internal_log_dir, internal_roll_maxsize, internal_log_roll_num, and internal_log_delete_age.

### internal_log_roll_num

The maximum number of rotated internal log files (fe.internal.log) to retain. This value is used as the Log4j DefaultRolloverStrategy max attribute; when rollovers occur, StarRocks keeps up to internal_log_roll_num archived files and removes older ones (also governed by internal_log_delete_age). A lower value reduces disk usage but shortens log history; a higher value preserves more historical internal logs. This item works together with internal_log_dir, internal_log_roll_interval, and internal_roll_maxsize.

### log_cleaner_audit_log_min_retention_days

The minimum number of days for which the log cleaner retains audit log files.

### log_cleaner_check_interval_second

The interval, in seconds, at which the log cleaner checks whether cleaning is needed.

### log_cleaner_disk_usage_target

The target disk usage percentage that disk-utilization-based log cleaning tries to reach.

### log_cleaner_disk_usage_threshold

The disk usage percentage that triggers disk-utilization-based log cleaning.

### log_cleaner_disk_util_based_enable

Whether to enable disk-utilization-based log cleaning.

### log_plan_cancelled_by_crash_be

Whether to log the query plan (at TExplainLevel.COSTS) as a WARN entry when a query is cancelled due to a BE crash or an RpcException. The log entry includes the QueryId, the SQL, and the COSTS plan; in the ExecuteExceptionHandler path, the exception stacktrace is also logged. The logging is skipped when enable_collect_query_detail_info is enabled (the plan is then stored in the query detail); in code paths, the check is performed by verifying that the query detail is null. Note that, in ExecuteExceptionHandler, the plan is logged only on the first retry (retryTime == 0). Enabling this may increase log volume because full COSTS plans can be large.

### log_register_and_unregister_query_id

Whether to log query ID registration and deregistration messages (`"register query id = {}"` and `"deregister query id = {}"`) from QeProcessorImpl.
The log is emitted only when the query has a non-null ConnectContext and either the command is not COM_STMT_EXECUTE or the session variable isAuditExecuteStmt() is true. Because these messages are written for every query lifecycle event, enabling this feature can produce high log volume and become a throughput bottleneck in high-concurrency environments. Enable it for debugging or auditing; disable it to reduce logging overhead and improve performance.

### log_roll_size_mb

The maximum size, in MB, that a log file can reach before a size-based rollover is triggered.

### proc_profile_file_retained_days

The number of days that process profile files are retained under sys_log_dir/proc_profile. The ProcProfileCollector computes a cutoff by subtracting proc_profile_file_retained_days days from the current time (formatted as yyyyMMdd-HHmmss) and deletes profile files whose timestamp portion is lexicographically earlier than that cutoff (that is, timePart.compareTo(timeToDelete) < 0). File deletion also respects the size-based cutoff controlled by proc_profile_file_retained_size_bytes. Profile files use the prefixes cpu-profile- and mem-profile- and are compressed after collection.

### proc_profile_file_retained_size_bytes

The maximum total size, in bytes, of profile files (prefixed with cpu-profile- and mem-profile-) to keep under the profile directory. When the total size of valid profile files exceeds proc_profile_file_retained_size_bytes, the collector deletes the oldest profile files until the remaining total size is less than or equal to proc_profile_file_retained_size_bytes. Files older than proc_profile_file_retained_days are also removed regardless of size. This setting controls disk usage for profile archives and interacts with proc_profile_file_retained_days to determine deletion order and retention.

### profile_log_delete_age

The retention period of profile log files. The value is substituted into the Log4j `<IfLastModified age="..."/>` policy (via Log4jConfig) and is applied together with rotation settings such as profile_log_roll_interval and profile_log_roll_num. Supported suffixes: d (day), h (hour), m (minute), s (second). For example: 7d (7 days), 10h (10 hours), 60m (60 minutes), 120s (120 seconds).

### profile_log_dir

Default: `Config.STARROCKS_HOME_DIR + "/log"`. The directory that stores profile log files (fe.profile.log and fe.features.log are written under this directory). Rotation and retention for these files are governed by profile_log_roll_size_mb, profile_log_roll_num, and profile_log_delete_age; the timestamp suffix format is controlled by profile_log_roll_interval (supports DAY or HOUR). Because the default directory is under STARROCKS_HOME_DIR, ensure the FE process has write and rotation/delete permissions on this directory.

### profile_log_latency_threshold_ms

The minimum query latency, in milliseconds, for a profile to be written to fe.profile.log. Only queries whose execution time is greater than or equal to this value are logged. Set it to 0 to log all profiles (no threshold). Use a positive value to reduce log volume by logging only slower queries.

### profile_log_roll_interval

The time-based rollover interval of profile log files. Valid values: HOUR and DAY. HOUR produces a pattern of `"%d{yyyyMMddHH}"` (hourly time bucket) and DAY produces `"%d{yyyyMMdd}"` (daily time bucket). This value is used when computing profile_file_pattern in the Log4j configuration and only affects the time-based component of rollover file names; size-based rollover is still controlled by profile_log_roll_size_mb, and retention by profile_log_roll_num / profile_log_delete_age. Invalid values cause an IOException during logging initialization (error message: `profile_log_roll_interval config error: <value>`). Choose HOUR for high-volume profiling to limit per-file size per hour, or DAY for daily aggregation.

### profile_log_roll_num

The maximum number of rotated profile log files to retain. The value is injected into the Log4j configuration as `${profile_log_roll_num}` (for example, `<DefaultRolloverStrategy max="${profile_log_roll_num}" fileIndex="min">`). Rotations are triggered by profile_log_roll_size_mb or profile_log_roll_interval; when rotation occurs, Log4j keeps at most this many indexed files, and older index files become eligible for removal. Actual retention on disk is also affected by profile_log_delete_age and the profile_log_dir location. Lower values reduce disk usage but limit retained history; higher values preserve more historical profile logs.

### profile_log_roll_size_mb

The size-based rollover threshold, in MB, of the ProfileFile appender; when a profile log exceeds profile_log_roll_size_mb, it is rotated. Rotation can also occur by time when profile_log_roll_interval is reached; either condition triggers rollover. Combined with profile_log_roll_num and profile_log_delete_age, this item controls how many historical profile files are retained and when old files are deleted. Compression of rotated files is controlled by enable_profile_log_compress.

### qe_slow_log_ms

The threshold, in milliseconds, used to determine whether a query is a slow query. If the response time of a query exceeds this threshold, it is recorded as a slow query in the audit log.

### slow_lock_log_every_ms

The minimum interval between slow-lock log entries. The system logs a warning when a lock acquisition exceeds slow_lock_threshold_ms and suppresses additional warnings until slow_lock_log_every_ms milliseconds have passed since the last logged slow-lock event. Use a larger value to reduce log volume during prolonged contention, or a smaller value to get more frequent diagnostics. Changes take effect at runtime for subsequent checks.

### slow_lock_print_stack

Whether to include lock-owner thread stacks in logSlowLockTrace output (the "stack" array is populated via LogUtil.getStackTraceToJsonArray with start=0 and max=Short.MAX_VALUE). This configuration controls only the extra stack information for lock owners shown when a lock acquisition exceeds the threshold configured by slow_lock_threshold_ms. Enabling this feature helps debugging by giving precise thread stacks that hold the lock; disabling it reduces log volume and the CPU/memory overhead caused by capturing and serializing stack traces in high-concurrency environments.

### slow_lock_threshold_ms

The threshold, in milliseconds, beyond which a lock acquisition is reported as slow. Works together with slow_lock_log_every_ms, slow_lock_print_stack, and slow_lock_stack_trace_reserve_levels.

### sys_log_delete_age

Default: `7d`. The retention period of system log files. The default value `7d` specifies that each system log file can be retained for 7 days. StarRocks checks each system log file and deletes those that were generated 7 days ago.

### sys_log_dir

Default: `StarRocksFE.STARROCKS_HOME_DIR + "/log"`. The directory that stores system log files.

### sys_log_enable_compress

When set to true, the system appends a ".gz" postfix to rotated system log filenames so that Log4j produces gzip-compressed rotated FE system logs (for example, fe.log.*).
This value is read during Log4j configuration generation (Log4jConfig.initLogging / generateActiveLog4jXmlConfig) and controls the sys_file_postfix property used in the RollingFile filePattern. Enabling this feature reduces disk usage for retained logs but increases CPU and I/O during rollovers and changes log filenames, so tools or scripts that read logs must be able to handle .gz files. Note that audit logs use a separate configuration for compression, audit_log_enable_compress.

### sys_log_format

The format of system logs. Valid values: "plaintext" (default) and "json". The values are case-insensitive. "plaintext" configures PatternLayout with human-readable timestamps, level, thread, class.method:line, and stack traces for WARN/ERROR. "json" configures JsonTemplateLayout and emits structured JSON events (UTC timestamps, level, thread id/name, source file/method/line, message, exception stackTrace) suitable for log aggregators (ELK, Splunk). JSON output abides by sys_log_json_max_string_length and sys_log_json_profile_max_string_length for maximum string lengths.

### sys_log_json_max_string_length

The maximum length of string fields in JSON-formatted logs. When sys_log_format is set to "json", string-valued fields (for example, "message" and stringified exception stack traces) are truncated if their length exceeds this limit. The value is injected into the generated Log4j XML in Log4jConfig.generateActiveLog4jXmlConfig() and is applied to the default, warning, audit, dump, and bigquery layouts. The profile layout uses a separate configuration (sys_log_json_profile_max_string_length). Lowering this value reduces log size but can truncate useful information.

### sys_log_json_profile_max_string_length

The maximum length of string fields in JSON-formatted profile logs when sys_log_format is "json". String field values in JSON-formatted profile logs are truncated to this byte length; non-string fields are unaffected. This item is applied to the Log4jConfig JsonTemplateLayout maxStringLength attribute and is ignored when plaintext logging is used. Keep the value large enough for the full messages you need, but note that larger values increase log size and I/O.

### sys_log_level

The severity levels into which system logs are classified. Valid values: INFO, WARN, ERROR, and FATAL.

### sys_log_roll_interval

The time interval at which system log files are rotated. Valid values: DAY and HOUR. If set to DAY, a suffix in the yyyyMMdd format is added to the names of system log files. If set to HOUR, a suffix in the yyyyMMddHH format is added to the names of system log files.

### sys_log_roll_num

The maximum number of system log files that can be retained within each retention period specified by the sys_log_roll_interval parameter.

### sys_log_to_console

Default: true if the environment variable SYS_LOG_TO_CONSOLE is set to "1". When set to true, the system configures Log4j to send all logs to the console (ConsoleErr appender) instead of the file-based appenders. This value is read when generating the active Log4j XML configuration (which affects the root logger and per-module logger appender selection). Its value is captured from the SYS_LOG_TO_CONSOLE environment variable at process startup; changing it at runtime has no effect. This configuration is commonly used in containerized or CI environments where stdout/stderr log collection is preferred over writing log files.

### sys_log_verbose_modules

The modules for which StarRocks generates verbose system logs. For example, if this parameter is set to org.apache.starrocks.catalog, StarRocks generates system logs only for the catalog module. Separate the module names with a comma (,) and a space.

### sys_log_warn_modules

The modules whose WARN-level output is routed to the fe.warn.log file. Entries are inserted into the generated Log4j configuration (alongside built-in warn modules such as org.apache.kafka, org.apache.hudi, and org.apache.hadoop.io.compress) and produce logger elements like `<Logger name="..." level="WARN"><AppenderRef ref="SysWF"/></Logger>`. Fully qualified package and class prefixes (for example, "com.example.lib") are recommended to suppress noisy INFO/DEBUG output in the regular log and to allow warnings to be captured separately.

## Server

### brpc_idle_wait_max_time

The maximum time, in milliseconds, for which a bRPC client waits in the idle state.

### brpc_inner_reuse_pool

Whether the RPC client reuses its inner pool. The value is passed in BrpcProxy when constructing RpcClientOptions (via rpcOptions.setInnerResuePool(...)). When enabled (true), the RPC client reuses internal pools to reduce per-call connection creation, lowering connection churn, memory, and file-descriptor usage for FE-to-BE / LakeService RPCs. When disabled (false), the client may create more isolated pools (increasing concurrency isolation at the cost of higher resource usage).
Changing this value requires restarting the process to take effect.

### brpc_min_evictable_idle_time_ms

The minimum time, in milliseconds, that a connection may sit idle in the pool before it becomes eligible for eviction in BrpcProxy (via RpcClientOptions.setMinEvictableIdleTime). Raise this value to keep idle connections longer (reducing reconnect churn); lower it to free unused sockets faster (reducing resource usage). Tune together with brpc_connection_pool_size and brpc_idle_wait_max_time to balance connection reuse, pool growth, and eviction behavior.

### brpc_reuse_addr

Whether to enable address reuse on bRPC client sockets. Consider it together with brpc_connection_pool_size and brpc_short_connection because it affects how rapidly client sockets can be rebound and reused.

### brpc_connection_pool_retry_wait_time_ms

The wait time before a connection retry. When ChannelPool.getChannel() throws a NoSuchElementException (directly or wrapped in a RuntimeException), the retry logic sleeps for this duration before attempting to reconnect.

### cluster_name

The name of the StarRocks cluster. It is displayed as the Title on the web page.

### dns_cache_ttl_seconds

The time-to-live, in seconds, of DNS caching. This item maps to the JVM property networkaddress.cache.ttl, which controls how long the JVM caches successful DNS lookups. Set this item to -1 to allow the system to always cache the information, or 0 to disable caching. This is particularly useful in environments where IP addresses change frequently, such as Kubernetes deployments or when dynamic DNS is used.

### enable_http_async_handler

Whether to process HTTP requests asynchronously.

### enable_http_validate_headers

Whether to validate HTTP headers in the HttpServer. The default is false for backward compatibility because newer Netty versions enforce stricter header rules (https://github.com/netty/netty/pull/12760). Set it to true to enforce RFC-compliant header checks; doing so may cause malformed or nonconforming requests from legacy clients or proxies to be rejected. Changing this value requires a restart of the HTTP server to take effect.

### enable_https

Whether to enable HTTPS for the FE node.

### frontend_address

The IP address of the FE node.

### http_async_threads_num

The number of threads used to process HTTP requests asynchronously. See also max_http_sql_service_task_threads_num.

### http_backlog_num

The length of the backlog queue held by the HTTP server in the FE node.

### http_max_chunk_size

The maximum chunk size of HTTP requests. Tune together with http_max_initial_line_length, http_max_header_size, and enable_http_validate_headers.

### http_max_header_size

The maximum total size of HTTP request headers accepted by the HttpServerCodec. StarRocks passes this value to HttpServerCodec (as Config.http_max_header_size); if an incoming request's headers (names and values combined) exceed this limit, the codec rejects the request (decoder exception) and the connection/request fails. Increase it only when clients legitimately send very large headers (large cookies or many custom headers); larger values increase per-connection memory use. Tune in conjunction with http_max_initial_line_length and http_max_chunk_size. Changes require an FE restart.

### http_max_initial_line_length

The maximum length of the initial request line accepted by the HttpServerCodec used in HttpServer. The value is passed to Netty's decoder, and requests with an initial line longer than this are rejected (TooLongFrameException). Increase this only when you must support very long request URIs; larger values increase memory use and may raise exposure to malformed requests or request abuse. Tune together with http_max_header_size and http_max_chunk_size.

### http_port

The port on which the HTTP server in the FE node listens.

### http_web_page_display_hardware

Whether to collect and display hardware information on the FE web UI. Collecting this information may run indirect system commands (for example, getent passwd), which can surface sensitive system data. If you require stricter security or want to avoid executing those indirect commands on the host, set this configuration to false to disable collection and display of hardware details on the web UI.

### http_worker_threads_num

The number of worker threads used by the HTTP server to process requests.

### https_port

The port on which the HTTPS server in the FE node listens.

### max_mysql_service_task_threads_num

The maximum number of threads used by the MySQL service to process tasks.

### max_task_runs_threads_num

The maximum number of threads used to execute TaskRuns.

### memory_tracker_enable

Whether to enable FE memory usage tracking. When memory_tracker_enable is set to true, MemoryUsageTracker periodically scans registered metadata modules, updates the in-memory MemoryUsageTracker.MEMORY_USAGE map, logs totals, and causes MetricRepo to expose memory usage and object-count gauges in metrics output. Use memory_tracker_interval_seconds to control the sampling interval. Enabling this feature helps monitoring and debugging memory consumption but introduces CPU and I/O overhead and additional metric cardinality.

### memory_tracker_interval_seconds

The interval, in seconds, at which the MemoryUsageTracker daemon polls and records memory usage of the FE process and registered MemoryTrackable modules. When memory_tracker_enable is set to true, the tracker runs on this cadence, updates MEMORY_USAGE, and logs aggregated JVM and tracked-module usage.

### mysql_nio_backlog_num

The length of the backlog queue held by the MySQL server in the FE node.

### mysql_server_version

The MySQL server version returned to clients. Modifying this value affects the version string returned by `select version();` and the version variable (`show variables like 'version';`).

### mysql_service_io_threads_num

The number of threads used by the MySQL server in the FE node to process I/O events.

### mysql_service_kill_after_disconnect

Whether to kill a connection's running queries after the client disconnects. When set to true, the server immediately kills any running query for that connection and performs immediate cleanup. If it is false, the server does not kill running queries on disconnection and only performs cleanup when there are no pending request tasks, allowing long-running queries to continue after the client disconnects. Note: despite a brief comment suggesting TCP keep-alive, this parameter specifically governs post-disconnection killing behavior and should be set according to whether you want orphaned queries terminated (recommended behind unreliable or load-balanced clients) or allowed to finish.

### mysql_service_nio_enable_keep_alive

Whether to enable TCP keep-alive for MySQL service connections.

### net_use_ipv6_when_priority_networks_empty

Whether to use IPv6 addresses preferentially when priority_networks is not specified. true indicates to allow the system to use an IPv6 address preferentially when the server that hosts the node has both IPv4 and IPv6 addresses and priority_networks is not specified.

### priority_networks

The IP ranges used to select the node's IP address when the host has multiple addresses. To prefer IPv6 when this item is empty, set net_use_ipv6_when_priority_networks_empty to true.

### proc_profile_cpu_enable

When set to true, the background ProcProfileCollector collects CPU profiles using AsyncProfiler and writes HTML reports under sys_log_dir/proc_profile. Each collection run records CPU stacks for the duration configured by proc_profile_collect_time_s and uses proc_profile_jstack_depth for the Java stack depth. Generated profiles are compressed, and old files are pruned according to proc_profile_file_retained_days and proc_profile_file_retained_size_bytes.
AsyncProfiler requires the native library (libasyncProfiler.so); one.profiler.extractPath is set to STARROCKS_HOME_DIR/bin to avoid noexec issues on /tmp.

### qe_max_connection

The maximum number of connections that can be established by all users to the FE node. The default value was changed from 1024 to 4096.

### query_port

The port on which the MySQL server in the FE node listens.

### rpc_port

The port on which the Thrift server in the FE node listens.

### slow_lock_stack_trace_reserve_levels

The number of stack trace levels to reserve when slow-lock diagnostics are serialized. The value is passed to LogUtil.getStackTraceToJsonArray by QueryableReentrantReadWriteLock when producing JSON for the exclusive lock owner, the current thread, and the oldest/shared readers. Increasing this value provides more context for diagnosing slow-lock or deadlock issues at the cost of larger JSON payloads and slightly higher CPU/memory for stack capture; decreasing it reduces overhead. Note: reader entries can be filtered by slow_lock_threshold_ms when only logging slow locks.

### ssl_cipher_blacklist

The SSL cipher suites that are disallowed for encrypted connections.

### ssl_cipher_whitelist

The SSL cipher suites that are allowed for encrypted connections.

### task_runs_concurrency

The maximum number of TaskRuns that can run concurrently. TaskRunScheduler stops scheduling new runs when the current running count is greater than or equal to task_runs_concurrency, so this value caps parallel TaskRun execution across the scheduler. It is also used by MVPCTRefreshPartitioner to compute per-TaskRun partition refresh granularity. Increasing the value raises parallelism and resource usage; decreasing it reduces concurrency and makes partition refreshes larger per run. Do not set it to 0 or a negative value unless you intend to disable scheduling: 0 (or a negative value) effectively prevents new TaskRuns from being scheduled by TaskRunScheduler.

### task_runs_queue_length

The maximum number of pending TaskRuns. TaskRunManager checks the current pending count and rejects new submissions when the valid pending TaskRun count is greater than or equal to task_runs_queue_length. The same limit is rechecked before merged/accepted TaskRuns are added. Tune this value to balance memory and scheduling backlog: set it higher for large bursty workloads to avoid rejections, or lower to bound memory and reduce the pending backlog.

### thrift_backlog_num

The length of the backlog queue held by the Thrift server in the FE node.

### thrift_client_timeout_ms

The length of time, in milliseconds, after which idle Thrift client connections time out.

### thrift_rpc_max_body_size

The maximum body size of Thrift RPC messages accepted by the FE (ThriftServer). A value of -1 disables the limit (unbounded). Setting a positive value enforces an upper bound so that messages larger than this are rejected by the Thrift layer, which helps limit memory usage and mitigate oversized-request or DoS risks. Set this to a size large enough for expected payloads (large structs or batched data) to avoid rejecting legitimate requests.

### thrift_server_max_worker_threads

The maximum number of worker threads started by the Thrift server in the FE node.

### thrift_server_queue_size

The length of the pending queue of the Thrift server. When the number of threads processing requests reaches thrift_server_max_worker_threads, new requests are added to the pending queue.

## Metadata and cluster management

### alter_max_worker_queue_size

The queue size of the alter task thread pool. The pool is created via ThreadPoolManager.newDaemonCacheThreadPool in AlterHandler together with alter_max_worker_threads. When the number of pending alter tasks exceeds alter_max_worker_queue_size, new submissions are rejected and a RejectedExecutionException can be thrown (see AlterHandler.handleFinishAlterTask). Tune this value to balance memory usage and the amount of backlog you permit for concurrent alter tasks.

### alter_max_worker_threads

The maximum number of threads in the alter task thread pool (which processes AlterReplicaTask via handleFinishAlterTask). This value bounds concurrent execution of alter operations; raising it increases parallelism and resource usage, while lowering it limits concurrent alters and may become a bottleneck. The executor is created together with alter_max_worker_queue_size, and the handler scheduling uses alter_scheduler_interval_millisecond.

### automated_cluster_snapshot_interval_seconds

The interval, in seconds, at which automated cluster snapshots are taken.

### background_refresh_metadata_interval_millis

The interval, in milliseconds, between two consecutive background refreshes of cached external catalog metadata.

### background_refresh_metadata_time_secs_since_last_access_secs

The expiration time of a metadata refresh task. If a catalog has not been accessed for longer than this period, StarRocks stops refreshing its cached metadata.

### bdbje_cleaner_threads

The number of BDB JE cleaner threads. The value is read in BDBEnvironment.initConfigs and applied to EnvironmentConfig.CLEANER_THREADS using Config.bdbje_cleaner_threads. It controls parallelism for JE log cleaning and space reclamation; increasing it can speed up cleaning at the cost of additional CPU and I/O interference with foreground operations. Changes take effect only when the BDB environment is (re)initialized, so a frontend restart is required to apply a new value.

### bdbje_heartbeat_timeout_second

The amount of time, in seconds, after which the heartbeats among the leader, follower, and observer FEs in the cluster time out.

### bdbje_lock_timeout_second

The amount of time, in seconds, after which a lock in the BDB JE-based FE times out.

### bdbje_replay_cost_percent

The relative cost of replaying BDB JE log files compared with performing a network restore. The value maps to REPLAY_COST_PERCENT and is typically >100 to indicate that replay is usually more expensive than a network restore. When deciding whether to retain cleaned log files for potential replay, the system compares the replay cost multiplied by the log size against the cost of a network restore; files are removed if a network restore is judged more efficient. A value of 0 disables retention based on this cost comparison. Log files required for replicas within REP_STREAM_TIMEOUT or for any active replication are always retained.

### bdbje_replica_ack_timeout_second

The maximum amount of time, in seconds, for which the leader FE waits for ACK messages from follower FEs when metadata is written.

### bdbje_reserved_disk_size

The amount of disk space reserved by BDB JE. The value is applied to EnvironmentConfig.RESERVED_DISK in BDBEnvironment; JE's built-in default is 0 (unlimited). The StarRocks default (512 MiB) prevents JE from reserving excessive disk space for unprotected files while allowing safe cleanup of obsolete files. Tune this value on disk-constrained systems: decreasing it lets JE free more files sooner, while increasing it lets JE retain more reserved space. Changes require restarting the process to take effect.

### bdbje_reset_election_group

When set to TRUE, the FE resets the BDBJE replication group (that is, removes the information of all electable FE nodes) and starts as the leader FE. After the reset, this FE is the only member in the cluster, and other FEs can rejoin this cluster by using ALTER SYSTEM ADD/DROP FOLLOWER/OBSERVER 'xxx'. Use this setting only when no leader FE can be elected because the data of most follower FEs has been damaged.
reset_election_group is used to replace metadata_failure_recovery.black_host_connect_failures_within_timeblack_host_history_sec, only if a blacklisted BE node has fewer connection failures than the threshold set in black_host_connect_failures_within_time, it can be removed from the BE Blacklist.black_host_history_secblack_host_history_sec, only if a blacklisted BE node has fewer connection failures than the threshold set in black_host_connect_failures_within_time, it can be removed from the BE Blacklist.brpc_connection_pool_sizesetMaxTotoal and setMaxIdleSize, so it directly limits concurrent outgoing BRPC requests because each request must borrow a connection from the pool. In high concurrency scenarios increase this to avoid request queuing; increasing it raises socket and memory usage and may increase remote server load. When tuning, consider related settings such as brpc_idle_wait_max_time, brpc_short_connection, brpc_inner_reuse_pool, brpc_reuse_addr, and brpc_min_evictable_idle_time_ms. Changing this value is not hot-reloadable and requires a restart.brpc_short_connectiontrue), RpcClientOptions.setShortConnection is set and connections are closed after a request completes, reducing the number of long-lived sockets at the cost of higher connection setup overhead and increased latency. When disabled (false, the default) persistent connections and connection pooling are used. Enabling this option affects connection-pool behavior and should be considered together with brpc_connection_pool_size, brpc_idle_wait_max_time, brpc_min_evictable_idle_time_ms, brpc_reuse_addr, and brpc_inner_reuse_pool. 
Keep it disabled for typical high-throughput deployments; enable only to limit socket lifetime or when short connections are required by network policy.catalog_try_lock_timeout_mscheckpoint_only_on_leadertrue, the CheckpointController will only select the leader FE as the checkpoint worker; when false, the controller may pick any frontend and prefers nodes with lower heap usage. With false, workers are sorted by recent failure time and heapUsedPercent (the leader is treated as having infinite usage to avoid selecting it). For operations that require cluster snapshot metadata, the controller already forces leader selection regardless of this flag. Enabling true centralizes checkpoint work on the leader (simpler but increases leader CPU/memory and network load); keeping it false distributes checkpoint load to less-loaded FEs. This setting affects worker selection and interaction with timeouts such as checkpoint_timeout_seconds and RPC settings like thrift_rpc_timeout_ms.checkpoint_timeout_secondsCheckpointController during checkpoint creation and does not change the worker's internal checkpointing behavior.db_used_data_quota_update_interval_secsdrop_backend_after_decommissionTRUE indicates that the BE is deleted immediately after it is decommissioned. FALSE indicates that the BE is not deleted after it is decommissioned.edit_log_portedit_log_roll_numedit_log_typeBDB.enable_background_refresh_connector_metadatatrue indicates to enable the Hive metadata cache refresh, and false indicates to disable it.enable_collect_query_detail_infoTRUE, the system collects the profile of the query. If this parameter is set to FALSE, the system does not collect the profile of the query.enable_create_partial_partition_in_batchfalse (default), StarRocks enforces that batch-created range partitions align to the standard time unit boundaries. It will reject non‑aligned ranges to avoid creating holes. 
Setting this item to true disables that alignment check and allows creating partial (non‑standard) partitions in batch, which can produce gaps or misaligned partition ranges. You should only set it to true when you intentionally need partial batch partitions and accept the associated risks.enable_internal_sqltrue, internal SQL statements executed by internal components (for example, SimpleExecutor) are preserved and written into internal audit or log messages (and can be further desensitized if enable_sql_desensitize_in_log is set). When it is set to false, internal SQL text is suppressed: formatting code (SimpleExecutor.formatSQL) returns "?" and the actual statement is not emitted to internal audit or log messages. This configuration does not change execution semantics of internal statements — it only controls logging and visibility of internal SQL for privacy or security.enable_legacy_compatibility_for_replicationtrue indicates enabling this mode.enable_show_materialized_views_include_all_task_runsfalse, StarRocks returns only the newest TaskRun per task (legacy behavior for compatibility). When it is set to true (default), TaskManager may include additional TaskRuns for the same task only when they share the same start TaskRun ID (for example, belong to the same job), preventing unrelated duplicate runs from appearing while allowing multiple statuses tied to one job to be shown. Set this item to false to restore single-run output or to surface multi-run job history for debugging and monitoring.enable_statistics_collect_profiletrue to allow StarRocks to generate query profiles for queries on system statistics.enable_table_name_case_insensitiveenable_task_history_archivelookupHistory, lookupHistoryByTaskNames, lookupLastJobOfTasks) include archived results. Archiving is performed by the FE leader and is skipped during unit tests (FeConstants.runningUnitTest). 
When enabled, in-memory expiration and forced-GC paths are bypassed (the code returns early from removeExpiredRuns and forceGC), so retention/eviction is handled by the persistent archive instead of task_runs_ttl_second and task_runs_max_history_number. When disabled, history stays in memory and is pruned by those configurations.enable_task_run_fe_evaluationtask_runs in TaskRunsSystemTable.supportFeEvaluation. FE-side evaluation is only allowed for conjunctive equality predicates comparing a column to a constant and is limited to the columns QUERY_ID and TASK_NAME. Enabling this improves performance for targeted lookups by avoiding broader scans or additional remote processing; disabling it forces the planner to skip FE evaluation for task_runs, which may reduce predicate pruning and affect query latency for those filters.heartbeat_mgr_blocking_queue_sizeheartbeat_mgr_threads_numignore_materialized_view_errortrue to allow FE to ignore the exception.ignore_meta_checkignore_task_run_history_replay_errorinformation_schema.task_runs, a corrupted or invalid JSON row will normally cause deserialization to log a warning and throw a RuntimeException. If this item is set to true, the system will catch deserialization errors, skip the malformed record, and continue processing remaining rows instead of failing the query. This will make information_schema.task_runs queries tolerant of bad entries in the _statistics_.task_run_history table. Note that enabling it will silently drop corrupted history records (potential data loss) instead of surfacing an explicit error.lock_checker_interval_secondlock_checker_enable_deadlock_check (enables deadlock checks) and slow_lock_threshold_ms (defines what constitutes a slow lock).master_sync_policyDefault: SYNC
Type: String
Unit: -
Is mutable: No
Description: The policy based on which the leader FE flushes logs to disk. This parameter is valid only when the current FE is a leader FE. Valid values:
- SYNC: When a transaction is committed, a log entry is generated and flushed to disk simultaneously.
- NO_SYNC: The generation and flushing of a log entry do not occur at the same time when a transaction is committed.
- WRITE_NO_SYNC: When a transaction is committed, a log entry is generated simultaneously but is not flushed to disk.

If you have deployed only one follower FE, we recommend that you set this parameter to SYNC. If you have deployed three or more follower FEs, we recommend that you set this parameter and replica_sync_policy both to WRITE_NO_SYNC.
Introduced in: -
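For reference, the recommendation above for clusters with three or more follower FEs corresponds to the following fe.conf fragment. This is an illustrative sketch: both items are static parameters, so you must restart each FE for the change to take effect.

```Properties
# fe.conf -- sketch for a cluster with three or more follower FEs
master_sync_policy = WRITE_NO_SYNC
replica_sync_policy = WRITE_NO_SYNC
```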
##### max_bdbje_clock_delta_ms

##### meta_delay_toleration_second

##### meta_dir

Default: StarRocksFE.STARROCKS_HOME_DIR + "/meta"

##### metadata_ignore_unknown_operation_type

Description: Whether to ignore unknown log IDs. If the value is TRUE, the FE ignores unknown log IDs. If the value is FALSE, the FE exits.

##### profile_info_format

Description: Valid values: default and json. When set to default, Profile is of the default format. When set to json, the system outputs Profile in JSON format.

##### replica_ack_policy

Default: SIMPLE_MAJORITY

Description: SIMPLE_MAJORITY specifies that a log entry is considered valid if a majority of follower FEs return ACK messages.

##### replica_sync_policy

Description: The policy based on which a follower FE flushes logs to disk. Valid values:

- SYNC: When a transaction is committed, a log entry is generated and flushed to disk simultaneously.
- NO_SYNC: The generation and flushing of a log entry do not occur at the same time when a transaction is committed.
- WRITE_NO_SYNC: When a transaction is committed, a log entry is generated simultaneously but is not flushed to disk.

##### start_with_incomplete_meta

Description: Whether to allow the FE to start from a metadata image without the corresponding BDB logs. MetaHelper.checkMetaDir() uses this flag to bypass the safety check that otherwise prevents starting from such an image; starting this way can produce stale or inconsistent metadata and should be used only for emergency recovery. RestoreClusterSnapshotMgr temporarily sets this flag to true while restoring a cluster snapshot and then rolls it back; that component also toggles bdbje_reset_election_group during restore. Do not enable this in normal operation; enable it only when recovering from corrupted BDB data or when explicitly restoring an image-based snapshot.

##### table_keeper_interval_second

Description: The run interval of the table keeper daemon, which is rescheduled when table_keeper_interval_second changes. Increase it to reduce scheduling frequency and load; decrease it for faster reaction to missing or stale history tables.

##### task_runs_ttl_second

Description: The retention period of task run records. Tune it together with task_runs_max_history_number and enable_task_history_archive for predictable retention and storage behavior.

##### task_ttl_second

Description: The validity period of a manually created (non-periodical) task, used to compute the task's expireTime (expireTime = now + task_ttl_second * 1000L). TaskRun also uses this value as an upper bound when computing a run's execute timeout: the effective execute timeout is min(task_runs_timeout_second, task_runs_ttl_second, task_ttl_second). Adjusting this value changes how long manually created tasks remain valid and can indirectly limit the maximum allowed execution time of task runs.

##### thrift_rpc_retry_times

Description: Used by ThriftRPCRequestExecutor (and callers such as NodeMgr and VariableMgr) as the loop count for retries; that is, a value of 3 allows up to three attempts including the initial try. On a TTransportException, the executor tries to reopen the connection and retries up to this count; it does not retry when the cause is a SocketTimeoutException or when the reopen fails. Each attempt is subject to the per-attempt timeout configured by thrift_rpc_timeout_ms. Increasing this value improves resilience to transient connection failures but can increase overall RPC latency and resource usage.

##### thrift_rpc_strict_mode

Description: When set to true (default), the server enforces strict Thrift encoding and version checks and honors the configured thrift_rpc_max_body_size limit; when set to false, the server accepts non-strict (legacy, lenient) message formats, which can improve compatibility with older clients but may bypass some protocol validations. Change this setting with caution on a running cluster, because it is not mutable and affects interoperability and parsing safety.

##### thrift_rpc_timeout_ms

Description: Used by ThriftConnectionPool (the frontend and backend pools) and also added to an operation's execution timeout (for example, ExecTimeout * 1000 + thrift_rpc_timeout_ms) when computing RPC call timeouts in places such as ConfigBase, LeaderOpExecutor, GlobalStateMgr, NodeMgr, VariableMgr, and CheckpointWorker. Increasing this value makes RPC calls tolerate longer network or remote processing delays; decreasing it causes faster failover on slow networks. Changing this value affects connection creation and request deadlines across the FE code paths that perform Thrift RPCs.

##### txn_latency_metric_report_groups

Description: Valid values: stream_load, routine_load, broker_load, insert, and compaction (compaction is available only for shared-data clusters). Example: "stream_load,routine_load".

##### txn_rollback_limit
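The interaction between task_ttl_second and a task run's execute timeout described above can be sketched as follows. This is a minimal illustration of the min() relationship only; the helper function name is hypothetical and does not exist in the StarRocks code base.

```python
# Sketch of how a task run's effective execute timeout is bounded,
# per the description above:
#   min(task_runs_timeout_second, task_runs_ttl_second, task_ttl_second)
# The function name is hypothetical (for illustration only).

def effective_execute_timeout(task_runs_timeout_second: int,
                              task_runs_ttl_second: int,
                              task_ttl_second: int) -> int:
    """Return the effective execute timeout (in seconds) for a task run."""
    return min(task_runs_timeout_second, task_runs_ttl_second, task_ttl_second)

# A low task_ttl_second caps the run's execute timeout even when the
# run-level timeouts are larger.
print(effective_execute_timeout(3600, 86400, 600))  # -> 600
```

This makes explicit why lowering task_ttl_second can indirectly shorten the maximum allowed execution time of task runs.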