# System variables
import VariableWarehouse from '../_assets/commonMarkdown/variable_warehouse.mdx'
StarRocks provides many system variables that can be set and modified to suit your requirements. This section describes the variables supported by StarRocks. You can view the settings of these variables by running the SHOW VARIABLES command on your MySQL client. You can also use the SET command to dynamically set or modify variables. You can make these variables take effect globally on the entire system, only in the current session, or only in a single query statement.
The variables in StarRocks are modeled on the variable sets in MySQL, but some variables are only compatible with the MySQL client protocol and do not take effect in StarRocks itself.
> **NOTE**
>
> Any user has the privilege to run SHOW VARIABLES and make a variable take effect at the session level. However, only users with the SYSTEM-level OPERATE privilege can make a variable take effect globally. Globally effective variables take effect on all future sessions (excluding the current session).
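For example, an administrator could grant the SYSTEM-level OPERATE privilege so that a user can set global variables (a sketch; the user identity here is hypothetical):

```sql
-- Allow user 'jack' to run SET GLOBAL statements.
GRANT OPERATE ON SYSTEM TO USER 'jack'@'%';
```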
If you want to make a setting change for the current session and also apply that change to all future sessions, you can make the change twice, once without the `GLOBAL` modifier and once with it. For example:

```sql
SET query_mem_limit = 137438953472; -- Apply to the current session.
SET GLOBAL query_mem_limit = 137438953472; -- Apply to all future sessions.
```
StarRocks supports three types (levels) of variables: global variables, session variables, and `SET_VAR` hints. Their hierarchical relationship is as follows:

- Global variables take effect at the global level, and can be overridden at the session and statement levels.
- Session variables take effect only on the current session.
- `SET_VAR` hints take effect only on the current query statement.

You can view all or some variables by using `SHOW VARIABLES [LIKE 'xxx']`. Example:
```sql
-- Show all variables in the system.
SHOW VARIABLES;

-- Show variables that match a certain pattern.
SHOW VARIABLES LIKE '%time_zone%';
```
You can set variables to take effect globally or only on the current session. When set to global, the new value will be used for all the future sessions, while the current session still uses the original value. When set to "current session only", the variable will only take effect on the current session.
A variable set by SET <var_name> = xxx; only takes effect for the current session. Example:
```sql
SET query_mem_limit = 137438953472;
SET forward_to_master = true;
SET time_zone = "Asia/Shanghai";
```
A variable set by SET GLOBAL <var_name> = xxx; takes effect globally. Example:
```sql
SET GLOBAL query_mem_limit = 137438953472;
```
The following variables only take effect globally. They cannot take effect for a single session, which means you must use SET GLOBAL <var_name> = xxx; for these variables. If you try to set such a variable for a single session (SET <var_name> = xxx;), an error is returned.
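For example, `activate_all_roles_on_login` is a global-only variable; the following sketch shows the difference (the exact error text varies by version):

```sql
SET GLOBAL activate_all_roles_on_login = true;  -- Succeeds.
SET activate_all_roles_on_login = true;         -- Returns an error because the variable is global-only.
```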
In addition, variable settings also support constant expressions, such as:
```sql
SET query_mem_limit = 10 * 1024 * 1024 * 1024;
SET forward_to_master = concat('tr', 'u', 'e');
```
In some scenarios, you may need to set variables specifically for certain queries. By using the SET_VAR hint, you can set session variables that will take effect only within a single statement.
StarRocks supports using `SET_VAR` in the following statements:

- SELECT
- INSERT
- UPDATE
- DELETE

`SET_VAR` can only be placed after the above keywords and must be enclosed in `/*+...*/`.
Example:
```sql
SELECT /*+ SET_VAR(query_mem_limit = 8589934592) */ name FROM people ORDER BY name;

SELECT /*+ SET_VAR(query_timeout = 1) */ sleep(3);

UPDATE /*+ SET_VAR(insert_timeout=100) */ tbl SET c1 = 2 WHERE c1 = 1;

DELETE /*+ SET_VAR(query_mem_limit = 8589934592) */
FROM my_table PARTITION p1
WHERE k1 = 3;

INSERT /*+ SET_VAR(insert_timeout = 10000000) */
INTO insert_wiki_edit
SELECT * FROM FILES(
    "path" = "s3://inserttest/parquet/insert_wiki_edit_append.parquet",
    "format" = "parquet",
    "aws.s3.access_key" = "XXXXXXXXXX",
    "aws.s3.secret_key" = "YYYYYYYYYY",
    "aws.s3.region" = "us-west-2"
);
```
You can also set multiple variables in a single statement. Example:
```sql
SELECT /*+ SET_VAR
  (
    exec_mem_limit = 515396075520,
    query_timeout = 10000000,
    batch_size = 4096,
    parallel_fragment_exec_instance_num = 32
  )
*/ * FROM TABLE;
```
You can set session variables as user properties using the ALTER USER command. This feature is supported from v3.3.3 onwards.
Example:
```sql
-- Set the session variable `query_timeout` to `600` for the user jack.
ALTER USER 'jack' SET PROPERTIES ('session.query_timeout' = '600');
```
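A sketch to verify the effect (assuming the property is applied as the session default when jack opens a new session):

```sql
-- Run as jack in a new session.
SHOW VARIABLES LIKE 'query_timeout';
```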
The variables are described in alphabetical order. Variables with the global label can only take effect globally. Other variables can take effect either globally or for a single session.
### activate_all_roles_on_login

Description: Whether to activate all roles of a StarRocks user at login.

- If enabled (`true`), all roles of the user are activated at user login. This takes precedence over the roles set by SET DEFAULT ROLE.
- If disabled (`false`), the roles set by SET DEFAULT ROLE are activated.

If you want to activate the roles assigned to you in a session, use the SET ROLE command.

Default: false

Introduced in: v3.0
… the low-cardinality optimization (DecodeCollector) may define dictionary columns and apply dictionary decoding to expressions whose type is VARCHAR or ARRAY&lt;VARCHAR&gt;. When disabled, only scalar VARCHAR columns are eligible and ARRAY&lt;VARCHAR&gt; types are ignored by those low-cardinality optimizations.

### authentication_policy

Description: Per-session authentication policy data. In the current codebase snapshot the value is stored as an opaque comma-separated string, and there is no other usage or internal parsing of this field in FE (it is not referenced elsewhere). It is related to the default_authentication_plugin session variable: authentication_policy represents per-session authentication policy data, while default_authentication_plugin indicates the default authentication plugin. Administrators or clients can set this variable per session (for example, with SET authentication_policy = '...') to preserve compatibility with MySQL clients or tooling.

Default: *,,

### auto_increment_increment

Description: Used for MySQL client compatibility. No practical usage.
### big_query_profile_threshold

Description: Used to set the threshold for big queries. When the session variable enable_profile is set to false and the amount of time taken by a query exceeds the threshold specified by the variable big_query_profile_threshold, a profile is generated for that query.

Note: In versions v3.1.5 to v3.1.7, as well as v3.2.0 to v3.2.2, the parameter big_query_profile_second_threshold was used to set the threshold for big queries. In v3.1.8, v3.2.3, and later releases, it has been replaced by big_query_profile_threshold to offer more flexible configuration options.

Default: 0

Unit: Second

Data type: String

Introduced in: v3.1
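A sketch of how this variable combines with enable_profile (the '30s' string-with-unit value is assumed to follow the documented convention):

```sql
-- Do not profile every query, but capture a profile for any query
-- that runs longer than 30 seconds.
SET enable_profile = false;
SET big_query_profile_threshold = '30s';
```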
### broadcast_row_limit

Description: The row-count limit for choosing a broadcast join. The optimizer (EnforceAndCostTask) treats a candidate as ineligible for broadcast when the right table's row count exceeds this limit or when combined size/scale checks indicate the right table is not sufficiently smaller than the left. A non-positive value disables this limit. PushDownAggregateCollector also uses this variable to identify "small broadcast" joins for aggregate push-down: it requires rightRows to be less than or equal to this limit and less than or equal to cbo_push_down_aggregate_on_broadcast_join_row_count_limit before allowing push-down. It interacts with broadcast_right_table_scale_factor and the number of BE nodes when comparing left/right sizes. For example, when leftOutputSize is less than rightOutputSize * beNum * broadcast_right_table_scale_factor and rightRowCount is greater than broadcast_row_limit, broadcast is rejected.

Default: 150000000

… disables the force-reuse optimization. This variable has a getter/setter in SessionVariable and is applied per session. Default: 2000

… the prefer_cte_rewrite hint. When disabled, CTE-based rewrite is not allowed and the planner will attempt the multi-function rewrite; if a query requires CTE (for example, multi-column DISTINCT or functions that cannot be handled by multi-function rewrite), the planner raises a user error. Note: the effective setting checked by the optimizer is the logical AND of this flag and the pipeline engine flag, i.e. isCboCteReuse() returns this variable AND enablePipelineEngine, so CTE reuse is only effective when enablePipelineEngine is on. Default: true. Introduced in: v3.2.0

### cbo_disabled_rules

Description: A comma-separated list of optimizer rule names to disable. Each name must be a RuleType enum value, and only rules whose names start with TF_ (transformation rules) or GP_ (group-combination rules) may be disabled. The session variable is stored on SessionVariable (getCboDisabledRules / setCboDisabledRules) and is applied by the optimizer via OptimizerOptions.applyDisableRuleFromSessionVariable(), which parses the list and clears the corresponding rule switches so those rules are skipped during planning.

When set through a SET statement, values are validated and the server rejects unknown names or names not starting with TF_/GP_ with clear error messages (e.g. "Unknown rule name(s): ..." or "Only TF_ ... and GP_ ... can be disabled"). At planner runtime, unknown rule names are ignored with a warning (logged as "Ignoring unknown rule name: ... (may be from different version)"). Names must match enum identifiers exactly (case-sensitive). Whitespace around names is trimmed; empty entries are ignored.

Default: "" (no disabled rules)

### cbo_eq_base_type

Description: … DECIMAL, and VARCHAR is also a valid value. This variable takes effect only for = and != comparison.

… (get_json_*) can be rewritten to direct access of Flat JSON subcolumns, enabling predicate pushdown, column pruning, and dictionary optimization.

### cbo_max_reorder_node_use_dp

Description: Dynamic-programming join reorder is used when multiJoinNode.getAtoms().size() &lt;= cbo_max_reorder_node_use_dp and cbo_enable_dp_join_reorder is enabled. Used in JoinReorderFactory.createJoinReorderAdaptive (to add JoinReorderDP to the candidate algorithms) and in ReorderJoinRule.transform/rewrite (to decide whether to execute JoinReorderDP when copying plans into the memo). The default value 10 reflects a practical performance cutoff (comment in code: "10 table join reorder takes more than 100ms"). Tune this to trade optimizer runtime (DP is expensive) against potential plan quality for larger multi-join queries. Interacts with cbo_enable_dp_join_reorder and the greedy threshold cbo_max_reorder_node_use_greedy. The comparison is inclusive (&lt;=).

Default: 10

Introduced in: v3.2.0

… used by the optimizer (SPMOptimizer, QueryOptimizer) and can be set at the session level via setMaxTransformReorderJoins. Default: 4

### cbo_max_reorder_node_use_greedy

Description: Gates greedy join reorder (with cbo_enable_greedy_join_reorder) when building the list of candidate reorder algorithms: if multiJoinNode.getAtoms().size() is less than or equal to this value, a JoinReorderGreedy instance will be added and executed.

This variable is used by JoinReorderFactory.createJoinReorderAdaptive() and ReorderJoinRule to gate greedy reordering during join-reorder phases; it applies per session and affects whether greedy reordering is attempted (if statistics are available and greedy is enabled). Adjust this to control optimizer time/complexity trade-offs for queries with many joined relations.

Default: 16

### cbo_prune_json_subfield

Description: Works with the cbo_prune_subfield optimization; both work together to produce get_json_xxx(...) or cast-wrapped calls so the BE can apply flat JSON optimizations. Enabling cbo_prune_json_subfield without backend support for flat JSON may degrade performance; disable it if the BE does not support flat JSON path pushdown.

Default: true

… only UTF-8 (utf8) is supported.

### collation_connection

Description: Stored in SessionVariable as collationConnection and exposed with the show-name collation_connection. The variable is surfaced in metadata and SHOW outputs (for example, used when building SHOW CREATE VIEW rows in ShowExecutor and returned as a constant for information_schema.views via ViewsSystemTable.CONSTANT_MAP). It represents the collation reported by the server for the connection (paired with character_set_client and related character-set variables) but does not by itself indicate runtime collation enforcement beyond what other components implement.

Default: utf8_general_ci

### collation_database

Description: Declared with @VariableMgr.VarAttr and lives alongside other charset/collation session variables such as character_set_client, collation_connection, and collation_server. The value is serialized when session variables are exported (e.g., included in the JSON produced by SessionVariable#getJsonString and in the session variable machinery) and is used to report the session's database collation.

Changing this variable updates the session's reported database collation name; engine-level or object-level collation settings (server/table/column) may still take precedence for actual comparison/ordering behavior depending on context.

Default: utf8_general_ci

### collation_server

Description: The server collation name (e.g. utf8_general_ci) that FE reports to clients and that is associated with character_set_server / collation_connection / collation_database. It is persisted in the session variable JSON (see SessionVariable#getJsonString / replayFromJson) and is exposed via the variable manager (@VarAttr(name = COLLATION_SERVER)), so it appears in SHOW VARIABLES and can be changed per session. The value is stored as a plain String in SessionVariable and typically holds a standard MySQL collation name (e.g. utf8_general_ci, utf8mb4_unicode_ci); the code does not enforce a fixed enum or perform additional validation here, so the effective behavior depends on downstream components that interpret the collation name for comparisons, ordering, and other collation-sensitive operations.

Default: utf8_general_ci

Introduced in: v3.2.0

### computation_fragment_scheduling_policy

Description: Valid values:

- compute_nodes_only: schedule fragments only on compute nodes (default).
- all_nodes: allow scheduling on both compute nodes and traditional backend nodes.

The variable is backed by the enum SessionVariableConstants.ComputationFragmentSchedulingPolicy. When set, the value is validated (upper-cased) against the enum; invalid values cause an error (IllegalArgumentException when set via API, SemanticException when used in a SET statement). The getter returns the corresponding enum value and falls back to COMPUTE_NODES_ONLY if unset or unrecognized. This setting affects how the FE chooses target nodes for fragment placement at planning/deployment time.

Default: COMPUTE_NODES_ONLY

… enable_connector_adaptive_io_tasks, which is enabled by default.

### connector_sink_compression_codec

Description: The compression algorithm used when the compression_codec property does not exist in the Hive tables, when the write.parquet.compression-codec properties do not exist in the Iceberg tables, or when the compression property is not set for INSERT INTO FILES. Valid values: uncompressed, snappy, lz4, zstd, and gzip.

### count_distinct_column_buckets

Description: The number of buckets for the COUNT DISTINCT column in a group-by-count-distinct query. This variable takes effect only when enable_distinct_column_bucketization is set to true.

### custom_query_id

Description: Run SET SESSION custom_query_id = 'my-query-id'; before executing a query. The value is reset after the query is finished. This value can be passed to KILL QUERY 'my-query-id'. The value can be found in audit logs as a customQueryId field.

### decimal_overflow_to_double

Description: Whether to return an approximate DOUBLE result instead of widening to larger decimal types or failing. Concretely, in DecimalV3 arithmetic (see DecimalV3FunctionAnalyzer), if a multiplication's computed precision exceeds the engine's max decimal precision but its return scale is within the maximum, the session flag decimal_overflow_to_double = true causes the return type and operand target types to be set to DOUBLE. This yields an approximate (lossy) numeric result but avoids decimal precision overflow errors or forced use of larger decimal types. When false (default), the planner keeps decimal semantics (attempts decimal128/256 or throws on unrepresentable scale/precision).

Default: false

### default_authentication_plugin

Description: The authentication plugin (mysql_native_password, caching_sha2_password) supported by the server. This variable affects session behavior only; persistent user account authentication configuration is managed separately. See the related session variable authentication_policy.

Default: mysql_native_password

### default_rowset_type

Description: Used to set the default storage format used by the storage engine of the computing node. The currently supported storage formats are alpha and beta.
### default_storage_engine

Description: Declared as default_storage_engine (see SessionVariable VarAttr). It exists for MySQL 8.0 compatibility and to satisfy MySQL clients/libraries that query the session default storage engine. The variable is stored per session in the SessionVariable object and is returned to clients (e.g., via SHOW VARIABLES). It is informational for compatibility; changing it adjusts the session-reported value but does not imply StarRocks will change its internal storage implementation.

Default: InnoDB

### default_table_compression

Description: The default compression algorithm for table storage. Supported compression algorithms are snappy, lz4, zlib, zstd.

Note that if you specified the compression property in a CREATE TABLE statement, the compression algorithm specified by compression takes effect.

Default: lz4_frame

Introduced in: v3.0
### default_tmp_storage_engine

Description: The session-reported storage engine for temporary tables (CREATE TEMPORARY TABLE and internal/implicit temporary tables created by the engine). Declared in SessionVariable.java with a @VariableMgr.VarAttr annotation, it exists primarily for MySQL 8.0 compatibility so clients and tools expecting MySQL-like behavior can observe or change the temporary-table engine per session. Changing this value affects how temporary table data is stored/managed on storage layers that honor different engines (for example, choosing between memory-backed vs. disk-backed engines).

Default: InnoDB

### disable_colocate_join

Description: Whether to disable Colocation Join. The default is false, meaning the feature is enabled. When this feature is disabled, query planning will not attempt to execute Colocation Join (isDisableColocateSet / setDisableColocateSet).

Default: false

### disable_join_reorder

Description: When false (default), the optimizer may apply join-reorder transformations (e.g. ReorderJoinRule, join transformation and outer-join transformation rules) during logical optimization in the new planner paths (seen in SPMOptimizer and QueryOptimizer). When true, join reordering and related outer-join reorder rules are skipped, preventing the optimizer from changing join order. This is useful to reduce optimization time, to obtain stable/reproducible join ordering, or to work around cases where CBO reordering produces suboptimal plans. This setting interacts with other CBO/session controls such as cbo_max_reorder_node, cbo_max_reorder_node_use_exhaustive, and enable_outer_join_reorder.

Default: false

### disable_spill_to_local_disk

Description: When set to true for the session, FE instructs BE to disable spilling to local disk and instead rely on remote storage spill (if remote spill is configured). This flag is only meaningful when enable_spill = true, enable_spill_to_remote_storage = true, and a valid spill_storage_volume is provided and found by FE. The value is serialized into TSpillToRemoteStorageOptions (sent to BE) as disable_spill_to_local_disk. If remote spill is not configured or the named storage volume cannot be resolved, this setting has no effect.

Use with caution: disabling local-disk spill can increase network I/O and latency and requires reliable, performant remote storage.

### div_precision_increment

Description: Used for MySQL client compatibility. No practical usage.
### dynamic_overwrite

Description: Valid values:

- true: Enables Dynamic Overwrite.
- false: Disables Dynamic Overwrite and uses the default semantic.

… pipeline_dop. For a newly deployed v2.5 StarRocks cluster, the value is true by default. For a v2.5 cluster upgraded from v2.4, the value is false.

### enable_cbo_table_prune

Description: Works with enable_rbo_table_prune and lets the Cost-Based Optimizer try to remove unnecessary tables or inputs from join processing to reduce planning and execution complexity. Default is off because pruning can change plan shape; enable it only after validating on representative workloads.

Default: false

… When enabled (true), StmtExecutor passes the session setting into the explain/profile pipeline (via calls to ExplainAnalyzer) so EXPLAIN, EXPLAIN ANALYZE, and analyze-profile outputs contain colored highlighting for readability in ANSI-capable terminals. When disabled (false), the output is produced without ANSI sequences (plain text), which is appropriate for logging, clients that do not support ANSI, or when piping output to files. This is a per-session toggle and does not change execution semantics, only the presentation of explain/profile text. Default: true

### enable_connector_adaptive_io_tasks

Description: Whether to adaptively adjust the number of concurrent I/O tasks for connector scans. Default: true. If this feature is not enabled, you can manually set the number of concurrent I/O tasks using the variable connector_io_tasks_per_scan_operator.

… uses PruneAggregateNodeRule to merge or prune split aggregate nodes when beneficial (that is, reducing unnecessary serialization or deserialization). Note that the effective check in code is gated by new_planner_agg_stage: the helper isEnableCostBasedMultiStageAgg() returns true only when new_planner_agg_stage is set to AUTO and this parameter is set to true; if new_planner_agg_stage is non-AUTO, this parameter will not enable cost-based multi-stage behavior. Disabling this flag forces the planner to prefer the simpler 3-stage transformation for distinct aggregations and skips cost-driven candidate generation and certain aggregate-node merges. Default: true

… Setting this to true enables the feature. When this feature is enabled, the system automatically routes some cache requests to remote storage when the disk I/O load is high, reducing disk pressure.

… Setting this to true enables the feature. Cache Sharing is used to support accessing cache data from other nodes through the network, which can help reduce performance jitter caused by cache invalidation during cluster scaling. This variable takes effect only when the FE parameter enable_trace_historical_node is set to true.

… When enabled (true, the default), QueryOptimizer invokes convertDistinctAggOverWindowToNullSafeEqualJoin; when disabled (false), the optimizer skips this transformation and leaves DISTINCT aggregates over windows unchanged. This setting is session-scoped and affects only the optimizer rewrite phase (see QueryOptimizer.convertDistinctAggOverWindowToNullSafeEqualJoin). Default: true

### enable_distinct_column_bucketization

Description: Whether to enable bucketization for the COUNT DISTINCT column in a group-by-count-distinct query. Take the query select a, count(distinct b) from t group by a; as an example. If the GROUP BY column a is a low-cardinality column and the COUNT DISTINCT column b is a high-cardinality column with severe data skew, a performance bottleneck will occur. In this situation, you can split data in the COUNT DISTINCT column into multiple buckets to balance data and prevent data skew. You must use this variable with the variable count_distinct_column_buckets.

You can also enable bucketization for the COUNT DISTINCT column by adding the skew hint to your query, for example, select a, count(distinct [skew] b) from t group by a;.

Default: false, which means this feature is disabled.

Introduced in: v2.5
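A minimal sketch of using the two variables together (table and column names are hypothetical):

```sql
SET enable_distinct_column_bucketization = true;
SET count_distinct_column_buckets = 1024;  -- Used together with the variable above.

SELECT a, COUNT(DISTINCT b) FROM t GROUP BY a;

-- Alternatively, enable bucketization for a single query with the skew hint:
SELECT a, COUNT(DISTINCT [skew] b) FROM t GROUP BY a;
```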
… Exposed as isEnableFilterUnusedColumnsInScanStage() with the enable/disable helpers enableTrimOnlyFilteredColumnsInScanStage() / disableTrimOnlyFilteredColumnsInScanStage() in SessionVariable. Default: true

### enable_global_runtime_filter

Description: Whether to enable global runtime filter (RF for short). RF filters data at runtime. Data filtering often occurs in the Join stage. During multi-table joins, optimizations such as predicate pushdown are used to filter data, in order to reduce the number of scanned rows for Join and the I/O in the Shuffle stage, thereby speeding up the query.

StarRocks offers two types of RF: Local RF and Global RF. Local RF is suitable for Broadcast Hash Join, and Global RF is suitable for Shuffle Join.

Default: true, which means global RF is enabled. If this feature is disabled, global RF does not take effect. Local RF can still work.
Valid values: true and false.

### enable_incremental_refresh_mv

Description: When enabled, MaterializedViewAnalyzer.planMVQuery proceeds for create-MV statements whose refresh scheme is an IncrementalRefreshSchemeDesc: it builds the logical and physical plan for the view query and sets the session enableMVPlanner flag (setMVPlanner(true)). When disabled, planning for incremental-refresh MVs is skipped. Accessible via isEnableIncrementalRefreshMV() and setEnableIncrementalRefreshMv(boolean) in SessionVariable.

Default: false

… When set to true (default), if an INSERT statement specifies only a subset of columns (fewer than the number of all non-generated columns in the table), the system performs a Partial Update to update only the specified columns while preserving existing values in other columns. When set to false, the system uses default values for unspecified columns instead of preserving existing values. This feature is particularly useful for updating specific columns in Primary Key tables without affecting other column values.

### enable_insert_strict

Description: Valid values: true (default) and false. When strict mode is enabled, the system loads only qualified rows. It filters out unqualified rows and returns details about the unqualified rows. For more information, see Strict mode. In versions earlier than v3.4.0, when enable_insert_strict is set to true, the INSERT job fails when there are unqualified rows.

### enable_lambda_pushdown

Description: PushDownPredicateProjectRule consults this flag: when true (default), the rule may push predicates through Project operators even if those projects contain LambdaFunctionOperator expressions; when false, the rule inspects the project's expressions and aborts the pushdown if any lambda is present (the rule returns no transformation). This affects only the optimizer transformation phase (planning) and can be changed per session via the SessionVariable getter/setter (getEnableLambdaPushDown / setEnableLambdaPushdown).

Default: true

### enable_large_in_predicate

Description: When enabled, IN lists whose literal count reaches large_in_predicate_threshold are parsed into a special LargeInPredicate (handled in AstBuilder).

The optimizer rule LargeInPredicateToJoinRule then converts that predicate into a LEFT_SEMI_JOIN (for IN) or NULL_AWARE_LEFT_ANTI_JOIN (for NOT IN) against a RawValues constant table, reducing FE memory and planning cost for very large IN lists by avoiding one expression node per constant. The transformation has correctness restrictions (no OR compound predicates, only one large-IN per query); if these restrictions or other conditions cause the optimization to fail, the planner throws LargeInPredicateException and upper layers (via StmtExecutor / ConnectProcessor) retry the query from the parser stage with enable_large_in_predicate disabled, so the query falls back to the traditional expression-based IN handling. Use large_in_predicate_threshold to control the minimum literal count that triggers this behavior.

Default: true

… Value range: 1 to 1048576.

### enable_load_profile

Description: When enabled, FE sets TQueryOptions.enable_profile = true and passes load_profile_collect_second (from stream_load_profile_collect_threshold_second) to backends; the coordinator then conditionally calls profile collection (see StreamLoadTask.collectProfile()). The effective behavior is the logical OR of this session variable and the table-level property enable_load_profile on the destination table; collection is further gated by load_profile_collect_interval_second (FE-side sampling interval) to avoid frequent collection. The session flag is read via SessionVariable.isEnableLoadProfile() and can be set per connection with setEnableLoadProfile(...).

Default: false

… local-shuffle replacement (PruneShuffleDistributionNodeRule and EnforceAndCostTask; CostModel). The replacement is only considered when enable_pipeline_engine is enabled and the cluster is a single backend+compute node. The planner still rejects local-shuffle conversion in unsafe cases (e.g., DISTINCT aggregates, detected data skew, missing/unknown column statistics, multi-input operators like joins, or other semantic restrictions). Some code paths (INSERT/UPDATE/DELETE planners and MaterializedViewOptimizer) temporarily disable this session flag because non-query sinks or certain rewrites require per-driver scan assignment that local-shuffle cannot use. Default: true

… When set to true, aggregate functions will be pushed down to the Scan Operator during query execution and rewritten by the materialized view before the Join Operator is executed. This relieves the data expansion caused by Join and thereby improves query performance. For detailed information about the scenarios and limitations of this feature, see Aggregation pushdown.

### enable_materialized_view_union_rewrite

Description: When set to true, the system seeks to compensate the predicates using UNION ALL when the predicates in the materialized view cannot satisfy the query's predicates.

### enable_multicolumn_global_runtime_filter

Description: Whether to enable multi-column global runtime filter. Default value: false, which means multi-column global RF is disabled.

If a Join (other than Broadcast Join and Replicated Join) has multiple equi-join conditions: …

… multi-column in the partition by clause.

### enable_mv_planner

Description: When enabled, the optimizer uses materialized-view planning rules:

- context.getRuleSet().addRealtimeMVRules() is used instead of the regular join implementation rules (QueryOptimizer).
- Stream implementations are allowed (StreamImplementationRule.check, which returns true only when MV planner is on).
- LogicalBinlogScanOperator is used for native tables/materialized views when MV planner is enabled.
- Multi-phase aggregate splitting is disabled (SplitMultiPhaseAggRule.check returns false when MV planner is on).

Materialized view planning code (MaterializedViewAnalyzer) sets this flag around MV planning work (sets it to true before planning and resets it to false afterward), so it is primarily intended for MV plan generation and testing. Setting this session variable affects only the current session's optimizer behavior.

Default: false

… When set to true, the optimizer enables the query-rewrite based skew join optimization: QueryOptimizer checks sessionVariable.isEnableOptimizerSkewJoinByQueryRewrite() after join-expression pushdown and, if enabled and enableOptimizerSkewJoinByBroadCastSkewValues is disabled, invokes skewJoinOptimize(...), which applies SkewJoinOptimizeRule. If isEnableStatsToOptimizeSkewJoin is enabled, skewJoinOptimize first merges projects and computes statistics (Utils.calculateStatistics) before applying the rule. The session setters enforce mutual exclusivity between enableOptimizerSkewJoinByQueryRewrite and enableOptimizerSkewJoinByBroadCastSkewValues (setting one flips the other), so only one skew strategy is active at a time. Default: true

… Setting this variable on turns enable_optimize_skew_join_v1 off and vice versa. This is a session-level variable intended to switch optimizer behavior for queries that benefit from using broadcasted skew statistics instead of query-rewrite transformations. Usage locations: QueryOptimizer.dynamicRewrite(...) and the main optimization flow in QueryOptimizer where skew-join optimization is applied. Default: false

… Valid values:

- true (default): Enable Bloom Filter optimization when reading Parquet files.
- false: Disable Bloom Filter optimization when reading Parquet files.

… Valid values:

- true (default): Enable Page Index optimization when reading Parquet files.
- false: Disable Page Index optimization when reading Parquet files.

… true indicates enabled and false indicates the opposite. Default value: true.

… The rule uses Utils.extractConjuncts, estimates each conjunct's selectivity with DefaultPredicateSelectivityEstimator, and reorders the conjuncts in ascending order of estimated selectivity (less restrictive first) to form a new CompoundPredicateOperator (AND). The rule only runs when the operator has a CompoundPredicateOperator with more than one conjunct. Statistics are gathered from child OptExpression statistics when available; for PhysicalOlapScanOperator it will fetch column statistics from GlobalStateMgr.getCurrentState().getStatisticStorage(). If child statistics are missing and the scan is not an OLAP scan, the rule skips reordering. The session variable is exposed via SessionVariable.isEnablePredicateReorder(), with enablePredicateReorder() and disablePredicateReorder() helper methods.

### enable_profile

Description: Specifies whether to send the profile of a query for analysis. The default value is false, which means no profile is required.

By default, a profile is sent to the FE only when a query error occurs in the BE. Profile sending causes network overhead and therefore affects high concurrency.

If you need to analyze the profile of a query, you can set this variable to true. After the query is completed, the profile can be viewed on the web page of the currently connected FE (address: fe_host:fe_http_port/query). This page displays the profiles of the latest 100 queries with enable_profile turned on.

Default: false
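A minimal sketch of collecting a profile for one session (the table name is hypothetical):

```sql
SET enable_profile = true;
SELECT COUNT(*) FROM my_table;
-- The profile is then viewable at http://<fe_host>:<fe_http_port>/query.
```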
### enable_query_cache

Description: Whether to enable Query Cache. true specifies to enable this feature, and false specifies to disable it. When this feature is enabled, it works only for queries that meet the conditions specified in the application scenarios of Query Cache.

### enable_query_tablet_affinity

Description: Boolean value to control whether to direct multiple queries against the same tablet to a fixed replica.

In scenarios where the table to query has a large number of tablets, this feature significantly improves query performance because the meta information and data of the tablet can be cached in memory more quickly.

However, if there are some hotspot tablets, this feature may degrade query performance because it directs the queries to the same BE, making it unable to fully use the resources of multiple BEs in high-concurrency scenarios.

Default: false, which means the system selects a replica for each query.

Introduced in: v2.5.6, v3.0.8, v3.1.4, and v3.2.0
UniquenessBasedTablePruneRule, join reorder, and RboTablePruneRule) to remove unnecessary table scan alternatives and reduce scanned partitions/rows in joins. Enabling this option also disables join-equivalence derivation (context.setEnableJoinEquivalenceDerive(false)) while logical rule rewrite is running to avoid conflicting transformations. The pruning flow may additionally run PrimaryKeyUpdateTableRule for update statements if enable_table_prune_on_update is set. The rule is only executed when a query contains prunable joins (checked via Utils.hasPrunableJoin(tree)).falsecbo_push_down_grouping_set is also set to true, the optimizer may additionally attempt push-down of grouping-set aggregates after or instead of this rewrite.falseenable_pipeline_engine is true. Enabling it causes fragments to call enableAdaptiveDop() during plan construction, and has runtime implications: join probes may wait for all build phases to complete (which conflicts with group_execution behavior), and enabling runtime adaptive DOP will disable pipeline-level multi-partitioned runtime filters (the setter clears enablePipelineLevelMultiPartitionedRf). The flag is recorded in the query profile and can be toggled per-session.falseenable_scan_block_cache.TExecPlanFragmentParams.enable_shared_scan) so the BE can perform shared scanning (see .../TFragmentInstanceFactory.java:153-159). However, the FE currently does not honor user changes: SessionVariable.isEnableSharedScan() always returns false (see .../SessionVariable.java:4176-4180) and the feature has been disabled in FE since later versions due to incompatibility with event-based scheduling. As a result, setting this variable in a session has no effect in current releases.falsefalse. 
If it is set to true, when the query meets the criteria for a point query (the conditional columns in the WHERE clause include all primary key columns, and the operators in the WHERE clause are `=` or `IN`), the query takes the short circuit.

### enable_sort_aggregate

Setting this to true enables sorted streaming, which sorts data in data streams.

### enable_spill

Whether to enable intermediate result spilling. Default: false. If it is set to true, StarRocks spills the intermediate results to disk to reduce the memory usage when processing aggregate, sort, or join operators in queries.

### enable_spill_to_remote_storage

If it is set to true, StarRocks spills the intermediate results to the storage volume specified in `spill_storage_volume` after the capacity limit of the local disk is reached. For more information, see Spill to object storage.

### enable_split_topn_agg

Whether to enable the TopN-aggregation split optimization (implemented in `SplitTopNAggregateRule.java`). When enabled, the optimizer can rewrite a plan that has a TopN on top of an aggregation over an OLAP scan into a split TopN/aggregation plan.
The transformation is bounded by `split_topn_agg_limit` (session variable `splitTopNAggLimit`, default 10000).

Default: true

### split_topn_agg_limit

When `enable_split_topn_agg` is on, the optimizer's SplitTopNAggregateRule will skip the split transformation if the TopN operator's LIMIT is equal to the default unlimited value or greater than this threshold. This variable only governs the numeric cutoff (number of rows); the rule still enforces other correctness and cost checks (projections, predicates, column/type constraints, and scan statistics) before applying the transformation.

Default: 10000

### enable_strict_order_by

When this variable is set to TRUE, an error is reported for the following query pattern: a duplicate alias is used in different expressions of the query and this alias is also a sorting field in ORDER BY, for example, `select distinct t1.* from tbl1 t1 order by t1.k1;`. The logic is the same as that in v2.3 and earlier. When this variable is set to FALSE, a loose deduplication mechanism is used, which processes such queries as valid SQL queries.

### enable_table_prune_on_update

Whether to allow the optimizer to run PrimaryKeyUpdateTableRule to rewrite and prune UPDATE plans, potentially improving pruning for primary-key update patterns. This flag is effective only when rule-based/CBO table pruning is active (see `enable_rbo_table_prune`). It is disabled by default because the transformation can change the data layout or plan shape (for example, the bucket-shuffle layout for OlapTableSink) and may cause correctness or performance regressions for concurrent updates.

Default: false

### enable_ukfk_join_reorder

Whether to enable join reordering based on unique-key/foreign-key (UK/FK) constraints (for example, `fkConstraint.isOrderByFK`), bounded by `max_ukfk_join_reorder_scale_ratio` and `max_ukfk_join_reorder_fk_rows`.
If UK/FK conditions are not met, join ordering falls back to the default smaller-table-as-right-child heuristic (row count comparison). The variable is accessed via SessionVariable getters/setters and used in the JoinOrder optimizer rule.

Default: false

### enable_ukfk_opt

Whether to enable unique-key/foreign-key (UK/FK) optimizations. When enabled, the optimizer runs UKFKConstraintsCollector to collect unique-key and foreign-key constraints bottom-up and attach them to plan nodes (OptExpressions). The collected constraints are consumed by transformation rules such as PruneUKFKJoinRule (which can prune the UK side of joins, rewrite predicates from UK to FK columns, and add IS NULL checks for outer-join cases) and PruneUKFKGroupByKeysRule (which can remove redundant GROUP BY keys derived from UK/FK relationships). The collected UK/FK information is also used in StatisticsCalculator to produce tighter join cardinality estimates for UK-FK joins and may replace default estimates when more precise. The default is conservative (false) because these optimizations rely on declared schema constraints and can change plan shape and predicate placement.

Default: false

### enable_view_based_mv_rewrite

If this item is set to true, the logical view is used as a unified node to rewrite the queries against itself for better performance. If this item is set to false, the system transcribes the queries against logical views into queries against physical tables or materialized views and then rewrites them.

Used for MySQL client compatibility. No practical usage.
### force_schedule_local

Whether to force local scheduling of scan ranges. When enabled, HDFSBackendSelector receives this flag from the session and calls its `computeForceScheduleLocalAssignment(...)` path to: 1) select backends whose hostnames match scan-range locations; 2) choose among those local backends using `reBalanceScanRangeForComputeNode(...)`, which considers per-node assigned bytes and a maximum imbalance ratio; and 3) return remaining ranges for remote assignment via consistent hashing. HiveConnectorScanRangeSource also reads this session variable and, when set, emits block-level scan ranges (one per block) to enable local placement. Use this to improve data locality and reduce network I/O. Note that it can increase skew (assignment imbalance) and may cause more rebalancing in heavy-locality scenarios.

Default: false

### forward_to_leader

Used to specify whether some commands are forwarded to the leader FE for execution. Alias: forward_to_master. The default value is false, meaning commands are not forwarded to the leader FE. A StarRocks cluster has multiple FEs, one of which is the leader FE. Normally, users can connect to any FE for full-featured operations. However, some information is available only on the leader FE.
For example, if the SHOW BACKENDS command is not forwarded to the leader FE, only basic information (for example, whether the node is alive) can be viewed. Forwarding to the leader FE can get more detailed information including the node start time and last heartbeat time.
The commands affected by this variable are as follows:
SHOW FRONTENDS: Forwarding to the leader FE allows users to view the last heartbeat message.
SHOW BACKENDS: Forwarding to the leader FE allows users to view the boot time, last heartbeat information, and disk capacity information.
SHOW BROKER: Forwarding to the leader FE allows users to view the boot time and last heartbeat information.
SHOW TABLET
ADMIN SHOW REPLICA DISTRIBUTION
ADMIN SHOW REPLICA STATUS: Forwarding to the leader FE allows users to view the tablet information stored in the metadata of the leader FE. Normally, the tablet information should be the same in the metadata of different FEs. If an error occurs, you can use this method to compare the metadata of the current FE and the leader FE.
SHOW PROC: Forwarding to the leader FE allows users to view the PROC information stored in the metadata. This is mainly used for metadata comparison.
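A quick way to see the difference is to run the same command with and without forwarding (a sketch using the `forward_to_leader` variable described above):

```sql
-- View basic BE information from the FE you are connected to.
SET forward_to_leader = false;
SHOW BACKENDS;

-- Forward to the leader FE to include boot time, last heartbeat,
-- and disk capacity details.
SET forward_to_leader = true;
SHOW BACKENDS;
```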
Setting this variable to true allows the operation, and the system decides whether the left table can be filtered; false disables the operation. The default value is true.

Used for MySQL client compatibility. No practical usage.
### innodb_read_only

Stored as `innodbReadOnly` in SessionVariable.java and accessible via `isInnodbReadOnly()` and `setInnodbReadOnly(boolean)`. The SessionVariable class only holds the flag; any enforcement (preventing writes or DDL to InnoDB tables, or altering transaction behavior) must be implemented by the transaction, storage, or authorization layers, which should read this session flag. Use this variable to convey client intent for read-only behavior within the current session to components that respect it.

Default: true

### insert_timeout

`insert_timeout` applies to operations that involve INSERT (for example, UPDATE, DELETE, CTAS, materialized view refresh, statistics collection, and PIPE), replacing `query_timeout`.

Used for MySQL client compatibility. No practical usage.
### jit_level

Valid values:
- 1: The system adaptively enables JIT compilation for compilable expressions.
- -1: JIT compilation is enabled for all compilable, non-constant expressions.
- 0: JIT compilation is disabled. You can disable it manually if any error is returned for this feature.

### join_late_materialization

When set to true, PlanFragmentBuilder reads the session flag and calls `joinNode.setEnableLateMaterialization(...)` so join execution can defer full row/payload materialization until after join/key-based filtering. This reduces memory usage and I/O for joins with large payload columns or highly selective join predicates by carrying only join keys through the join and materializing payloads for matched rows. The flag is defined in SessionVariable.java as JOIN_LATE_MATERIALIZATION and defaults to false. Enabling this may interact with column-trimming and scan-stage pruning optimizations (for example, `enable_filter_unused_columns_in_scan_stage`) and can change join runtime behavior; test queries for correctness and performance before enabling broadly.

Default: false

Valid values:
- balance: Distributes buckets evenly across worker nodes to achieve balanced workload and better performance.
- elastic: Uses consistent hashing to assign buckets to worker nodes, which can provide better load distribution in elastic environments.

Used for MySQL client compatibility. No practical usage.
### large_in_predicate_threshold

When `enable_large_in_predicate` is true and an IN list (string or integer) contains at least this many literals, the parser (AstBuilder) creates a LargeInPredicate containing the raw text and a compact value list (and keeps only a minimal representative Expr in the AST). For integer lists, the parser additionally verifies that all literals can be parsed as integral values and falls back to a normal InPredicate if parsing fails. LargeInPredicate is later transformed into a left semi/anti join for execution, improving parse/analyze/deploy performance for queries with very large constant IN lists. The variable is exposed as a session-level VariableMgr.VarAttr and can be get or set per session.

Default: 100000

### load_mem_limit

Specifies the memory limit for the import operation. The default value is 0, meaning that this variable is not used and `query_mem_limit` is used instead.
This variable is only used for the INSERT operation, which involves both query and import. If the user does not set this variable, the memory limit for both query and import will be set as exec_mem_limit. Otherwise, the memory limit for the query will be set as exec_mem_limit and the memory limit for the import will be set as load_mem_limit.
Other import methods such as BROKER LOAD, STREAM LOAD still use exec_mem_limit for memory limit.
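To cap memory separately for the query and import halves of an INSERT, you might set both limits before running the statement (a sketch; `target_table` and `source_table` are hypothetical names):

```sql
-- Query part is limited by exec_mem_limit; import part by load_mem_limit.
SET exec_mem_limit = 2147483648;   -- 2 GB for the query
SET load_mem_limit = 1073741824;   -- 1 GB for the import
INSERT INTO target_table SELECT * FROM source_table;
```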
Specifies the maximum number of unqualified data rows that can be logged. Valid values: 0, -1, and any non-zero positive integer. Default value: 0.
Valid values:
- 0: Data rows that are filtered out are not logged.
- -1: All data rows that are filtered out are logged.
- n: Up to n data rows that are filtered out can be logged on each BE.

Whether to enable low-cardinality optimization on data lake queries. Valid values:
- true (default): Enable low-cardinality optimization on data lake queries.
- false: Disable low-cardinality optimization on data lake queries.

When set to true, the optimizer attempts the V2 rewrite strategy implemented by LowCardinalityRewriteRule (which uses DecodeCollector / DecodeRewriter to encode and decode low-cardinality VARCHAR columns). When set to false, the optimizer falls back to the legacy rewrite strategy implemented by AddDecodeNodeForDictStringRule. Either rewrite also requires `enableLowCardinalityOptimize` to be enabled; if it is disabled, no low-cardinality rewrite is performed. The variable is checked in optimizer tree-rewrite paths to choose the appropriate transformation.

Default: true

Used for MySQL client compatibility. No practical usage. Table names in StarRocks are case-sensitive.
Specifies the query rewrite mode of asynchronous materialized views. Valid values:
- disable: Disable automatic query rewrite of asynchronous materialized views.
- default (default): Enable automatic query rewrite of asynchronous materialized views, and allow the optimizer to decide whether a query can be rewritten using the materialized view based on the cost. If the query cannot be rewritten, it directly scans the data in the base table.
- default_or_error: Enable automatic query rewrite of asynchronous materialized views, and allow the optimizer to decide whether a query can be rewritten using the materialized view based on the cost. If the query cannot be rewritten, an error is returned.
- force: Enable automatic query rewrite of asynchronous materialized views, and make the optimizer prioritize query rewrite using the materialized view. If the query cannot be rewritten, it directly scans the data in the base table.
- force_or_error: Enable automatic query rewrite of asynchronous materialized views, and make the optimizer prioritize query rewrite using the materialized view. If the query cannot be rewritten, an error is returned.

### max_parallel_scan_instance_num

The maximum number of parallel scan instances. The value is propagated to scan nodes (`TOlapScanNode.max_parallel_scan_instance_num`) and is included in query/load runtime profiles. It is declared with @VariableMgr.VarAttr and can be read and set via the session variable APIs (getMaxParallelScanInstanceNum / setMaxParallelScanInstanceNum). Use this to limit scan parallelism per session for resource control or debugging. When left at the default -1, the planner/system default (resource- or configuration-derived) parallelism is used.

Default: -1

### max_pipeline_dop

Takes effect only when `enable_pipeline_engine` is enabled and `pipeline_dop` is not explicitly set (that is, not greater than 0). If `pipeline_dop` is greater than 0, this variable is ignored and `pipeline_dop` is used directly. When `pipeline_dop` is less than or equal to 0 (adaptive/default mode), the effective DOP for execution is computed as min(`max_pipeline_dop`, the backend default DOP returned by BackendResourceStat).
For pipeline sinks, the same logic uses the sink default DOP. When `max_pipeline_dop` is less than or equal to 0, no additional cap is applied and the backend default DOP is used.

Default: 64

By default, the value in the `be.conf` file is used. If this variable is set to a value greater than 0, the value in `be.conf` is ignored.

By default, the value in the `be.conf` file is used. If this variable is set to a value greater than 0, the value in `be.conf` is ignored.

Setting this to 1 indicates that only materialized views created on base tables can be used for query rewrite.

Used for MySQL client compatibility. No practical usage.
Used for MySQL client compatibility. No practical usage.
Used for MySQL client compatibility. No practical usage.
Valid values:
- 0 (AUTO): Allow the optimizer to choose the aggregation stage. When 0 is set, cost-based multi-stage decisions can be enabled via `enable_cost_based_multi_stage_agg`.
- 1 (ONE_STAGE): Force a single-stage aggregate.
- 2 (TWO_STAGE): Force a two-stage aggregate.
- 3 (THREE_STAGE): Force a three-stage aggregate (only producible for single-column DISTINCT scenarios).
- 4 (FOUR_STAGE): Force a four-stage aggregate (only producible for single-column DISTINCT scenarios).

Setting a forced stage overrides the automatic selection logic used by optimizer rules (for example, SplitMultiPhaseAggRule and related cost checks). When 0 is set, the planner may still be constrained by `enable_cost_based_multi_stage_agg`. The variable is consulted in cost enforcement (EnforceAndCostTask), aggregation-splitting rules, and plan-fragment construction to influence exchange/partitioning and pruning decisions.

Default: 0

### orc_use_column_names

Default: false, which means columns in ORC files are read based on their ordinal positions in the Hive table definition. If this variable is set to true, columns are read based on their names.

### parallel_exchange_instance_num

Used to set the number of exchange nodes that an upper-level node uses to receive data from a lower-level node in the execution plan. The default value is -1, meaning the number of exchange nodes equals the number of execution instances of the lower-level node. When this variable is set to a value greater than 0 but smaller than the number of execution instances of the lower-level node, the number of exchange nodes equals the set value.
In a distributed query execution plan, the upper-level node usually has one or more exchange nodes to receive data from the execution instances of the lower-level node on different BEs. Usually the number of exchange nodes is equal to the number of execution instances of the lower-level node.
In some aggregation query scenarios where the amount of data decreases drastically after aggregation, you can try to modify this variable to a smaller value to reduce the resource overhead. An example would be running aggregation queries using the Duplicate Key table.
Used to set the number of execution instances used for scan nodes on each BE. The default value is 1.
A query plan typically produces a set of scan ranges. This data is distributed across multiple BE nodes. A BE node will have one or more scan ranges, and by default, each BE node's set of scan ranges is processed by only one execution instance. When machine resources suffice, you can increase this variable to allow more execution instances to process a scan range simultaneously for efficiency purposes.
The number of scan instances determines the number of other execution nodes in the upper level, such as aggregation nodes and join nodes. Therefore, it increases the concurrency of the entire query plan execution. Modifying this variable will help improve efficiency, but larger values will consume more machine resources, such as CPU, memory, and disk IO.
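Assuming this entry describes the `parallel_fragment_exec_instance_num` variable (the variable name was lost in this section), a session-level tuning sketch:

```sql
-- Allow up to 8 execution instances per BE to process scan ranges,
-- trading higher CPU/memory/IO usage for more scan concurrency.
SET parallel_fragment_exec_instance_num = 8;
```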
Valid values: AUTO, ALWAYS, NEVER. Default: AUTO.

### partial_update_mode

Description: Used to control the mode of partial updates. Valid values:
- auto (default): The system automatically determines the mode of partial updates by analyzing the UPDATE statement and the columns involved.
- column: The column mode is used for partial updates, which is particularly suitable for partial updates that involve a small number of columns and a large number of rows.

For more information, see UPDATE.
Default: auto
Introduced in: v3.1
Used for compatibility with MySQL JDBC versions 8.0.16 and above. No practical usage.
Description: The parallelism of a pipeline instance, which is used to adjust the query concurrency. Default value: 0, indicating the system automatically adjusts the parallelism of each pipeline instance. This variable also controls the parallelism of loading jobs on OLAP tables. You can also set this variable to a value greater than 0. Generally, set the value to half the number of physical CPU cores. From v3.0 onwards, StarRocks adaptively adjusts this variable based on query parallelism.
Default: 0
Data type: Int
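For example, to pin or reset the pipeline parallelism for the current session (half the physical core count is the general guidance above; 16 is an illustrative value for a 32-core BE):

```sql
-- Let the system adapt the pipeline parallelism (default).
SET pipeline_dop = 0;

-- Or pin it explicitly, e.g. for a 32-core BE.
SET pipeline_dop = 16;
```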
Description: Controls the level of the query profile. A query profile has five layers: Fragment, FragmentInstance, Pipeline, PipelineDriver, and Operator. Different levels provide different amounts of profile detail:
Default: 1
Data type: Int
### plan_mode

Valid values:
- auto: The system automatically selects the retrieval plan.
- local: Use the local cache plan.
- distributed: Use the distributed plan.

When set to true, the collected statistics include min, max, null count, row size, and ndv (if a puffin file exists). When this item is set to false, only the row count information is collected.

### populate_datacache_mode

Valid values:
- auto (default): The system automatically caches data selectively based on the population rule.
- always: Always cache the data.
- never: Never cache the data.

### prefer_compute_node

Valid values:
- true: The FEs distribute query execution plans to CN nodes.
- false: The FEs do not distribute query execution plans to CN nodes.

If `query_cache_entry_max_bytes` or `query_cache_entry_max_rows` is set to 0, Passthrough mode is used even when no computation results are generated from the involved tablets. When the computation results exceed `query_cache_entry_max_bytes` or `query_cache_entry_max_rows`, the query is switched to Passthrough mode.

### query_cache_entry_max_bytes

Default value: .

Used for MySQL client compatibility. No practical use.
Used for compatibility with JDBC connection pool C3P0. No practical use.
### query_delivery_timeout

`query_delivery_timeout` controls how long the coordinator waits for fragment delivery to complete before timing out. When building the query options sent to the backend, the value is assigned to the `query_delivery_timeout` field (subject to an internal cap to avoid integer overflow). If `enablePhasedScheduler` is enabled, the system uses the `query_timeout` value instead for the delivery timeout.

Default: 300

### query_mem_limit

If a Memory Exceed Limit error happens, you can try to increase this variable. Setting it to 0 indicates no limit is imposed.

Default: 0. Setting it to 0 indicates no limit is imposed.

Default: 0. Setting it to 0 indicates no limit is imposed.

Default: 0

Default: 0. Setting it to 0 indicates no limit is imposed.

Default: 1024.

Default: 0. Setting it to 0 indicates no limit is imposed.

### query_timeout

`query_timeout` does not apply to operations that involve INSERT (for example, UPDATE, DELETE, CTAS, materialized view refresh, statistics collection, and PIPE).

### resource_group

* **Description**: The specified resource group of this session
* **Default**: ""
* **Data Type**: String
* **Introduced in**: 3.2.0
Description: Whether to place GRF on Exchange Node after GRF is pushed down across the Exchange operator to a lower-level operator. The default value is false, which means GRF will not be placed on Exchange Node after it is pushed down across the Exchange operator to a lower-level operator. This prevents repetitive use of GRF and reduces the computation time.
However, GRF delivery is a best-effort process. If the lower-level operator fails to receive the GRF and the GRF is not placed on the Exchange Node, data cannot be filtered, which compromises filter performance. true means GRF will still be placed on the Exchange Node even after it is pushed down across the Exchange operator to a lower-level operator.
Default: false
### skip_local_disk_cache

Alias: `skip_disk_cache`. When set to true, `OlapScanNode.addScanRangeLocations()` sets `internalRange.setSkip_disk_cache(true)` on the created TInternalScanRange objects, so downstream BE scan nodes are told to bypass the local disk cache for that scan. It is applied per session and evaluated at plan/scan-range construction time. Use this together with `skip_page_cache` (to control page cache skipping) and data-cache related variables (`enable_scan_datacache` / `enable_populate_datacache`) as appropriate.

Default: false

### skip_page_cache

When set to true, `OlapScanNode.addScanRangeLocations` sets `TInternalScanRange.skip_page_cache` for each scan range sent to the backend, causing storage reads to skip the OS/page caching layer. Typical use cases: large one-time scans where avoiding page-cache pollution is desired, or when a user prefers direct I/O semantics. Do not confuse this with `skip_local_disk_cache`, which controls the storage-layer data cache; `fill_data_cache` can also influence caching behavior.

Default: false

### spill_encode_level

Controls the encoding of spilled data. The bit flags are similar to `transmission_encode_level`:
- 1: Enable adaptive encoding.
- 2: Encode integer-like columns with streamvbyte.
- 4: Compress binary/string columns with LZ4.

Example semantics from the related `transmission_encode_level` comment: 7 enables adaptive encoding for numbers and strings; 6 forces encoding of numbers and strings. Changing this value adjusts the CPU versus disk I/O trade-off for spills (higher encoding levels increase CPU work but reduce spill size and I/O). Implemented as the session variable SPILL_ENCODE_LEVEL in SessionVariable.java (getter `getSpillEncodeLevel()`), documented adjacent to other spill tunables such as `spill_mem_table_size`.

Default: 7

### spill_mode

The execution mode of intermediate result spilling. Valid values:
- auto: Spilling is automatically triggered when the memory usage threshold is reached.
- force: StarRocks forcibly executes spilling for all relevant operators, regardless of memory usage.

This variable takes effect only when the variable `enable_spill` is set to true.
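For memory-constrained sessions, spilling is typically switched on together with its mode, for example:

```sql
-- Enable spilling and let StarRocks trigger it based on memory usage.
SET enable_spill = true;
SET spill_mode = 'auto';
```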
### spill_partitionwise_agg

When `spill_partitionwise_agg` is true (and `enable_spill` is enabled), the execution engine partitions spilled aggregation data and performs per-partition spill/merge processing. The flag is propagated to execution via `TSpillOptions.setSpill_partitionwise_agg`. Related session settings that affect its behavior are `spill_partitionwise_agg_partition_num` (the number of partitions created) and `spill_partitionwise_agg_skew_elimination` (skew handling). This option reduces peak memory usage for large-group aggregations by splitting work across partitions during spill, but may increase I/O and merge overhead.

Default: false

Default: 0

### sql_dialect

Description: The SQL dialect that is used. For example, you can run the `set sql_dialect = 'trino';` command to set the SQL dialect to Trino, so you can use Trino-specific SQL syntax and functions in your queries.
NOTICE
After you configure StarRocks to use the Trino dialect, identifiers in queries are not case-sensitive by default. Therefore, you must use lowercase names when creating your databases and tables. If you specify database and table names in uppercase, queries against these databases and tables will fail.
Data type: String
Introduced in: v3.0
Used to specify the SQL mode to accommodate certain SQL dialects. Valid values include:
- PIPES_AS_CONCAT: The pipe symbol `|` is used to concatenate strings, for example, `select 'hello ' || 'world'`.
- ONLY_FULL_GROUP_BY (default): The SELECT list can only contain GROUP BY columns or aggregate functions.
- ALLOW_THROW_EXCEPTION: Returns an error instead of NULL when type conversion fails.
- FORBID_INVALID_DATE: Prohibits invalid dates.
- MODE_DOUBLE_LITERAL: Interprets floating-point literals as DOUBLE rather than DECIMAL.
- SORT_NULLS_LAST: Places NULL values at the end after sorting.
- ERROR_IF_OVERFLOW: Returns an error instead of NULL in the case of arithmetic overflow. Currently, only the DECIMAL data type supports this option.
- GROUP_CONCAT_LEGACY: Uses the group_concat syntax of v2.5 and earlier. This option is supported from v3.0.9 and v3.1.6.

You can set only one SQL mode, for example:
set sql_mode = 'PIPES_AS_CONCAT';
Or, you can set multiple modes at a time, for example:
set sql_mode = 'PIPES_AS_CONCAT,ERROR_IF_OVERFLOW,GROUP_CONCAT_LEGACY';
Used for MySQL client compatibility. No practical usage.
The types of engines supported by StarRocks:
Used to specify the preaggregation mode for the first phase of GROUP BY. If the preaggregation effect in the first phase is not satisfactory, you can use the streaming mode, which performs simple data serialization before streaming data to the destination. Valid values:
- auto: The system first tries local preaggregation. If the effect is not satisfactory, it switches to the streaming mode. This is the default value.
- force_preaggregation: The system directly performs local preaggregation.
- force_streaming: The system directly performs streaming.

### system_time_zone

Used to display the time zone of the current system. Cannot be changed.
Used to set the time zone of the current session. The time zone can affect the results of certain time functions.
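The time zone can be given as a named zone or a UTC offset, at session or global scope:

```sql
-- Set the session time zone by name or by UTC offset.
SET time_zone = 'Asia/Shanghai';
SET time_zone = '+08:00';

-- Or change it for all future sessions.
SET GLOBAL time_zone = 'UTC';
```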
### transaction_isolation

Stored as `transactionIsolation` and annotated with `@VariableMgr.VarAttr(name = TRANSACTION_ISOLATION)`; its presence ensures compatibility with MySQL 5.8 clients. There is a related `tx_isolation` variable (kept for c3p0 client compatibility). The value is stored per session to satisfy clients and libraries, for example, REPEATABLE-READ. Transaction isolation semantics in the engine are managed by StarRocks' transaction subsystem and may not be changed solely by modifying this session variable.

Default: REPEATABLE-READ

### transaction_read_only

Alias: `tx_read_only`. This variable specifies the transaction access mode. ON indicates read only and OFF indicates readable and writable.

### transmission_compression_type

Set to AUTO to let the system pick a suitable algorithm based on the environment (trading CPU for network bandwidth when beneficial). Other valid values are names recognized by `CompressionUtils.findTCompressionByName()` (for example, codec identifiers exposed by the runtime). For load-specific transmission, you can use the separate `load_transmission_compression_type` session variable or supply transmission compression in Stream Load parameters (HTTP header HTTP_TRANSMISSION_COMPRESSION_TYPE / Thrift field transmission_compression_type).

Default: AUTO

### tx_isolation

Used for MySQL client compatibility. No practical usage. The alias is transaction_isolation.
During materialized view refresh, the refresh processor (MVTaskRunProcessor) temporarily sets this variable to Long.MAX_VALUE / 1000 to wait effectively indefinitely for visibility, and restores the original value after the refresh. When `enable_sync_publish` is set to true, this variable is ignored because the publish wait is derived from the job deadline instead.

Default: 10

### use_compute_nodes

Description: The maximum number of CN nodes that can be used. This variable is valid when `prefer_compute_node=true`. Valid values:
- -1: indicates that all CN nodes are used.
- 0: indicates that no CN nodes are used.

Default: -1
Data type: Int
Introduced in: v2.4
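The two CN-related variables are typically set together, for example:

```sql
SET prefer_compute_node = true;  -- distribute query plans to CN nodes
SET use_compute_nodes = -1;      -- use all available CN nodes
```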
### use_page_cache

When enabled, the decision is propagated to BEs (`tResult.setUse_page_cache`) so BE execution honors it. It is commonly disabled (false) for internal or background jobs (statistics collection, hyper queries, online optimize) to avoid polluting the shared page cache with non-user data; see usages in StatisticsCollectJob, HyperQueryJob, and OnlineOptimizeJobV2.

Default: true

### version

The MySQL server version returned to the client. The value is the same as the FE parameter `mysql_server_version`.
The StarRocks version. Cannot be changed.