docs/tools/thinking.md
## Thinking level

Set the thinking level with `/t <level>`, `/think:<level>`, or `/thinking <level>`.

Levels: `off | minimal | low | medium | high | xhigh | adaptive | max`
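The level parsing described here can be sketched as a small normalizer. This is a hypothetical illustration, not OpenClaw's actual implementation; the function name and shapes are invented for the example:

```typescript
// Hypothetical sketch of /think level normalization; names are illustrative,
// not OpenClaw's real internals.
const LEVELS = ["off", "minimal", "low", "medium", "high", "xhigh", "adaptive", "max"] as const;
type ThinkingLevel = (typeof LEVELS)[number];

// Alias spellings accepted by the directive.
const ALIASES: Record<string, ThinkingLevel> = {
  "x-high": "xhigh",
  "x_high": "xhigh",
  "extra-high": "xhigh",
  "extra high": "xhigh",
  "extra_high": "xhigh",
  "highest": "high",
};

// Returns the canonical level, or null when the input is invalid
// (the command would then be rejected with the model's valid options).
function normalizeThinkingLevel(raw: string): ThinkingLevel | null {
  const key = raw.trim().toLowerCase();
  if ((LEVELS as readonly string[]).includes(key)) return key as ThinkingLevel;
  return ALIASES[key] ?? null;
}
```

For example, `normalizeThinkingLevel("extra high")` canonicalizes to `xhigh`, while an unknown word like `big` yields `null` and would be rejected.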
The thinking level (also called think effort) accepts alias spellings: `x-high`, `x_high`, `extra-high`, `extra high`, and `extra_high` map to `xhigh`; `highest` maps to `high`; `on` is also accepted.

`adaptive`, `xhigh`, and `max` are only advertised for provider/model profiles that support them. Typed directives for unsupported levels are rejected with that model's valid options. Stored levels degrade gracefully: `adaptive` falls back to `medium` on non-adaptive models, while `xhigh` and `max` fall back to the largest supported non-off level for the selected model.

Provider mappings:

- **Anthropic (Opus 4.7)** defaults to `adaptive` when no explicit thinking level is set, and maps `/think xhigh` to adaptive thinking plus `output_config.effort: "xhigh"`, because `/think` is a thinking directive and `xhigh` is the Opus 4.7 effort setting. `/think max` is also accepted; it maps to the same provider-owned max effort path.
- **DeepSeek** accepts `/think xhigh|max`; both map to DeepSeek `reasoning_effort: "max"`, while lower non-off levels map to `high`.
- **OpenRouter** models accept `/think xhigh` and send OpenRouter-supported `reasoning_effort` values. Stored `max` overrides fall back to `xhigh`.
- **Ollama** accepts `/think low|medium|high|max`; `max` maps to native `think: "high"` because Ollama's native API accepts `low`, `medium`, and `high` effort strings.
- **OpenAI** routes `/think` through model-specific Responses API effort support. `/think off` sends `reasoning.effort: "none"` only when the target model supports it; otherwise OpenClaw omits the disabled reasoning payload instead of sending an unsupported value.
- **Custom providers** can opt in to `/think xhigh` by setting `models.providers.<provider>.models[].compat.supportedReasoningEfforts` to include `"xhigh"`. This uses the same compat metadata that maps outbound OpenAI reasoning effort payloads, so menus, session validation, agent CLI, and llm-task agree with transport behavior.
- **Gemini** maps `/think adaptive` to Gemini's provider-owned dynamic thinking. Gemini 3 requests omit a fixed `thinkingLevel`, while Gemini 2.5 requests send `thinkingBudget: -1`; fixed levels still map to the closest Gemini `thinkingLevel` or budget for that model family.
- **MiniMax** (`minimax/*`) on the Anthropic-compatible streaming path defaults to `thinking: { type: "disabled" }` unless you explicitly set `thinking` in model params or request params.
  This avoids leaked `reasoning_content` deltas from MiniMax's non-native Anthropic stream format.
- **Z.AI** (`zai/*`) only supports binary thinking (on/off). Any non-off level is treated as on (mapped to `low`).
- **Moonshot** (`moonshot/*`) maps `/think off` to `thinking: { type: "disabled" }` and any non-off level to `thinking: { type: "enabled" }`. When thinking is enabled, Moonshot only accepts `tool_choice` `auto|none`; OpenClaw normalizes incompatible values to `auto`.

Defaults resolve in order: the stored session level, then the per-agent default (`agents.list[].thinkingDefault` in config), then the global default (`agents.defaults.thinkingDefault` in config). Otherwise, reasoning-capable models default to `medium` or the nearest supported non-off level for that model, and non-reasoning models stay `off`.

Examples: `/think:medium` or `/t high`. Clear the stored level with `/think:off` or a session idle reset. The command confirms each change (`Thinking level set to high.` / `Thinking disabled.`). If the level is invalid (e.g. `/thinking big`), the command is rejected with a hint and the session state is left unchanged. Use `/think` (or `/think:`) with no argument to see the current thinking level.

The thinking level also maps to `--effort` when using claude-cli; see CLI backends.

## Fast mode

Syntax: `/fast on|off`. The command confirms each change (`Fast mode enabled.` / `Fast mode disabled.`). Use `/fast` (or `/fast status`) with no mode to see the current effective fast-mode state.

Resolution order: the stored `/fast on|off` session state, then the per-agent default (`agents.list[].fastModeDefault`), then the per-model param (`agents.defaults.models["<provider>/<model>"].params.fastMode`), then `off`.

Provider behavior:

- For `openai/*`, fast mode maps to OpenAI priority processing by sending `service_tier=priority` on supported Responses requests.
- For `openai-codex/*`, fast mode sends the same `service_tier=priority` flag on Codex Responses. OpenClaw keeps one shared `/fast` toggle across both auth paths.
- For `anthropic/*` requests, including OAuth-authenticated traffic sent to api.anthropic.com, fast mode maps to Anthropic service tiers: `/fast on` sets `service_tier=auto`, `/fast off` sets `service_tier=standard_only`.
- For `minimax/*` on the Anthropic-compatible path, `/fast on` (or `params.fastMode: true`) rewrites MiniMax-M2.7 to MiniMax-M2.7-highspeed.

Explicit `serviceTier` / `service_tier` model params override the fast-mode default when both are set.
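The per-provider fast-mode effects above can be sketched as one dispatch function. This is a hedged illustration only; the function name and return shape are invented, not OpenClaw's real internals:

```typescript
// Hypothetical sketch of the fast-mode -> provider flag mapping described
// above; names and shapes are illustrative, not OpenClaw's real internals.
type FastModeEffect =
  | { serviceTier: string } // tier flag added to the outbound request
  | { model: string }       // model rewrite (MiniMax highspeed variant)
  | null;                   // no change to the request

function applyFastMode(provider: string, model: string, fastOn: boolean): FastModeEffect {
  if (provider === "openai" || provider === "openai-codex") {
    // Priority processing only when fast mode is on.
    return fastOn ? { serviceTier: "priority" } : null;
  }
  if (provider === "anthropic") {
    // /fast on -> auto, /fast off -> standard_only.
    return { serviceTier: fastOn ? "auto" : "standard_only" };
  }
  if (provider === "minimax" && fastOn && model === "MiniMax-M2.7") {
    return { model: "MiniMax-M2.7-highspeed" };
  }
  return null;
}
```

Note that in the real behavior described above, an explicit `serviceTier` / `service_tier` model param would take precedence over this default mapping.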
OpenClaw still skips Anthropic service-tier injection for non-Anthropic proxy base URLs. `/status` shows `Fast` only when fast mode is enabled.

## Verbose

Levels: `on` (minimal) | `full` | `off` (default). The command confirms each change (`Verbose logging enabled.` / `Verbose logging disabled.`); invalid levels return a hint without changing state. `/verbose off` stores an explicit session override; clear it via the Sessions UI by choosing inherit. Use `/verbose` (or `/verbose:`) with no argument to see the current verbose level.

When verbose is `on` or `full`, tool calls are summarized as `<emoji> <tool-name>: <arg>` when available. These tool summaries are sent as soon as each tool starts (separate bubbles), not as streaming deltas. With `full`, tool outputs are also forwarded after completion (separate bubble, truncated to a safe length). If you toggle `/verbose on|full|off` while a run is in-flight, subsequent tool bubbles honor the new setting.

`agents.defaults.toolProgressDetail` controls the shape of `/verbose` tool summaries and progress-draft tool lines. Use `"explain"` (default) for compact human labels such as `🛠️ Exec: checking JS syntax`; use `"raw"` when you also want the raw command/detail appended for debugging. Per-agent `agents.list[].toolProgressDetail` overrides the default.
Example of each setting:

- `explain`: `🛠️ Exec: check JS syntax for /tmp/app.js`
- `raw`: `🛠️ Exec: check JS syntax for /tmp/app.js, node --check /tmp/app.js`

## Trace

Levels: `on` | `off` (default). The command confirms each change (`Plugin trace enabled.` / `Plugin trace disabled.`). Use `/trace` (or `/trace:`) with no argument to see the current trace level. `/trace` is narrower than `/verbose`: it only exposes plugin-owned trace/debug lines such as Active Memory debug summaries. Trace output surfaces in `/status` and as a follow-up diagnostic message after the normal assistant reply.

## Reasoning

Levels: `on|off|stream`. When enabled, model reasoning is shown under a `Reasoning:` prefix. `stream` (Telegram only): streams reasoning into the Telegram draft bubble while the reply is generating, then sends the final answer without reasoning. `/reason` is an alias. Use `/reasoning` (or `/reasoning:`) with no argument to see the current reasoning level. Resolution falls back to the per-agent default (`agents.list[].reasoningDefault`), then the fallback (`off`).

Malformed local-model reasoning tags are handled conservatively. Closed `<think>...</think>` blocks stay hidden on normal replies, and unclosed reasoning after already visible text is also hidden. If a reply is fully wrapped in a single unclosed opening tag and would otherwise deliver as empty text, OpenClaw removes the malformed opening tag and delivers the remaining text.
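The malformed-tag rules above can be sketched as a small sanitizer. This is a hypothetical illustration of the described behavior; OpenClaw's real implementation may differ in details:

```typescript
// Hypothetical sketch of the conservative <think>-tag handling described
// above; not OpenClaw's actual code.
function stripReasoning(reply: string): string {
  // Closed <think>...</think> blocks stay hidden.
  let text = reply.replace(/<think>[\s\S]*?<\/think>/g, "");
  const open = text.indexOf("<think>");
  if (open > 0 && text.slice(0, open).trim() !== "") {
    // Unclosed reasoning after already visible text: hide it.
    text = text.slice(0, open);
  } else if (open !== -1) {
    // Reply fully wrapped in a single unclosed opening tag: hiding the whole
    // span would deliver empty text, so drop only the malformed tag.
    text = text.slice(0, open) + text.slice(open + "<think>".length);
  }
  return text.trim();
}
```

So `"hello <think>hmm"` delivers `hello`, while `"<think>actual answer"` delivers `actual answer` rather than an empty message.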
## Heartbeats

The default heartbeat prompt is: `Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.` Inline directives in a heartbeat message apply as usual (but avoid changing session defaults from heartbeats). To include the `Reasoning:` message (when available), set `agents.defaults.heartbeat.includeReasoning: true` or per-agent `agents.list[].heartbeat.includeReasoning: true`.

## Gateway and UI integration

Changing the stored thinking level over the gateway goes through `sessions.patch`; it does not wait for the next send and it is not a one-shot `thinkingOnce` override. The picker shows `Default (<resolved level>)`, where the resolved default comes from the active session model's provider thinking profile plus the same fallback logic that `/status` and `session_status` use. The picker renders the `thinkingLevels` returned by the gateway session row/defaults, with `thinkingOptions` kept as a legacy label list. The browser UI does not keep its own provider regex list; plugins own model-specific level sets. `/think:<level>` still works and updates the same stored session level, so chat directives and the picker stay in sync.

## Plugin API

Provider plugins implement `resolveThinkingProfile(ctx)` to define the model's supported levels and default. Claude-family providers can reuse `resolveClaudeThinkingProfile(modelId)` from `openclaw/plugin-sdk/provider-model-shared` so direct Anthropic and proxy catalogs stay aligned. Each profile level has an `id` (`off`, `minimal`, `low`, `medium`, `high`, `xhigh`, `adaptive`, or `max`) and may include a display label. Binary providers use `{ id: "low", label: "on" }`.

Plugins validate levels through `api.runtime.agent.resolveThinkingPolicy({ provider, model })` plus `api.runtime.agent.normalizeThinkingLevel(...)`; they should not keep their own provider/model level lists. The runtime passes the model `catalog` into `resolveThinkingPolicy` so `compat.supportedReasoningEfforts` opt-ins are reflected in plugin-side validation. Legacy helpers (`supportsXHighThinking`, `isBinaryThinking`, and `resolveDefaultThinkingLevel`) remain as compatibility adapters, but new custom level sets should use `resolveThinkingProfile`. The gateway exposes `thinkingLevels`, `thinkingOptions`, and `thinkingDefault` so ACP/chat clients render the same profile ids and labels that runtime validation uses.
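The profile shape and the stored-level fallback rules (`adaptive` to `medium`, `xhigh`/`max` to the largest supported non-off level) can be sketched together. The types and function below are illustrative assumptions, not the exact `plugin-sdk` definitions:

```typescript
// Hypothetical sketch mirroring the profile shape and fallback behavior
// described above; not the exact plugin-sdk types.
interface ThinkingProfileLevel { id: string; label?: string; }
interface ThinkingProfile { levels: ThinkingProfileLevel[]; defaultLevel: string; }

// A binary (on/off) provider advertises off plus low labeled "on".
const binaryProfile: ThinkingProfile = {
  levels: [{ id: "off" }, { id: "low", label: "on" }],
  defaultLevel: "off",
};

const ORDER = ["off", "minimal", "low", "medium", "high", "xhigh", "max"];

// Degrade a stored level to something the profile supports:
// adaptive -> medium on non-adaptive models; xhigh/max -> largest non-off.
function resolveStoredLevel(profile: ThinkingProfile, stored: string): string {
  const ids = profile.levels.map((l) => l.id);
  if (ids.includes(stored)) return stored;
  if (stored === "adaptive") {
    return ids.includes("medium") ? "medium" : resolveStoredLevel(profile, "max");
  }
  if (stored === "xhigh" || stored === "max") {
    const nonOff = ids
      .filter((id) => id !== "off")
      .sort((a, b) => ORDER.indexOf(a) - ORDER.indexOf(b));
    return nonOff[nonOff.length - 1] ?? "off";
  }
  return profile.defaultLevel;
}
```

On the binary profile, a stored `xhigh` degrades to `low` (displayed as "on"), matching the rule that binary providers treat any non-off level as on.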