docs/gateway/cli-backends.md
OpenClaw can run local AI CLIs as a text-only fallback when API providers are down, rate-limited, or temporarily misbehaving. This is intentionally conservative: responses are text-only, and only backends that opt in with `bundleMcp: true` can receive gateway tools via a loopback MCP bridge. This is designed as a safety net rather than a primary path. Use it when you want “always works” text responses without relying on external APIs.
If you want a full harness runtime with ACP session controls, background tasks, thread/conversation binding, and persistent external coding sessions, use ACP Agents instead. CLI backends are not ACP.
You can use Codex CLI without any config (the bundled OpenAI plugin registers a default backend):
```
openclaw agent --message "hi" --model codex-cli/gpt-5.5
```
If your gateway runs under launchd/systemd and PATH is minimal, add just the command path:
```json5
{
  agents: {
    defaults: {
      cliBackends: {
        "codex-cli": {
          command: "/opt/homebrew/bin/codex",
        },
      },
    },
  },
}
```
That’s it. No keys, no extra auth config needed beyond the CLI itself.
If you use a bundled CLI backend as the primary message provider on a
gateway host, OpenClaw now auto-loads the owning bundled plugin when your config
explicitly references that backend in a model ref or under
agents.defaults.cliBackends.
Add a CLI backend to your fallback list so it only runs when primary models fail:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["codex-cli/gpt-5.5"],
      },
      models: {
        "anthropic/claude-opus-4-6": { alias: "Opus" },
        "codex-cli/gpt-5.5": {},
      },
    },
  },
}
```
Notes:

- If you set `agents.defaults.models` (an allowlist), you must include your CLI backend models there too.

All CLI backends live under `agents.defaults.cliBackends`. Each entry is keyed by a provider id (e.g. `codex-cli`, `my-cli`), and the provider id becomes the left side of your model ref: `<provider>/<model>`.
```json5
{
  agents: {
    defaults: {
      cliBackends: {
        "codex-cli": {
          command: "/opt/homebrew/bin/codex",
        },
        "my-cli": {
          command: "my-cli",
          args: ["--json"],
          output: "json",
          input: "arg",
          modelArg: "--model",
          modelAliases: {
            "claude-opus-4-6": "opus",
            "claude-sonnet-4-6": "sonnet",
          },
          sessionArg: "--session",
          sessionMode: "existing",
          sessionIdFields: ["session_id", "conversation_id"],
          systemPromptArg: "--system",
          // For CLIs with a dedicated prompt-file flag:
          // systemPromptFileArg: "--system-file",
          // Codex-style CLIs can point at a prompt file instead:
          // systemPromptFileConfigArg: "-c",
          // systemPromptFileConfigKey: "model_instructions_file",
          systemPromptWhen: "first",
          imageArg: "--image",
          imageMode: "repeat",
          serialize: true,
        },
      },
    },
  },
}
```
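To illustrate how `modelAliases` behaves, here is a small sketch (not OpenClaw's actual code; `resolveModelArgs` is a hypothetical helper) of resolving a model ref like `my-cli/claude-opus-4-6` against the config above:

```javascript
// Sketch only: how a <provider>/<model> ref could resolve to CLI argv,
// assuming the modelAliases table from the example config above.
const backend = {
  command: "my-cli",
  modelArg: "--model",
  modelAliases: {
    "claude-opus-4-6": "opus",
    "claude-sonnet-4-6": "sonnet",
  },
};

function resolveModelArgs(modelRef, cfg) {
  const [provider, model] = modelRef.split("/", 2);
  // Fall back to the raw model name when no alias is declared.
  const cliModel = cfg.modelAliases?.[model] ?? model;
  return { provider, argv: [cfg.modelArg, cliModel] };
}

const r = resolveModelArgs("my-cli/claude-opus-4-6", backend);
console.log(r.argv.join(" ")); // --model opus
```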
The bundled Anthropic claude-cli backend keeps a Claude stdio process alive per
OpenClaw session and sends follow-up turns over stream-json stdin.

The bundled OpenAI codex-cli backend passes OpenClaw's system prompt through
Codex's model_instructions_file config override (-c model_instructions_file="..."). Codex does not expose a Claude-style
--append-system-prompt flag, so OpenClaw writes the assembled prompt to a
temporary file for each fresh Codex CLI session.
The bundled Anthropic claude-cli backend receives the OpenClaw skills snapshot
in two ways: the compact OpenClaw skills catalog in the appended system prompt, and
a temporary Claude Code plugin passed with --plugin-dir. The plugin contains
only the eligible skills for that agent/session, so Claude Code's native skill
resolver sees the same filtered set that OpenClaw would otherwise advertise in
the prompt. Skill env/API key overrides are still applied by OpenClaw to the
child process environment for the run.
Claude CLI also has its own noninteractive permission mode. OpenClaw maps that
to the existing exec policy instead of adding Claude-specific config: when the
effective requested exec policy is YOLO (tools.exec.security: "full" and
tools.exec.ask: "off"), OpenClaw adds --permission-mode bypassPermissions.
Per-agent agents.list[].tools.exec settings override global tools.exec for
that agent. To force a different Claude mode, set explicit raw backend args
such as --permission-mode default or --permission-mode acceptEdits under
agents.defaults.cliBackends.claude-cli.args and matching resumeArgs.
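For instance, to pin Claude Code to `acceptEdits` regardless of exec policy, a config along these lines should work (sketch; assumes raw args take precedence over the exec-policy mapping as described above):

```json5
{
  agents: {
    defaults: {
      cliBackends: {
        "claude-cli": {
          // Explicit raw args win over the YOLO → bypassPermissions mapping.
          args: ["--permission-mode", "acceptEdits"],
          resumeArgs: ["--permission-mode", "acceptEdits"],
        },
      },
    },
  },
}
```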
The bundled Anthropic claude-cli backend also maps OpenClaw /think levels
to Claude Code's native --effort flag for non-off levels. minimal and
low map to low, adaptive and medium map to medium, and high,
xhigh, and max map directly. Other CLI backends need their owning plugin to
declare an equivalent argv mapper before /think can affect the spawned CLI.
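The /think mapping above can be sketched as a plain lookup (illustrative only; `effortArgs` is a hypothetical name, and the real mapper lives in the bundled plugin):

```javascript
// Illustrative mapping of OpenClaw /think levels to Claude Code --effort,
// following the table described above. "off" yields no flag.
const EFFORT_BY_THINK = {
  minimal: "low",
  low: "low",
  adaptive: "medium",
  medium: "medium",
  high: "high",
  xhigh: "xhigh",
  max: "max",
};

function effortArgs(thinkLevel) {
  const effort = EFFORT_BY_THINK[thinkLevel];
  return effort ? ["--effort", effort] : [];
}

console.log(effortArgs("adaptive")); // [ '--effort', 'medium' ]
console.log(effortArgs("off")); // []
```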
Before OpenClaw can use the bundled claude-cli backend, Claude Code itself
must already be logged in on the same host:
```
claude auth login
claude auth status --text
openclaw models auth login --provider anthropic --method cli --set-default
```
Use agents.defaults.cliBackends.claude-cli.command only when the claude
binary is not already on PATH.
Session flags:

- Use `sessionArg` (e.g. `--session-id`), or `sessionArgs` (with the `{sessionId}` placeholder) when the ID needs to be inserted into multiple flags.
- Use `resumeArgs` (replaces `args` when resuming) and optionally `resumeOutput` (for non-JSON resumes).
- `sessionMode`:
  - `always`: always send a session id (a new UUID if none is stored).
  - `existing`: only send a session id if one was stored before.
  - `none`: never send a session id.
- `claude-cli` defaults to `liveSession: "claude-stdio"`, `output: "jsonl"`, and `input: "stdin"`, so follow-up turns reuse the live Claude process while it is active. Warm stdio is the default now, including for custom configs that omit transport fields. If the Gateway restarts or the idle process exits, OpenClaw resumes from the stored Claude session id. Stored session ids are verified against an existing readable project transcript before resume, so phantom bindings are cleared with `reason=transcript-missing` instead of silently starting a fresh Claude CLI session under `--resume`.
- Raw output limits live under `agents.defaults.cliBackends.claude-cli.reliability.outputLimits.maxTurnRawChars` and `maxTurnLines`; OpenClaw clamps those settings to 64 MiB and 100,000 lines.
- `/reset` and explicit `session.reset` policies still clear the stored session binding.

Serialization notes:
- `serialize: true` keeps same-lane runs ordered.

When a `claude-cli` attempt fails over to a non-CLI candidate in
`agents.defaults.model.fallbacks`, OpenClaw seeds the next attempt with a
context prelude harvested from Claude Code's local JSONL transcript at
`~/.claude/projects/`. Without this seed, the fallback provider would start
cold because OpenClaw's own session transcript is empty for `claude-cli` runs.
- The prelude starts from the latest `/compact` summary or `compact_boundary` marker, then appends the most recent post-boundary turns up to a char budget. Pre-boundary turns are dropped because the summary already represents them.
- Tool activity is condensed into `(tool call: name)` and `(tool result: …)` hints to keep the prompt budget honest. The summary is labeled `(truncated)` if it overflows.
- `claude-cli` to `claude-cli` fallbacks rely on Claude's own `--resume` and skip the prelude.

If your CLI accepts image paths, set `imageArg`:
```json5
imageArg: "--image",
imageMode: "repeat"
```

OpenClaw will write base64 images to temp files. If `imageArg` is set, those
paths are passed as CLI args. If `imageArg` is missing, OpenClaw appends the
file paths to the prompt (path injection), which is enough for CLIs that
auto-load local files from plain paths.
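As a sketch of the two behaviors above (`attachImages` is a hypothetical helper, simplified from the described semantics):

```javascript
// Simplified sketch: with imageArg set, image paths become CLI args
// (imageMode: "repeat" repeats the flag per path); without it, paths are
// appended to the prompt text instead (path injection).
function attachImages(prompt, argv, imagePaths, imageArg) {
  if (imageArg) {
    const extra = imagePaths.flatMap((p) => [imageArg, p]);
    return { prompt, argv: argv.concat(extra) };
  }
  return { prompt: [prompt, ...imagePaths].join("\n"), argv };
}

const withFlag = attachImages("describe this", ["--json"], ["/tmp/a.png"], "--image");
console.log(withFlag.argv); // [ '--json', '--image', '/tmp/a.png' ]
```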
Output modes:

- `output: "json"` (default) tries to parse JSON and extract text + session id, falling back to the `response` field and deriving usage from stats when `usage` is missing or empty.
- `output: "jsonl"` parses JSONL streams (for example Codex CLI `--json`) and extracts the final agent message plus session identifiers when present.
- `output: "text"` treats stdout as the final response.

Input modes:

- `input: "arg"` (default) passes the prompt as the last CLI arg.
- `input: "stdin"` sends the prompt via stdin.
- When `maxPromptArgChars` is set and the prompt exceeds it, stdin is used.

The bundled OpenAI plugin also registers a default for `codex-cli`:
- `command: "codex"`
- `args: ["exec","--json","--color","never","--sandbox","workspace-write","--skip-git-repo-check"]`
- `resumeArgs: ["exec","resume","{sessionId}","-c","sandbox_mode=\"workspace-write\"","--skip-git-repo-check"]`
- `output: "jsonl"`
- `resumeOutput: "text"`
- `modelArg: "--model"`
- `imageArg: "--image"`
- `sessionMode: "existing"`

The bundled Google plugin also registers a default for `google-gemini-cli`:
- `command: "gemini"`
- `args: ["--output-format", "json", "--prompt", "{prompt}"]`
- `resumeArgs: ["--resume", "{sessionId}", "--output-format", "json", "--prompt", "{prompt}"]`
- `imageArg: "@"`
- `imagePathScope: "workspace"`
- `modelArg: "--model"`
- `sessionMode: "existing"`
- `sessionIdFields: ["session_id", "sessionId"]`

Prerequisite: the local Gemini CLI must be installed and available as
`gemini` on PATH (`brew install gemini-cli` or
`npm install -g @google/gemini-cli`).
Gemini CLI JSON notes:

- The final text is read from the `response` field.
- Usage is derived from `stats` when `usage` is absent or empty.
- `stats.cached` is normalized into OpenClaw `cacheRead`.
- When `stats.input` is missing, OpenClaw derives input tokens from
  `stats.input_tokens - stats.cached`.

Override only if needed (the common case is an absolute `command` path).
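The usage derivation above can be sketched as follows (illustrative; `usageFromStats` is a hypothetical helper and the `output_tokens` field is an assumption, not from the notes):

```javascript
// Sketch of normalizing Gemini CLI stats into usage counts, per the notes above.
function usageFromStats(stats) {
  const cacheRead = stats.cached ?? 0;
  // Prefer stats.input; otherwise derive it from input_tokens minus cached.
  const input =
    stats.input ?? Math.max(0, (stats.input_tokens ?? 0) - cacheRead);
  return { input, output: stats.output_tokens ?? 0, cacheRead };
}

console.log(usageFromStats({ input_tokens: 1200, cached: 200, output_tokens: 50 }));
// { input: 1000, output: 50, cacheRead: 200 }
```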
CLI backend defaults are now part of the plugin surface:

- Plugins register backends with `api.registerCliBackend(...)`.
- The backend `id` becomes the provider prefix in model refs.
- `agents.defaults.cliBackends.<id>` still overrides the plugin default.
- Config normalization runs through the `normalizeConfig` hook.

Plugins that need tiny prompt/message compatibility shims can declare bidirectional text transforms without replacing a provider or CLI backend:
```js
api.registerTextTransforms({
  input: [
    { from: /red basket/g, to: "blue basket" },
    { from: /paper ticket/g, to: "digital ticket" },
    { from: /left shelf/g, to: "right shelf" },
  ],
  output: [
    { from: /blue basket/g, to: "red basket" },
    { from: /digital ticket/g, to: "paper ticket" },
    { from: /right shelf/g, to: "left shelf" },
  ],
});
```
input rewrites the system prompt and user prompt passed to the CLI. output
rewrites streamed assistant deltas and parsed final text before OpenClaw handles
its own control markers and channel delivery.
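A minimal sketch of the substitution semantics (the registration API above is the real surface; `applyTransforms` here is just a hypothetical illustration of ordered regex rewrites):

```javascript
// Sketch: apply an ordered list of { from, to } regex transforms to text.
function applyTransforms(text, transforms) {
  return transforms.reduce((acc, t) => acc.replace(t.from, t.to), text);
}

const input = [{ from: /red basket/g, to: "blue basket" }];
const output = [{ from: /blue basket/g, to: "red basket" }];

console.log(applyTransforms("the red basket", input)); // the blue basket
// The output transforms undo the rewrite on the model's reply:
console.log(applyTransforms("the blue basket", output)); // the red basket
```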
For CLIs that emit Claude Code stream-json compatible JSONL, set
jsonlDialect: "claude-stream-json" on that backend's config.
CLI backends do not receive OpenClaw tool calls directly, but a backend can
opt into a generated MCP config overlay with bundleMcp: true.
Current bundled behavior:

- `claude-cli`: generated strict MCP config file
- `codex-cli`: inline config overrides for `mcp_servers`; the generated OpenClaw loopback server is marked with Codex's per-server tool approval mode so MCP calls cannot stall on local approval prompts
- `google-gemini-cli`: generated Gemini system settings file

When bundle MCP is enabled, OpenClaw generates that backend-specific config for the run and passes a per-run loopback token (`OPENCLAW_MCP_TOKEN`). If no MCP servers are enabled, OpenClaw still injects a strict config when a backend opts into bundle MCP so background runs stay isolated.
Session-scoped bundled MCP runtimes are cached for reuse within a session, then
reaped after mcp.sessionIdleTtlMs milliseconds of idle time (default 10
minutes; set 0 to disable). One-shot embedded runs such as auth probes,
slug generation, and active-memory recall request cleanup at run end so stdio
children and Streamable HTTP/SSE streams do not outlive the run.
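For example, to shorten the idle TTL to one minute (sketch; the key placement under a top-level `mcp` block is assumed from the setting name):

```json5
{
  mcp: {
    // Reap idle session-scoped bundled MCP runtimes after 60s (0 disables reaping).
    sessionIdleTtlMs: 60000,
  },
}
```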
Troubleshooting:

- Tool access requires `bundleMcp: true` on the backend.
- If a `--json` run returns no session id, OpenClaw sessions still work normally.
- If the binary is not on PATH, set `command` to a full path.
- Use `modelAliases` to map provider/model names to CLI model names.
- Resume only happens when `sessionArg` is set and `sessionMode` is not `none` (Codex CLI currently cannot resume with JSON output).
- For images, set `imageArg` (and verify the CLI supports file paths).