docs/cli/onboard.md
# openclaw onboard

Full guided onboarding for local or remote Gateway setup. Use this when you want OpenClaw to walk through model auth, workspace, gateway, channels, skills, and health in one flow.
```
openclaw onboard
openclaw onboard --modern
openclaw onboard --flow quickstart
openclaw onboard --flow manual
openclaw onboard --flow import
openclaw onboard --import-from hermes --import-source ~/.hermes
openclaw onboard --skip-bootstrap
openclaw onboard --mode remote --remote-url wss://gateway-host:18789
```
`--flow import` uses plugin-owned migration providers such as Hermes. It only runs against a fresh OpenClaw setup; if existing config, credentials, sessions, or workspace memory/identity files are present, reset or choose a fresh setup before importing.

`--modern` starts the Crestodian conversational onboarding preview. Without `--modern`, `openclaw onboard` keeps the classic onboarding flow.

For plaintext private-network `ws://` targets (trusted networks only), set `OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1` in the onboarding process environment. There is no `openclaw.json` equivalent for this client-side transport break-glass.
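As a sketch, the break-glass can be scoped to a single invocation by setting the variable inline (the host `192.168.1.10` is a hypothetical trusted-LAN address; the port matches the remote example above):

```shell
# Trusted private network only: permit a plaintext ws:// transport for this run.
# 192.168.1.10 is a hypothetical LAN host, not a real default.
OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1 \
  openclaw onboard --mode remote --remote-url ws://192.168.1.10:18789
```

Scoping the variable to one command keeps the break-glass out of shell profiles and later sessions.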
Non-interactive custom provider:
```
openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "https://llm.example.com/v1" \
  --custom-model-id "foo-large" \
  --custom-api-key "$CUSTOM_API_KEY" \
  --secret-input-mode plaintext \
  --custom-compatibility openai \
  --custom-image-input
```
`--custom-api-key` is optional in non-interactive mode. If omitted, onboarding checks `CUSTOM_API_KEY`.

OpenClaw marks common vision model IDs as image-capable automatically. Pass `--custom-image-input` for unknown custom vision IDs, or `--custom-text-input` to force text-only metadata.
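Since the flag falls back to `CUSTOM_API_KEY`, the custom-provider example above can be rewritten without a plaintext key on the command line; this is a sketch reusing the same placeholder endpoint and model ID:

```shell
# The key is read from CUSTOM_API_KEY instead of appearing in argv / shell history.
export CUSTOM_API_KEY="sk-placeholder"   # placeholder, not a real key
openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "https://llm.example.com/v1" \
  --custom-model-id "foo-large" \
  --secret-input-mode plaintext \
  --custom-compatibility openai
```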
LM Studio also supports a provider-specific key flag in non-interactive mode:
```
openclaw onboard --non-interactive \
  --auth-choice lmstudio \
  --custom-base-url "http://localhost:1234/v1" \
  --custom-model-id "qwen/qwen3.5-9b" \
  --lmstudio-api-key "$LM_API_TOKEN" \
  --accept-risk
```
Non-interactive Ollama:
```
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://ollama-host:11434" \
  --custom-model-id "qwen3.5:27b" \
  --accept-risk
```
`--custom-base-url` defaults to `http://127.0.0.1:11434`. `--custom-model-id` is optional; if omitted, onboarding uses Ollama's suggested defaults. Cloud model IDs such as `kimi-k2.5:cloud` also work here.
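Because both flags have defaults, a minimal Ollama onboarding can omit them entirely; this sketch assumes an Ollama daemon already listening at the default address:

```shell
# Uses the default base URL (http://127.0.0.1:11434) and Ollama's suggested default model.
openclaw onboard --non-interactive --auth-choice ollama --accept-risk
```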
Store provider keys as refs instead of plaintext:
```
openclaw onboard --non-interactive \
  --auth-choice openai-api-key \
  --secret-input-mode ref \
  --accept-risk
```
With `--secret-input-mode ref`, onboarding writes env-backed refs instead of plaintext key values. For auth-profile backed providers this writes `keyRef` entries; for custom providers it writes `models.providers.<id>.apiKey` as an env ref (for example `{ source: "env", provider: "default", id: "CUSTOM_API_KEY" }`).
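For a custom provider, the written config fragment is shaped roughly like the sketch below; the nesting is assumed from the `models.providers.<id>.apiKey` path and the ref value quoted above, with `custom` standing in for the provider id:

```json
{
  "models": {
    "providers": {
      "custom": {
        "apiKey": { "source": "env", "provider": "default", "id": "CUSTOM_API_KEY" }
      }
    }
  }
}
```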
Non-interactive ref mode contract:
- Ref mode resolves the key from the provider's expected env var (for example `OPENAI_API_KEY`).
- A plaintext key flag (for example `--openai-api-key`) does not satisfy ref mode unless that env var is also set.

Gateway token options in non-interactive mode:
- `--gateway-auth token --gateway-token <token>` stores a plaintext token.
- `--gateway-auth token --gateway-token-ref-env <name>` stores `gateway.auth.token` as an env SecretRef.
- `--gateway-token` and `--gateway-token-ref-env` are mutually exclusive.
- `--gateway-token-ref-env` requires a non-empty env var in the onboarding process environment.
- With `--install-daemon`, when token auth requires a token, SecretRef-managed gateway tokens are validated but not persisted as resolved plaintext in supervisor service environment metadata.
- With `--install-daemon`, if token mode requires a token and the configured token SecretRef is unresolved, onboarding fails closed with remediation guidance.
- With `--install-daemon`, if both `gateway.auth.token` and `gateway.auth.password` are configured and `gateway.auth.mode` is unset, onboarding blocks the install until the mode is set explicitly.
- Onboarding writes `gateway.mode="local"` into the config. If a later config file is missing `gateway.mode`, treat that as config damage or an incomplete manual edit, not as a valid local-mode shortcut.
- `--allow-unconfigured` is a separate gateway runtime escape hatch. It does not mean onboarding may omit `gateway.mode`.

Example:
```
export OPENCLAW_GATEWAY_TOKEN="your-token"
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice skip \
  --gateway-auth token \
  --gateway-token-ref-env OPENCLAW_GATEWAY_TOKEN \
  --accept-risk
```
Non-interactive local gateway health:
- Without `--skip-health`, onboarding waits for a reachable local gateway before it exits successfully.
- `--install-daemon` starts the managed gateway install path first. Without it, you must already have a local gateway running, for example `openclaw gateway run`.
- Skip the health wait with `--skip-health`.
- Pass `--skip-bootstrap` to set `agents.defaults.skipBootstrap: true` and skip creating `AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, and `BOOTSTRAP.md`.
- On Windows, `--install-daemon` tries Scheduled Tasks first and falls back to a per-user Startup-folder login item if task creation is denied.

Interactive onboarding behavior with reference mode:
- Secret refs may use sources other than env (for example `file` or `exec`).

Non-interactive Z.AI example:

```
# Promptless endpoint selection
openclaw onboard --non-interactive \
  --auth-choice zai-coding-global \
  --zai-api-key "$ZAI_API_KEY"

# Other Z.AI endpoint choices:
# --auth-choice zai-coding-cn
# --auth-choice zai-global
# --auth-choice zai-cn
```
Non-interactive Mistral example:
```
openclaw onboard --non-interactive \
  --auth-choice mistral-api-key \
  --mistral-api-key "$MISTRAL_API_KEY"
```
If the preferred-provider filter yields no loaded models yet, onboarding falls back to the unfiltered catalog instead of leaving the picker empty.
- **Grok** can offer optional `x_search` setup with the same `XAI_API_KEY` and an `x_search` model choice.
- **Kimi** can ask for the Moonshot API region (`api.moonshot.ai` vs `api.moonshot.cn`) and the default Kimi web-search model.
```
openclaw channels add
openclaw configure
openclaw agents add <name>
```
Use `openclaw setup` instead when you only need the baseline config/workspace. Use `openclaw configure` later for targeted changes and `openclaw channels add` for channel-only setup.