OpenClaw is a personal AI assistant that runs on your own devices. It bridges messaging services (WhatsApp, Telegram, Slack, Discord, iMessage, and more) to AI coding agents through a centralized gateway.
```shell
ollama launch openclaw
```

Ollama handles setup automatically.
<Note>OpenClaw requires a larger context window. It is recommended to use a context window of at least 64k tokens if using local models. See Context length for more information.</Note>
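If you are running local models, one way to raise the context window is the `OLLAMA_CONTEXT_LENGTH` environment variable. This is a sketch, assuming a recent Ollama release that reads the variable at server startup:

```shell
# Assumption: the Ollama server honors OLLAMA_CONTEXT_LENGTH at startup
# (supported in recent releases); restart the server after setting it.
export OLLAMA_CONTEXT_LENGTH=65536
echo "context length set to $OLLAMA_CONTEXT_LENGTH tokens"
```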
<Note>Previously known as Clawdbot. `ollama launch clawdbot` still works as an alias.</Note>
OpenClaw ships with a bundled Ollama `web_search` provider that lets local or cloud-backed Ollama setups search the web through the configured Ollama host.
```shell
ollama launch openclaw
```
Ollama web search is enabled automatically when launching OpenClaw through Ollama. To configure it manually:
```shell
openclaw configure --section web
```

<Note>Ollama web search for local models requires `ollama signin`.</Note>
To change the model without starting the gateway and TUI:
```shell
ollama launch openclaw --config
```
To use a specific model directly:
```shell
ollama launch openclaw --model kimi-k2.5:cloud
```
If the gateway is already running, it restarts automatically to pick up the new model.
Cloud models:

- `kimi-k2.5:cloud` — Multimodal reasoning with subagents
- `qwen3.5:cloud` — Reasoning, coding, and agentic tool use with vision
- `glm-5.1:cloud` — Reasoning and code generation
- `minimax-m2.7:cloud` — Fast, efficient coding and real-world productivity

Local models:

- `gemma4` — Reasoning and code generation locally (~16 GB VRAM)
- `qwen3.5` — Reasoning, coding, and visual understanding locally (~11 GB VRAM)

More models at [ollama.com/search](https://ollama.com/search).
Run OpenClaw without interaction for use in Docker, CI/CD, or scripts:
```shell
ollama launch openclaw --model kimi-k2.5:cloud --yes
```

The `--yes` flag auto-pulls the model, skips interactive selectors, and requires `--model` to be specified.
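For example, a CI step might pin the model and run fully non-interactively. This is a sketch: `MODEL` is a hypothetical variable name, and the final command is echoed rather than executed so the step stays inert outside a real pipeline.

```shell
#!/bin/sh
set -eu

# MODEL is a hypothetical variable for this sketch; override it in your pipeline.
MODEL="${MODEL:-kimi-k2.5:cloud}"

# --yes auto-pulls the model and skips selectors; it requires --model.
CMD="ollama launch openclaw --model $MODEL --yes"
echo "would run: $CMD"
# In a real pipeline, execute it instead:
# $CMD
```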
```shell
openclaw configure --section channels
```
Link WhatsApp, Telegram, Slack, Discord, or iMessage to chat with your local models from anywhere.
To stop the gateway:

```shell
openclaw gateway stop
```