docs/integrations/openclaw.mdx
Add long-term memory to OpenClaw agents with the `@mem0/openclaw-mem0` plugin. Your agent forgets everything between sessions; this plugin fixes that by automatically watching conversations, extracting what matters, and bringing it back when relevant.
The plugin provides:

- **Auto-recall**: relevant memories are retrieved and injected into context before each turn
- **Auto-capture**: facts are extracted and stored after each turn
- **Memory tools** the agent can call explicitly (search, add, update, delete, and more)
- **A CLI** (`openclaw mem0 ...`) for searching, listing, and managing memories

Both auto-recall and auto-capture are opt-in (`autoRecall: true`, `autoCapture: true` in config). Once enabled, they run silently with no manual intervention required.
Check your OpenClaw version:
```shell
openclaw --version
# OpenClaw 2026.4.15 (041266a)
```
| OpenClaw Version | Plugin Support |
|---|---|
| >= 2026.4.15 | Fully supported |
The fastest way is to install directly from your OpenClaw chat, no CLI or config editing needed.
Copy and paste this into your OpenClaw chat (Telegram, WhatsApp, the default chat, or any channel where your agent lives):

```
Setup Mem0 from mem0.ai/claw-setup
```
OpenClaw installs the plugin, prompts you for your email, and connects your Mem0 account with OTP verification. See Chat Setup below for the full walkthrough.
If you prefer the OpenClaw CLI, or are setting up self-hosted / open-source mode, see Manual Config and Open-Source Mode below.
The `userId` field is a string you choose to uniquely identify the user whose memories are being stored. It is not something you look up in the Mem0 dashboard; you define it yourself.
Pick any stable, unique identifier for the user. Common choices:

- An internal ID or account handle (e.g. `"user_123"`, `"[email protected]"`)
- A UUID (e.g. `"550e8400-e29b-41d4-a716-446655440000"`)
- A plain name (e.g. `"alice"`)

All memories are scoped to this `userId`: different values create separate memory namespaces. If you don't set it, it defaults to your OS username.
<Tip>In a multi-user application, set userId dynamically per user (e.g. from your auth system) rather than hardcoding a single value.</Tip>
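In practice, that might look like the following sketch: a per-user config where `userId` carries a stable identifier from your auth system (the email shown is illustrative, not a real account):

```json5
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "config": {
          // Illustrative: derive this value per user from your auth system,
          // e.g. the authenticated user's email or internal account ID.
          "userId": "[email protected]"
        }
      }
    }
  }
}
```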
There are two ways to set up `@mem0/openclaw-mem0` on the Mem0 platform:

1. **Chat setup** (recommended): send a single command in any OpenClaw chat and follow the prompts.
2. **Manual config**: edit `openclaw.json` directly.

You no longer need manual config editing to get started. Everything happens inside the OpenClaw chat itself.
<Steps>
<Step title="Send the setup command to your OpenClaw agent">
Open any OpenClaw channel (Telegram, WhatsApp, your default chat, wherever your agent lives) and paste this command:

```
Setup Mem0 from mem0.ai/claw-setup
```

OpenClaw responds with a Mem0 setup card and immediately asks:

> "What's your email address? I'll send you a verification code to connect your Mem0 account."
</Step>
<Step title="Verify with the emailed code">
After you reply with your email, OpenClaw sends a one-time code and asks:

> "Check your email for a 6-digit code and paste it here."

Paste the code and you'll see the confirmation:

> "Connected to Mem0."
</Step>
</Steps>
That's it. No API key, no config file editing, no environment variables. The plugin is now active and auto-capture and auto-recall are running on every turn.
<Note>The chat flow uses the same underlying config as manual setup — it writes apiKey and userId into openclaw.json for you. You can still open the file to inspect or override values afterward.</Note>
```json5
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice" // any unique identifier you choose for this user
        }
      }
    }
  }
}
```
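Since `apiKey` supports the `${MEM0_API_KEY}` reference, you can keep the key out of the config file entirely. A minimal sketch (the key value is a placeholder):

```shell
# Placeholder key: substitute your real Mem0 API key.
# Add this to your shell profile so ${MEM0_API_KEY} resolves
# whenever OpenClaw starts.
export MEM0_API_KEY="m0-xxxxxxxxxxxxxxxx"
```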
No Mem0 key needed. Defaults use OpenAI (`gpt-5-mini` for the LLM, `text-embedding-3-small` for embeddings) and require `OPENAI_API_KEY`. For a fully local setup, use Ollama for both.
Run the guided 4-step wizard:
```shell
openclaw mem0 init --mode open-source
```
The wizard walks you through:
<Steps>
<Step title="LLM provider">
Choose OpenAI (`gpt-5-mini`), Ollama (`llama3.1:8b`, fully local), or Anthropic (`claude-sonnet-4-5-20250514`). Provide an API key or base URL as needed.
</Step>
<Step title="Embedding provider">
Choose OpenAI (`text-embedding-3-small`) or Ollama (`nomic-embed-text`, local). If the same provider was chosen for the LLM, the API key and URL are reused automatically.
</Step>
<Step title="Vector store">
Choose Qdrant (`http://localhost:6333`) or PGVector (PostgreSQL). Connectivity is verified before proceeding.
</Step>
<Step title="User ID">
Set your memory namespace identifier.
</Step>
</Steps>

For CI/CD, scripts, or agent-driven setup, pass all options as flags:
```shell
# Fully local with Ollama + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm ollama --oss-embedder ollama --oss-vector qdrant

# OpenAI + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm openai --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector qdrant

# Anthropic LLM + OpenAI embeddings + PGVector
openclaw mem0 init --mode open-source \
  --oss-llm anthropic --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector pgvector --oss-vector-user postgres --oss-vector-password secret
```
Add `--json` for machine-readable output (useful when an LLM agent is driving the setup).
Minimal config — uses OpenAI defaults:
```json5
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice" // any unique identifier you choose for this user
        }
      }
    }
  }
}
```
To customize providers:
```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "your-user-id",
          "oss": {
            "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
            "vectorStore": { "provider": "qdrant", "config": { "url": "http://localhost:6333" } },
            "llm": { "provider": "openai", "config": { "model": "gpt-5-mini" } }
          }
        }
      }
    }
  }
}
```
All `oss` fields are optional. See the Mem0 OSS docs for available providers.
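For example, a fully local setup pairing Ollama with Qdrant might look like the sketch below; the model names and Qdrant URL mirror the wizard defaults described above:

```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice",
          "oss": {
            "llm": { "provider": "ollama", "config": { "model": "llama3.1:8b" } },
            "embedder": { "provider": "ollama", "config": { "model": "nomic-embed-text" } },
            "vectorStore": { "provider": "qdrant", "config": { "url": "http://localhost:6333" } }
          }
        }
      }
    }
  }
}
```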
Memories are organized into two scopes:
- **Session (short-term)**: auto-capture stores memories scoped to the current session via Mem0's `run_id` / `runId` parameter. These are contextual to the ongoing conversation.
- **User (long-term)**: the agent can explicitly store long-term memories using the `memory_add` tool (with `longTerm: true`, the default). These persist across all sessions for the user.
During auto-recall, the plugin searches both scopes and presents them separately — long-term memories first, then session memories — so the agent has full context.
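As an illustration, the arguments for an explicit long-term `memory_add` call might look like the sketch below. The field names follow the options described in this doc; the exact wire shape is defined by the plugin, and the values are hypothetical:

```json
{
  "text": "Prefers TypeScript over JavaScript for new projects",
  "longTerm": true,
  "category": "preferences",
  "metadata": { "source": "onboarding-session" }
}
```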
The agent gets eight tools it can call during conversations:
| Tool | Description |
|---|---|
| `memory_search` | Search memories by natural language query. Supports scope, categories, filters. |
| `memory_add` | Store facts. Accepts text or facts array, category, importance, metadata. |
| `memory_get` | Retrieve a single memory by ID. |
| `memory_list` | List all memories. Filter by userId, agentId, scope. |
| `memory_update` | Update a memory's text in place. Preserves history. |
| `memory_delete` | Delete by memoryId, query (search-and-delete), or `all: true`. |
| `memory_event_list` | List recent background processing events (platform mode only). |
| `memory_event_status` | Get status of a specific event by ID (platform mode only). |

The `memory_search` and `memory_list` tools accept a `scope` parameter (`"session"`, `"long-term"`, or `"all"`) to control which memories are queried.
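For instance, a hypothetical `memory_search` call restricted to long-term memories might pass arguments like this sketch (values illustrative, exact shape defined by the plugin):

```json
{
  "query": "preferred programming languages",
  "scope": "long-term",
  "categories": ["preferences"]
}
```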
All commands support `--json` for machine-readable output, useful when an LLM agent drives the CLI programmatically. Run `openclaw mem0 help --json` to discover every command and flag.
```shell
# Search all memories (long-term + session)
openclaw mem0 search "what languages does the user know"

# Search only long-term memories
openclaw mem0 search "what languages does the user know" --scope long-term

# Search only session/short-term memories
openclaw mem0 search "what languages does the user know" --scope session

# List all memories
openclaw mem0 list
openclaw mem0 list --user-id alice --top-k 20

# JSON output (any command)
openclaw mem0 search "preferences" --json
openclaw mem0 status --json
```
| Key | Type | Default | Description |
|---|---|---|---|
| `mode` | `"platform"` \| `"open-source"` | `"platform"` | Which backend to use |
| `userId` | string | OS username | Scope memories per user |
| `autoRecall` | boolean | `false` | Inject memories before each turn (opt-in) |
| `autoCapture` | boolean | `false` | Store facts after each turn (opt-in) |
| `topK` | number | `5` | Max memories per recall |
| `searchThreshold` | number | `0.3` | Min similarity (0–1) |
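As a tuning sketch (values illustrative), you might enable recall but make it stricter by raising `searchThreshold` and lowering `topK`:

```json5
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "config": {
          "autoRecall": true,
          "topK": 3,              // inject at most 3 memories per turn
          "searchThreshold": 0.6  // only inject close matches
        }
      }
    }
  }
}
```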
| Key | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | — | **Required.** Mem0 API key (supports `${MEM0_API_KEY}`) |
| `customInstructions` | string | (built-in) | Extraction rules: what to store, how to format |
| `customCategories` | object | (12 defaults) | Category name → description map for tagging |
| Key | Type | Default | Description |
|---|---|---|---|
| `customInstructions` | string | (built-in) | Extraction prompt for memory processing |
| `oss.embedder.provider` | string | `"openai"` | Embedding provider (`"openai"`, `"ollama"`, etc.) |
| `oss.embedder.config` | object | — | Provider config: `apiKey`, `model`, `baseURL` |
| `oss.vectorStore.provider` | string | `"memory"` | Vector store (`"memory"`, `"qdrant"`, `"chroma"`, etc.) |
| `oss.vectorStore.config` | object | — | Provider config: `host`, `port`, `collectionName`, `dimension` |
| `oss.llm.provider` | string | `"openai"` | LLM provider (`"openai"`, `"anthropic"`, `"ollama"`, etc.) |
| `oss.llm.config` | object | — | Provider config: `apiKey`, `model`, `baseURL`, `temperature` |
| `oss.historyDbPath` | string | — | SQLite path for memory edit history |
| `oss.disableHistory` | boolean | `false` | Disable memory edit history tracking |
Everything inside `oss` is optional; defaults use OpenAI embeddings (`text-embedding-3-small`), an in-memory vector store, and the OpenAI LLM (`gpt-5-mini`).
```shell
# Update the plugin
openclaw plugins update openclaw-mem0

# List installed plugins and inspect this one
openclaw plugins list
openclaw plugins inspect openclaw-mem0
```
If you see an error like:
```text
[openclaw] Failed to start CLI: Error: The `openclaw mem0` command is unavailable
because `plugins.allow` excludes "mem0". Add "mem0" to `plugins.allow` if you want
that bundled plugin CLI surface.
```
Add `"mem0"` to your `plugins.allow` list in `openclaw.json`:
```json
{
  "plugins": {
    "allow": ["mem0"],
    "slots": {
      "memory": "openclaw-mem0"
    }
  }
}
```
If the plugin installs but doesn't work:

1. Check that `plugins.slots.memory` is set to `"openclaw-mem0"` (not the npm package name).
2. Run `openclaw plugins list --enabled` to confirm the plugin is loaded.
3. Run `openclaw mem0 status` to verify configuration.

If `openclaw plugins update` fails:

1. Try updating the plugin directly: `openclaw plugins update openclaw-mem0`
2. Try updating all plugins: `openclaw plugins update --all`
3. As a last resort, reinstall:

```shell
openclaw plugins uninstall openclaw-mem0
openclaw plugins install @mem0/openclaw-mem0
```
| Mode | Where data goes | Storage |
|---|---|---|
| Platform | Conversations sent to api.mem0.ai for extraction and storage | Mem0 cloud |
| Open-source | Embeddings generated via configured provider (default: OpenAI API). Vectors stored locally. | ~/.mem0/vector_store.db (SQLite) |
Auto-capture and auto-recall are disabled by default (opt-in). To enable either or both:
```json5
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "config": {
          "autoCapture": true, // send conversations to Mem0 for extraction
          "autoRecall": true   // inject relevant memories into context
        }
      }
    }
  }
}
```
Without these enabled, the agent can still use memory tools (`memory_add`, `memory_search`, etc.) explicitly; only the automatic background behavior is off.
The plugin never stores API keys, tokens, or secrets as memories. Five independent layers enforce this, including:

- Pattern-based filtering of common credential formats (`sk-`, `m0-`, `ghp_`, `AKIA`, `Bearer`, `password=`, `token=`, `secret=`)
- Additional patterns configurable via `skills.triage.credentialPatterns`
- `openclaw mem0 config show` redacts sensitive fields (`apiKey`, `oss.*.config.apiKey`)

Plugin config is stored in `~/.openclaw/openclaw.json` with file permissions `0o600` (owner-read-only). For production deployments, use environment variable references (`${MEM0_API_KEY}`) or SecretRef objects instead of plaintext keys.
Anonymous usage telemetry (PostHog) is enabled by default to help improve the plugin. No conversation content or memory values are included — only event counts (recall, capture, tool usage, CLI commands).
To opt out, set the environment variable:
```shell
export MEM0_TELEMETRY=false
```
The plugin injects memory-related instructions into the agent's system context via OpenClaw's prependSystemContext mechanism. This includes the memory triage protocol and recalled memories. This is the standard OpenClaw plugin SDK pattern for memory backends — no user-facing prompts are modified.