openclaw/README.md
Long-term memory for OpenClaw agents, powered by Mem0.
Your agent forgets everything between sessions. This plugin fixes that — it stores conversations, extracts what matters, and brings it back when relevant. Enable autoRecall and autoCapture in config to run this automatically, or use agent tools for explicit control.
Check your OpenClaw version:
```shell
openclaw --version
# OpenClaw 2026.4.15 (041266a)
```
| OpenClaw Version | Plugin Support |
|---|---|
| >= 2026.4.15 | Fully supported |
Install the plugin via the OpenClaw CLI:
```shell
openclaw plugins install @mem0/openclaw-mem0
```
Get your API key from app.mem0.ai.
Select the plugin as your memory backend in `openclaw.json`. Either initialize via the CLI:

```shell
openclaw mem0 init --api-key <your-key> --user-id <your-user-id>
```
Or add the full config to your openclaw.json:
```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice"
        }
      }
    }
  }
}
```
Note: OpenClaw memory plugins load through an exclusive slot, so installing alone does not activate the plugin. You must set `plugins.slots.memory` as shown above.
To update the plugin:

```shell
openclaw plugins update openclaw-mem0
```
No Mem0 key needed. Vectors are stored locally in SQLite at `~/.mem0/vector_store.db` — no external database required.
Defaults: `text-embedding-3-small` (OpenAI) for embeddings, `gpt-5-mini` (OpenAI) for fact extraction — both require `OPENAI_API_KEY`. For a fully local setup, use Ollama for both the LLM and embeddings.
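For example, a fully local `oss` block might look like the sketch below. It follows the config shape documented later in this README; the Ollama model names match the wizard defaults, and the exact values are illustrative:

```json
{
  "config": {
    "mode": "open-source",
    "userId": "alice",
    "oss": {
      "llm": { "provider": "ollama", "config": { "model": "llama3.1:8b" } },
      "embedder": { "provider": "ollama", "config": { "model": "nomic-embed-text" } }
    }
  }
}
```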
Run the guided 4-step wizard:
```shell
openclaw mem0 init --mode open-source
```
The wizard walks you through:
- **LLM provider**: OpenAI (`gpt-5-mini`), Ollama (`llama3.1:8b`, local), or Anthropic (`claude-sonnet-4-5-20250514`)
- **Embedder**: OpenAI (`text-embedding-3-small`) or Ollama (`nomic-embed-text`, local)
- **Vector store**: Qdrant (`http://localhost:6333`) or PGVector (PostgreSQL)

Each step tests connectivity (Ollama, Qdrant, PGVector) before proceeding.
For CI/CD, scripts, or agent-driven setup — pass all options as flags:
```shell
# Fully local with Ollama + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm ollama --oss-embedder ollama --oss-vector qdrant

# OpenAI + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm openai --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector qdrant

# Anthropic LLM + OpenAI embeddings + PGVector
openclaw mem0 init --mode open-source \
  --oss-llm anthropic --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector pgvector --oss-vector-user postgres --oss-vector-password secret

# JSON output (for LLM agents)
openclaw mem0 init --mode open-source --oss-llm ollama --oss-embedder ollama --oss-vector qdrant --json
```
| Flag | Description |
|---|---|
| `--oss-llm <provider>` | `openai`, `ollama`, or `anthropic` |
| `--oss-llm-key <key>` | API key for the LLM provider |
| `--oss-llm-model <model>` | Override the default LLM model |
| `--oss-llm-url <url>` | Base URL (Ollama only) |
| `--oss-embedder <provider>` | `openai` or `ollama` |
| `--oss-embedder-key <key>` | API key for the embedder |
| `--oss-embedder-model <model>` | Override the default embedder model |
| `--oss-embedder-url <url>` | Base URL (Ollama only) |
| `--oss-vector <provider>` | `qdrant` or `pgvector` |
| `--oss-vector-url <url>` | Qdrant server URL (default: `http://localhost:6333`) |
| `--oss-vector-host <host>` | PGVector host |
| `--oss-vector-port <port>` | PGVector port |
| `--oss-vector-user <user>` | PGVector user |
| `--oss-vector-password <pw>` | PGVector password |
| `--oss-vector-dbname <db>` | PGVector database name |
| `--oss-vector-dims <n>` | Override embedding dimensions |
Minimal config — uses OpenAI defaults:
```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice"
        }
      }
    }
  }
}
```
Customize the embedder, vector store, or LLM via the `oss` block:

```json
"config": {
  "mode": "open-source",
  "userId": "alice",
  "oss": {
    "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
    "vectorStore": { "provider": "qdrant", "config": { "url": "http://localhost:6333" } },
    "llm": { "provider": "openai", "config": { "model": "gpt-5-mini" } }
  }
}
```

All `oss` fields are optional. See the Mem0 OSS docs for supported providers.
**Auto-Recall** (`autoRecall: true`) — Before the agent responds, the plugin searches Mem0 for relevant memories and injects them into context.
**Auto-Capture** (`autoCapture: true`) — After the agent responds, the conversation is filtered through a noise-removal pipeline and sent to Mem0. New facts get stored, stale ones updated, duplicates merged.
Both are opt-in. Once enabled, they run silently — no prompting, no manual calls required. Without them, the agent can still use memory tools (`memory_add`, `memory_search`, etc.) explicitly.
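Enabling both is a one-line change each in the plugin's `config` block (the `apiKey`/`userId` values here are placeholders):

```json
"config": {
  "apiKey": "${MEM0_API_KEY}",
  "userId": "alice",
  "autoRecall": true,
  "autoCapture": true
}
```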
Session-scoped memories are tied to the current `run_id` and recalled alongside long-term memories; long-term memories are stored via `memory_add`. Each agent gets its own memory namespace automatically via session key routing (`agent:<name>:<uuid>` maps to `userId:agent:<name>`). Single-agent setups are unaffected.
Eight tools are registered for agent use:
| Tool | Description |
|---|---|
| `memory_search` | Search by natural-language query. Supports `scope` (`session`, `long-term`, `all`), `categories`, `filters`, and `agentId`. |
| `memory_add` | Store facts. Accepts `text` or a `facts` array, plus `category`, `importance`, `longTerm`, `metadata`. |
| `memory_get` | Retrieve a single memory by ID. |
| `memory_list` | List all memories. Filter by `userId`, `agentId`, `scope`. |
| `memory_update` | Update a memory's text in place. Preserves history. |
| `memory_delete` | Delete by `memoryId`, `query` (search-and-delete), or `all: true` (requires `confirm: true`). |
| `memory_event_list` | List recent background processing events. Platform mode only. |
| `memory_event_status` | Get the status of a specific event by ID. Platform mode only. |
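The exact tool-call envelope depends on your agent runtime; as an illustration, the arguments for a scoped `memory_search` call might look like this (the `agentId` value is a made-up example):

```json
{
  "query": "what languages does the user prefer",
  "scope": "long-term",
  "categories": ["preferences"],
  "agentId": "researcher"
}
```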
All commands take the form `openclaw mem0 <command>`. Every command supports `--json` for machine-readable output (useful for LLM agents).

```shell
# Memory operations
openclaw mem0 add "User prefers TypeScript over JavaScript"
openclaw mem0 search "what languages does the user know"
openclaw mem0 search "preferences" --scope long-term
openclaw mem0 get <memory_id>
openclaw mem0 list --user-id alice --top-k 20
openclaw mem0 update <memory_id> "Updated preference text"
openclaw mem0 delete <memory_id>
openclaw mem0 delete --all --user-id alice --confirm
openclaw mem0 import memories.json

# Management
openclaw mem0 init                                       # interactive setup
openclaw mem0 init --mode open-source --oss-llm ollama   # non-interactive OSS
openclaw mem0 init --api-key <key> --user-id alice       # non-interactive platform
openclaw mem0 status
openclaw mem0 config show
openclaw mem0 config get api_key
openclaw mem0 config set user_id alice

# Events (platform only)
openclaw mem0 event list
openclaw mem0 event status <event_id>

# Memory consolidation
openclaw mem0 dream
openclaw mem0 dream --dry-run

# JSON output (any command)
openclaw mem0 search "preferences" --json
openclaw mem0 list --json
openclaw mem0 status --json
openclaw mem0 help --json   # discover all commands + flags
```
| Key | Type | Default | Description |
|---|---|---|---|
| `mode` | `"platform"` \| `"open-source"` | `"platform"` | Backend mode |
| `userId` | string | OS username | User identifier. All memories are scoped to this value. |
| `autoRecall` | boolean | `false` | Inject relevant memories before each turn |
| `autoCapture` | boolean | `false` | Extract and store facts after each turn |
| `topK` | number | `5` | Max memories returned per recall |
| `searchThreshold` | number | `0.3` | Minimum similarity score (0-1) |
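For instance, a config that recalls more memories but only high-confidence matches might combine these keys as follows (the numeric values are illustrative, not recommendations):

```json
"config": {
  "mode": "platform",
  "apiKey": "${MEM0_API_KEY}",
  "userId": "alice",
  "autoRecall": true,
  "topK": 10,
  "searchThreshold": 0.5
}
```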
Platform-mode settings:

| Key | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | — | Required. Mem0 API key (supports `${MEM0_API_KEY}`) |
| `customInstructions` | string | (built-in) | Custom extraction rules |
| `customCategories` | object | (12 defaults) | Category name to description map |
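`customCategories` is a plain name-to-description map; a sketch with made-up category names and instructions:

```json
"config": {
  "apiKey": "${MEM0_API_KEY}",
  "customInstructions": "Only extract durable preferences and project facts; ignore small talk.",
  "customCategories": {
    "preferences": "Tools, languages, and workflow preferences",
    "projects": "Facts about the user's active projects"
  }
}
```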
Open-source-mode settings. All fields are optional; defaults are `text-embedding-3-small` embeddings, a local SQLite vector store (`~/.mem0/vector_store.db`), and the `gpt-5-mini` LLM.
| Key | Type | Default | Description |
|---|---|---|---|
| `customPrompt` | string | (built-in) | Extraction prompt |
| `oss.embedder.provider` | string | `"openai"` | Embedding provider |
| `oss.embedder.config` | object | — | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.vectorStore.provider` | string | `"memory"` | Vector store provider (see list above) |
| `oss.vectorStore.config` | object | — | Provider config (`host`, `port`, `collectionName`, `dbPath`) |
| `oss.llm.provider` | string | `"openai"` | LLM provider |
| `oss.llm.config` | object | — | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.historyDbPath` | string | — | SQLite path for edit history |
| Mode | Where data goes | Credentials needed |
|---|---|---|
| Platform | Conversations sent to api.mem0.ai for memory extraction and retrieval | MEM0_API_KEY |
| Open-Source (OpenAI) | LLM/embedding calls to OpenAI API; vectors stored locally at ~/.mem0/vector_store.db | OPENAI_API_KEY |
| Open-Source (Ollama) | Fully local — LLM, embeddings, and vectors all on your machine | None |
The plugin stores configuration in `~/.openclaw/openclaw.json`. If you use the chat setup flow or `openclaw mem0 init`, your API key and user ID are written to this file.
To avoid plaintext credentials:
- Reference an environment variable: `"apiKey": "${MEM0_API_KEY}"`
- Or use a credential source object: `"apiKey": {"source": "env", "provider": "default", "id": "MEM0_API_KEY"}`

Auto-recall and auto-capture are both disabled by default (`false`). When enabled:

- `autoCapture` sends conversation content to your configured backend (cloud or local) after each agent turn
- `autoRecall` queries your memory store before each agent turn and injects results into agent context

Do not enable `autoCapture` in platform mode if your conversations contain sensitive data you do not want stored on Mem0 cloud.
| File | Purpose |
|---|---|
| `~/.openclaw/openclaw.json` | Plugin configuration (API keys, user ID, settings) |
| `~/.mem0/vector_store.db` | Local vector store (open-source mode only) |
| `~/.mem0/history.db` | Memory edit history (open-source mode only) |
| `<pluginStateDir>/dream-state.json` | Memory consolidation state |