@mem0/openclaw-mem0

Long-term memory for OpenClaw agents, powered by Mem0.

Your agent forgets everything between sessions. This plugin fixes that — it stores conversations, extracts what matters, and brings it back when relevant. Enable autoRecall and autoCapture in config to run this automatically, or use agent tools for explicit control.

Requirements

Check your OpenClaw version:

```bash
openclaw --version
# OpenClaw 2026.4.15 (041266a)
```

| OpenClaw Version | Plugin Support |
| --- | --- |
| >= 2026.4.15 | Fully supported |

Quick Start

Platform (Mem0 Cloud)

Install and Configure

  1. Install the plugin via the OpenClaw CLI:

    ```bash
    openclaw plugins install @mem0/openclaw-mem0
    ```
  2. Get your API key from app.mem0.ai.

  3. Select the plugin as your memory backend in openclaw.json. Either initialize via the CLI:

    ```bash
    openclaw mem0 init --api-key <your-key> --user-id <your-user-id>
    ```

    Or add the full config to your openclaw.json:

    ```json5
    {
      "plugins": {
        "slots": {
          "memory": "openclaw-mem0"
        },
        "entries": {
          "openclaw-mem0": {
            "enabled": true,
            "config": {
              "apiKey": "${MEM0_API_KEY}",
              "userId": "alice"
            }
          }
        }
      }
    }
    ```

Note: OpenClaw memory plugins load through an exclusive slot, so install alone does not activate the plugin. You must set plugins.slots.memory as shown above.

To update the plugin and pick up the latest features and fixes:

```bash
openclaw plugins update openclaw-mem0
```

Open-Source (Self-hosted)

No Mem0 key needed. Vectors are stored locally in SQLite at ~/.mem0/vector_store.db — no external database required.

Defaults: text-embedding-3-small (OpenAI) for embeddings, gpt-5-mini (OpenAI) for fact extraction — requires OPENAI_API_KEY. For a fully local setup, use Ollama for both LLM and embeddings.
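The same fully local setup can also be written directly into openclaw.json. Below is a sketch using the wizard's Ollama defaults (llama3.1:8b for the LLM, nomic-embed-text for embeddings); the baseURL value is an assumption based on Ollama's standard local port, so adjust it for your install:

```json5
"config": {
  "mode": "open-source",
  "userId": "alice",
  "oss": {
    // both providers point at a local Ollama instance (assumed URL)
    "llm": { "provider": "ollama", "config": { "model": "llama3.1:8b", "baseURL": "http://localhost:11434" } },
    "embedder": { "provider": "ollama", "config": { "model": "nomic-embed-text", "baseURL": "http://localhost:11434" } }
  }
}
```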

Interactive Setup (Recommended)

Run the guided 4-step wizard:

```bash
openclaw mem0 init --mode open-source
```

The wizard walks you through:

  1. LLM provider — OpenAI (gpt-5-mini), Ollama (llama3.1:8b, local), or Anthropic (claude-sonnet-4-5-20250514)
  2. Embedding provider — OpenAI (text-embedding-3-small) or Ollama (nomic-embed-text, local)
  3. Vector store — Qdrant (http://localhost:6333) or PGVector (PostgreSQL)
  4. User ID — your memory namespace identifier

Each step tests connectivity (Ollama, Qdrant, PGVector) before proceeding.

Non-Interactive Setup

For CI/CD, scripts, or agent-driven setup — pass all options as flags:

```bash
# Fully local with Ollama + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm ollama --oss-embedder ollama --oss-vector qdrant

# OpenAI + Qdrant
openclaw mem0 init --mode open-source \
  --oss-llm openai --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector qdrant

# Anthropic LLM + OpenAI embeddings + PGVector
openclaw mem0 init --mode open-source \
  --oss-llm anthropic --oss-llm-key <key> \
  --oss-embedder openai --oss-embedder-key <key> \
  --oss-vector pgvector --oss-vector-user postgres --oss-vector-password secret

# JSON output (for LLM agents)
openclaw mem0 init --mode open-source --oss-llm ollama --oss-embedder ollama --oss-vector qdrant --json
```
<details> <summary>All <code>--oss-*</code> flags</summary>

| Flag | Description |
| --- | --- |
| `--oss-llm <provider>` | `openai`, `ollama`, or `anthropic` |
| `--oss-llm-key <key>` | API key for LLM provider |
| `--oss-llm-model <model>` | Override default LLM model |
| `--oss-llm-url <url>` | Base URL (Ollama only) |
| `--oss-embedder <provider>` | `openai` or `ollama` |
| `--oss-embedder-key <key>` | API key for embedder |
| `--oss-embedder-model <model>` | Override default embedder model |
| `--oss-embedder-url <url>` | Base URL (Ollama only) |
| `--oss-vector <provider>` | `qdrant` or `pgvector` |
| `--oss-vector-url <url>` | Qdrant server URL (default: http://localhost:6333) |
| `--oss-vector-host <host>` | PGVector host |
| `--oss-vector-port <port>` | PGVector port |
| `--oss-vector-user <user>` | PGVector user |
| `--oss-vector-password <pw>` | PGVector password |
| `--oss-vector-dbname <db>` | PGVector database name |
| `--oss-vector-dims <n>` | Override embedding dimensions |
</details>

Manual Config

Minimal config — uses OpenAI defaults:

```json5
{
  "plugins": {
    "slots": {
      "memory": "openclaw-mem0"
    },
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice"
        }
      }
    }
  }
}
```

Customize the embedder, vector store, or LLM via the oss block:

```json5
"config": {
  "mode": "open-source",
  "userId": "alice",
  "oss": {
    "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
    "vectorStore": { "provider": "qdrant", "config": { "url": "http://localhost:6333" } },
    "llm": { "provider": "openai", "config": { "model": "gpt-5-mini" } }
  }
}
```

All oss fields are optional. See the Mem0 OSS docs for supported providers.

How It Works


Auto-Recall (autoRecall: true) — Before the agent responds, the plugin searches Mem0 for relevant memories and injects them into context.

Auto-Capture (autoCapture: true) — After the agent responds, the conversation is filtered through a noise-removal pipeline and sent to Mem0. New facts get stored, stale ones updated, duplicates merged.

Both are opt-in. Once enabled, they run silently — no prompting, no manual calls required. Without them, the agent can still use memory tools (memory_add, memory_search, etc.) explicitly.
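Putting the two switches together, here is a minimal sketch of the plugin's config block with both behaviors enabled (topK and searchThreshold are shown at their documented defaults):

```json5
"config": {
  "autoRecall": true,     // search Mem0 and inject relevant memories before each turn
  "autoCapture": true,    // extract and store facts after each turn
  "topK": 5,              // max memories injected per recall
  "searchThreshold": 0.3  // minimum similarity score (0-1)
}
```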

Memory Scopes

  • Session (short-term) — Scoped to the current conversation via run_id. Recalled alongside long-term memories.
  • User (long-term) — Persistent across all sessions. Default for memory_add.

Multi-Agent Isolation

Each agent gets its own memory namespace automatically via session key routing (agent:<name>:<uuid> maps to userId:agent:<name>). Single-agent setups are unaffected.

Agent Tools

Eight tools are registered for agent use:

| Tool | Description |
| --- | --- |
| `memory_search` | Search by natural language query. Supports `scope` (`session`, `long-term`, `all`), `categories`, `filters`, and `agentId`. |
| `memory_add` | Store facts. Accepts `text` or `facts` array, `category`, `importance`, `longTerm`, `metadata`. |
| `memory_get` | Retrieve a single memory by ID. |
| `memory_list` | List all memories. Filter by `userId`, `agentId`, `scope`. |
| `memory_update` | Update a memory's text in place. Preserves history. |
| `memory_delete` | Delete by `memoryId`, `query` (search-and-delete), or `all: true` (requires `confirm: true`). |
| `memory_event_list` | List recent background processing events. Platform mode only. |
| `memory_event_status` | Get status of a specific event by ID. Platform mode only. |

CLI

All commands: openclaw mem0 <command>. All commands support --json for machine-readable output (for LLM agents).

```bash
# Memory operations
openclaw mem0 add "User prefers TypeScript over JavaScript"
openclaw mem0 search "what languages does the user know"
openclaw mem0 search "preferences" --scope long-term
openclaw mem0 get <memory_id>
openclaw mem0 list --user-id alice --top-k 20
openclaw mem0 update <memory_id> "Updated preference text"
openclaw mem0 delete <memory_id>
openclaw mem0 delete --all --user-id alice --confirm
openclaw mem0 import memories.json

# Management
openclaw mem0 init                                          # interactive setup
openclaw mem0 init --mode open-source --oss-llm ollama      # non-interactive OSS
openclaw mem0 init --api-key <key> --user-id alice          # non-interactive platform
openclaw mem0 status
openclaw mem0 config show
openclaw mem0 config get api_key
openclaw mem0 config set user_id alice

# Events (platform only)
openclaw mem0 event list
openclaw mem0 event status <event_id>

# Memory consolidation
openclaw mem0 dream
openclaw mem0 dream --dry-run

# JSON output (any command)
openclaw mem0 search "preferences" --json
openclaw mem0 list --json
openclaw mem0 status --json
openclaw mem0 help --json                                   # discover all commands + flags
```

Configuration Reference

General

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `mode` | `"platform"` \| `"open-source"` | `"platform"` | Backend mode |
| `userId` | string | OS username | User identifier. All memories are scoped to this value. |
| `autoRecall` | boolean | `false` | Inject relevant memories before each turn |
| `autoCapture` | boolean | `false` | Extract and store facts after each turn |
| `topK` | number | `5` | Max memories returned per recall |
| `searchThreshold` | number | `0.3` | Minimum similarity score (0-1) |

Platform Mode

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | string | | Required. Mem0 API key (supports `${MEM0_API_KEY}`) |
| `customInstructions` | string | (built-in) | Custom extraction rules |
| `customCategories` | object | (12 defaults) | Category name to description map |
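
A sketch of what a `customCategories` override might look like; the category names and descriptions here are hypothetical examples, and only the shape (a name-to-description map) comes from the table above:

```json5
"config": {
  "apiKey": "${MEM0_API_KEY}",
  "customCategories": {
    // hypothetical categories -- replace with your own
    "preferences": "User likes, dislikes, and tool settings",
    "projects": "Facts about the user's active projects"
  }
}
```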

Open-Source Mode

All fields optional. Defaults: text-embedding-3-small embeddings, local SQLite vector store (~/.mem0/vector_store.db), gpt-5-mini LLM.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `customPrompt` | string | (built-in) | Extraction prompt |
| `oss.embedder.provider` | string | `"openai"` | Embedding provider |
| `oss.embedder.config` | object | | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.vectorStore.provider` | string | `"memory"` | Vector store provider (see list above) |
| `oss.vectorStore.config` | object | | Provider config (`host`, `port`, `collectionName`, `dbPath`) |
| `oss.llm.provider` | string | `"openai"` | LLM provider |
| `oss.llm.config` | object | | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.historyDbPath` | string | | SQLite path for edit history |

Privacy & Security

Data Flow

| Mode | Where data goes | Credentials needed |
| --- | --- | --- |
| Platform | Conversations sent to api.mem0.ai for memory extraction and retrieval | `MEM0_API_KEY` |
| Open-Source (OpenAI) | LLM/embedding calls to the OpenAI API; vectors stored locally at `~/.mem0/vector_store.db` | `OPENAI_API_KEY` |
| Open-Source (Ollama) | Fully local: LLM, embeddings, and vectors all on your machine | None |

Credential Storage

The plugin stores configuration in ~/.openclaw/openclaw.json. If you use the chat setup flow or openclaw mem0 init, your API key and user ID are written to this file.

To avoid plaintext credentials:

  • Use env var references: "apiKey": "${MEM0_API_KEY}"
  • Use SecretRef: "apiKey": {"source": "env", "provider": "default", "id": "MEM0_API_KEY"}
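
In context, the SecretRef form drops into the same plugin entry as any other config value; a sketch, reusing the entry shape from the Quick Start config:

```json5
"entries": {
  "openclaw-mem0": {
    "enabled": true,
    "config": {
      // resolved from the MEM0_API_KEY environment variable, never written to disk
      "apiKey": { "source": "env", "provider": "default", "id": "MEM0_API_KEY" },
      "userId": "alice"
    }
  }
}
```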

Auto-Capture & Auto-Recall

Both are disabled by default (false). When enabled:

  • autoCapture: sends conversation content to your configured backend (cloud or local) after each agent turn
  • autoRecall: queries your memory store before each agent turn and injects results into agent context

Do not enable autoCapture in platform mode if your conversations contain sensitive data you do not want stored on Mem0 cloud.

Persistence Locations

| File | Purpose |
| --- | --- |
| `~/.openclaw/openclaw.json` | Plugin configuration (API keys, user ID, settings) |
| `~/.mem0/vector_store.db` | Local vector store (open-source mode only) |
| `~/.mem0/history.db` | Memory edit history (open-source mode only) |
| `<pluginStateDir>/dream-state.json` | Memory consolidation state |

License

Apache 2.0