The builtin engine is the default memory backend. It stores your memory index in a per-agent SQLite database and needs no extra dependencies to get started.
If you have an API key for OpenAI, Gemini, Voyage, Mistral, or DeepInfra, the builtin engine auto-detects it and enables vector search. No config needed.
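As an illustration, exporting a provider's conventional key variable is typically all it takes. The variable name below (`OPENAI_API_KEY`) is the provider's usual convention and an assumption here, not something this page specifies:

```shell
# Assumption: OpenClaw resolves the provider's conventional env var.
export OPENAI_API_KEY="sk-..."
# Then verify detection with: openclaw memory status
```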
To set a provider explicitly:
```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
      },
    },
  },
}
```
Without an embedding provider, only keyword search is available.
To force the built-in local embedding provider, install the optional `node-llama-cpp` runtime package next to OpenClaw, then point `local.modelPath` at a GGUF file:
```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "local",
        fallback: "none",
        local: {
          modelPath: "~/.node-llama-cpp/models/embeddinggemma-300m-qat-Q8_0.gguf",
        },
      },
    },
  },
}
```
| Provider | ID | Auto-detected | Notes |
|---|---|---|---|
| OpenAI | `openai` | Yes | Default: `text-embedding-3-small` |
| Gemini | `gemini` | Yes | Supports multimodal (image + audio) |
| Voyage | `voyage` | Yes | |
| Mistral | `mistral` | Yes | |
| DeepInfra | `deepinfra` | Yes | Default: `BAAI/bge-m3` |
| Ollama | `ollama` | No | Local, set explicitly |
| Local | `local` | Yes (first) | Optional `node-llama-cpp` runtime |
Auto-detection picks the first provider whose API key can be resolved, in the order shown. Set `memorySearch.provider` to override.
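A minimal sketch of that resolution order, assuming conventional environment-variable names (OpenClaw's actual resolution may also consult config files and other key sources):

```python
import os

# Sketch of builtin auto-detection order; the env var names are
# assumptions based on each provider's usual convention.
PROVIDER_ENV = [
    ("openai", "OPENAI_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("voyage", "VOYAGE_API_KEY"),
    ("mistral", "MISTRAL_API_KEY"),
    ("deepinfra", "DEEPINFRA_API_KEY"),
]

def detect_provider(env, local_model_path=None):
    # "local" is considered first, but only when the model file exists.
    if local_model_path and os.path.exists(os.path.expanduser(local_model_path)):
        return "local"
    for provider, var in PROVIDER_ENV:
        if env.get(var):
            return provider
    return None  # no provider resolved: keyword search only
```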
OpenClaw indexes `MEMORY.md` and `memory/*.md` into chunks (~400 tokens with 80-token overlap) and stores them in a per-agent SQLite database.
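The chunking scheme can be sketched as a sliding window over the token stream. This is a simplified model; OpenClaw's real chunker may align boundaries to markdown structure:

```python
def chunk_tokens(tokens, size=400, overlap=80):
    """Split a token sequence into overlapping chunks (sliding window)."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # 320 tokens of fresh content per chunk
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window reached the end of the document
    return chunks

# A 1000-token document yields chunks starting at 0, 320, and 640;
# consecutive chunks share 80 tokens.
chunks = chunk_tokens(list(range(1000)))
```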
The index lives at `~/.openclaw/memory/<agentId>.sqlite`; rebuild it at any time with `openclaw memory index --force`.

The builtin engine is the right choice for most users.
Consider switching to QMD if you need reranking, query expansion, or want to index directories outside the workspace.
Consider Honcho if you want cross-session memory with automatic user modeling.
Memory search disabled? Check `openclaw memory status`. If no provider is detected, set one explicitly or add an API key.
Local provider not detected? Confirm the local model path exists and run:

```shell
openclaw memory status --deep --agent main
openclaw memory index --force --agent main
```
Both standalone CLI commands and the Gateway use the same `local` provider id. If the provider is set to `auto`, local embeddings are considered first only when `memorySearch.local.modelPath` points to an existing local file.
Stale results? Run `openclaw memory index --force` to rebuild. The watcher may miss changes in rare edge cases.
sqlite-vec not loading? OpenClaw falls back to in-process cosine similarity automatically. `openclaw memory status --deep` reports the local vector store separately from the embedding provider: `Vector store: unavailable` points at sqlite-vec loading, while `Embeddings: unavailable` points at provider/auth or model readiness. Check logs for the specific load error.
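The fallback computes plain cosine similarity in process. Conceptually (a minimal sketch, not OpenClaw's actual implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors; 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # zero vectors carry no direction
    return dot / (norm_a * norm_b)
```

This is slower than sqlite-vec's indexed search but produces equivalent rankings for small indexes.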
For embedding provider setup, hybrid search tuning (weights, MMR, temporal decay), batch indexing, multimodal memory, sqlite-vec, extra paths, and all other config knobs, see the Memory configuration reference.