docs/changelog/sdk.mdx
Bug Fixes:
- `user_id`, `agent_id`, `run_id` entity params to filters in GET /memories (#4960)
- `prompt` param in vector store extraction pipeline (#4914)
- `text_lemmatized` field in `AsyncMemory._create_memory` (#4886)
- `_is_reasoning_model` check to not match gpt-5.x variants (#4746)
- `ca_certs` config option for Elasticsearch vector store (#3993)
- `agent_id` and `run_id` to Elasticsearch/OpenSearch default mappings (#4906)
- `embedding_dims` from model metadata at init (#4711)

Security:
Major Release — Python SDK with V3 memory pipeline, ADD-only extraction, and cleaned-up API surface.
New Features:
- `ADDITIVE_EXTRACTION_PROMPT`: memories accumulate via `linked_memory_ids` — no more UPDATE/DELETE events (#4805)
- `keyword_search()` added to 15 vector store adapters (Qdrant, Elasticsearch, OpenSearch, Azure AI Search, Weaviate, Redis, PGVector, Pinecone, Databricks, MongoDB, Milvus, Baidu, Upstash, Azure MySQL, Vertex AI) (#4805)
- Entities collection (`{collection}_entities`) for cross-memory relationship retrieval. Optional dependency: `pip install mem0ai[nlp]` (#4805)
- `Memory` and async `AsyncMemory` at full parity (#4805)
- `cluster_mode` parameter for Valkey Cluster Mode Enabled (CME) deployments (#4759)
- `MemoryClient.add()` now posts to `/v3/memories/add/`; `MemoryClient.get_all()` posts to `/v3/memories/` and returns a paginated envelope `{"count": int, "next": str | None, "previous": str | None, "results": [...]}` (#4856)
- `gpt-5-mini` is now the default across `OpenAILLM`, `OpenAIStructuredLLM`, `AzureOpenAILLM`, `AzureOpenAIStructuredLLM`, and the LiteLLM fallback (#4829)

Breaking Changes:
- `add()` returns ADD-only events — No more "UPDATE" or "DELETE" events. Memories accumulate; nothing is overwritten (#4805)
- `search()` default `threshold` is now 0.1 — Pass `threshold=0.0` for previous behavior (#4805)
- `search()` score is now a combined multi-signal score — The top-level score fuses semantic similarity, BM25 keyword match, and entity boost into one value. Absolute numbers shift versus the old raw cosine score; retune any hard thresholds against representative queries. Per-signal scores are not exposed on the response (#4805, #4836)
- `search()` default `rerank` is now `False` — Pass `rerank=True` for previous behavior (#4805)
- `top_k` default changed 100 → 20 in `Memory.get_all()` and `Memory.search()` (sync + async). Pass `top_k=100` explicitly to restore the old behavior (#4843)
- `user_id` / `agent_id` / `run_id` are trimmed; empty-string and whitespace-only values now raise `ValueError` (#4843)
- `threshold` must be a number in [0, 1]; `top_k` must be a non-negative integer — invalid inputs raise `ValueError` (#4843)
- `messages` in `Memory.add()` rejects invalid types: passing `None` or non-(str | dict | list) values raises `Mem0ValidationError` (`error_code="VALIDATION_003"`) (#4843)
- `qdrant-client>=1.12.0` required — Upgrade from `>=1.9.1` (#4805)
- `org_id` and `project_id` removed — Removed from the `MemoryClient` constructor and all method signatures (#4740)
- `mem0/memory/graph_memory.py`, `memgraph_memory.py`, `kuzu_memory.py`, `apache_age_memory.py`, and `mem0/graphs/` (Neo4j / Memgraph / Kuzu / Apache AGE / Neptune drivers) deleted — ~4,000 lines. Graph memory is no longer supported in the OSS SDK; graph drivers (neo4j, memgraph, kuzu, etc.) can be uninstalled. Use the Platform API for graph features. Remove `enable_graph` and `graph_store` from your config (#4805)
- `enable_graph` removed from Client SDK — Graph memory is now a project-level setting on the Platform. Remove `enable_graph` from `MemoryClient.add()` / `search()` / `get_all()` / `update_project()` calls (#4776)
- `custom_fact_extraction_prompt` renamed to `custom_instructions` — Update config and memory module references (#4740)
- `AddMemoryOptions`, `SearchMemoryOptions`, `GetAllMemoryOptions`, `DeleteAllMemoryOptions`, `UpdateMemoryOptions`, `ProjectUpdateOptions` (#4740)

Security:
- FAISS vector store (#4833)

Bug Fixes:
- `path=...`), eliminating RocksDB lock contention between the main and entity collections (#4836)
- `vector=None` in `update()` to prevent boto3 validation error when `event=NONE` (#4594)
- `store` parameter opt-in to prevent leaking to non-OpenAI backends like Google Gemini (#4757)
- `response_format` to Azure OpenAI API to prevent JSON parsing failures (#4689)
- `temp_uuid_mapping` lookups against LLM-hallucinated IDs with safe `.get()` and warnings (#4674)
- `MemoryClient.feedback()` telemetry TypeError by merging feedback data into a single payload (#4795)

Improvements:
- `before_send` hook to reduce event volume (#4771)

See the OSS v1 to v2 migration guide and Platform migration guide for upgrade instructions.
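The v3 `get_all()` pagination envelope described above is consumed by following `next` links until exhausted. Here is a minimal sketch in plain Python, with a stubbed `fetch` callable standing in for the HTTP client (this is an illustration of the envelope shape, not mem0's actual implementation):

```python
# Sketch: walking the v3 paginated envelope
# {"count": int, "next": str | None, "previous": str | None, "results": [...]}.
# `fetch` is a hypothetical stand-in for an HTTP GET returning parsed JSON.

def iter_all_memories(fetch, first_url="/v3/memories/"):
    """Yield every result across pages by following `next` links."""
    url = first_url
    while url is not None:
        page = fetch(url)
        yield from page["results"]
        url = page["next"]

# Stubbed two-page response for illustration:
PAGES = {
    "/v3/memories/": {"count": 3, "next": "/v3/memories/?page=2",
                      "previous": None, "results": [{"id": "m1"}, {"id": "m2"}]},
    "/v3/memories/?page=2": {"count": 3, "next": None,
                             "previous": "/v3/memories/", "results": [{"id": "m3"}]},
}

memories = list(iter_all_memories(PAGES.__getitem__))
```

Note that `count` is the total across all pages, so a single page's `results` may be shorter than `count`; only a `next` of `None` signals the last page.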
</Update>

<Update label="2026-04-06" description="v1.0.11">
New Features & Updates:
- `multilingual` parameter to project update (#4314)

Bug Fixes:
- `DatetimeRange` for datetime string values in Qdrant range filters (#4659)
- `ConfigDict` to vector store configs (Elasticsearch, MongoDB, Neptune, OpenSearch, PGVector, Supabase, Valkey) (#4656)

New Features & Updates:
Bug Fixes:
- `response_format` to OpenAI-compatible API for DeepSeek (#4635)
- `response_format` to OpenAI-compatible API for vLLM (#4608)
- `Memory.reset()` (#4185)
- `AsyncMemory.from_config` a regular classmethod (#4183)

New Features & Updates:
- `reasoning_effort` parameter support for reasoning models (#4461)

Bug Fixes:
- `actor_id` during memory update (#4570)
- `updated_at` on creation and preserve pre-existing `created_at` (#4499)
- `README.md` from wheel shared-data (#4052)
- `vector=None` in Milvus and Qdrant update methods (#4568)

Improvements:
- `gemini-embedding-001` (#4571)

New Features & Updates:
Bug Fixes:
- `_create_memory` (#4529)
- `mem0.add` (#3996)
- `ValueError` when deleting nonexistent memory (#4455)
- `Memory.delete()` (#4505)
- `knnVector` to GA `vectorSearch` (#3995)
- `None` (#4362)
- `/tmp/chroma` path in `ChromaDbConfig` validator (#4179)
- `Langchain.update` (#4446)
- `DELETE` (#4188)
- do not remove local path on init (#4475)
- `topP` for Anthropic Converse in Bedrock; used `AWSBedrockConfig` in `LlmFactory` (#4469)
- `temperature` and `top_p` to Anthropic API (#4471)
- `None` content and empty candidates in `GeminiLLM` parsing (#4462)
- `_parse_response` to `AzureOpenAIStructuredLLM` (#4434)
- `DELETE` operations in history (#4492)

Improvements:
Bug Fixes:
- `timezone.utc` (#4404)
- `http_auth` in `_safe_deepcopy_config` for OpenSearch (#4418)
- `encoding_format='float'` in OpenAI embeddings for proxy compatibility (#4058)
- `client.chat` and parse `tool_calls` from response (#4176)
- `LLMReranker` for non-OpenAI providers (#4405)
- `vector_distance` to float in Redis search (#4377)

Improvements:
Bug Fixes:
- `MEM0_TELEMETRY` is disabled (#4351)
- `vector_store.reset()` call from `delete_all()` that was wiping the entire vector store instead of deleting only the target memories (#4349)
- `OllamaLLM` now respects the configured URL instead of always falling back to localhost (#4320)
- `KeyError` when LLM omits the `entities` key in tool call response (#4313)
- `json_object` response format (#4271)

Dependencies:
- `<7.0.0` (#4326)

New Features & Updates:
- `timestamp` parameter to `update()` — accepts Unix epoch (int/float) or ISO 8601 string

New Features & Updates:
New Features & Updates:
- `export_openmemory.sh` migration script

Improvements:
Bug Fixes:
- `query_vector` args in search method
- `app_id` on client for Neptune Analytics

Refactoring:
New Features & Updates:
- `client.project` and `AsyncMemoryClient.project` interfaces

Improvements:
Documentation:
- `client.project.get()` and `client.project.update()` instead of deprecated methods.

Deprecation:
- Marked `get_project()` and `update_project()` as deprecated (these methods were already present); added warnings to guide users to the new API.

Bug Fixes:
Bug Fixes:
- `package.json` file to fix deployment errors
- expiration date
</Update>
</Update>
Bug Fixes:
- `timeout` config to OpenAI client in JS OSS LLM providers (#4770)

Improvements:
Bug Fixes:
- Build-time `define`, replacing the two hardcoded version strings in `src/client/telemetry.ts` and `src/oss/src/utils/telemetry.ts`. Previously these were stuck at 2.1.36 and 2.1.34 while the published package was on 3.x, so every telemetry event was reporting the wrong `client_version`. The placeholder is substituted with a string literal at bundle time — no runtime `require("./package.json")` in the shipped bundle (#4897).

Major Release — TypeScript SDK with V3 memory pipeline, camelCase parameters, and cleaned-up API surface.
V3 Memory Pipeline (OSS):
- `entity_extraction.ts` module (720+ lines) with cross-memory relationship retrieval (#4805)
- `SQLiteManager.ts` with rolling window for LLM context (#4805)
- `embedBatch()` support in OpenAI and Azure embedding providers (#4805)
- `scoring.ts` and `lemmatization.ts` utilities for hybrid search (#4805)
- `prompts/index.ts` (592+ lines) with additive extraction prompt aligned with the Python SDK (#4805)
- `MemoryClient.add()` now posts to `/v3/memories/add/`; `MemoryClient.getAll()` posts to `/v3/memories/` with paginated envelope `{ count, next, previous, results }` (#4856)
- `gpt-5-mini` is now the default in OpenAI, OpenAIStructured, and Azure LLM providers (#4829)

Breaking Changes:
- `graph_memory.ts` (675 lines), `graphs/tools.ts` (267 lines), `graphs/utils.ts` (116 lines), `graphs/configs.ts` (30 lines) deleted. Graph memory is no longer supported in the OSS SDK — use the Platform API for graph features (#4805)
- All parameters are now camelCase, converted internally via `camelToSnakeKeys()` / `snakeToCamelKeys()` (#4776):
```typescript
// Before
client.add(messages, { user_id: "alice", top_k: 5 });
// After
client.add(messages, { userId: "alice", topK: 5 });
```
- `MemoryOptions` replaced with typed interfaces: `AddMemoryOptions`, `SearchMemoryOptions`, `GetAllMemoryOptions`, `DeleteAllMemoryOptions` (#4740)
- `org_id`, `project_id`, `api_version`, `output_format`, `async_mode`, `enable_graph`, `limit` removed from client method signatures. `ClientOptions` reduced to `{ apiKey, host }` only (#4740)
- `limit` renamed to `topK` (OSS): Update all search calls (#4740)
- `topK` default changed 100 → 20 in `Memory.getAll()` and `Memory.search()`. Pass `topK: 100` explicitly to restore the old behavior (#4843)
- `userId` / `agentId` / `runId` are trimmed; empty-string and whitespace-only values now throw (#4843)
- `threshold` must be in [0, 1]; `topK` must be a non-negative integer — invalid inputs throw (#4843)
- `messages` in `Memory.add()` is required: Passing `undefined` or `null` now throws (#4843)
- `customPrompt` renamed to `customInstructions` (OSS): Update memory and vector store configurations (#4740)
- `enableGraph` removed (OSS): Config option removed — graph memory no longer available in OSS (#4776)

New Features:
- `api.deepseek.com` (#4613)
- `MemoryVectorStore` now uses a dedicated `_entities.db` file, preventing entity/memory store collisions (#4829, #4841)

Bug Fixes:
- `PGVector.initialize()` now memoises the in-flight init promise (#4841)
- `moduleList` response shapes (#4841)
- `ConfigManager.mergeConfig()` to only include `graphStore` when explicitly provided by the user, preventing default Neo4j connection attempts (#4776)
- `userConf.url` for `baseURL` — prevents custom LLM providers (Ollama, LMStudio) from silently connecting to OpenAI (#4761)

Improvements:
See the TypeScript SDK migration guide for upgrade instructions.
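The camelCase convention in the breaking changes above is described as a key conversion at the client boundary (`camelToSnakeKeys()` / `snakeToCamelKeys()`). The idea can be sketched as follows (shown in Python for brevity; `camel_to_snake_keys` is a hypothetical illustration of the technique, not the SDK's actual code):

```python
import re

def camel_to_snake(name: str) -> str:
    """Convert one identifier: userId -> user_id, topK -> top_k."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def camel_to_snake_keys(obj):
    """Recursively convert dict keys from camelCase to snake_case,
    leaving values (other than nested containers) untouched."""
    if isinstance(obj, dict):
        return {camel_to_snake(k): camel_to_snake_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [camel_to_snake_keys(v) for v in obj]
    return obj

# A client would apply this to options before sending them over the wire:
params = camel_to_snake_keys({"userId": "alice", "topK": 5})
```

The inverse direction (`snakeToCamelKeys`) would apply the mirror transform to API responses, so user code only ever sees camelCase.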
</Update>

<Update label="2026-04-06" description="v2.4.6">
New Features & Updates:
- `multilingual` parameter to project update types (#4314)

Bug Fixes:
- `.single()` with `.maybeSingle()` in `SupabaseDB.get()` to handle missing rows (#4599)

Bug Fixes:
New Features & Updates:
- `VectorStoreFactory` (#3997)

Bug Fixes:
- `pg` import compatible with ESM (#4544)
- `VectorStoreFactory` (#4502)
- `toCamelCase` in Redis get method for the payload (#3172)

Bug Fixes:
- `createWebhook` and `updateWebhook` API serialization
- `MEMORY_CATEGORIZED` event type to `WebhookEvent` enum
- `WebhookCreatePayload` and `WebhookUpdatePayload` for better type safety

Tests:
Bug Fixes:
- `SQLITE_CANTOPEN` errors when running as a LaunchAgent, systemd service, or in containers where `process.cwd()` is read-only (e.g. `/`). Default `vector_store.db` location changed from `process.cwd()/vector_store.db` to `~/.mem0/vector_store.db`.
- `historyDbPath` config being silently ignored — config merging always overwrote it with defaults. Top-level `historyDbPath` is now correctly propagated into `historyStore.config` with proper precedence.
- `ensureSQLiteDirectory()` — parent directories for SQLite database files are now auto-created before opening, preventing `SQLITE_CANTOPEN` when using nested paths.

Improvements:
- Warn when `vector_store.db` is found at the old `process.cwd()` location, guiding users to move it or set `vectorStore.config.dbPath` explicitly.

Breaking Changes:
- `better-sqlite3` v12)

Bug Fixes:
- `sqlite3` with `better-sqlite3` to fix native binding resolution failures under jiti-based loaders (e.g. OpenClaw plugin system). Fixes issues where the bindings module walked V8 stack frames with synthetic filenames, failing to locate the native `.node` addon.
- `SQLiteManager` — `init()` is now synchronous
- `MemoryVectorStore` from `sqlite3` to `better-sqlite3` with transactional batch inserts

Improvements:
- `SQLiteManager` for faster history operations
- `insert()` in `MemoryVectorStore` wrapped in a transaction for atomicity
- `tsup.config.ts` externals from `sqlite3` to `better-sqlite3`

New Features & Updates:
- `timestamp` parameter to `update()` — accepts Unix epoch or ISO 8601 string

New Features & Updates:
Improvements:
- `add` and `search` methods, allowing additional properties beyond defined options for experimental features

New Features:
Improvements:
- `embeddingDims` and `url` parameters

Bug Fixes:
- `embeddingDims` values in embedders (OpenAI, Ollama, Google, Azure)

Improvements:
Improvements:
- `model` in LLM and Embedder changed from type `string` to `any` to support LangChain LLM models

Improvements:
Improvements:
- `mem0ai` to use 2.1.12

Improvements:
New Features:
- `add`, `search`, and `list` commands from v1/v2 to v3 API endpoints — `POST /v3/memories/add/`, `POST /v3/memories/search/`, `POST /v3/memories/`. Aligns both CLIs with the Python and TypeScript SDKs, which already use v3 (#4916)

Breaking Changes:
- `--graph` / `--no-graph` removed: The `enable_graph` config option, `--graph` and `--no-graph` CLI flags, and `MEM0_ENABLE_GRAPH` environment variable have been removed from both CLIs. Graph memory is now a project-level setting on the Platform (#4916)

Bug Fixes:
"anonymous-cli" fallback with a persistent per-machine random hash (cli-anon-<uuid>), so anonymous CLI users are counted individually in PostHog instead of collapsing into one identity (#4789)$identify event on first authenticated run to stitch pre-signup anonymous history onto the authenticated user profile (#4789)Improvements:
- `source=CLI` in request bodies (POST/PUT) and query params (GET/DELETE) for server-side attribution (#4789)

New Features:
- `/v1/ping/` on startup — fail fast with a helpful error instead of cryptic 401s (#4701)

Bug Fixes:
- `npx npm@latest` (#4724)

New Features:
Bug Fixes:
- `repository` field to Node packages for npm provenance (#4671)

New Features:
- `event` commands: `mem0 event list` shows recent background processing events in a table; `mem0 event status <id>` shows full detail including nested memory results (#4649)
- `--json` / `--agent` flag: Root-level flag switches all command output to a structured JSON envelope for programmatic/agent consumption. Envelope format: `{"status", "command", "duration_ms", "scope", "count", "data"}` (#4649)
- `add` → `{id, memory, event}`, `search` → `{id, memory, score, created_at, categories}`) (#4649)
- `mem0 init` (#4623)

Bug Fixes:
- `MODULE_NOT_FOUND` crash on `status`, `import`, and all commands when installed globally — replaced runtime `createRequire` with build-time version injection (#4636)
- `status` command: Replaced heavyweight `/v1/entities/` check with dedicated `GET /v1/ping/` endpoint (#4649)
- `add` command: Deduplicated PENDING results from API; changed misleading count message (#4649)
- `init` command: Partial flags now work in non-TTY; warns before overwriting existing config; added `--force` flag (#4649)
- `delete` command: Fixed entity delete via v2 API for all entity types (#4649)

Improvements:
- `mem0 get <id>` fail) (#4636)
- `config get api_key` short-form aliases added (#4636)
- `--expires`, `--page-size`, `--page`, `--top-k`, `--threshold`, and empty content (#4636)
- `printInfo` / `printScope` moved to stderr to avoid contaminating JSON piping (#4636)

Initial Release — Official Mem0 CLI
A full-featured command-line interface for Mem0, available in both Python and Node.js:
- `pip install mem0-cli` (Python) or `npm install -g @mem0/cli` (Node.js)
- `add`, `search`, `list`, `get`, `update`, `delete`, `import`, `config`, `init`, `status`, `entity`
- `mem0 init` with API key entry and user ID configuration
- `-o json` flag for CI/CD pipelines and automation
- `cli-spec.json` ensuring identical behavior (#4575)

Mem0 Plugin for Claude Code, Cursor, and Codex
The unified Mem0 plugin for AI development environments:
- `add_memory`, `search_memories`, `get_memories`, `get_memory`, `update_memory`, `delete_memory`, `delete_all_memories`, `delete_entities`, `list_entities` — all via mcp.mem0.ai
- `mem0-plugin/skills/mem0-codex` for Codex workflows

Bug Fix:
Improvements: