Search Memory

How Mem0 Searches Memory

Mem0's search operation lets agents ask natural-language questions and get back the memories that matter most. Like a smart librarian, it finds exactly what you need from everything you've stored.

<Info>
**Why it matters**
- Retrieves the right facts without rebuilding prompts from scratch.
- Supports both the managed Platform and OSS, so you can test locally and deploy at scale.
- Keeps results relevant with filters, rerankers, and thresholds.
</Info>

Key terms

  • Query – Natural-language question or statement you pass to search.
  • Filters – JSON logic (AND/OR, comparison operators) that narrows results by user, categories, dates, etc.
  • top_k / threshold – Controls how many memories return and the minimum similarity score.
  • Rerank – Optional second pass that boosts precision when a reranker is configured.

Architecture

<Frame caption="Architecture diagram illustrating the memory search process."> </Frame> <Steps> <Step title="Query processing"> Mem0 cleans and enriches your natural-language query so the downstream embedding search is accurate. </Step> <Step title="Vector search"> Embeddings locate the closest memories using cosine similarity across your scoped dataset. </Step> <Step title="Filtering & reranking"> Logical filters narrow candidates; rerankers or thresholds fine-tune ordering. </Step> <Step title="Results delivery"> Formatted memories (with metadata and timestamps) return to your agent or calling service. </Step> </Steps>

This pipeline runs the same way for the hosted Platform API and the OSS SDK.
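The ranking stages of this pipeline can be illustrated with a toy, self-contained sketch. Everything here is invented for illustration (hand-made 3-dimensional "embeddings", sample memory texts, a local `search` helper); it mirrors the vector-search and thresholding steps described above, not Mem0's internals:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy memory store: (text, embedding) pairs with made-up 3-d vectors.
memories = [
    ("Alice likes hiking",    [0.9, 0.1, 0.0]),
    ("Alice is vegetarian",   [0.1, 0.9, 0.1]),
    ("Alice lives in Berlin", [0.0, 0.2, 0.9]),
]

def search(query_embedding, top_k=2, threshold=0.5):
    # Score every memory, drop anything below the threshold,
    # then keep only the top_k best matches.
    scored = [(cosine(query_embedding, emb), text) for text, emb in memories]
    kept = [(score, text) for score, text in scored if score >= threshold]
    kept.sort(reverse=True)
    return kept[:top_k]

# A query embedding pointing in the "hobbies" direction.
results = search([0.8, 0.2, 0.1])
print(results[0][1])  # "Alice likes hiking" ranks first
```

In production the query embedding comes from your configured embedder and the store holds real vectors, but `top_k` and `threshold` behave exactly like the knobs shown here.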

How does it work?

Search converts your natural language question into a vector embedding, then finds memories with similar embeddings in your database. The results are ranked by similarity score and can be further refined with filters or reranking.

```python
# Minimal example that shows the concept in action

# Platform API
client.search("What are Alice's hobbies?", filters={"user_id": "alice"})

# OSS
m.search("What are Alice's hobbies?", filters={"user_id": "alice"})
```
<Tip> Always provide at least a `user_id` filter to scope searches to the right user's memories. This prevents cross-contamination between users. </Tip>

When should you use it?

  • Context retrieval - When your agent needs past context to generate better responses
  • Personalization - To recall user preferences, history, or past interactions
  • Fact checking - To verify information against stored memories before responding
  • Decision support - When agents need relevant background information to make decisions
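For the context-retrieval case, a common pattern is to fold search hits into the prompt before calling the model. A minimal sketch, assuming hits shaped like `{"memory": ..., "score": ...}` dicts (the sample data and the `build_context` helper are illustrative, not part of the SDK):

```python
# Hypothetical search results, as returned by a scoped search.
results = [
    {"memory": "Alice is vegetarian", "score": 0.91},
    {"memory": "Alice prefers window seats", "score": 0.67},
]

def build_context(results, min_score=0.5):
    # Keep sufficiently relevant memories and render them as a
    # bulleted block the agent can prepend to its prompt.
    lines = [f"- {r['memory']}" for r in results if r["score"] >= min_score]
    return "Relevant memories:\n" + "\n".join(lines)

prompt = build_context(results) + "\n\nUser: Book me a dinner reservation."
print(prompt)
```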

Platform vs OSS usage

| Capability | Mem0 Platform | Mem0 OSS |
| --- | --- | --- |
| Entity IDs on `search` / `get_all` | Inside `filters={"user_id": "alice"}` | Inside `filters={"user_id": "alice"}` (aligned with Platform in v3; top-level kwargs raise `ValueError`) |
| Filter syntax | Logical operators (`AND`, `OR`, comparisons) with field-level access | Basic field filters; extend via Python hooks |
| Reranking | Toggle `rerank=True` with managed reranker catalog | Requires configuring local or third-party rerankers |
| Thresholds | Request-level configuration (`threshold`, `top_k`) | Controlled via SDK parameters |
| Response metadata | Includes confidence scores, timestamps, dashboard visibility | Determined by your storage backend |

Search with Mem0 Platform

<CodeGroup>
```python Python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")

query = "What do you know about me?"
filters = {
    "OR": [
        {"user_id": "alice"},
        {"agent_id": {"in": ["travel-assistant", "customer-support"]}}
    ]
}

results = client.search(query, filters=filters)
```
```javascript JavaScript
import { MemoryClient } from "mem0ai";

const client = new MemoryClient({apiKey: "your-api-key"});

const query = "I'm craving some pizza. Any recommendations?";
const filters = {
  AND: [
    { user_id: "alice" }
  ]
};

const results = await client.search(query, {
  filters
});
```
</CodeGroup>

Search with Mem0 Open Source

<CodeGroup>
```python Python
from mem0 import Memory

m = Memory()

# Simple search — entity IDs go in filters
related_memories = m.search("Should I drink coffee or tea?", filters={"user_id": "alice"})

# Search with additional metadata filters (combine entity + metadata in the same dict)
memories = m.search(
    "food preferences",
    filters={"user_id": "alice", "categories": {"contains": "diet"}},
)
```

```javascript JavaScript
import { Memory } from 'mem0ai/oss';

const memory = new Memory();

// Simple search — entity IDs go inside `filters`
const relatedMemories = await memory.search("Should I drink coffee or tea?", {
    filters: { userId: "alice" },
});

// Combine entity + metadata filters in the same filters object
const memories = await memory.search("food preferences", {
    filters: { userId: "alice", categories: { contains: "diet" } },
});
```
</CodeGroup>

<Info icon="check"> Expect an array of memory documents. Platform responses include vectors, metadata, and timestamps; OSS returns your stored schema. </Info>

Filter patterns

Filters help narrow down search results. Common use cases:

Filter by Session Context:

Platform API:

```python
# Get memories from a specific agent session
client.search("query", filters={
    "AND": [
        {"user_id": "alice"},
        {"agent_id": "chatbot"},
        {"run_id": "session-123"}
    ]
})
```

OSS:

```python
# Get memories from a specific agent session — entity IDs combined in filters
m.search("query", filters={
    "user_id": "alice",
    "agent_id": "chatbot",
    "run_id": "session-123",
})
```

Filter by Date Range:

```python
# Platform only - date filtering
client.search("recent memories", filters={
    "AND": [
        {"user_id": "alice"},
        {"created_at": {"gte": "2024-07-01"}}
    ]
})
```

Filter by Categories:

```python
# Platform only - category filtering
client.search("preferences", filters={
    "AND": [
        {"user_id": "alice"},
        {"categories": {"contains": "food"}}
    ]
})
```
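To make the operator semantics concrete, here is a small, self-contained evaluator that mimics the `AND`/`OR`, `contains`, `gte`, and `in` behavior shown above. This is an illustrative re-implementation over plain dicts, not Mem0's actual filter engine:

```python
def matches(memory, node):
    # Recursively evaluate a filter node against a memory dict.
    if "AND" in node:
        return all(matches(memory, child) for child in node["AND"])
    if "OR" in node:
        return any(matches(memory, child) for child in node["OR"])
    # Leaf node: {field: value} or {field: {operator: value}}
    field, cond = next(iter(node.items()))
    value = memory.get(field)
    if isinstance(cond, dict):
        op, target = next(iter(cond.items()))
        if op == "contains":
            return value is not None and target in value
        if op == "gte":
            return value is not None and value >= target
        if op == "in":
            return value in target
        raise ValueError(f"unknown operator: {op}")
    return value == cond  # exact match

memory = {"user_id": "alice", "categories": ["food", "diet"], "created_at": "2024-08-15"}
f = {"AND": [
    {"user_id": "alice"},
    {"categories": {"contains": "food"}},
    {"created_at": {"gte": "2024-07-01"}},
]}
print(matches(memory, f))  # True for this sample memory
```

Note that ISO-8601 date strings compare correctly as plain strings, which is why the `gte` leaf works without date parsing here.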
Best practices

  • Use natural language: Mem0 understands intent, so describe what you're looking for naturally.
  • Scope with a user ID: Always provide user_id to scope search to the relevant user's memories. On both Platform and OSS, pass it inside filters={"user_id": "alice"}.
  • Combine filters: Use AND/OR logic to create precise queries (Platform).
  • Consider wildcard filters: Use wildcard filters (e.g., run_id: "*") for broader matches.
  • Tune parameters: Adjust top_k for result count and threshold for the relevance cutoff.
  • Enable reranking: Use rerank=True (the default) when you have a reranker configured.
<Callout type="tip" icon="plug"> **MCP Alternative**: With <Link href="/platform/mem0-mcp">Mem0 MCP</Link>, AI agents can search their own memories proactively when needed. </Callout>

More Details

For the full list of filter logic, comparison operators, and optional search parameters, see the <Link href="/api-reference/memory/search-memories">Search Memory API Reference</Link>.

Put it into practice

  • Revisit the <Link href="/core-concepts/memory-operations/add">Add Memory</Link> guide to ensure you capture the context you expect to retrieve.
  • Configure rerankers and filters in <Link href="/platform/features/advanced-retrieval">Advanced Retrieval</Link> for higher precision.

See it live

  • <Link href="/cookbooks/operations/support-inbox">Support Inbox with Mem0</Link> demonstrates scoped search with rerankers.
  • <Link href="/cookbooks/integrations/tavily-search">Tavily Search with Mem0</Link> shows hybrid search in action.
<CardGroup cols={2}> <Card title="Search Memory API" description="Complete API reference with all filter operators and parameters." icon="book" href="/api-reference/memory/search-memories" /> <Card title="Support Inbox Cookbook" description="Build a complete support system with scoped search and reranking." icon="rocket" href="/cookbooks/operations/support-inbox" /> </CardGroup>