docs/integrations/agno.mdx
This integration of Mem0 with Agno enables persistent, multimodal memory for Agno-based agents, improving personalization, context awareness, and continuity across conversations.
Before setting up Mem0 with Agno, ensure you have the required packages installed:

```bash
pip install agno mem0ai python-dotenv
```
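Both the Agno model and the Mem0 client read credentials from the environment. A minimal `.env` sketch (loaded via `python-dotenv`; the key values below are placeholders, not real keys):

```bash
# .env — placeholder values; replace with your own keys
OPENAI_API_KEY=sk-...
MEM0_API_KEY=m0-...
```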
The simplest way to integrate Mem0 with Agno agents is to use Mem0 as a tool via the built-in `Mem0Tools`:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mem0 import Mem0Tools

agent = Agent(
    name="Memory Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[Mem0Tools()],
    description="An assistant that remembers and personalizes using Mem0 memory.",
)
```
This enables memory functionality out of the box:
- `MemoryClient.add(...)` stores messages from user-agent interactions, including optional metadata such as the user ID or session.
- `MemoryClient.search(...)` retrieves relevant past messages, improving contextual understanding.
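Both calls take plain role/content message dicts plus a user identifier. A minimal sketch of that payload shape — no API call is made, and `build_add_payload` is an illustrative helper, not part of the Mem0 SDK:

```python
# Shape of the messages payload that Mem0's add() expects:
# a list of {"role": ..., "content": ...} dicts, plus a user_id.
def build_add_payload(user_text: str, assistant_text: str, user_id: str) -> dict:
    """Assemble the arguments for a Mem0-style add() call as a plain dict."""
    messages = [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]
    return {"messages": messages, "user_id": user_id}

payload = build_add_payload("I live in London", "Noted!", user_id="alex")
```

In a real call you would unpack this as `client.add(payload["messages"], user_id=payload["user_id"])`.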
`Mem0Tools` uses the `MemoryClient` under the hood and requires no additional setup. You can customize its behavior by modifying your tools list or extending it in code.
Note: Mem0 can also be used with Agno Agents as a separate memory layer.
The following example demonstrates how to create an Agno agent with Mem0 memory integration, including support for image processing:
```python
import base64
from pathlib import Path
from typing import Optional

from agno.agent import Agent
from agno.media import Image
from agno.models.openai import OpenAIChat
from mem0 import MemoryClient

# Initialize the Mem0 client
client = MemoryClient()

# Define the agent
agent = Agent(
    name="Personal Agent",
    model=OpenAIChat(id="gpt-4"),
    description="You are a helpful personal agent that helps me with day to day activities. "
    "You can process both text and images.",
    markdown=True,
)

def chat_user(
    user_input: Optional[str] = None,
    user_id: str = "alex",
    image_path: Optional[str] = None,
) -> str:
    """
    Handle user input with memory integration, supporting both text and images.

    Args:
        user_input: The user's text input
        user_id: Unique identifier for the user
        image_path: Path to an image file if provided

    Returns:
        The agent's response as a string
    """
    if image_path:
        # Convert image to base64
        with open(image_path, "rb") as image_file:
            base64_image = base64.b64encode(image_file.read()).decode("utf-8")

        # Create message objects for text and image
        messages = []
        if user_input:
            messages.append({
                "role": "user",
                "content": user_input,
            })
        messages.append({
            "role": "user",
            "content": {
                "type": "image_url",
                "image_url": {
                    "url": f"data:image/jpeg;base64,{base64_image}"
                },
            },
        })

        # Store messages in memory
        client.add(messages, user_id=user_id)
        print("✅ Image and text stored in memory.")

    if user_input:
        # Search for relevant memories
        memories = client.search(user_input, filters={"user_id": user_id})
        memory_context = "\n".join(f"- {m['memory']}" for m in memories["results"])

        # Construct the prompt
        prompt = f"""
You are a helpful personal assistant who helps users with their day-to-day activities and keeps track of everything.

Your task is to:
1. Analyze the given image (if present) and extract meaningful details to answer the user's question.
2. Use your past memory of the user to personalize your answer.
3. Combine the image content and memory to generate a helpful, context-aware response.

Here is what I remember about the user:
{memory_context}

User question:
{user_input}
"""

        # Get response from agent
        if image_path:
            response = agent.run(prompt, images=[Image(filepath=Path(image_path))])
        else:
            response = agent.run(prompt)

        # Store the interaction in memory
        interaction_message = [{
            "role": "user",
            "content": f"User: {user_input}\nAssistant: {response.content}",
        }]
        client.add(interaction_message, user_id=user_id)

        return response.content

    return "No user input or image provided."

# Example usage
if __name__ == "__main__":
    response = chat_user(
        "I like to travel and my favorite destination is London",
        image_path="travel_items.jpeg",
        user_id="alex",
    )
    print(response)
```
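The image handling in `chat_user` reduces to encoding raw bytes as a base64 data URL. A self-contained sketch of just that step, using in-memory stand-in bytes instead of a real JPEG file (`to_data_url` is an illustrative helper, not part of either SDK):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL, as chat_user does above."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# Stand-in bytes play the role of a real image file's contents
url = to_data_url(b"\xff\xd8\xff\xe0fake-jpeg")
```

The resulting string can be dropped into an `image_url` content part exactly as in the messages built above.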
The integration supports storing both text and image data. Choose the approach that fits your needs:

- `Mem0Tools()` for drop-in memory support
- `MemoryClient` directly for advanced control