apps/docs/integrations/mastra.mdx
Integrate Supermemory with Mastra to give your AI agents persistent memory. Use the `withSupermemory` wrapper for zero-config setup or processors for fine-grained control.
## Installation

```bash
npm install @supermemory/tools @mastra/core
```
## Quick Start
Wrap your agent config with `withSupermemory` to add memory capabilities:
```typescript
import { Agent } from "@mastra/core/agent"
import { withSupermemory } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

// Create agent with memory-enhanced config
const agent = new Agent(withSupermemory(
  {
    id: "my-assistant",
    name: "My Assistant",
    model: openai("gpt-4o"),
    instructions: "You are a helpful assistant.",
  },
  "user-123", // containerTag - scopes memories to this user
  {
    mode: "full",
    addMemory: "always",
    threadId: "conv-456",
  }
))

const response = await agent.generate("What do you know about me?")
```
To auto-save the conversation, set `addMemory: "always"` and provide a `threadId`:

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), ... },
  "user-123",
  {
    addMemory: "always",
    threadId: "conv-456" // Required for conversation grouping
  }
))
```
---
## How It Works
The Mastra integration uses Mastra's native `Processor` interface:
```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant InputProcessor
    participant LLM
    participant OutputProcessor
    participant Supermemory

    User->>Agent: Send message
    Agent->>InputProcessor: Process input
    InputProcessor->>Supermemory: Fetch memories
    Supermemory-->>InputProcessor: Return memories
    InputProcessor->>Agent: Inject into system prompt
    Agent->>LLM: Generate response
    LLM-->>Agent: Return response
    Agent->>OutputProcessor: Process output
    OutputProcessor->>Supermemory: Save conversation (if enabled)
    Agent-->>User: Return response
```
---
## Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | `SUPERMEMORY_API_KEY` env var | Your Supermemory API key |
| `baseUrl` | `string` | `https://api.supermemory.ai` | Custom API endpoint |
| `mode` | `"profile" \| "query" \| "full"` | `"profile"` | Memory search mode |
| `addMemory` | `"always" \| "never"` | `"never"` | Auto-save conversations |
| `threadId` | `string` | - | Conversation ID for grouping messages |
| `verbose` | `boolean` | `false` | Enable debug logging |
| `promptTemplate` | `function` | - | Custom memory formatting |
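For reference, here is a sketch with every option set explicitly. The values are illustrative, and `promptTemplate` is covered later on this page:

```typescript
import { Agent } from "@mastra/core/agent"
import { withSupermemory } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

// Illustrative values only - in practice you usually set just a few of these
const agent = new Agent(withSupermemory(
  {
    id: "my-assistant",
    name: "My Assistant",
    model: openai("gpt-4o"),
    instructions: "You are a helpful assistant.",
  },
  "user-123",
  {
    apiKey: process.env.SUPERMEMORY_API_KEY, // falls back to the env var when omitted
    baseUrl: "https://api.supermemory.ai",   // default endpoint
    mode: "full",                            // "profile" | "query" | "full"
    addMemory: "always",                     // auto-save conversations
    threadId: "conv-456",                    // groups saved messages
    verbose: false,                          // debug logging
  }
))
```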
---
## Memory Modes

### Profile Mode (Default)
Retrieves the user's complete profile without query-based filtering:

```typescript
const agent = new Agent(withSupermemory(config, "user-123", { mode: "profile" }))
```

### Query Mode
Searches memories based on the user's message:

```typescript
const agent = new Agent(withSupermemory(config, "user-123", { mode: "query" }))
```

### Full Mode
Combines profile AND query-based search for maximum context:

```typescript
const agent = new Agent(withSupermemory(config, "user-123", { mode: "full" }))
```
### Mode Comparison
| Mode | Description | Use Case |
|------|-------------|----------|
| `profile` | Static + dynamic user facts | General personalization |
| `query` | Semantic search on user message | Specific Q&A |
| `full` | Both profile and search | Chatbots, assistants |
---
## Saving Conversations
Enable automatic conversation saving with `addMemory: "always"`. A `threadId` is required to group messages:
```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  {
    addMemory: "always",
    threadId: "conv-456",
  }
))

// All messages in this conversation are saved
await agent.generate("I prefer TypeScript over JavaScript")
await agent.generate("My favorite framework is Next.js")
```
---
## Custom Prompt Template
Customize how memories are formatted and injected. The template receives `userMemories`, `generalSearchMemories`, and `searchResults` (the raw result array, useful for filtering by metadata):
```typescript
import { Agent } from "@mastra/core/agent"
import { withSupermemory } from "@supermemory/tools/mastra"
import type { MemoryPromptData } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const claudePrompt = (data: MemoryPromptData) => `
<context>
<user_profile>
${data.userMemories}
</user_profile>
<relevant_memories>
${data.generalSearchMemories}
</relevant_memories>
</context>
`.trim()

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  {
    mode: "full",
    promptTemplate: claudePrompt,
  }
))
```
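If the pre-formatted strings are not flexible enough, `searchResults` exposes the raw result array. The sketch below assumes each result carries a `metadata` object and a `memory` string; check the `MemoryPromptData` type in the SDK for the exact field names:

```typescript
// Hypothetical metadata filter - the field names on searchResults items are assumptions
const workOnlyPrompt = (data: MemoryPromptData) => {
  const workMemories = (data.searchResults ?? [])
    .filter((result: any) => result.metadata?.category === "work")
    .map((result: any) => result.memory)
    .join("\n")

  return `
<user_profile>
${data.userMemories}
</user_profile>
<work_memories>
${workMemories}
</work_memories>
`.trim()
}

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", promptTemplate: workOnlyPrompt }
))
```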
---
## Using Processors Directly
For advanced use cases, use the processors directly instead of the wrapper.
Inject memories without saving conversations:
```typescript
import { Agent } from "@mastra/core/agent"
import { createSupermemoryProcessor } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const agent = new Agent({
  id: "my-assistant",
  name: "My Assistant",
  model: openai("gpt-4o"),
  inputProcessors: [
    createSupermemoryProcessor("user-123", {
      mode: "full",
      verbose: true,
    }),
  ],
})
```
Save conversations without memory injection:
```typescript
import { Agent } from "@mastra/core/agent"
import { createSupermemoryOutputProcessor } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const agent = new Agent({
  id: "my-assistant",
  name: "My Assistant",
  model: openai("gpt-4o"),
  outputProcessors: [
    createSupermemoryOutputProcessor("user-123", {
      addMemory: "always",
      threadId: "conv-456",
    }),
  ],
})
```
Use the factory function for shared configuration:
```typescript
import { Agent } from "@mastra/core/agent"
import { createSupermemoryProcessors } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const { input, output } = createSupermemoryProcessors("user-123", {
  mode: "full",
  addMemory: "always",
  threadId: "conv-456",
  verbose: true,
})

const agent = new Agent({
  id: "my-assistant",
  name: "My Assistant",
  model: openai("gpt-4o"),
  inputProcessors: [input],
  outputProcessors: [output],
})
```
---
## Dynamic Thread ID via RequestContext
Mastra's `RequestContext` can provide the `threadId` dynamically:
```typescript
import { Agent } from "@mastra/core/agent"
import { RequestContext, MASTRA_THREAD_ID_KEY } from "@mastra/core/request-context"
import { withSupermemory } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  {
    mode: "full",
    addMemory: "always",
    // threadId not set - will use RequestContext
  }
))

// Set threadId dynamically via RequestContext
const ctx = new RequestContext()
ctx.set(MASTRA_THREAD_ID_KEY, "dynamic-thread-id")

await agent.generate("Hello!", { requestContext: ctx })
```
---
## Debugging
Enable detailed logging for debugging:
```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { verbose: true }
))

// Console output:
// [supermemory] Starting memory search { containerTag: "user-123", mode: "profile" }
// [supermemory] Found 5 memories
// [supermemory] Injected memories into system prompt { length: 1523 }
```
---
## Merging with Existing Processors
The wrapper correctly merges with any existing processors in the config:
```typescript
// Supermemory processors are merged correctly:
// - Input: [supermemory, myLogging] (supermemory runs first)
// - Output: [myAnalytics, supermemory] (supermemory runs last)
const agent = new Agent(withSupermemory(
  {
    id: "my-assistant",
    model: openai("gpt-4o"),
    inputProcessors: [myLoggingProcessor],
    outputProcessors: [myAnalyticsProcessor],
  },
  "user-123"
))
```
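If you need a different ordering, skip the wrapper and compose the processor arrays yourself with `createSupermemoryProcessors`. A sketch, where `myLoggingProcessor` and `myAnalyticsProcessor` stand in for your own processors:

```typescript
const { input, output } = createSupermemoryProcessors("user-123", {
  mode: "full",
  addMemory: "always",
  threadId: "conv-456",
})

const agent = new Agent({
  id: "my-assistant",
  model: openai("gpt-4o"),
  inputProcessors: [myLoggingProcessor, input],     // run logging before memory injection
  outputProcessors: [output, myAnalyticsProcessor], // save to Supermemory before analytics
})
```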
---
## API Reference

### `withSupermemory`
Enhances a Mastra agent config with memory capabilities.
```typescript
function withSupermemory<T extends AgentConfig>(
  config: T,
  containerTag: string,
  options?: SupermemoryMastraOptions
): T
```
**Parameters:**
- `config` - The Mastra agent configuration object
- `containerTag` - User/container ID for scoping memories
- `options` - Configuration options

**Returns:** Enhanced config with Supermemory processors injected
### `createSupermemoryProcessor`
Creates an input processor for memory injection.
```typescript
function createSupermemoryProcessor(
  containerTag: string,
  options?: SupermemoryMastraOptions
): SupermemoryInputProcessor
```
### `createSupermemoryOutputProcessor`
Creates an output processor for conversation saving.
```typescript
function createSupermemoryOutputProcessor(
  containerTag: string,
  options?: SupermemoryMastraOptions
): SupermemoryOutputProcessor
```
### `createSupermemoryProcessors`
Creates both processors with shared configuration.
```typescript
function createSupermemoryProcessors(
  containerTag: string,
  options?: SupermemoryMastraOptions
): {
  input: SupermemoryInputProcessor
  output: SupermemoryOutputProcessor
}
```
### `SupermemoryMastraOptions`

```typescript
interface SupermemoryMastraOptions {
  apiKey?: string
  baseUrl?: string
  mode?: "profile" | "query" | "full"
  addMemory?: "always" | "never"
  threadId?: string
  verbose?: boolean
  promptTemplate?: (data: MemoryPromptData) => string
}
```
---
## Environment Variables

```bash
SUPERMEMORY_API_KEY=your_supermemory_key
```
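If you prefer not to rely on the environment variable, the same key can be passed explicitly via the `apiKey` option. A minimal sketch, loading the key from `process.env` for illustration (use whatever secret source your app already has):

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "You are a helpful assistant." },
  "user-123",
  { apiKey: process.env.SUPERMEMORY_API_KEY } // or a value from your secrets manager
))
```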
---
## Error Handling
Processors handle runtime errors gracefully without breaking the agent. A missing API key, however, throws immediately:
```typescript
// Missing API key throws immediately
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { apiKey: undefined } // Will check SUPERMEMORY_API_KEY env
))
// Error: SUPERMEMORY_API_KEY is not set
```