apps/docs/ai-sdk/user-profiles.mdx
The withSupermemory middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Wrap any model with Supermemory middleware
const modelWithMemory = withSupermemory(
  openai("gpt-4"), // Your base model
  "user-123" // Container tag (user ID)
)

// Use normally - profiles are automatically injected!
const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "Help me with my current project" }]
})

// The model knows about the user's background, skills, and current work!
```
The `withSupermemory` middleware retrieves the user's memories and injects them into the system prompt before each call. All of this happens transparently: you write code as if using a normal model, but get personalized responses.
<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:

```typescript
const model = withSupermemory(openai("gpt-5"), "user-123", {
  addMemory: "always"
})
```
</Note>
Configure how the middleware retrieves and uses memory:
`profile` mode (the default) retrieves the user's complete profile without query-specific search. Best for general personalization.
```typescript
// Default behavior - profile mode
const model = withSupermemory(openai("gpt-4"), "user-123")

// Or explicitly specify
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "profile"
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "What do you know about me?" }]
})
// Response uses full user profile for context
```
`query` mode searches memories based on the user's specific message. Best for finding relevant information.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "query"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "What was that Python script I wrote last week?"
  }]
})
// Searches for memories about Python scripts from last week
```
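Conceptually, query mode derives its search text from the conversation itself. A minimal sketch of the idea (not the actual implementation, which lives inside `@supermemory/tools/ai-sdk`) is to take the latest user message and use it as the semantic search query:

```typescript
type Message = { role: "user" | "assistant" | "system"; content: string }

// Walk backwards through the conversation to find the most recent
// user message - the text that query mode would search memories against.
function lastUserMessage(messages: Message[]): string | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === "user") return messages[i].content
  }
  return undefined
}
```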
`full` mode combines profile AND query-based search for comprehensive context. Best for complex interactions.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "Help me debug this similar to what we did before"
  }]
})
// Uses both profile (user's expertise) AND search (previous debugging sessions)
```
Customize how memories are formatted and injected into the system prompt using the `promptTemplate` option. This is useful for model-specific formatting (such as XML structure for Claude), selective inclusion of search results, and custom branding.
```typescript
import { generateText } from "ai"
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const customPrompt = (data: MemoryPromptData) => `
<user_memories>
Here is some information about your past conversations with the user:
${data.userMemories}
${data.generalSearchMemories}
</user_memories>
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full",
  promptTemplate: customPrompt
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "What do you know about me?" }]
})
```
The `MemoryPromptData` object passed to your template function provides:

- `userMemories`: Pre-formatted markdown combining static profile facts (name, preferences, goals) and dynamic context (current projects, recent interests)
- `generalSearchMemories`: Pre-formatted search results based on semantic similarity to the current query (empty string if mode is `"profile"`)
- `searchResults`: Raw search results array (`Array<{ memory: string; metadata?: Record<string, unknown> }>`) for traversing, filtering, or selectively including results based on metadata

Claude models perform better with XML-structured prompts:
```typescript
import { anthropic } from "@ai-sdk/anthropic"

const claudePrompt = (data: MemoryPromptData) => `
<context>
<user_profile>
${data.userMemories}
</user_profile>
<relevant_memories>
${data.generalSearchMemories}
</relevant_memories>
</context>

Use the above context to provide personalized responses.
`.trim()

const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
  mode: "full",
  promptTemplate: claudePrompt
})
```
Use `searchResults` to traverse the raw data and pick what's important:
```typescript
const selectivePrompt = (data: MemoryPromptData) => {
  const relevant = data.searchResults.filter(
    (r) => (r.metadata?.score as number) > 0.7
  )
  return `
<user_memories>
${data.userMemories}
</user_memories>
<relevant_context>
${relevant.map((r) => `- ${r.memory}`).join("\n")}
</relevant_context>
`.trim()
}

const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full",
  promptTemplate: selectivePrompt
})
```
Remove "supermemories" references and use your own branding:
```typescript
const brandedPrompt = (data: MemoryPromptData) => `
You are an AI assistant with access to the user's personal knowledge base.

User Profile:
${data.userMemories}

Relevant Context:
${data.generalSearchMemories}

Use this information to provide personalized and contextually relevant responses.
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  promptTemplate: brandedPrompt
})
```
If no `promptTemplate` is provided, the default format is used:
```typescript
const defaultPrompt = (data: MemoryPromptData) =>
  `User Supermemories: \n${data.userMemories}\n${data.generalSearchMemories}`.trim()
```
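To preview exactly what gets injected, the default template can be run with hypothetical data (`MemoryPromptData` is re-declared here so the snippet is self-contained; the sample values are made up):

```typescript
type MemoryPromptData = {
  userMemories: string
  generalSearchMemories: string
  searchResults: Array<{ memory: string; metadata?: Record<string, unknown> }>
}

const defaultPrompt = (data: MemoryPromptData) =>
  `User Supermemories: \n${data.userMemories}\n${data.generalSearchMemories}`.trim()

// Hypothetical sample data
const data: MemoryPromptData = {
  userMemories: "- Name: Alex\n- Works on a Next.js app",
  generalSearchMemories: "- Wrote a Python script last week",
  searchResults: [],
}

console.log(defaultPrompt(data))
// User Supermemories:
// - Name: Alex
// - Works on a Next.js app
// - Wrote a Python script last week
```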
Enable detailed logging to see exactly what's happening:
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  verbose: true // Enable detailed logging
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "Where do I live?" }]
})

// Console output:
// [supermemory] Searching memories for container: user-123
// [supermemory] User message: Where do I live?
// [supermemory] System prompt exists: false
// [supermemory] Found 3 memories
// [supermemory] Memory content: You live in San Francisco, California...
// [supermemory] Creating new system prompt with memories
```
The AI SDK middleware abstracts away the complexity of manual profile management:
<Tabs>
<Tab title="With AI SDK (Simple)">
```typescript
// One line setup
const model = withSupermemory(openai("gpt-4"), "user-123")

// Use normally
const result = await generateText({
  model,
  messages: [{ role: "user", content: "Help me" }]
})
```
</Tab>
<Tab title="Manual (Complex)">
```typescript
// Manual prompt construction (assumes a `profile` object fetched beforehand)
const systemPrompt = `User Profile:\n${profile.profile.static?.join('\n')}`

// Manual LLM call with profile
const result = await generateText({
  model: openai("gpt-4"),
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "Help me" }
  ]
})
```
</Tab>
</Tabs>
<Note>
The `withSupermemory` middleware is currently in beta. Make sure `SUPERMEMORY_API_KEY` is set in your environment.
</Note>
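To satisfy the `SUPERMEMORY_API_KEY` requirement, set it in an `.env` file or your shell before running your app (the value shown is a placeholder):

```shell
# .env - replace the placeholder with your actual Supermemory API key
SUPERMEMORY_API_KEY=your-api-key-here
```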