docs/open-source/features/custom-instructions.mdx
Custom instructions let you decide exactly which facts Mem0 records from a conversation. Define a focused prompt, give a few examples, and Mem0 will add only the memories that match your use case.
<Info>
**You'll use this when...**

- A project needs domain-specific facts (order numbers, customer info) without storing casual chatter.
- You already have a clear schema for memories and want the LLM to follow it.
- You must prevent irrelevant details from entering long-term storage.
</Info>

<Warning>
Prompts that are too broad cause unrelated facts to slip through. Keep instructions tight and test them with real transcripts.
</Warning>

<Note>
The `custom_fact_extraction_prompt` parameter has been renamed to `custom_instructions`. If you are upgrading from an older version, update your configuration accordingly.
</Note>

During extraction, the LLM returns a `facts` array that Mem0 converts into individual memories. The prompt goes in your config, where `custom_instructions` (Python) or `customInstructions` (TypeScript) lives alongside your model settings.

```python Python
custom_instructions = """
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:

Input: Hi.
Output: {"facts" : []}

Input: The weather is nice today.
Output: {"facts" : []}

Input: My order #12345 hasn't arrived yet.
Output: {"facts" : ["Order #12345 not received"]}

Input: I'm John Doe, and I'd like to return the shoes I bought last week.
Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}

Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}

Return the facts and customer information in a json format as shown above.
"""
```
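Before wiring the prompt into your config, it can help to sanity-check the few-shot examples themselves: if an `Output:` line in the prompt is malformed JSON, the model will imitate the broken format. The snippet below is a plain-Python sketch of that check, not part of the Mem0 API.

```python
import json

# The Output lines from the few-shot examples in the prompt above.
few_shot_outputs = [
    '{"facts" : []}',
    '{"facts" : ["Order #12345 not received"]}',
    '{"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}',
    '{"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}',
]

# Each example must parse as JSON with a "facts" list of strings.
for raw in few_shot_outputs:
    payload = json.loads(raw)
    assert isinstance(payload["facts"], list)
    assert all(isinstance(f, str) for f in payload["facts"])

print("few-shot outputs are valid")  # prints only if every example parsed
```

Running this once per prompt revision catches format drift before it reaches production.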
```ts TypeScript
const customInstructions = `
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:
Input: Hi.
Output: {"facts" : []}
Input: The weather is nice today.
Output: {"facts" : []}
Input: My order #12345 hasn't arrived yet.
Output: {"facts" : ["Order #12345 not received"]}
Input: I am John Doe, and I would like to return the shoes I bought last week.
Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}
Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}
Return the facts and customer information in a json format as shown above.
`;
```
```python Python
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-5-mini",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    },
    "custom_instructions": custom_instructions,
}

m = Memory.from_config(config)
```
```ts TypeScript
import { Memory } from "mem0ai/oss";
const config = {
llm: {
provider: "openai",
config: {
apiKey: process.env.OPENAI_API_KEY ?? "",
model: "gpt-4-turbo-preview",
temperature: 0.2,
maxTokens: 1500,
},
},
customInstructions: customInstructions,
};
const memory = new Memory(config);
await memory.add("Yesterday, I ordered a laptop, the order id is 12345", { userId: "user123" });
```

```json
{
  "results": [
    { "memory": "Ordered a laptop", "event": "ADD" },
    { "memory": "Order ID: 12345", "event": "ADD" },
    { "memory": "Order placed yesterday", "event": "ADD" }
  ],
  "relations": []
}
```
```ts TypeScript
await memory.add("I like going to hikes", { userId: "user123" });
```

```json
{
  "results": [],
  "relations": []
}
```
A few checks keep extraction predictable:

- Verify that the returned `facts` array matches your schema.
- Each stored memory surfaces in the `results` array.
- Include few-shot examples that map irrelevant inputs to `[]` so the model learns to skip them.
- Keep a single top-level key named `facts` to simplify downstream parsing.
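If you post-process raw model output yourself, a defensive parser keeps malformed responses from corrupting storage. A minimal sketch in plain Python; the `parse_facts` helper is illustrative, not part of Mem0:

```python
import json

def parse_facts(raw: str) -> list[str]:
    """Parse an extraction response and return the list of fact strings.

    Malformed JSON or a missing "facts" key yields an empty list, so
    irrelevant or broken turns simply produce no memories.
    """
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return []
    facts = payload.get("facts", [])
    if not isinstance(facts, list):
        return []
    # Keep only string entries so downstream storage never sees bad types.
    return [f for f in facts if isinstance(f, str)]

print(parse_facts('{"facts": ["Order #12345 not received"]}'))  # ['Order #12345 not received']
print(parse_facts('{"facts": []}'))                             # []
print(parse_facts("not json"))                                  # []
```

The same "empty on failure" convention mirrors the `[]` few-shot examples above, so both the prompt and the parser fail closed.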