docs/platform/features/memory-decay.mdx
Older memories drift in relevance at different speeds. A user's coffee order matters every morning; a one-off project name from last quarter rarely matters again. Memory Decay makes that intuition explicit at search time: every time a memory is returned in a search it gets a small reinforcement, and memories that haven't been touched in a while have their ranking score gently dampened.
It is a soft ranking bias, never a filter. Decay never zeroes a candidate out — at worst it scales its score by 0.3×. Anything that would have surfaced without decay can still surface with decay on, just with a different ranking among similarly-scored results.
Every memory carries a small piece of bookkeeping: when was it last retrieved, and how often. Memory Decay turns that history into a scaling factor in the range 0.3× to 1.5× and multiplies it into the ranking score at search time.
| Memory state | Scaling factor | Ranking effect |
|---|---|---|
| Just accessed | ≈ 1.5× | Strong boost |
| Touched today | 1.2 – 1.4× | Mild boost |
| Idle for a few days | 0.6 – 1.0× | Mild dampening |
| Idle for weeks | 0.4 – 0.6× | Stronger dampening |
| Idle for many months / years | ≈ 0.3× | Floor — never lower |
The bounds matter: 0.3 is the floor and 1.5 is the ceiling, so decay can meaningfully reorder candidates without ever dominating the underlying relevance score.
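The mapping from idle time to scaling factor can be sketched as a bounded exponential. This is an illustrative model only: Mem0's actual curve is internal, and the `half_life_days` parameter here is an assumption chosen to roughly reproduce the bands in the table above.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: Mem0's real curve is internal. This maps time
# since last access to a factor inside the documented [0.3, 1.5] band.
FLOOR, CEILING = 0.3, 1.5

def decay_factor(last_accessed, now=None, half_life_days=7.0):
    """Exponentially dampen from the 1.5x ceiling toward the 0.3x floor."""
    now = now or datetime.now(timezone.utc)
    idle_days = max((now - last_accessed).total_seconds() / 86400, 0.0)
    # Only the span above the floor decays, so the factor never drops below 0.3.
    return FLOOR + (CEILING - FLOOR) * 0.5 ** (idle_days / half_life_days)

now = datetime.now(timezone.utc)
print(round(decay_factor(now, now), 2))                        # 1.5 (just accessed)
print(round(decay_factor(now - timedelta(days=7), now), 2))    # 0.9 (idle a few days)
print(round(decay_factor(now - timedelta(days=365), now), 2))  # 0.3 (at the floor)
```

Because only the span above the floor decays, the factor approaches 0.3 asymptotically but never reaches zero, matching the "soft bias, never a filter" guarantee.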
At search time the pipeline:
1. Over-fetches candidates (`top_k` × 3, with a floor of 50) so reordering has room.
2. Computes each candidate's scaling factor; the 0.3×–1.5× range can rearrange candidates.
3. Multiplies that factor into the relevance score, with the public `score` clamped to [0, 1] so the API contract is preserved.
4. Truncates the reordered list to the `top_k` you requested.

Memories created before decay was enabled don't yet have an access history. They use a sensible fallback: their `updated_at` is treated as a single past touch, so the same scale above applies based on how stale that update is; a recently-updated legacy memory enters near the neutral band, a long-stale one sits closer to the floor. Once surfaced in a search after decay is on, they accumulate access history naturally and behave like any other memory.
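The search-time steps above can be sketched as follows. This is a simplified model, assuming candidates arrive already scored by relevance and that `decay_factor` is a hypothetical per-memory helper (not part of the public SDK).

```python
# Sketch of the search-time pipeline, assuming candidates arrive already
# scored and decay_factor() is a hypothetical per-memory helper.
def rank_with_decay(candidates, top_k, decay_factor):
    # 1. Over-fetch so reordering has room: top_k * 3, with a floor of 50.
    pool = candidates[: max(top_k * 3, 50)]
    rescored = []
    for mem in pool:
        # 2-3. Multiply the 0.3x-1.5x factor into the relevance score and
        #      clamp, so the public score stays inside [0, 1].
        scaled = min(max(mem["score"] * decay_factor(mem), 0.0), 1.0)
        rescored.append({**mem, "score": scaled})
    # 4. Truncate the reordered pool to the requested top_k.
    rescored.sort(key=lambda m: m["score"], reverse=True)
    return rescored[:top_k]

# A stale-but-relevant memory can fall behind a fresher, slightly weaker one.
candidates = [{"id": "stale", "score": 0.80}, {"id": "fresh", "score": 0.75}]
factors = {"stale": 0.5, "fresh": 1.4}
top = rank_with_decay(candidates, 1, lambda m: factors[m["id"]])
print(top[0]["id"])  # fresh
```

Note how clamping keeps the fresh memory's score at 1.0 even though the internal product (0.75 × 1.4) exceeds it.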
Set `MEM0_API_KEY` in your environment, or pass it to the SDK constructor. The toggle lives on the project: you enable decay by patching the project's `decay` field, and everything else (your add calls, your search calls, your application code) stays exactly the same.
The toggle is exposed on the standard project-update endpoint, the same place where `multilingual` and `custom_categories` live.
```python
import os
import requests

org_id = os.environ["MEM0_ORG_ID"]
project_id = os.environ["MEM0_PROJECT_ID"]

# Enable decay on the project; all other project settings are untouched.
requests.patch(
    f"https://api.mem0.ai/api/v1/orgs/organizations/{org_id}/projects/{project_id}/",
    headers={"Authorization": f"Token {os.environ['MEM0_API_KEY']}"},
    json={"decay": True},
)
```
```javascript
// Enable decay on the project; all other project settings are untouched.
const res = await fetch(
  `https://api.mem0.ai/api/v1/orgs/organizations/${process.env.MEM0_ORG_ID}/projects/${process.env.MEM0_PROJECT_ID}/`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Token ${process.env.MEM0_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ decay: true }),
  },
);
```
```json
{ "message": "Updated decay" }
```
`decay` is returned on every project read. To fetch only this field, use `?fields=decay`.
```json
{ "decay": true }
```
The toggle is fully reversible. Setting it to false immediately restores the pre-decay ranking; nothing about your stored memories is modified or lost.
- Searches still return the `top_k` you requested, but the items returned can come from a deeper slice of the pre-decay ranking than before.
- The public `score` field stays in [0, 1]. Even when the internal product exceeds 1, the field returned to the client is clamped, so existing assertions and downstream UI logic continue to work.
- Your `threshold` is still applied during candidate selection.

| Stage | Scaling factor | Effect |
|---|---|---|
| Just added | ≈ 1.5× | Strong boost — fresh facts surface easily. |
| Reinforced on a recent search | 1.2 – 1.5× | Sustains its boost for the next several searches. |
| Idle for a few days | 0.6 – 1.0× | Falls back into the neutral band. |
| Idle for weeks | 0.4 – 0.6× | Mild dampening — can still surface for strong matches. |
| Pre-decay legacy memory (no access history) | 0.3 – 1.0× | Falls back to `updated_at`: recently-updated entries land near 1.0×, long-stale entries approach the 0.3× floor. |
The reinforcement is bounded: each memory tracks at most the last 20 access timestamps, so the boost stays well-behaved no matter how many times a memory is retrieved.
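The bounded bookkeeping maps naturally onto a fixed-size queue. This sketch uses Python's `collections.deque` with `maxlen` to show the behavior; the class and method names are illustrative, not Mem0's internals.

```python
from collections import deque
from datetime import datetime, timezone

# Sketch of the bounded bookkeeping: only the most recent 20 access
# timestamps are retained, so reinforcement cannot grow without limit.
MAX_HISTORY = 20

class AccessHistory:
    def __init__(self):
        # maxlen silently drops the oldest timestamp once 20 are stored.
        self.timestamps = deque(maxlen=MAX_HISTORY)

    def record_retrieval(self, when=None):
        self.timestamps.append(when or datetime.now(timezone.utc))

history = AccessHistory()
for _ in range(100):            # retrieved 100 times...
    history.record_retrieval()
print(len(history.timestamps))  # 20  (...but only the last 20 survive)
```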
Will decay ever drop a result that would otherwise surface?
No. The floor is 0.3× — the scaling factor can dampen a score, never zero it. Threshold filtering happens before decay, so any candidate that cleared the threshold is in the pool decay reorders.
Why is the public score sometimes below my requested threshold?
The threshold is applied to the candidate pool pre-decay; the scaling factor then reshapes scores in the 0.3×–1.5× band. A stale-but-relevant candidate can come back with a final score slightly under your threshold by design — the candidate stays visible but visibly dampened. Filter client-side if you need a hard floor on the response.
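Filtering client-side is a one-liner. The `results` list and its `score` field below stand in for whatever your search call returns; the helper name is hypothetical.

```python
# A client-side hard floor, for callers that want to drop results whose
# post-decay score fell under the requested threshold.
def apply_hard_floor(results, threshold):
    return [r for r in results if r["score"] >= threshold]

results = [
    {"id": "a", "score": 0.72},  # cleared the threshold pre- and post-decay
    {"id": "b", "score": 0.41},  # dampened below the requested threshold
]
print(apply_hard_floor(results, 0.5))  # [{'id': 'a', 'score': 0.72}]
```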
Does decay change how I add memories?
No. The `client.add(...)` path is unchanged. Decay is a search-time ranking adjustment.
What if I had memories before turning decay on?
They use a fallback: the memory's updated_at is treated as a single historical touch, so the same scaling applies based on how stale that update is — a recently-updated legacy memory enters near the neutral band (~1.0×), a long-stale one closer to the floor (~0.3×). Once retrieved they accumulate access history and behave like any other memory.
Can I tune how aggressively decay scales scores?
Not in this version. The current scaling is calibrated to be conservative: wide enough to meaningfully reorder candidates, narrow enough to never dominate the underlying relevance score. Per-project tuning is on the roadmap.
Can I see the scaling factor per result?
Internal scoring details are persisted on the search Event for support and debugging. They aren't exposed in the public response by design — the response surface stays a single score field.
Does decay interact with reranking?
Yes — they layer cleanly. The reranker produces a richer relevance score; decay then biases that score by reinforcement history before final truncation to top_k.
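The layering can be sketched as two passes over the candidate pool. Both `rerank_score` and `decay_factor` below are hypothetical stand-ins (neither is a public SDK function); only the ordering of the stages reflects the description above.

```python
# Sketch of the layering: reranker relevance first, then the decay bias,
# then truncation to top_k. Helper functions are hypothetical stand-ins.
def search_with_rerank_and_decay(candidates, query, top_k,
                                 rerank_score, decay_factor):
    scored = []
    for mem in candidates:
        relevance = rerank_score(query, mem)              # richer relevance score
        biased = min(relevance * decay_factor(mem), 1.0)  # decay bias, clamped
        scored.append((biased, mem))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [mem for _, mem in scored[:top_k]]

memories = [{"id": "old"}, {"id": "new"}]
rerank = lambda q, m: 0.9                           # equal relevance for both
decay = lambda m: 1.4 if m["id"] == "new" else 0.5  # only recency differs
print(search_with_rerank_and_decay(memories, "coffee", 1, rerank, decay))
# [{'id': 'new'}]
```

With identical reranker scores, recency alone breaks the tie, which is exactly the "soft bias on top of relevance" behavior described above.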
This release is deliberately the simplest version of decay we could ship — every memory contributes to ranking through its access history alone, so the signal can be evaluated in isolation. On the roadmap:
- Per-project tuning of the scaling range, so you can control how aggressively decay reorders candidates.
- Category-aware weighting: a memory tagged `health` will be able to carry more weight than a passing observation tagged `misc`, so important categories don't get dampened the same way as noise.

Both extensions are forward-compatible; no migration on your side will be needed when they ship.