Checks whether your retrieved context contains the information needed to support a known correct answer.
Use when: You have ground truth answers and want to verify your retrieval finds supporting evidence.
How it works: Breaks the expected answer into statements and checks if each can be attributed to the context. Score = attributable statements / total statements.
Example:

- Expected: "Python was created by Guido van Rossum in 1991"
- Context: "Python was released in 1991"
- Score: 0.5 (year ✓, creator ✗)
```yaml
assert:
  - type: context-recall
    value: 'Python was created by Guido van Rossum in 1991'
    threshold: 1.0 # Context must support the entire answer
```
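A threshold below 1.0 tolerates partial support. As a sketch (the threshold value here is illustrative), the partial example above, which scores 0.5, would pass with:

```yaml
assert:
  - type: context-recall
    value: 'Python was created by Guido van Rossum in 1991'
    threshold: 0.5 # Pass if at least half the statements are supported
```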
Parameters:

- `value` - Expected answer / ground truth
- `context` - Retrieved text (in `vars` or via `contextTransform`)
- `threshold` - Minimum score 0-1 (default: 0)

Full example:

```yaml
tests:
  - vars:
      query: 'Who created Python?'
      context: 'Guido van Rossum created Python in 1991.'
    assert:
      - type: context-recall
        value: 'Python was created by Guido van Rossum in 1991'
        threshold: 1.0
```
For RAG systems that return context with their response:
```yaml
# Provider returns { answer: "...", context: "..." }
assert:
  - type: context-recall
    value: 'Expected answer here'
    contextTransform: 'output.context' # Extract the context field
    threshold: 0.8
```
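When the provider returns a list of documents rather than a single context string, `contextTransform` is a JavaScript expression, so the pieces can be joined into one string before scoring. This is a sketch that assumes a hypothetical `documents` field containing objects with a `content` string:

```yaml
# Assumed provider shape: { answer: "...", documents: [{ content: "..." }, ...] }
assert:
  - type: context-recall
    value: 'Expected answer here'
    contextTransform: 'output.documents.map(d => d.content).join("\n")'
    threshold: 0.8
```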
Related:

- `context-relevance` - Is the retrieved context relevant?
- `context-faithfulness` - Does the output stay faithful to the context?