# Answer Relevance
The `answer-relevance` assertion evaluates whether an LLM's output is relevant to the original query. It uses a combination of embedding similarity and LLM evaluation to determine relevance.
To use the answer-relevance assertion type, add it to your test configuration like this:
```yaml
assert:
  - type: answer-relevance
    threshold: 0.7 # Score between 0 and 1
```
The answer relevance checker:

1. Uses an LLM to generate questions that the output could plausibly be answering
2. Computes embedding similarity between each generated question and the original query
3. Produces a relevance score between 0 and 1 based on those similarities

A higher threshold requires the output to be more closely related to the original query.
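For intuition, here is a minimal sketch of that pipeline in TypeScript. It is illustrative only, not promptfoo's actual implementation: the `generateQuestions` and `embed` parameters stand in for calls to the text and embedding providers, and cosine similarity with mean aggregation is an assumption about how the score is combined.

```ts
// A minimal sketch of the scoring idea; promptfoo's internals may differ.
type Embedding = number[];

function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0,
    normA = 0,
    normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// generateQuestions and embed are hypothetical stand-ins for the text
// provider and embedding provider, passed in for illustration.
async function answerRelevance(
  query: string,
  output: string,
  generateQuestions: (answer: string, n: number) => Promise<string[]>,
  embed: (text: string) => Promise<Embedding>,
): Promise<number> {
  // 1. Ask the text provider for questions the output would answer.
  const questions = await generateQuestions(output, 3);

  // 2. Embed the original query and each generated question.
  const queryEmbedding = await embed(query);
  const sims = await Promise.all(
    questions.map(async (q) => cosineSimilarity(queryEmbedding, await embed(q))),
  );

  // 3. Average the similarities into a score between 0 and 1.
  return sims.reduce((sum, s) => sum + s, 0) / sims.length;
}
```

The assertion passes when this score meets or exceeds the configured `threshold`.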
Here's a complete example showing how to use answer relevance:
```yaml
prompts:
  - 'Tell me about {{topic}}'

providers:
  - openai:gpt-5

tests:
  - vars:
      topic: quantum computing
    assert:
      - type: answer-relevance
        threshold: 0.8
```
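If you drive promptfoo from code rather than YAML, the same configuration can be passed to the node package's `evaluate` function. This is a sketch assuming the `promptfoo` package is installed; the object shape mirrors the YAML above:

```ts
import promptfoo from 'promptfoo';

async function main() {
  // Equivalent of the YAML example above, expressed programmatically.
  const results = await promptfoo.evaluate({
    prompts: ['Tell me about {{topic}}'],
    providers: ['openai:gpt-5'],
    tests: [
      {
        vars: { topic: 'quantum computing' },
        assert: [{ type: 'answer-relevance', threshold: 0.8 }],
      },
    ],
  });
  console.log(JSON.stringify(results, null, 2));
}

main();
```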
Answer relevance uses two types of providers:

- A text provider, used to generate questions from the output
- An embedding provider, used to compute similarity scores

You can override either or both:
```yaml
defaultTest:
  options:
    provider:
      text:
        id: openai:gpt-5
        config:
          temperature: 0
      embedding:
        id: openai:text-embedding-ada-002
```
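Setting `temperature: 0` for the text provider makes question generation deterministic, which keeps relevance scores stable between runs.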
You can also override providers at the assertion level:
```yaml
assert:
  - type: answer-relevance
    threshold: 0.8
    provider:
      text: anthropic:claude-2
      embedding: cohere:embed-english-v3.0
```
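Assertion-level overrides apply only to that assertion and take precedence over anything set in `defaultTest`.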
You can customize the question generation prompt using the `rubricPrompt` property:
```yaml
defaultTest:
  options:
    rubricPrompt: |
      Given this answer: {{output}}
      Generate 3 questions that this answer would be appropriate for.
      Make the questions specific and directly related to the content.
```
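Here `{{output}}` is filled in with the LLM response being graded, so the prompt should instruct the grader to derive its questions from that answer.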
See [model-graded metrics](/docs/configuration/expected-outputs/model-graded) for more options.