# Answer Relevancy Scorer

`examples/basics/scorers/answer-relevancy`
This example demonstrates how to use Mastra's Answer Relevancy Scorer to evaluate the relevance of LLM-generated responses to given inputs.
Clone the repository and navigate to the example directory:

```bash
git clone https://github.com/mastra-ai/mastra
cd mastra/examples/basics/scorers/answer-relevancy
```
Copy the environment variables file:

```bash
cp .env.example .env
```

Then edit `.env` and add your OpenAI API key:

```bash
OPENAI_API_KEY=sk-your-api-key-here
```
Install dependencies:

```bash
pnpm install --ignore-workspace
```

Run the example:

```bash
pnpm start
```
The Answer Relevancy Scorer measures how well an LLM's response aligns with and addresses the given input question.

The example includes three scenarios, each demonstrating how the scorer evaluates a different input/output pair. For each scenario, the example prints the computed score along with the scorer's reasoning.
- `createAnswerRelevancyScorer`: function that creates the scorer instance
- `model`: the language model to use for evaluation (e.g., OpenAI GPT-4)
- `options.uncertaintyWeight`: weight for uncertain verdicts (default: `0.3`)
- `options.scale`: scale factor for the final score (default: `1`)
- `scorer.run()`: method to evaluate input/output pairs
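To build intuition for how `uncertaintyWeight` and `scale` interact, here is a minimal sketch of one plausible aggregation: uncertain verdicts count at `uncertaintyWeight`, and the resulting ratio is multiplied by `scale`. The `combineVerdicts` helper and the exact formula are illustrative assumptions, not Mastra's actual implementation.

```typescript
type Verdict = 'yes' | 'unsure' | 'no';

// Hypothetical aggregation: fully relevant verdicts count as 1,
// uncertain verdicts count at `uncertaintyWeight`, irrelevant as 0.
// The ratio over all verdicts is then multiplied by `scale`.
function combineVerdicts(
  verdicts: Verdict[],
  uncertaintyWeight = 0.3, // default per the option list above
  scale = 1,               // default per the option list above
): number {
  if (verdicts.length === 0) return 0;
  const relevant = verdicts.filter((v) => v === 'yes').length;
  const uncertain = verdicts.filter((v) => v === 'unsure').length;
  return ((relevant + uncertaintyWeight * uncertain) / verdicts.length) * scale;
}
```

Under these assumptions, two relevant, one uncertain, and one irrelevant verdict would yield `(2 + 0.3) / 4 = 0.575` at the default scale.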
`scorer.run()` accepts `{ input, output }`, where:

- `input`: array of chat messages (e.g., `[{ role: 'user', content: 'question' }]`)
- `output`: response object (e.g., `{ role: 'assistant', text: 'response' }`)

It returns `{ score, reason }` with a numerical score and an explanation.
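The shapes above can be sketched as plain data. The question, answer, and result values below are hypothetical placeholders; actually producing a score requires a configured model and an OpenAI API key.

```typescript
// Argument shape for scorer.run(), per the description above.
const runArgs = {
  input: [{ role: 'user', content: 'What is the capital of France?' }],
  output: { role: 'assistant', text: 'The capital of France is Paris.' },
};

// Result shape: a numerical score plus the scorer's explanation.
// Both values here are hypothetical, not real scorer output.
const exampleResult = {
  score: 1,
  reason: 'The response directly and completely answers the question.',
};
```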