# Hallucination Scorer

`examples/basics/scorers/hallucination`
This example demonstrates how to use Mastra's Hallucination Scorer to evaluate whether responses contain information not supported by the provided context.
## Setup

Clone the repository and navigate to the project directory:

```bash
git clone https://github.com/mastra-ai/mastra
cd mastra/examples/basics/scorers/hallucination
```
Copy the environment variables file:

```bash
cp .env.example .env
```

Then edit `.env` and add your OpenAI API key:

```
OPENAI_API_KEY=sk-your-api-key-here
```
Install dependencies:

```bash
pnpm install --ignore-workspace
```

Run the example:

```bash
pnpm start
```
## Overview

The Hallucination Scorer evaluates whether a response contains information not supported by the provided context, such as claims that contradict the context or that have no grounding in it.

The example runs three scenarios, and for each one it prints the scorer's numerical score along with a detailed explanation of the result.
- `createHallucinationScorer`: function that creates the hallucination scorer instance
- `model`: the language model to use for evaluation (e.g., OpenAI GPT-4)
- `scorer.run()`: method to evaluate input/output pairs for hallucinations
`scorer.run()` accepts `{ input, output, additionalContext }` where:

- `input`: array of chat messages (e.g., `[{ role: 'user', content: 'question' }]`)
- `output`: response object (e.g., `{ role: 'assistant', text: 'response' }`)
- `additionalContext`: object with a `context` array containing the source material

It returns `{ score, reason }` with a numerical score and a detailed explanation.
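Putting those pieces together, a minimal sketch of the flow might look like the following. The import paths and model id here are assumptions, not taken from this example's source — check `src/` in the example for the exact ones:

```typescript
// Sketch only: import paths and the model id are assumptions.
import { openai } from "@ai-sdk/openai";
import { createHallucinationScorer } from "@mastra/evals/scorers/llm";

// Create the scorer with the judge model (requires OPENAI_API_KEY in .env).
const scorer = createHallucinationScorer({
  model: openai("gpt-4o-mini"),
});

// Evaluate one input/output pair against the source material.
const result = await scorer.run({
  input: [{ role: "user", content: "When was the Eiffel Tower built?" }],
  output: { role: "assistant", text: "The Eiffel Tower was completed in 1889." },
  additionalContext: {
    context: ["The Eiffel Tower was completed in 1889 in Paris."],
  },
});

console.log(result.score, result.reason);
```

Because the judge is itself an LLM call, each `run()` invocation costs tokens; batching scenarios, as this example does, keeps the evaluation loop simple.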