# Completeness Scorer Example

`examples/basics/scorers/completeness`
This example demonstrates how to use Mastra's Completeness Scorer to evaluate how thoroughly LLM-generated responses cover key elements from the input.
## Setup

Clone the repository and navigate to the project directory:

```bash
git clone https://github.com/mastra-ai/mastra
cd mastra/examples/basics/scorers/completeness
```

Install dependencies:

```bash
pnpm install --ignore-workspace
```

Run the example:

```bash
pnpm start
```
## Overview

The Completeness Scorer evaluates how thoroughly an LLM-generated response covers the key elements of the input. It uses the [compromise](https://github.com/spencermountain/compromise) NLP library to extract meaningful elements (such as nouns, verbs, and topics) from both the input and the output, then calculates a coverage percentage based on how many input elements are present in the output.
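Conceptually, the coverage calculation can be sketched as follows. This is a simplified illustration, not Mastra's implementation: the real scorer uses compromise for element extraction, while this sketch approximates "elements" as distinct words longer than three characters, and the helper names are hypothetical:

```typescript
// Simplified sketch of completeness scoring (hypothetical helpers, not Mastra's API).
// The real scorer extracts richer elements via the compromise NLP library; here we
// approximate "elements" as distinct lowercased words longer than three characters.
function extractElements(text: string): string[] {
  return Array.from(
    new Set(
      text
        .toLowerCase()
        .split(/\W+/)
        .filter((word) => word.length > 3),
    ),
  );
}

// Coverage = fraction of input elements that also appear in the output.
function completenessScore(input: string, output: string): number {
  const inputElements = extractElements(input);
  if (inputElements.length === 0) return 1; // nothing to cover
  const outputElements = new Set(extractElements(output));
  const covered = inputElements.filter((e) => outputElements.has(e)).length;
  return covered / inputElements.length;
}
```

For example, scoring the output `'Photosynthesis converts light energy.'` against the input `'Explain photosynthesis and respiration'` covers only one of the three extracted input elements (`photosynthesis`, missing `explain` and `respiration`), giving roughly 0.33.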
The example walks through three scenarios, each demonstrating a different level of coverage. For each scenario, the example prints the completeness score along with a detailed analysis of which input elements were covered or missed.
## API

- `createCompletenessScorer()`: creates a completeness scorer instance
- `scorer.run()`: evaluates an input/output pair for completeness

`scorer.run()` accepts an object of the form `{ input, output }`, where:

- `input`: array of chat messages (e.g., `[{ role: 'user', content: 'text' }]`)
- `output`: response object (e.g., `{ role: 'assistant', text: 'response' }`)

It returns:

- `score`: numerical score between 0 and 1
- `extractStepResult`: detailed analysis, including missing elements and element counts
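To make these shapes concrete, here is a small self-contained sketch. The interfaces and the `extractStepResult` field names are modeled on the description above, not copied from Mastra's type definitions, so treat them as illustrative assumptions:

```typescript
// Hypothetical TypeScript shapes modeled on the run() contract described above.
// These are NOT Mastra's actual type definitions; the extractStepResult field
// names are illustrative assumptions.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface CompletenessRunArgs {
  input: ChatMessage[];
  output: { role: "assistant"; text: string };
}

interface CompletenessRunResult {
  score: number; // coverage score in [0, 1]
  extractStepResult: {
    missingElements: string[]; // input elements absent from the output
    inputElementCount: number; // assumed count fields (illustrative)
    outputElementCount: number;
  };
}

// An argument object in the documented shape:
const payload: CompletenessRunArgs = {
  input: [{ role: "user", content: "Explain photosynthesis and respiration" }],
  output: { role: "assistant", text: "Photosynthesis converts light energy." },
};

// A plausible result for the payload above (values are illustrative):
const result: CompletenessRunResult = {
  score: 0.5,
  extractStepResult: {
    missingElements: ["respiration"],
    inputElementCount: 2,
    outputElementCount: 1,
  },
};
```

In the actual example, an object like `payload` is passed to `scorer.run()` on the instance returned by `createCompletenessScorer()`.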