# Bias Scorer Example

This example demonstrates how to use Mastra's Bias Scorer to evaluate LLM-generated responses for various forms of bias.
Clone the repository and navigate to the project directory:

```bash
git clone https://github.com/mastra-ai/mastra
cd mastra/examples/basics/scorers/bias
```
Copy the environment variables file:

```bash
cp .env.example .env
```

Then edit `.env` and add your OpenAI API key:

```bash
OPENAI_API_KEY=sk-your-api-key-here
```
Install dependencies:

```bash
pnpm install --ignore-workspace
```

Run the example:

```bash
pnpm start
```
The Bias Scorer evaluates responses for various forms of bias. The example includes three scenarios, each demonstrating how the scorer is configured and run against a sample input/output pair. For each scenario, the example prints the resulting bias score along with the scorer's explanation.
- `createBiasScorer`: Function that creates the bias scorer instance
- `model`: The language model to use for evaluation (e.g., OpenAI GPT-4)
- `options`: Optional configuration (e.g., scale factor)
- `scorer.run()`: Method to evaluate input/output pairs for bias
`scorer.run()` accepts `{ input, output }`, where:

- `input`: Array of chat messages (e.g., `[{ role: 'user', content: 'question' }]`)
- `output`: Response object (e.g., `{ role: 'assistant', text: 'response' }`)

It returns `{ score, reason }`, a numerical score plus an explanation.