examples/provider-cerebras/README.md
This example demonstrates how to use the Cerebras provider with promptfoo to evaluate Cerebras Inference API models, which offer high-performance inference for Llama and other large language models.
You can run this example with:

```bash
npx promptfoo@latest init --example provider-cerebras
cd provider-cerebras
```

Set your Cerebras API key:

```bash
export CEREBRAS_API_KEY="your-api-key-here"
```
Alternatively, you can add it to your `.env` file:

```
CEREBRAS_API_KEY=your-api-key-here
```
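With the key in place, a minimal promptfoo configuration using the Cerebras provider looks like the sketch below. The prompt, variable, and model choice here are illustrative, not copied from the configs shipped with this example:

```yaml
# Minimal illustrative config (not the example's actual promptfooconfig.yaml)
description: Cerebras quick check
providers:
  - cerebras:llama3.1-8b
prompts:
  - 'Explain {{topic}} in simple terms.'
tests:
  - vars:
      topic: quantum computing
```

Running `promptfoo eval` against a file like this sends each prompt/variable combination to the listed Cerebras model and shows the results side by side.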
This example contains three configurations demonstrating different Cerebras features:
### Basic Evaluation (`promptfooconfig.yaml`)

This configuration evaluates two Cerebras models on their ability to explain complex concepts in simple terms.

```bash
promptfoo eval
```
Expected output: You'll see a comparison of how each model explains concepts from different domains, with metrics on clarity, accuracy, and response time.
### Structured Output (`promptfooconfig-structured.yaml`)

The structured output example demonstrates Cerebras's JSON schema enforcement capabilities, ensuring the model returns consistent, structured recipe data with proper types and required fields.

```bash
promptfoo eval -c promptfooconfig-structured.yaml
```
Expected output: You'll receive structured JSON for each recipe, with consistent fields such as cuisine type, difficulty level, ingredients, and cooking instructions, all conforming to the defined schema.
Example output:

```json
{
  "name": "Traditional Pasta Carbonara",
  "cuisine": "Italian",
  "difficulty": "medium",
  "prepTime": 15,
  "cookTime": 20,
  "ingredients": [
    { "name": "spaghetti", "amount": "400g" },
    { "name": "pancetta", "amount": "150g" },
    { "name": "eggs", "amount": "3 large" },
    { "name": "parmesan cheese", "amount": "50g" }
  ],
  "instructions": [
    "Bring a large pot of salted water to boil",
    "Cook spaghetti according to package instructions",
    "In a separate pan, cook pancetta until crispy",
    "In a bowl, whisk eggs and grated parmesan cheese",
    "Drain pasta, reserving some pasta water",
    "Toss hot pasta with pancetta, then quickly mix in egg mixture",
    "Add pasta water as needed to create a silky sauce"
  ]
}
```
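Schema enforcement like this is configured on the provider. As a rough sketch of what such a config can contain, using an OpenAI-style `response_format` with a trimmed-down recipe schema (the field names and model here are illustrative, not copied from the repo's `promptfooconfig-structured.yaml`):

```yaml
# Illustrative provider config for JSON schema enforcement
providers:
  - id: cerebras:llama-4-scout-17b-16e-instruct
    config:
      response_format:
        type: json_schema
        json_schema:
          name: recipe
          strict: true
          schema:
            type: object
            properties:
              name: { type: string }
              cuisine: { type: string }
              difficulty: { type: string, enum: [easy, medium, hard] }
              prepTime: { type: number }
              cookTime: { type: number }
            required: [name, cuisine, difficulty]
```

Because the schema marks fields as required and constrains their types, the model cannot drift into free-form prose: every response parses as JSON matching the declared shape.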
### Tool Use (`promptfooconfig-tools.yaml`)

The tool use example demonstrates Cerebras's function calling capabilities with a calculator tool that the model can use to solve math problems.

```bash
promptfoo eval -c promptfooconfig-tools.yaml
```
Expected output: The model will use the calculator tool to solve math problems and provide step-by-step explanations of the solution process. For example, when given "15 × 7", it will calculate 105 and explain multiplication concepts.
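A calculator tool of this kind is declared to the model as a function definition in the provider config. A hedged sketch of what that declaration might look like, using the OpenAI-compatible `tools` format (the function name and parameters are illustrative, not taken from the repo's `promptfooconfig-tools.yaml`):

```yaml
# Illustrative tool declaration for function calling
providers:
  - id: cerebras:llama-3.3-70b
    config:
      tools:
        - type: function
          function:
            name: calculator
            description: Evaluate a basic arithmetic expression
            parameters:
              type: object
              properties:
                expression:
                  type: string
                  description: The expression to evaluate, e.g. "15 * 7"
              required: [expression]
```

When the model decides the tool is needed, it emits a structured call such as `{"name": "calculator", "arguments": {"expression": "15 * 7"}}` instead of guessing at the arithmetic in prose.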
Cerebras supports several powerful models:

- `llama-4-scout-17b-16e-instruct` - Llama 4 Scout 17B model with 16-expert MoE (featured in these examples)
- `llama3.1-8b` - Llama 3.1 8B model
- `llama-3.3-70b` - Llama 3.3 70B model
- `deepseek-r1-distill-llama-70b` - DeepSeek R1 Distill Llama 70B (private preview)

Cerebras Inference API offers competitive pricing compared to other inference services; usage is billed based on input and output tokens. Check the official pricing page for the most current rates.