# Mistral on Azure AI Foundry

This example demonstrates how to use Mistral models on Azure AI Foundry with promptfoo.

You can run this example with:

```sh
npx promptfoo@latest init --example azure/mistral
cd azure/mistral
```

Then update `promptfooconfig.yaml` with your deployment name and API host, and set your API key:

```sh
export AZURE_API_KEY=your-api-key
```
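For reference, a minimal `promptfooconfig.yaml` for this comparison might look like the sketch below. The `azure:chat:` provider ID format, the host name, and the prompt/test contents are assumptions for illustration; consult the promptfoo Azure provider documentation and substitute your own deployment names and resource host.

```yaml
# Hypothetical sketch of promptfooconfig.yaml -- deployment names, apiHost,
# and prompts are placeholders, not values from this example.
description: Compare Mistral Large 3 and Mistral Small 2503 on Azure

prompts:
  - 'Summarize the following text in one sentence: {{text}}'

providers:
  - id: azure:chat:Mistral-Large-3 # your Azure deployment name
    config:
      apiHost: your-resource.services.ai.azure.com
  - id: azure:chat:mistral-small-2503
    config:
      apiHost: your-resource.services.ai.azure.com

tests:
  - vars:
      text: 'Azure AI Foundry lets you deploy and evaluate foundation models.'
```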
## Available models

| Model | Description |
| ----- | ----------- |
| Mistral-Large-3 | Mistral Large 3 - Most capable |
| Mistral-Large-2411 | Mistral Large - Previous generation |
| mistral-small-2503 | Mistral Small - Fast, efficient |
| Pixtral-Large-2411 | Pixtral Large - Vision + text |
| Ministral-3B-2410 | Ministral 3B - Fast, lightweight |
| Mistral-Nemo | Mistral Nemo - Balanced |
Run the evaluation and view the results:

```sh
npx promptfoo@latest eval
npx promptfoo@latest view
```
The example compares Mistral Large 3 and Mistral Small 2503 on text generation tasks, which helps evaluate the trade-offs between model capacity, speed, and output quality.