# eval-tool-use
This example demonstrates how to evaluate LLM function/tool calling capabilities using promptfoo.
You can run this example with:

```sh
npx promptfoo@latest init --example eval-tool-use
cd eval-tool-use
```
This example shows how to configure and test function/tool calling across multiple LLM providers. Each provider has slightly different syntax and requirements for implementing tool calling.
This example requires the following environment variables:

- `OPENAI_API_KEY` - Your OpenAI API key
- `ANTHROPIC_API_KEY` - Your Anthropic API key
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` - For AWS Bedrock (if using the Bedrock example)
- `GROQ_API_KEY` - If using Groq's LLaMA models

You can set these in a `.env` file or directly in your environment.
Each provider implements tool use with different syntax.
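For illustration, here is a sketch of how the same weather tool might be declared for two providers in a promptfoo config. The model IDs and field layout below are assumptions for this sketch, not copied verbatim from the example files:

```yaml
providers:
  # OpenAI nests the schema under a `function` key with `parameters`
  - id: openai:gpt-4o-mini
    config:
      tools:
        - type: function
          function:
            name: get_weather
            description: Get the current weather for a location
            parameters:
              type: object
              properties:
                location:
                  type: string
              required: [location]
  # Anthropic uses a flat tool object with `input_schema`
  - id: anthropic:messages:claude-3-5-sonnet-20241022
    config:
      tools:
        - name: get_weather
          description: Get the current weather for a location
          input_schema:
            type: object
            properties:
              location:
                type: string
            required: [location]
```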
The configuration for this example is in:

- `promptfooconfig.yaml` - Main example with OpenAI, Anthropic, and Groq
- `promptfooconfig.bedrock.yaml` - Example specifically for AWS Bedrock models

To run the main example:
```sh
promptfoo eval
```
To run the Bedrock example:
```sh
promptfoo eval -c promptfooconfig.bedrock.yaml
```
After running the evaluation, view the results with:
```sh
promptfoo view
```
This example uses a simple weather lookup function that takes a location and optionally a temperature unit. The example illustrates how different providers handle the same function definition with different syntaxes.
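As a rough sketch, the parameter schema for such a weather function (in the JSON Schema form providers generally expect) might look like this; the exact property names are assumptions, not copied from the example files:

```yaml
type: object
properties:
  location:
    type: string
    description: City name, e.g. "Boston"
  unit:
    type: string
    enum: [celsius, fahrenheit]
    description: Optional temperature unit
required: [location]
```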
External tools can also be loaded from separate files, as demonstrated with `external_tools.yaml`.
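For instance, a provider entry might reference the external file rather than defining tools inline (a sketch, assuming promptfoo's `file://` reference syntax):

```yaml
providers:
  - id: openai:gpt-4o-mini
    config:
      # Load tool definitions from a separate file instead of inlining them
      tools: file://external_tools.yaml
```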
The Anthropic provider includes an example with `strict: true` enabled, which uses Anthropic's structured outputs feature to guarantee that tool parameters exactly match your schema. When `strict: true` is enabled, Claude always returns tool inputs that strictly follow your `input_schema`, with no type mismatches or missing required fields. This is useful when your application depends on well-formed tool arguments. See the Anthropic structured outputs example for more details.
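A minimal sketch of what such a tool definition might look like (field names assumed from Anthropic's tool format; check the example config for the exact shape):

```yaml
tools:
  - name: get_weather
    description: Get the current weather for a location
    strict: true # opt in to structured outputs for this tool
    input_schema:
      type: object
      properties:
        location:
          type: string
      required: [location]
      additionalProperties: false
```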
This example also demonstrates `finish-reason` assertions, which validate why a model stopped generating:

- `tool_calls` - Verifies the model stopped to make a function/tool call (e.g., a weather lookup for a city)

When the models are asked about the weather in real cities (Boston, New York, Paris), they correctly stop generation to make tool calls, resulting in a `tool_calls` finish reason. This helps ensure your models use tools when they should.
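A test case using this assertion might look like the following sketch (the variable name and prompt wiring are assumptions, not copied from the example config):

```yaml
tests:
  - vars:
      city: Boston
    assert:
      # Expect the model to stop in order to call the weather tool
      - type: finish-reason
        value: tool_calls
```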