# eval-python-assert
Example configurations for testing LLM outputs using Python assertions with promptfoo.

You can run this example with:

```sh
npx promptfoo@latest init --example eval-python-assert
cd eval-python-assert
```
This example demonstrates how to use Python assertions for custom output validation.

## Environment variables

- `OPENAI_API_KEY` - Your OpenAI API key (required)

## Two approaches

This example includes two different approaches:
### External assertions (`promptfooconfig-external.yaml`)

Uses external Python files for complex assertion logic:

- `promptfooconfig-external.yaml` - Configuration with external Python assertions
- `assert.py` - Basic assertion function with detailed scoring
- `assert_with_config.py` - Configuration-based assertion function

### Inline assertions (`promptfooconfig-inline.yaml`)

Demonstrates inline Python assertions directly in the configuration:

- `promptfooconfig-inline.yaml` - Configuration with inline Python code

## Running the example

Run the external Python assertions example:

```sh
promptfoo eval -c promptfooconfig-external.yaml
```
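For orientation, an external assertion is wired into the config with a `python` assert whose value is a `file://` reference. The sketch below is illustrative only; the prompt, provider, and test values are assumptions, not the contents of the actual file:

```yaml
# Hypothetical sketch of promptfooconfig-external.yaml (values assumed)
prompts:
  - 'Describe {{topic}} in one sentence.'
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      topic: bananas
    assert:
      - type: python
        value: file://assert.py
```

promptfoo imports the referenced file and calls its `get_assert` function with the model output.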
Run the inline Python assertions example:

```sh
promptfoo eval -c promptfooconfig-inline.yaml
```
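An inline assertion embeds a Python expression directly in the config; the expression is evaluated with the model output bound to `output`. A minimal sketch (the test values here are assumptions):

```yaml
# Hypothetical sketch of an inline assertion (values assumed)
tests:
  - vars:
      topic: bananas
    assert:
      - type: python
        value: "'banana' in output.lower()"
```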
View results:

```sh
promptfoo view
```
## Assertion return values

An external assertion file exports a `get_assert(output, context)` function. The simplest form returns a boolean pass/fail:

```python
def get_assert(output, context):
    return "expected_word" in output.lower()
```
Returning a float produces a graded score between 0.0 and 1.0:

```python
def get_assert(output, context):
    if "perfect" in output.lower():
        return 1.0
    elif "good" in output.lower():
        return 0.5
    else:
        return 0.0
```
Returning a dictionary gives full control over pass/fail, score, reason, and named scores:

```python
def get_assert(output, context):
    return {
        "pass": True,
        "score": 0.8,
        "reason": "Contains expected content",
        "namedScores": {"quality": 0.9, "relevance": 0.7},
    }
```
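Because `get_assert` is plain Python, you can sanity-check it outside promptfoo by calling it directly. A minimal sketch; the `context` dictionary shape used here is a simplified assumption, not promptfoo's exact structure:

```python
def get_assert(output, context):
    # Dictionary return: explicit pass/fail, score, and reason
    matched = "expected" in output.lower()
    return {
        "pass": matched,
        "score": 1.0 if matched else 0.0,
        "reason": "Contains expected content" if matched else "Missing expected content",
    }

if __name__ == "__main__":
    # Simplified stand-in for the context promptfoo passes in (assumed shape)
    fake_context = {"vars": {"topic": "testing"}, "prompt": "Say something expected."}
    result = get_assert("This output has the EXPECTED word.", fake_context)
    print(result["pass"], result["score"])
```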