# Prompts, tests, and outputs

Configure how promptfoo evaluates your LLM applications.

:::tip Detailed Documentation

For comprehensive guides, see the dedicated pages linked from each section below.

:::

## Quick Start

```yaml
# Define your prompts
prompts:
  - 'Translate to {{language}}: {{text}}'

# Configure test cases
tests:
  - vars:
      language: French
      text: Hello world
    assert:
      - type: contains
        value: Bonjour

# Run evaluation
# promptfoo eval
```

## Core Concepts

๐Ÿ“ Prompts

Define what you send to your LLMs - from simple strings to complex conversations.

<details> <summary><strong>Common patterns</strong></summary>

**Text prompts**

```yaml
prompts:
  - 'Summarize this: {{content}}'
  - file://prompts/customer_service.txt
```

**Chat conversations**

```yaml
prompts:
  - file://prompts/chat.json
```

**Dynamic prompts** (see the sketch below)

```yaml
prompts:
  - file://generate_prompt.js
  - file://create_prompt.py
```

</details>
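For the dynamic-prompt pattern above, the referenced script exports a function that promptfoo calls for each test case. Below is a minimal sketch, assuming a CommonJS module whose exported function receives the test context (including `vars`) and returns either a prompt string or a chat-style message array; the file name, variable names, and prompt text are illustrative.

```js
// generate_prompt.js - illustrative sketch of a dynamic prompt function.
// Assumption: promptfoo invokes the exported function with a context object
// containing the test case `vars`, and accepts a string or a chat-style
// message array as the returned prompt.
module.exports = async function ({ vars }) {
  return [
    { role: 'system', content: 'You are a helpful customer service agent.' },
    { role: 'user', content: `Translate to ${vars.language}: ${vars.text}` },
  ];
};
```

Returning a message array mirrors the chat-conversation format referenced by `file://prompts/chat.json`.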

Learn more about prompts →

### 🧪 Test Cases

Configure evaluation scenarios with variables and assertions.

<details> <summary><strong>Common patterns</strong></summary>

**Inline tests**

```yaml
tests:
  - vars:
      question: "What's 2+2?"
    assert:
      - type: equals
        value: '4'
```

**CSV test data**

```yaml
tests: file://test_cases.csv
```

**HuggingFace datasets**

```yaml
tests: huggingface://datasets/rajpurkar/squad
```

**Dynamic generation** (see the sketch below)

```yaml
tests: file://generate_tests.js
```

</details>
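For the dynamic-generation pattern above, the referenced script builds test cases in code instead of listing them in YAML. The following is a minimal sketch, assuming a CommonJS module whose exported function resolves to an array of test case objects shaped like the inline `tests` entries; the file name and test data are illustrative.

```js
// generate_tests.js - illustrative sketch of programmatic test generation.
// Assumption: promptfoo accepts an exported function that resolves to an
// array of test case objects with `vars` and `assert`, matching the shape
// of inline test cases.
module.exports = async function () {
  const languages = ['French', 'German', 'Spanish'];
  return languages.map((language) => ({
    vars: { language, text: 'Hello world' },
    assert: [{ type: 'llm-rubric', value: `Response is written in ${language}` }],
  }));
};
```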

Learn more about test cases →

### 📊 Output Formats

Save and analyze your evaluation results.

<details> <summary><strong>Available formats</strong></summary>

```bash
# Visual report
promptfoo eval --output results.html

# Data analysis
promptfoo eval --output results.json

# Spreadsheet
promptfoo eval --output results.csv
```

</details>
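The output format is determined by the file extension, so switching between the HTML report, the JSON dump, and the CSV spreadsheet only requires changing the `--output` path.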

Learn more about outputs →

## Complete Example

Here's a real-world example that combines multiple features:

```yaml
# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json
description: Customer service chatbot evaluation

prompts:
  # Simple text prompt
  - 'You are a helpful customer service agent. {{query}}'

  # Chat conversation format
  - file://prompts/chat_conversation.json

  # Dynamic prompt with logic
  - file://prompts/generate_prompt.js

providers:
  - openai:gpt-5-mini
  - anthropic:claude-3-haiku

tests:
  # Inline test cases
  - vars:
      query: 'I need to return a product'
    assert:
      - type: contains
        value: 'return policy'
      - type: llm-rubric
        value: 'Response is helpful and professional'

  # Load more tests from CSV
  - file://test_scenarios.csv

# Save results
outputPath: evaluations/customer_service_results.html
```
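Save this configuration as `promptfooconfig.yaml` (the default filename promptfoo looks for) and run `promptfoo eval` to evaluate every prompt against both providers and all test cases.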

## Quick Reference

### Supported File Formats

| Format | Prompts | Tests | Use Case |
| --- | --- | --- | --- |
| .txt | ✅ | ❌ | Simple text prompts |
| .json | ✅ | ✅ | Chat conversations, structured data |
| .yaml | ✅ | ✅ | Complex configurations |
| .csv | ✅ | ✅ | Bulk data, multiple variants |
| .js/.ts | ✅ | ✅ | Dynamic generation with logic |
| .py | ✅ | ✅ | Python-based generation |
| .md | ✅ | ❌ | Markdown-formatted prompts |
| .j2 | ✅ | ❌ | Jinja2 templates |
| HuggingFace datasets | ❌ | ✅ | Import from existing datasets |

### Variable Syntax

Variables use Nunjucks templating:

```yaml
# Basic substitution
prompt: "Hello {{name}}"

# Filters
prompt: "URGENT: {{message | upper}}"

# Conditionals
prompt: "{% if premium %}Premium support: {% endif %}{{query}}"
```
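For example, with `vars` of `message: server down`, `premium: true`, and `query: Can you restart it?`, the filter example renders as `URGENT: SERVER DOWN` and the conditional renders as `Premium support: Can you restart it?`.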

### File References

All file paths are relative to the config file:

```yaml
# Single file
prompts:
  - file://prompts/main.txt

# Multiple files with glob
tests:
  - file://tests/*.yaml

# Specific function
prompts:
  - file://generate.js:createPrompt
```

Wildcards like `path/to/prompts/**/*.py:func_name` are also supported.
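When a reference names a function, as in `file://generate.js:createPrompt` above, promptfoo loads that specific export from the file. A minimal sketch, assuming a CommonJS module whose named export receives the test context and returns the prompt string; the function body is illustrative.

```js
// generate.js - illustrative sketch of a module exposing a named prompt function.
// Assumption: the `createPrompt` export matches the `:createPrompt` suffix in the
// file reference and receives a context object with the test case `vars`.
function createPrompt({ vars }) {
  return `You are a helpful customer service agent. ${vars.query}`;
}

module.exports = { createPrompt };
```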

## Next Steps