examples/integration-vercel/ai-sdk/README.md
Demonstrates dynamic prompt construction using the Vercel AI SDK with promptfoo's provider prompt reporting feature.
Modern LLM applications dynamically construct prompts with:

- Persona-specific system prompts (expert, coder, analyst)
- Task-specific structures (explain, compare, troubleshoot)
- RAG-style context injected at request time
- Template variables like `{{domain}}` and `{{audience}}` filled per request
Without prompt reporting, promptfoo shows `{{topic}}` as the prompt, making it impossible to debug what was actually sent or run assertions on the real prompt content.
The provider reports the actual prompt it sent using the `prompt` field on its response:

```js
// Inside the provider's callApi(): return the model output together
// with the messages that were actually sent.
return {
  output: result.text,
  prompt: [
    { role: 'system', content: dynamicSystemPrompt },
    { role: 'user', content: dynamicUserPrompt },
  ],
};
```
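For context, here is a minimal sketch of what such a provider can look like. The persona/task templates and model choice are illustrative assumptions, not the exact contents of `aiSdkProvider.mjs`:

```js
// Hypothetical sketch of a promptfoo custom provider that builds
// messages dynamically with the Vercel AI SDK and reports them back.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export default class AiSdkProvider {
  id() {
    return 'ai-sdk-dynamic';
  }

  async callApi(prompt, context) {
    const { persona, domain, audience, topic } = context.vars;

    // Dynamic construction — these template strings are placeholders.
    const dynamicSystemPrompt = `You are a world-class ${persona} in ${domain}.\nYour audience: ${audience}`;
    const dynamicUserPrompt = `Explain ${topic} in a way that's accessible and engaging.`;

    const result = await generateText({
      model: openai('gpt-4o-mini'),
      messages: [
        { role: 'system', content: dynamicSystemPrompt },
        { role: 'user', content: dynamicUserPrompt },
      ],
    });

    return {
      output: result.text,
      // Report the real messages so promptfoo shows them instead of {{topic}}.
      prompt: [
        { role: 'system', content: dynamicSystemPrompt },
        { role: 'user', content: dynamicUserPrompt },
      ],
    };
  }
}
```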
| Feature | Description |
|---|---|
| Multiple personas | `expert`, `coder`, `analyst` with different system prompts |
| Task types | `explain`, `compare`, `troubleshoot` with different structures |
| Context injection | RAG-style context added to prompts |
| Template filling | Variables like `{{domain}}` and `{{audience}}` filled dynamically |
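One way these features might be wired together (a sketch; the lookup tables, template strings, and `buildMessages` helper below are illustrative, not the example's actual code):

```js
// Hypothetical lookup tables mapping test-case vars to prompt fragments.
const PERSONAS = {
  expert: (domain) => `You are a world-class expert in ${domain}.`,
  coder: () => 'You are a pragmatic senior software engineer.',
  analyst: () => 'You are a rigorous data analyst.',
};

const TASKS = {
  explain: (topic) => `Explain ${topic} in a way that's accessible and engaging.`,
  compare: (topic) => `Compare the main approaches to ${topic}.`,
  troubleshoot: (topic) => `Diagnose common failure modes in ${topic}.`,
};

// Assemble the messages from test-case vars.
function buildMessages({ persona, task, topic, domain, audience, context }) {
  const system = [PERSONAS[persona](domain), `Your audience: ${audience}`].join('\n');
  const user = [
    context ? `Context: ${context}` : null, // RAG-style context injection
    TASKS[task](topic),
  ]
    .filter(Boolean)
    .join('\n\n');
  return [
    { role: 'system', content: system },
    { role: 'user', content: user },
  ];
}
```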
```sh
npx promptfoo@latest init --example integration-vercel/ai-sdk
cd integration-vercel/ai-sdk
npm install
export OPENAI_API_KEY=sk-...
npx promptfoo@latest eval
npx promptfoo@latest view
```
In the promptfoo UI, click any result to see "Actual Prompt Sent", showing the full dynamically constructed prompt instead of just `{{topic}}`.
Input:

```yaml
vars:
  topic: quantum entanglement
  persona: expert
  domain: quantum physics
  audience: college students
```
Actual Prompt Sent:

```text
System: You are a world-class expert in quantum physics.

Your communication style:
- Clear and precise explanations
- Use analogies for complex concepts
- Include concrete examples
- Acknowledge limitations honestly

Your audience: college students

User: Explain quantum entanglement in a way that's accessible and engaging.

Focus on:
1. Core concepts and why they matter
2. Real-world applications
3. Common misconceptions to avoid
```
| File | Description |
|---|---|
| `aiSdkProvider.mjs` | Provider using the Vercel AI SDK with dynamic prompt construction |
| `promptfooconfig.yaml` | Test cases showcasing different personas and task types |
| `package.json` | Dependencies (`ai`, `@ai-sdk/openai`) |
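A sketch of what the config might contain (the test-case values mirror the example above; the assertion is an illustrative assumption, not the exact file contents):

```yaml
# promptfooconfig.yaml — hypothetical sketch
prompts:
  - '{{topic}}'

providers:
  - file://aiSdkProvider.mjs

tests:
  - vars:
      topic: quantum entanglement
      persona: expert
      domain: quantum physics
      audience: college students
    assert:
      - type: contains
        value: entanglement
```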
The pattern works with any framework:
```js
// LangChain: pipe a prompt template into a model, then report the
// formatted prompt that was actually sent. Note that format() is
// async in LangChain.js, so it must be awaited.
const chain = prompt.pipe(model);
const result = await chain.invoke(input);
return {
  output: result,
  prompt: await prompt.format(input),
};
```
```js
// Custom RAG: retrieve context, build the full prompt, and report it.
const context = await retrieveContext(query);
const fullPrompt = `Context: ${context}\n\nQuestion: ${query}`;
const result = await llm.generate(fullPrompt);
return {
  output: result,
  prompt: fullPrompt,
};
```
If you enable the Vercel AI SDK's `experimental_telemetry` for tool-calling workflows, promptfoo trajectory assertions can normalize the SDK's tool-call spans from `ai.toolCall.name` plus the matching `ai.toolCall.args`, `ai.toolCall.arguments`, or `ai.toolCall.input` attributes.
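Enabling telemetry on a call looks like this (a minimal sketch; the model, prompt, and elided tool definitions are assumptions):

```js
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is the weather in Berlin?',
  tools: {
    /* tool definitions elided */
  },
  // Emit OpenTelemetry spans (including ai.toolCall.* attributes)
  // that promptfoo trajectory assertions can normalize.
  experimental_telemetry: { isEnabled: true },
});
```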