Opik provides a prompt library that you can use to manage your prompts. Storing prompts in a library allows you to version them, reuse them across projects, and manage them in a central location.
<Note> In Opik 2.0, prompts are project-scoped. Specify a `project_name` when creating prompts to associate them with the correct project. </Note>
Using a prompt library does not mean you can't store your prompts in code; we have designed the prompt library to work seamlessly with your existing prompt files while providing the benefits of a central prompt library.
Opik supports two types of prompts: text prompts and chat prompts.
Text prompts are simple string-based templates that support variable substitution. They are ideal for single-turn interactions or when you need to generate a single piece of text.
The recommended way to create and manage text prompts is using the `Prompt` object. This allows you to continue versioning your prompts in code while also having prompt versions managed in the Opik platform, so you can more easily keep track of your progress.
<Tabs> <Tab value="Python" title="Python">
```python
import opik
# Prompt text stored in a variable
PROMPT_TEXT = "Write a summary of the following text: {{text}}"
# Create a prompt
prompt = opik.Prompt(
name="prompt-summary",
prompt=PROMPT_TEXT,
metadata={"environment": "production"},
project_name="my-project"
)
# Print the prompt text
print(prompt.prompt)
# Build the prompt
print(prompt.format(text="Hello, world!"))
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { Prompt } from "opik";
// Prompt text stored in a variable
const PROMPT_TEXT = "Write a summary of the following text: {{text}}";
// Create a prompt
const prompt = new Prompt({
name: "prompt-summary",
prompt: PROMPT_TEXT,
metadata: { environment: "production" },
projectName: "my-project",
});
// Print the prompt text
console.log(prompt.prompt);
// Build the prompt
console.log(prompt.format({ text: "Hello, world!" }));
```
</Tab>
Prompts can also be stored in a file and loaded from it:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
# Read the prompt from a file
with open("prompt.txt", "r") as f:
prompt_text = f.read()
prompt = opik.Prompt(name="prompt-summary", prompt=prompt_text, project_name="my-project")
# Print the prompt text
print(prompt.prompt)
# Build the prompt
print(prompt.format(text="Hello, world!"))
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { Prompt } from "opik";
import { readFileSync } from "fs";
// Read the prompt from a file
const promptText = readFileSync("prompt.txt", "utf-8");
const prompt = new Prompt({
name: "prompt-summary",
prompt: promptText,
projectName: "my-project",
});
// Print the prompt text
console.log(prompt.prompt);
// Build the prompt
console.log(prompt.format({ text: "Hello, world!" }));
```
</Tab>
The prompt will now be stored in the library and versioned:
<Frame> </Frame>
<Tip> The [`Prompt`](https://www.comet.com/docs/opik/python-sdk-reference/library/Prompt.html) object will create a new prompt in the library if this prompt doesn't already exist; otherwise it will return the existing prompt. This means you can safely run the above code multiple times without creating duplicate prompts.
</Tip>
If you would rather keep text prompts in the Opik platform and manually update/download them, you can use the low-level Python SDK to manage your prompts.
<Note> **When to use client methods vs. classes:**
- Use the `Prompt()` class (recommended): for most use cases, this class automatically uses the global Opik configuration set by `opik.configure()`.
- Use `client.create_prompt()`: when you need a specific client configuration that differs from the global configuration (e.g., a different workspace, host, or API key).
</Note>
You can create a new prompt in the library using both the SDK and the UI:
<Tabs> <Tab value="Python SDK" title="Python SDK"> ```python
import opik

opik.configure()
client = opik.Opik()
# Create a new prompt
prompt = client.create_prompt(
    name="prompt-summary",
    prompt="Write a summary of the following text: {{text}}",
    metadata={"environment": "development"},
    project_name="my-project",
)
```
</Tab>
<Tab value="TypeScript SDK" title="TypeScript SDK">
```typescript
import { Prompt } from "opik";
// Create a new prompt
const prompt = new Prompt({
name: "prompt-summary",
prompt: "Write a summary of the following text: {{text}}",
metadata: { environment: "development" },
projectName: "my-project",
});
```
</Tab>
<Tab value="Using the UI" title="Using the UI">
You can create a prompt in the UI by navigating to the Prompt library and clicking `Create new prompt`. This will open a dialog where you can enter the prompt name, the prompt text, and optionally a description:
<Frame> </Frame>
You can also edit a prompt by clicking on the prompt name in the library and clicking `Edit prompt`.
</Tab>
You can associate prompts with your traces and spans using the `opik_context` module. This is useful when you want to track which prompts were used during the execution of your functions:
<Tabs> <Tab value="Adding prompts to traces" title="Adding prompts to traces"> ```python
import opik
from opik.opik_context import update_current_trace
# Create prompts
system_prompt = opik.Prompt(
name="system-prompt",
prompt="You are a helpful assistant that provides accurate and concise answers.",
project_name="my-project"
)
# Get prompt from the Prompt library
client = opik.Opik()
user_prompt = client.get_prompt(name="user-prompt")
@opik.track
def process_user_query(question: str) -> str:
# Add prompts to the current trace
update_current_trace(
name="user-query-processing",
prompts=[system_prompt, user_prompt],
metadata={"query_type": "general"}
)
# Your processing logic here
formatted_prompt = user_prompt.format(question=question)
# ... rest of your function
return "Response to: " + question
```
</Tab>
<Tab value="Adding prompts to spans" title="Adding prompts to spans">
```python
import opik
from opik.opik_context import update_current_span
# Create a prompt for a specific operation
analysis_prompt = opik.Prompt(
name="text-analysis-prompt",
prompt="Analyze the sentiment of the following text: {{text}}",
project_name="my-project"
)
@opik.track
def analyze_sentiment(text: str) -> str:
# Add prompt to the current span
update_current_span(
name="sentiment-analysis",
prompts=[analysis_prompt],
metadata={"analysis_type": "sentiment"}
)
# Your analysis logic here
formatted_prompt = analysis_prompt.format(text=text)
# ... rest of your function
return "Positive" # example result
```
</Tab>
<Tab value="Combined usage" title="Combined usage">
```python
import opik
from typing import Any, Dict
from opik.opik_context import update_current_trace, update_current_span
# Create different prompts for different purposes
main_prompt = opik.Prompt(
name="main-processing-prompt",
prompt="Process the following data: {{data}}",
project_name="my-project"
)
validation_prompt = opik.Prompt(
name="validation-prompt",
prompt="Validate this result: {{result}}",
project_name="my-project"
)
@opik.track
def validate_result(result: Dict[str, Any]) -> str:
# Add validation prompt to span level
update_current_span(
name="result-validation",
prompts=[validation_prompt],
metadata={"validation_type": "result_check"}
)
# ... validation logic
return "Valid" # example result
@opik.track
def complex_processing(data: str) -> str:
# Add main prompt to trace level
update_current_trace(
name="complex-data-processing",
prompts=[main_prompt],
metadata={"processing_type": "complex"}
)
    # Process the data (process_data is defined elsewhere in your application)
    result = process_data(data)
# Validate the result
validated_result = validate_result(result)
return validated_result
complex_processing("My data")
```
</Tab>
You can view the prompts associated with a trace or span in the Opik UI:
<Frame> </Frame>
Further details on using prompts from the Prompt library are provided in the following sections.
Prompts can be used in all supported third-party integrations by attaching them to traces and spans through the `opik_context` module.
For instance, you can use prompts with the Google ADK integration, as shown in the example here.
Once a prompt is created in the library, you can download it in code:
<Tabs> <Tab value="Python" title="Python"> Use the [`Opik.get_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt) method: ```python
import opik
opik.configure()
client = opik.Opik()
# Get a dataset
dataset = client.get_or_create_dataset("test_dataset", project_name="my-project")
# Get the prompt
prompt = client.get_prompt(name="prompt-summary")
# Create the prompt message
prompt.format(text="Hello, world!")
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
Use the `getPrompt` method:
```typescript
import { Opik } from "opik";
const client = new Opik();
// Get the prompt
const prompt = await client.getPrompt({ name: "prompt-summary" });
if (prompt) {
// Format the prompt
const formatted = prompt.format({ text: "Hello, world!" });
console.log(formatted);
}
```
</Tab>
If you are not using the SDK, you can download a prompt by using the REST API.
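If you call the REST API directly, you build the request yourself. The sketch below only assembles a URL and headers for a prompt-retrieval request; the exact endpoint path, query parameter, and header names here are assumptions for illustration — consult the Opik REST API reference for the real route and authentication scheme.

```python
import urllib.parse

# Hypothetical base URL and endpoint path — verify against the REST API reference.
BASE_URL = "https://www.comet.com/opik/api"


def build_prompt_request(name: str, api_key: str) -> tuple[str, dict]:
    """Build the URL and headers for retrieving a prompt by name."""
    query = urllib.parse.urlencode({"name": name})
    url = f"{BASE_URL}/v1/private/prompts?{query}"
    # Header names are illustrative placeholders.
    headers = {"authorization": api_key, "Comet-Workspace": "my-workspace"}
    return url, headers


url, headers = build_prompt_request("prompt-summary", api_key="MY_API_KEY")
print(url)
```

The same pattern applies to any HTTP client; only the endpoint and auth details change.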
To discover prompts by name substring and/or filters, use `search_prompts`. Filters are written in Opik Query Language (OQL):
<Tabs> <Tab value="Python" title="Python">
```python
import opik
client = opik.Opik()
# Search by name substring only
latest_versions = client.search_prompts(
filter_string='name contains "summary"'
)
# Search by name substring and tags filter
filtered = client.search_prompts(
filter_string='name contains "summary" AND tags contains "alpha" AND tags contains "beta"',
)
# Search for only text prompts
text_prompts = client.search_prompts(
filter_string='template_structure = "text"'
)
for prompt in filtered:
print(prompt.name, prompt.commit, prompt.prompt)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
Use `searchPrompts`:
```typescript
import { Opik } from "opik";
const client = new Opik();
// Search by name substring only
const latestVersions = await client.searchPrompts(
'name contains "summary"'
);
// Search by name substring and tags filter
const filtered = await client.searchPrompts(
'name contains "summary" AND tags contains "alpha" AND tags contains "beta"'
);
// Search for only text prompts
const textPrompts = await client.searchPrompts(
'template_structure = "text"'
);
for (const prompt of filtered) {
console.log(prompt.name, prompt.commit, prompt.prompt);
}
```
</Tab>
You can filter by `template_structure` to search for only text prompts (`"text"`) or only chat prompts (`"chat"`). Without the filter, `search_prompts` returns both types.
The `filter_string` parameter uses Opik Query Language (OQL) with the format:
`"<COLUMN> <OPERATOR> <VALUE> [AND <COLUMN> <OPERATOR> <VALUE>]*"`
Supported columns for prompts:
| Column | Type | Operators |
|---|---|---|
| `id` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `name` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `created_by` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `tags` | List | `contains` |
| `template_structure` | String | `=`, `!=` |
Examples:
- `tags contains "production"` - Filter by tag
- `name contains "summary"` - Filter by name substring
- `created_by = "[email protected]"` - Filter by creator
- `tags contains "alpha" AND tags contains "beta"` - Multiple tag filtering
- `template_structure = "text"` - Filter for only text prompts
- `template_structure = "chat"` - Filter for only chat prompts

`search_prompts` returns the latest version for each matching prompt.
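OQL filter strings are plain strings, so if you assemble them programmatically, a small helper keeps the quoting consistent. The helpers below are illustrative only and not part of the Opik SDK:

```python
def oql(column: str, operator: str, value: str) -> str:
    """Render a single OQL condition with a double-quoted value."""
    return f'{column} {operator} "{value}"'


def oql_and(*conditions: str) -> str:
    """Join OQL conditions with AND."""
    return " AND ".join(conditions)


filter_string = oql_and(
    oql("name", "contains", "summary"),
    oql("tags", "contains", "alpha"),
    oql("template_structure", "=", "text"),
)
print(filter_string)
# → name contains "summary" AND tags contains "alpha" AND template_structure = "text"
```

The resulting string can be passed directly as `filter_string` to `search_prompts`.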
Chat prompts are structured message-based templates designed for conversational AI applications. They support multiple message roles (system, user, assistant) and multimodal content including text, images, and videos.
Chat prompt templates support variable substitution using Mustache syntax (`{{variable}}`) or Jinja2 syntax. Similar to text prompts, you can create and manage chat prompts using the `ChatPrompt` class. This allows you to version your chat prompts in code while benefiting from centralized management in the Opik platform.
<Tabs> <Tab value="Python" title="Python">
```python
import opik
# Define chat messages with variables
messages = [
{"role": "system", "content": "You are a helpful assistant specializing in {{domain}}."},
{"role": "user", "content": "Explain {{topic}} in simple terms."}
]
# Create a chat prompt
chat_prompt = opik.ChatPrompt(
name="educational-assistant",
messages=messages,
metadata={"category": "education"},
project_name="my-project"
)
# Format the prompt with variables
formatted_messages = chat_prompt.format(
variables={
"domain": "physics",
"topic": "quantum entanglement"
}
)
# Use formatted messages with your LLM
print(formatted_messages)
# Output:
# [
# {"role": "system", "content": "You are a helpful assistant specializing in physics."},
# {"role": "user", "content": "Explain quantum entanglement in simple terms."}
# ]
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
// Define chat messages with variables
const messages = [
{
role: "system",
content: "You are a helpful assistant specializing in {{domain}}.",
},
{ role: "user", content: "Explain {{topic}} in simple terms." },
];
// Create a chat prompt
const chatPrompt = new ChatPrompt({
name: "educational-assistant",
messages: messages,
metadata: { category: "education" },
projectName: "my-project",
});
// Format the prompt with variables
const formattedMessages = chatPrompt.format({
domain: "physics",
topic: "quantum entanglement",
});
// Use formatted messages with your LLM
console.log(formattedMessages);
// Output:
// [
// { role: "system", content: "You are a helpful assistant specializing in physics." },
// { role: "user", content: "Explain quantum entanglement in simple terms." }
// ]
```
</Tab>
Chat prompts work seamlessly with OpenAI's chat completion API:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
from openai import OpenAI
# Define chat messages with variables
messages = [
{"role": "system", "content": "You are a helpful assistant specializing in {{domain}}."},
{"role": "user", "content": "Explain {{topic}} in simple terms."}
]
# Create a chat prompt
chat_prompt = opik.ChatPrompt(
name="educational-assistant",
messages=messages,
metadata={"category": "education"},
project_name="my-project"
)
# Format the prompt with variables
formatted_messages = chat_prompt.format(
variables={
"domain": "physics",
"topic": "quantum entanglement"
}
)
# Use with OpenAI API
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4",
messages=formatted_messages
)
print(response.choices[0].message.content)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
import OpenAI from "openai";
const openai = new OpenAI();
// Define chat messages with variables
const messages = [
{
role: "system",
content: "You are a helpful assistant specializing in {{domain}}.",
},
{ role: "user", content: "Explain {{topic}} in simple terms." },
];
// Create a chat prompt
const chatPrompt = new ChatPrompt({
name: "educational-assistant",
messages: messages,
metadata: { category: "education" },
projectName: "my-project",
});
// Format the prompt with variables
const formattedMessages = chatPrompt.format({
domain: "physics",
topic: "quantum entanglement",
});
// Use with OpenAI API
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: formattedMessages,
});
console.log(response.choices[0].message.content);
```
</Tab>
You can create templates for multi-turn conversations:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
# Define a multi-turn conversation template
messages = [
{"role": "system", "content": "You are a customer support agent for {{company}}."},
{"role": "user", "content": "I have an issue with {{product}}."},
{"role": "assistant", "content": "I'd be happy to help with your {{product}}. Can you describe the issue?"},
{"role": "user", "content": "{{issue_description}}"}
]
chat_prompt = opik.ChatPrompt(
name="customer-support-flow",
messages=messages,
project_name="my-project"
)
# Format with specific values
formatted = chat_prompt.format(
variables={
"company": "Acme Corp",
"product": "Widget Pro",
"issue_description": "It won't turn on"
}
)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
// Define a multi-turn conversation template
const messages = [
{
role: "system",
content: "You are a customer support agent for {{company}}.",
},
{ role: "user", content: "I have an issue with {{product}}." },
{
role: "assistant",
content:
"I'd be happy to help with your {{product}}. Can you describe the issue?",
},
{ role: "user", content: "{{issue_description}}" },
];
const chatPrompt = new ChatPrompt({
name: "customer-support-flow",
messages: messages,
projectName: "my-project",
});
// Format with specific values
const formatted = chatPrompt.format({
company: "Acme Corp",
product: "Widget Pro",
issue_description: "It won't turn on",
});
```
</Tab>
The chat prompt will now be stored in the library and versioned, just like text prompts.
<Tip> The [`ChatPrompt`](https://www.comet.com/docs/opik/python-sdk-reference/library/ChatPrompt.html) class will create a new chat prompt in the library if it doesn't already exist; otherwise it will return the existing prompt. This means you can safely run the code multiple times without creating duplicate prompts. </Tip>
Once created, you can view and manage your chat prompts in the Opik UI:
<Frame> </Frame>
Chat prompts support multimodal content, allowing you to include images and videos alongside text. This is useful for vision-enabled models.
<Tabs> <Tab value="Python" title="Python">
```python
import opik
# Chat prompt with image content
messages = [
{"role": "system", "content": "You analyze images and provide detailed descriptions."},
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image of {{subject}}?"},
{
"type": "image_url",
"image_url": {
"url": "{{image_url}}",
"detail": "high"
}
}
]
}
]
chat_prompt = opik.ChatPrompt(
name="image-analyzer",
messages=messages,
project_name="my-project"
)
# Format with variables
formatted = chat_prompt.format(
variables={
"subject": "a sunset",
"image_url": "https://example.com/sunset.jpg"
},
supported_modalities={"vision": True}
)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
// Chat prompt with image content
const messages = [
{
role: "system",
content: "You analyze images and provide detailed descriptions.",
},
{
role: "user",
content: [
{ type: "text", text: "What's in this image of {{subject}}?" },
{
type: "image_url",
image_url: {
url: "{{image_url}}",
detail: "high",
},
},
],
},
];
const chatPrompt = new ChatPrompt({
name: "image-analyzer",
messages: messages,
projectName: "my-project",
});
// Format with variables
const formatted = chatPrompt.format(
{
subject: "a sunset",
image_url: "https://example.com/sunset.jpg",
},
{ vision: true }
);
```
</Tab>
Chat prompts can also include video content:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
# Chat prompt with video content
messages = [
{"role": "system", "content": "You analyze videos and provide insights."},
{
"role": "user",
"content": [
{"type": "text", "text": "Analyze this video: {{description}}"},
{
"type": "video_url",
"video_url": {
"url": "{{video_url}}",
"mime_type": "video/mp4"
}
}
]
}
]
chat_prompt = opik.ChatPrompt(
name="video-analyzer",
messages=messages,
project_name="my-project"
)
# Format with variables
formatted = chat_prompt.format(
variables={
"description": "traffic analysis",
"video_url": "https://example.com/traffic.mp4"
},
supported_modalities={"vision": True}
)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
// Chat prompt with video content
const messages = [
{
role: "system",
content: "You analyze videos and provide insights.",
},
{
role: "user",
content: [
{ type: "text", text: "Analyze this video: {{description}}" },
{
type: "video_url",
video_url: {
url: "{{video_url}}",
mime_type: "video/mp4",
},
},
],
},
];
const chatPrompt = new ChatPrompt({
name: "video-analyzer",
messages: messages,
projectName: "my-project",
});
// Format with variables
const formatted = chatPrompt.format(
{
description: "traffic analysis",
video_url: "https://example.com/traffic.mp4",
},
{ vision: true, video: true }
);
```
</Tab>
You can combine multiple types of content in a single message:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
# Chat prompt with multiple images and text
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "Compare these two images:"},
{
"type": "image_url",
"image_url": {"url": "{{image1_url}}"}
},
{"type": "text", "text": "and"},
{
"type": "image_url",
"image_url": {"url": "{{image2_url}}"}
},
{"type": "text", "text": "What are the main differences?"}
]
}
]
chat_prompt = opik.ChatPrompt(
name="image-comparison",
messages=messages,
project_name="my-project"
)
formatted = chat_prompt.format(
variables={
"image1_url": "https://example.com/before.jpg",
"image2_url": "https://example.com/after.jpg"
},
supported_modalities={"vision": True}
)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt } from "opik";
// Chat prompt with multiple images and text
const messages = [
{
role: "user",
content: [
{ type: "text", text: "Compare these two images:" },
{
type: "image_url",
image_url: { url: "{{image1_url}}" },
},
{ type: "text", text: "and" },
{
type: "image_url",
image_url: { url: "{{image2_url}}" },
},
{ type: "text", text: "What are the main differences?" },
],
},
];
const chatPrompt = new ChatPrompt({
name: "image-comparison",
messages: messages,
projectName: "my-project",
});
const formatted = chatPrompt.format(
{
image1_url: "https://example.com/before.jpg",
image2_url: "https://example.com/after.jpg",
},
{ vision: true }
);
```
</Tab>
<Note> When you format a chat prompt and a modality is marked as supported (e.g. `{"vision": True}`), the structured content is preserved. When a modality is not supported, the media content is replaced with text placeholders (e.g. `<<<image>>>...<<</image>>>`). This allows you to use the same prompt template with different models that may or may not support certain modalities. </Note>
You can also use the low-level Python SDK to create and manage chat prompts directly.
<Note> **When to use client methods vs. classes:**
- Use the `ChatPrompt()` or `Prompt()` classes (recommended): for most use cases, these classes automatically use the global Opik configuration set by `opik.configure()`.
- Use `client.create_chat_prompt()` or `client.create_prompt()`: when you need a specific client configuration that differs from the global configuration (e.g., a different workspace, host, or API key).
</Note>
<Tabs> <Tab value="Python SDK" title="Python SDK">
```python
import opik
opik.configure()
client = opik.Opik()
# Create a new chat prompt
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, {{name}}!"}
]
chat_prompt = client.create_chat_prompt(
name="greeting-prompt",
messages=messages,
metadata={"environment": "development"},
project_name="my-project"
)
# Format and use the prompt
formatted = chat_prompt.format(variables={"name": "Alice"})
print(formatted)
```
</Tab>
<Tab value="TypeScript SDK" title="TypeScript SDK">
Use `ChatPrompt` to create a chat prompt:
```typescript
import { ChatPrompt } from "opik";
// Create a new chat prompt
const messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Hello, {{name}}!" },
];
const chatPrompt = new ChatPrompt({
name: "greeting-prompt",
messages: messages,
metadata: { environment: "development" },
projectName: "my-project",
});
// Format and use the prompt
const formatted = chatPrompt.format({ name: "Alice" });
console.log(formatted);
```
</Tab>
<Tab value="Using the UI" title="Using the UI">
You can create a chat prompt in the UI by navigating to the Prompt library and clicking `Create new prompt`. Select "Chat prompt" as the prompt type, then enter the prompt name, add your chat messages with different roles (system, user, assistant), and optionally add metadata.
<Frame>
</Frame>
You can also edit a chat prompt by clicking on the prompt name in the library and clicking `Edit prompt`.
</Tab>
Once a chat prompt is created in the library, you can download it in code:
<Tabs> <Tab value="Python" title="Python"> Use the [`Opik.get_chat_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_chat_prompt) method: ```python
import opik
opik.configure()
client = opik.Opik()
# Get a chat prompt
chat_prompt = client.get_chat_prompt(name="greeting-prompt")
# Format the messages
formatted_messages = chat_prompt.format(variables={"name": "Bob"})
# Use with your LLM
# response = llm.chat(messages=formatted_messages)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
Use the `getChatPrompt` method:
```typescript
import { Opik } from "opik";
const client = new Opik();
// Get a chat prompt
const chatPrompt = await client.getChatPrompt({ name: "greeting-prompt" });
if (chatPrompt) {
// Format the messages
const formattedMessages = chatPrompt.format({ name: "Bob" });
// Use with your LLM
// const response = await llm.chat(formattedMessages);
}
```
</Tab>
You can search for chat prompts specifically by using the `template_structure` filter:
<Tabs> <Tab value="Python" title="Python">
```python
import opik
client = opik.Opik()
# Search for only chat prompts
chat_prompts = client.search_prompts(
filter_string='template_structure = "chat" AND name contains "assistant"'
)
for prompt in chat_prompts:
print(f"Chat prompt: {prompt.name}")
print(f"Messages: {prompt.template}")
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
Use `searchPrompts` with the filter:
```typescript
import { Opik } from "opik";
const client = new Opik();
// Search for only chat prompts
const chatPrompts = await client.searchPrompts(
'template_structure = "chat" AND name contains "assistant"'
);
for (const prompt of chatPrompts) {
console.log(`Chat prompt: ${prompt.name}`);
console.log(`Messages: ${prompt.messages}`);
}
```
</Tab>
To search for text prompts only, use `template_structure = "text"`. Without the filter, `search_prompts` returns both text and chat prompts.
The `filter_string` parameter uses Opik Query Language (OQL) and supports the same columns and operators as text prompts (see Searching prompts above).
Chat prompts support two template types for variable substitution.
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
from opik.api_objects.prompt import PromptType
messages = [
{"role": "user", "content": "Hello {{name}}, you live in {{city}}."}
]
chat_prompt = opik.ChatPrompt(
name="mustache-example",
messages=messages,
type=PromptType.MUSTACHE, # Default
project_name="my-project"
)
formatted = chat_prompt.format(variables={"name": "Alice", "city": "Paris"})
# Result: [{"role": "user", "content": "Hello Alice, you live in Paris."}]
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt, PromptType } from "opik";
const messages = [
{ role: "user", content: "Hello {{name}}, you live in {{city}}." },
];
const chatPrompt = new ChatPrompt({
name: "mustache-example",
messages: messages,
type: PromptType.MUSTACHE, // Default
projectName: "my-project",
});
const formatted = chatPrompt.format({ name: "Alice", city: "Paris" });
// Result: [{ role: "user", content: "Hello Alice, you live in Paris." }]
```
</Tab>
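Conceptually, Mustache-style substitution replaces each `{{variable}}` placeholder with the supplied value. The standalone sketch below illustrates that behavior with the standard library only; it is not the Opik SDK's implementation:

```python
import re


def render_mustache(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`.

    Unknown placeholders are left untouched rather than raising an error.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))

    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", substitute, template)


result = render_mustache(
    "Hello {{name}}, you live in {{city}}.",
    {"name": "Alice", "city": "Paris"},
)
print(result)
# → Hello Alice, you live in Paris.
```

`ChatPrompt.format` applies the same kind of substitution to every message's content.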
For more advanced templating with conditionals, loops, and filters:
<Tabs> <Tab value="Python" title="Python"> ```python
import opik
from opik.api_objects.prompt import PromptType

messages = [
{
"role": "user",
"content": """
{% if is_premium %}
Hello {{ name }}, welcome to our premium service!
{% else %}
Hello {{ name }}, welcome!
{% endif %}
"""
}
]
chat_prompt = opik.ChatPrompt(
name="jinja-example",
messages=messages,
type=PromptType.JINJA2,
project_name="my-project"
)
# With premium user
formatted = chat_prompt.format(
variables={"name": "Alice", "is_premium": True}
)
# Result includes: "Hello Alice, welcome to our premium service!"
# With regular user
formatted = chat_prompt.format(
variables={"name": "Bob", "is_premium": False}
)
# Result includes: "Hello Bob, welcome!"
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt, PromptType } from "opik";
const messages = [
{
role: "user",
content: `
{% if is_premium %}
Hello {{ name }}, welcome to our premium service!
{% else %}
Hello {{ name }}, welcome!
{% endif %}
`,
},
];
const chatPrompt = new ChatPrompt({
name: "jinja-example",
messages: messages,
type: PromptType.JINJA2,
projectName: "my-project",
});
// With premium user
const formatted1 = chatPrompt.format({
name: "Alice",
is_premium: true,
});
// Result includes: "Hello Alice, welcome to our premium service!"
// With regular user
const formatted2 = chatPrompt.format({
name: "Bob",
is_premium: false,
});
// Result includes: "Hello Bob, welcome!"
```
</Tab>
Chat prompts are automatically versioned when the messages change. Each version has a unique commit ID.
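Commit IDs behave like content-derived identifiers: unchanged messages keep the same version, while any edit produces a new one. The toy sketch below illustrates that idea with a content hash; it is an assumption for illustration, not the actual commit scheme Opik uses:

```python
import hashlib
import json


def toy_commit(messages: list) -> str:
    """Derive a short deterministic id from message content (illustration only)."""
    payload = json.dumps(messages, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:8]


v1 = [{"role": "system", "content": "You are helpful."}]
v2 = [{"role": "system", "content": "You are very helpful."}]

print(toy_commit(v1) == toy_commit(v1))  # identical content → identical id
print(toy_commit(v1) == toy_commit(v2))  # changed content → different id
```

This is why re-running creation code with unchanged messages does not produce a new version.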
<Tabs> <Tab value="Python" title="Python"> ```python
import opik

# Create initial version
messages_v1 = [
{"role": "system", "content": "You are helpful."},
{"role": "user", "content": "Hi!"}
]
chat_prompt_v1 = opik.ChatPrompt(
name="assistant-prompt",
messages=messages_v1,
project_name="my-project"
)
print(f"Version 1 commit: {chat_prompt_v1.commit}")
# Create new version with different messages
messages_v2 = [
{"role": "system", "content": "You are very helpful."},
{"role": "user", "content": "Hello there!"},
{"role": "assistant", "content": "How can I assist you?"}
]
chat_prompt_v2 = opik.ChatPrompt(
name="assistant-prompt",
messages=messages_v2,
project_name="my-project"
)
print(f"Version 2 commit: {chat_prompt_v2.commit}")
# Get specific version by commit
client = opik.Opik()
specific_version = client.get_chat_prompt(
name="assistant-prompt",
commit=chat_prompt_v1.commit
)
# Get version history
history = client.get_chat_prompt_history(name="assistant-prompt")
print(f"Total versions: {len(history)}")
```
You can use [`get_chat_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_chat_prompt) to retrieve a specific version by commit ID, and [`get_chat_prompt_history`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_chat_prompt_history) to get all versions.
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { ChatPrompt, Opik } from "opik";
// Create initial version
const messagesV1 = [
{ role: "system", content: "You are helpful." },
{ role: "user", content: "Hi!" },
];
const chatPromptV1 = new ChatPrompt({
name: "assistant-prompt",
messages: messagesV1,
projectName: "my-project",
});
console.log(`Version 1 commit: ${chatPromptV1.commit}`);
// Create new version with different messages
const messagesV2 = [
{ role: "system", content: "You are very helpful." },
{ role: "user", content: "Hello there!" },
{ role: "assistant", content: "How can I assist you?" },
];
const chatPromptV2 = new ChatPrompt({
name: "assistant-prompt",
messages: messagesV2,
projectName: "my-project",
});
console.log(`Version 2 commit: ${chatPromptV2.commit}`);
// Get specific version by commit
const client = new Opik();
const specificVersion = await client.getChatPrompt({
name: "assistant-prompt",
commit: chatPromptV1.commit,
});
// Get version history
const versions = await chatPromptV1.getVersions();
console.log(`Total versions: ${versions.length}`);
```
You can use `getChatPrompt` with a commit parameter to retrieve a specific version, and the `getVersions()` method to get all versions.
</Tab>
Once you create a prompt with a specific structure (text or chat), that structure cannot be changed. This ensures consistency and prevents accidental mixing of prompt types.
```python
import opik

# Create a chat prompt
chat_prompt = opik.ChatPrompt(
    name="my-prompt",
    messages=[{"role": "user", "content": "Hello"}],
    project_name="my-project"
)

# Attempting to create a text prompt with the same name will raise an error
try:
    text_prompt = opik.Prompt(
        name="my-prompt",
        prompt="Hello {{name}}",
        project_name="my-project"
    )
except opik.exceptions.PromptTemplateStructureMismatch:
    print("Error: Cannot change prompt structure from chat to text")
```
Similarly, if you create a text prompt first, you cannot later create a chat prompt with the same name.
Both `Prompt` and `ChatPrompt` classes will raise a `PromptTemplateStructureMismatch` exception if you attempt to change the structure of an existing prompt.
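The structure lock can be pictured as a registry keyed by prompt name that records each prompt's structure on first creation and rejects any later mismatch. The sketch below is a toy illustration of that rule, not Opik's implementation:

```python
class PromptStructureMismatch(Exception):
    """Raised when a prompt name is reused with a different structure."""


class ToyPromptRegistry:
    def __init__(self) -> None:
        self._structures: dict = {}

    def register(self, name: str, structure: str) -> None:
        # Record the structure on first use; reject any later mismatch.
        existing = self._structures.setdefault(name, structure)
        if existing != structure:
            raise PromptStructureMismatch(
                f"'{name}' is a {existing} prompt; cannot recreate it as {structure}"
            )


registry = ToyPromptRegistry()
registry.register("my-prompt", "chat")   # first creation fixes the structure
try:
    registry.register("my-prompt", "text")
except PromptStructureMismatch as exc:
    print(exc)
```

Re-registering with the same structure is a no-op, which mirrors how re-running creation code returns the existing prompt.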
<Tab value="Text prompts" title="Text prompts">
Use [`get_prompt_history`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt_history) for text prompts:
```python maxLines=1000
import opik
opik.configure()
client = opik.Opik()
# Get the complete version history for a text prompt
prompt_history = client.get_prompt_history(name="prompt-summary")
# Iterate through all versions
for version in prompt_history:
print(f"Name: {version.name}")
print(f"Commit: {version.commit}")
print(f"Prompt text: {version.prompt}")
print(f"Metadata: {version.metadata}")
print(f"Type: {version.type}")
print("-" * 50)
```
This returns a list of `Prompt` objects (each representing a specific version) for the given prompt name.
</Tab>
<Tab value="Chat prompts" title="Chat prompts">
Use [`get_chat_prompt_history`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_chat_prompt_history) for chat prompts:
```python maxLines=1000
import opik
opik.configure()
client = opik.Opik()
# Get the complete version history for a chat prompt
chat_prompt_history = client.get_chat_prompt_history(name="assistant-prompt")
# Iterate through all versions
for version in chat_prompt_history:
print(f"Name: {version.name}")
print(f"Commit: {version.commit}")
print(f"Messages: {version.template}")
print(f"Metadata: {version.metadata}")
print(f"Type: {version.type}")
print("-" * 50)
```
This returns a list of `ChatPrompt` objects (each representing a specific version) for the given prompt name.
</Tab>
You can use this information to retrieve and pin a specific prompt version by its commit ID:
<Tab value="Text prompts" title="Text prompts">
Use the `commit` parameter with [`get_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt):
```python maxLines=1000
import opik
opik.configure()
client = opik.Opik()
# Get a specific version of a text prompt by commit ID
prompt = client.get_prompt(name="prompt-summary", commit="abc123def456")
# Use the prompt in your application
formatted_prompt = prompt.format(text="Hello, world!")
print(formatted_prompt)
```
</Tab>
<Tab value="Chat prompts" title="Chat prompts">
Use the `commit` parameter with [`get_chat_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_chat_prompt):
```python maxLines=1000
import opik
opik.configure()
client = opik.Opik()
# Get a specific version of a chat prompt by commit ID
chat_prompt = client.get_chat_prompt(name="assistant-prompt", commit="abc123def456")
# Use the prompt in your application
formatted_messages = chat_prompt.format(variables={"name": "Alice"})
print(formatted_messages)
```
</Tab>
The `commit` parameter accepts the commit ID of the specific prompt version you want to retrieve. You can find commit IDs in the prompt history in the Opik UI or by using the `get_prompt_history` or `get_chat_prompt_history` methods (see above).
This is particularly useful when you want to pin a known-good version in production or reproduce the exact prompt used in an earlier experiment.
Opik supports tags at two levels, which serve different purposes:
- **Prompt tags** are set via the `tags` parameter when creating or updating a prompt. These are used for searching and organizing prompts and are covered in the Searching prompts section.
- **Prompt version tags** are set via the `batch_update_prompt_version_tags` / `updatePromptVersionTags` API after creating the version. These allow you to label individual milestones (e.g., "production", "stable", "deprecated") and filter version history.

This section covers prompt version tags (the per-version ones).
When you open a prompt from the library, the prompt page shows its full commit history — each version listed as a row with its commit ID, creation date, and any tags assigned to it:
<Frame> </Frame>
The commits table gives you a detailed view of all versions for a prompt. Each row shows the commit ID, template preview, change description, author, creation date, and the tags attached to that version. You can add or remove tags directly from this table by clicking on the tag editor for any row:
<Frame> </Frame>
The commits table supports filtering so you can quickly find versions by tag, template content, author, or other fields. Use the filter controls above the table to apply one or more filters:
<Frame> </Frame>
When selecting a version to compare, the dropdown lists all versions with their tags displayed inline, making it easy to identify the right candidate without opening each version individually:
<Frame> </Frame>
Once two versions are selected, their tags are shown alongside the commit ID and change description in the comparison dialog:
<Frame> </Frame>
Use the batch update API to efficiently update tags on one or more prompt versions at once. This is especially useful when you want to promote a version to production or clean up tags after a release.
There are two update modes: **replace** (the default), which overwrites all existing tags on the version, and **merge**, which adds the new tags to the existing ones.
<Tab value="Python" title="Python">
Use `batch_update_prompt_version_tags()` on the prompts client:
```python
import opik
client = opik.Opik()
# Get version IDs from the prompt history
history = client.get_prompt_history(name="summarizer")
version_id = history[0].version_id # latest version
prompts_client = client.get_prompts_client()
# Replace all tags (default behavior)
prompts_client.batch_update_prompt_version_tags(
version_ids=[version_id],
tags=["production", "v2"]
)
# Merge new tags with existing tags
prompts_client.batch_update_prompt_version_tags(
version_ids=[version_id],
tags=["hotfix"],
merge=True
)
# Clear all tags by passing an empty list
prompts_client.batch_update_prompt_version_tags(
version_ids=[version_id],
tags=[]
)
# Update multiple versions at once
version_ids = [v.version_id for v in history[:3]]
prompts_client.batch_update_prompt_version_tags(
version_ids=version_ids,
tags=["archived"]
)
```
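The two modes can be pictured with a small sketch (illustrative only, not the SDK implementation): replace discards the version's existing tags, while merge unions the new tags with them.

```python
def apply_tag_update(existing_tags, new_tags, merge=False):
    """Illustrative model of the tag-update semantics (not part of the
    Opik SDK): replace mode discards the old tags, merge mode keeps them."""
    if merge:
        # Union with the existing tags; sorted here only for determinism
        return sorted(set(existing_tags) | set(new_tags))
    # Replace mode: the new list becomes the full tag set
    return list(new_tags)

print(apply_tag_update(["production", "v2"], ["hotfix"], merge=True))
# → ['hotfix', 'production', 'v2']
print(apply_tag_update(["production", "v2"], []))
# → []
```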
</Tab>
<Tab value="TypeScript" title="TypeScript">
Use `client.updatePromptVersionTags()`:
```typescript
import { Opik } from "opik";
const client = new Opik();
// Get version IDs from the prompt history
const prompt = await client.getPrompt({ name: "summarizer" });
const versions = await prompt.getVersions();
const versionId = versions[0].id; // latest version
// Replace all tags (default behavior)
await client.updatePromptVersionTags([versionId], {
tags: ["production", "v2"],
});
// Merge new tags with existing tags
await client.updatePromptVersionTags([versionId], {
tags: ["hotfix"],
mergeTags: true,
});
// Clear all tags by passing an empty array
await client.updatePromptVersionTags([versionId], {
tags: [],
});
// Update multiple versions at once
const versionIds = versions.slice(0, 3).map((v) => v.id);
await client.updatePromptVersionTags(versionIds, {
tags: ["archived"],
});
```
</Tab>
You can search, filter, and sort prompt version history to find specific versions.
The `search` parameter performs a free-text match against the template content and change description fields:
<Tab value="Python" title="Python">
```python
import opik

client = opik.Opik()
# Find versions whose template or change description contains "customer"
results = client.get_prompt_history(name="summarizer", search="customer")
# Works the same for chat prompts
chat_results = client.get_chat_prompt_history(name="assistant", search="helpful")
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { Opik } from "opik";
const client = new Opik();
const prompt = await client.getPrompt({ name: "summarizer" });
// Find versions whose template or change description contains "customer"
const results = await prompt.getVersions({ search: "customer" });
```
</Tab>
Use `filter_string` (Python OQL) or `filters` (a TypeScript JSON array) to narrow results by any version field.
<Tab value="Python" title="Python">
```python
import opik

client = opik.Opik()
# Filter by tag
production_versions = client.get_prompt_history(
name="summarizer",
filter_string='tags contains "production"'
)
# Filter by multiple tags (AND logic)
stable_production = client.get_prompt_history(
name="summarizer",
filter_string='tags contains "production" AND tags contains "stable"'
)
# Filter by template content
customer_versions = client.get_prompt_history(
name="summarizer",
filter_string='template contains "customer"'
)
# Filter by creator
my_versions = client.get_prompt_history(
name="summarizer",
filter_string='created_by = "[email protected]"'
)
# Filter by date
recent_versions = client.get_prompt_history(
name="summarizer",
filter_string='created_at >= "2024-01-01T00:00:00Z"'
)
# Filter by metadata field (dot notation)
prod_env = client.get_prompt_history(
name="summarizer",
filter_string='metadata.environment = "prod"'
)
# Combine search and filter
results = client.get_prompt_history(
name="summarizer",
search="customer",
filter_string='tags contains "production"'
)
# Same API for chat prompts
chat_production = client.get_chat_prompt_history(
name="assistant",
filter_string='tags contains "production"'
)
for version in production_versions:
print(f"Commit: {version.commit}, Tags: {version.tags}")
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { Opik } from "opik";
const client = new Opik();
const prompt = await client.getPrompt({ name: "summarizer" });
// Filter by tag
const productionVersions = await prompt.getVersions({
filters: JSON.stringify([
{ field: "tags", operator: "contains", value: "production" },
]),
});
// Filter by multiple tags (AND logic)
const stableProduction = await prompt.getVersions({
filters: JSON.stringify([
{ field: "tags", operator: "contains", value: "production" },
{ field: "tags", operator: "contains", value: "stable" },
]),
});
// Filter by template content
const customerVersions = await prompt.getVersions({
filters: JSON.stringify([
{ field: "template", operator: "contains", value: "customer" },
]),
});
// Filter by creator
const myVersions = await prompt.getVersions({
filters: JSON.stringify([
{ field: "created_by", operator: "=", value: "[email protected]" },
]),
});
// Filter by date
const recentVersions = await prompt.getVersions({
filters: JSON.stringify([
{ field: "created_at", operator: ">=", value: "2024-01-01T00:00:00Z" },
]),
});
// Combine search and filter
const results = await prompt.getVersions({
search: "customer",
filters: JSON.stringify([
{ field: "tags", operator: "contains", value: "production" },
]),
});
for (const version of productionVersions) {
console.log(`Commit: ${version.commit}, Tags: ${version.tags}`);
}
```
</Tab>
Supported fields for version filtering:
| Column | Type | Supported operators | Notes |
|---|---|---|---|
| `id` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` | |
| `commit` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` | |
| `template` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` | |
| `change_description` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` | |
| `created_by` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` | |
| `type` | Enum | `=`, `!=` | Values: `"mustache"`, `"jinja2"` |
| `tags` | List | `contains` | Multiple entries require all tags (AND logic) |
| `created_at` | DateTime | `>=`, `<=`, `>`, `<` | ISO 8601 format: `"2024-01-01T00:00:00Z"` |
| `metadata` | Dict | `=`, `!=` | Python only. Use dot notation: `metadata.environment = "prod"` |
For Python, conditions are combined with AND in the OQL string. For TypeScript, add multiple objects to the filter array.
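Because conditions are always combined with AND, assembling longer `filter_string` values can be reduced to a join. The helper below is hypothetical (not part of the SDK), and the standard library's `datetime` produces the ISO 8601 timestamps that `created_at` comparisons expect:

```python
from datetime import datetime, timezone

def build_oql(*conditions: str) -> str:
    """Join OQL conditions with AND (hypothetical helper, not in the SDK)."""
    return " AND ".join(conditions)

# Format a UTC datetime in the ISO 8601 shape used by created_at filters
cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

filter_string = build_oql(
    'tags contains "production"',
    f'created_at >= "{cutoff}"',
)
print(filter_string)
# → tags contains "production" AND created_at >= "2024-01-01T00:00:00Z"
```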
Use the `sorting` option with a JSON-encoded sort array. Each entry specifies a field and a direction (`"ASC"` or `"DESC"`):
```typescript
import { Opik } from "opik";

const client = new Opik();
const prompt = await client.getPrompt({ name: "summarizer" });

// Sort by template content alphabetically
const alphabetical = await prompt.getVersions({
  sorting: JSON.stringify([{ field: "template", direction: "ASC" }]),
});

// Sort by creation date (oldest first)
const oldest = await prompt.getVersions({
  sorting: JSON.stringify([{ field: "created_at", direction: "ASC" }]),
});

// Combine sort with filter
const sorted = await prompt.getVersions({
  filters: JSON.stringify([
    { field: "tags", operator: "contains", value: "production" },
  ]),
  sorting: JSON.stringify([{ field: "created_at", direction: "DESC" }]),
});
```
Sortable fields include: `id`, `commit`, `template`, `change_description`, `created_by`, `type`, and `created_at`.
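Earlier entries in the sort array take precedence over later ones. A language-agnostic sketch of those semantics in Python (illustrative, not the server implementation) relies on stable sorting applied from the last spec to the first:

```python
def sort_versions(versions, sorting):
    """Model of multi-field sorting: because Python's sort is stable,
    applying the specs from last to first gives earlier specs precedence."""
    for spec in reversed(sorting):
        versions = sorted(
            versions,
            key=lambda v: v[spec["field"]],
            reverse=(spec["direction"] == "DESC"),
        )
    return versions

versions = [
    {"commit": "abc", "created_at": "2024-01-02T00:00:00Z"},
    {"commit": "def", "created_at": "2024-01-01T00:00:00Z"},
]
oldest_first = sort_versions(versions, [{"field": "created_at", "direction": "ASC"}])
print([v["commit"] for v in oldest_first])
# → ['def', 'abc']
```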
The following end-to-end example shows a common workflow: create a new prompt version, then promote the best-performing version by tagging it as production.
<Tab value="Python" title="Python">
```python
import opik

client = opik.Opik()
prompts_client = client.get_prompts_client()
# Create a new version
candidate = opik.Prompt(
name="summarizer",
prompt="Provide a concise summary of: {{text}}",
project_name="my-project"
)
# ... run your evaluation logic here ...
# Assume the evaluation passed and we want to promote this version
# Promote to production (use version_id to target the specific version)
prompts_client.batch_update_prompt_version_tags(
version_ids=[candidate.version_id],
tags=["production", "v3"]
)
# Archive previous production versions
old_production = client.get_prompt_history(
name="summarizer",
filter_string='tags contains "production"'
)
older_version_ids = [
v.version_id for v in old_production
if v.version_id != candidate.version_id
]
if older_version_ids:
prompts_client.batch_update_prompt_version_tags(
version_ids=older_version_ids,
tags=["archived"]
)
```
</Tab>
<Tab value="TypeScript" title="TypeScript">
```typescript
import { Opik, Prompt } from "opik";
const client = new Opik();
// Create a new version
const candidate = new Prompt({
name: "summarizer",
prompt: "Provide a concise summary of: {{text}}",
projectName: "my-project",
});
// ... run your evaluation logic here ...
// Assume the evaluation passed and we want to promote this version
// Promote to production (versionId targets the specific version)
await client.updatePromptVersionTags([candidate.versionId], {
tags: ["production", "v3"],
});
// Archive previous production versions
const allVersions = await candidate.getVersions({
filters: JSON.stringify([
{ field: "tags", operator: "contains", value: "production" },
]),
});
const olderVersionIds = allVersions
.filter((v) => v.id !== candidate.versionId)
.map((v) => v.id);
if (olderVersionIds.length > 0) {
await client.updatePromptVersionTags(olderVersionIds, {
tags: ["archived"],
});
}
```
</Tab>
<Tab value="Text prompt" title="Text prompt">
```python maxLines=1000
import opik
from opik.evaluation import evaluate
from openai import OpenAI

opik.configure()
opik_client = opik.Opik()
openai_client = OpenAI()
# Get a dataset
dataset = opik_client.get_or_create_dataset("test_dataset", project_name="my-project")
# Create a text prompt
prompt = opik.Prompt(name="My prompt", prompt="Summarize: {{text}}", project_name="my-project")
# Create an evaluation task
def evaluation_task(dataset_item):
# Use the prompt in your task
formatted_prompt = prompt.format(text=dataset_item["input"])
# Call OpenAI API with formatted prompt
response = openai_client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": formatted_prompt}]
)
return {"output": response.choices[0].message.content}
# Run the evaluation
evaluation = evaluate(
experiment_name="My experiment",
dataset=dataset,
task=evaluation_task,
prompts=[prompt],
project_name="my-project",
)
```
</Tab>
<Tab value="Chat prompt" title="Chat prompt">
```python maxLines=1000
import opik
from opik.evaluation import evaluate
from openai import OpenAI
opik.configure()
opik_client = opik.Opik()
openai_client = OpenAI()
# Get a dataset
dataset = opik_client.get_or_create_dataset("test_dataset", project_name="my-project")
# Create a chat prompt
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Summarize: {{text}}"}
]
chat_prompt = opik.ChatPrompt(name="My chat prompt", messages=messages, project_name="my-project")
# Create an evaluation task
def evaluation_task(dataset_item):
# Use the chat prompt in your task
formatted_messages = chat_prompt.format(
variables={"text": dataset_item["input"]}
)
# Call OpenAI API with formatted messages
response = openai_client.chat.completions.create(
model="gpt-4",
messages=formatted_messages
)
return {"output": response.choices[0].message.content}
# Run the evaluation
evaluation = evaluate(
experiment_name="My experiment",
dataset=dataset,
task=evaluation_task,
prompts=[chat_prompt],
project_name="my-project",
)
```
</Tab>
The experiment will now be linked to the prompt, allowing you to view all experiments that use a specific prompt:
<Frame> </Frame>

You can also compare how different versions of a prompt perform by running a separate experiment with each version:
<Tab value="Text prompts" title="Text prompts">
```python maxLines=1000
import opik
from opik.evaluation import evaluate

opik.configure()
client = opik.Opik()
# Get the dataset
dataset = client.get_or_create_dataset("test_dataset", project_name="my-project")
# Get different versions of the same text prompt
prompt_v1 = client.get_prompt(name="prompt-summary", commit="abc123")
prompt_v2 = client.get_prompt(name="prompt-summary", commit="def456")
# Define evaluation task that uses the prompt
def evaluation_task_v1(dataset_item):
formatted_prompt = prompt_v1.format(text=dataset_item["input"])
# Call your LLM with the formatted prompt
return {"output": "llm_response"}
def evaluation_task_v2(dataset_item):
formatted_prompt = prompt_v2.format(text=dataset_item["input"])
# Call your LLM with the formatted prompt
return {"output": "llm_response"}
# Run experiments with different prompt versions
experiment_v1 = evaluate(
experiment_name="My experiment - v1",
dataset=dataset,
task=evaluation_task_v1,
prompts=[prompt_v1],
project_name="my-project",
)
experiment_v2 = evaluate(
experiment_name="My experiment - v2",
dataset=dataset,
task=evaluation_task_v2,
prompts=[prompt_v2],
project_name="my-project",
)
# Compare results in the Opik UI
```
</Tab>
<Tab value="Chat prompts" title="Chat prompts">
```python maxLines=1000
import opik
from opik.evaluation import evaluate
opik.configure()
client = opik.Opik()
# Get the dataset
dataset = client.get_or_create_dataset("test_dataset", project_name="my-project")
# Get different versions of the same chat prompt
chat_prompt_v1 = client.get_chat_prompt(name="assistant-prompt", commit="abc123")
chat_prompt_v2 = client.get_chat_prompt(name="assistant-prompt", commit="def456")
# Define evaluation tasks that use the prompts
def evaluation_task_v1(dataset_item):
formatted_messages = chat_prompt_v1.format(
variables={"text": dataset_item["input"]}
)
# Call your LLM with the formatted messages
return {"output": "llm_response"}
def evaluation_task_v2(dataset_item):
formatted_messages = chat_prompt_v2.format(
variables={"text": dataset_item["input"]}
)
# Call your LLM with the formatted messages
return {"output": "llm_response"}
# Run experiments with different chat prompt versions
experiment_v1 = evaluate(
experiment_name="My experiment - v1",
dataset=dataset,
task=evaluation_task_v1,
prompts=[chat_prompt_v1],
project_name="my-project",
)
experiment_v2 = evaluate(
experiment_name="My experiment - v2",
dataset=dataset,
task=evaluation_task_v2,
prompts=[chat_prompt_v2],
project_name="my-project",
)
# Compare results in the Opik UI
```
</Tab>
This workflow allows you to systematically test and compare different prompt versions to identify the most effective one for your use case.