Quickstart
<Note> In Opik 2.0, datasets and experiments are project-scoped. Make sure to specify a `project_name` when creating datasets and running experiments so they are associated with the correct project. </Note>

The Opik Agent Optimizer quickstart gives you the fastest path from “hello world” to a successful optimization run. If you already walked through the main Opik Quickstart (tracing + evaluation), this is the next stop—it layers on the opik-optimizer SDK so you can automatically improve prompts and agents. Prefer a UI workflow? Use Optimization Studio instead.

Why Opik Agent Optimizer?

  • Production-grade workflows – reuse the same datasets, metrics, and tracing you already have in Opik.
  • Multiple strategies – swap between MetaPrompt, Hierarchical Reflective Prompt Optimizer (HRPO), Evolutionary, GEPA, and more with one API.
  • Deep analysis – every trial is logged to Opik so you can inspect prompts, tool calls, and failure modes.
<Callout> Estimated time: **≤10 minutes** if you already have Python and an Opik API key configured. </Callout>

Prerequisites

  • Python 3.10+
  • Opik account
  • Access to an OpenAI-compatible LLM via LiteLLM (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)

1. Install and authenticate

```bash
pip install --upgrade opik opik-optimizer
opik configure  # paste your API key
```
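If you can't run the interactive `opik configure` step (for example in CI), the SDK also reads its configuration from environment variables. A minimal sketch, assuming a Comet-hosted workspace; `OPIK_URL_OVERRIDE` is only needed for self-hosted deployments:

```bash
# Non-interactive alternative to `opik configure` (useful in CI):
export OPIK_API_KEY="your-api-key"        # from your Opik account settings
export OPIK_WORKSPACE="your-workspace"    # your Comet workspace name
# Optional: point the SDK at a self-hosted instance instead of Comet cloud
# export OPIK_URL_OVERRIDE="http://localhost:5173/api"
```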

2. Create a dataset and metric

```python
import opik
from opik.evaluation.metrics import LevenshteinRatio

client = opik.Opik()
dataset = client.get_or_create_dataset(name="agent-opt-quickstart", project_name="my-project")
dataset.insert([
    {"question": "What is Opik?", "answer": "Opik is an LLM observability and optimization platform."},
    {"question": "How do I reduce hallucinations?", "answer": "Use evaluations and prompt optimization to enforce grounding."},
])

def answer_quality(item, output):
    metric = LevenshteinRatio()
    return metric.score(reference=item["answer"], output=output)
```
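To build intuition for what `LevenshteinRatio` rewards before launching a full run, it helps to compute the underlying similarity by hand. The sketch below implements one common definition of the Levenshtein ratio (1 minus edit distance divided by the longer string's length) in plain Python; it illustrates the metric's behavior and is not the SDK's internal implementation:

```python
def levenshtein_ratio(reference: str, output: str) -> float:
    """Similarity in [0, 1]: 1.0 means the strings are identical."""
    if not reference and not output:
        return 1.0
    # Classic dynamic-programming edit distance.
    prev = list(range(len(output) + 1))
    for i, a in enumerate(reference, start=1):
        curr = [i]
        for j, b in enumerate(output, start=1):
            curr.append(min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (a != b),   # substitution
            ))
        prev = curr
    distance = prev[-1]
    return 1.0 - distance / max(len(reference), len(output))

print(round(levenshtein_ratio("kitten", "sitting"), 3))  # 0.571
print(levenshtein_ratio("same", "same"))                 # 1.0
```

Exact-match answers score 1.0, so the optimizer is pushed toward prompts whose outputs closely reproduce the reference wording.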

3. Run the optimizer

```python
from opik_optimizer import MetaPromptOptimizer, ChatPrompt

prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "You are a precise assistant."},
        {"role": "user", "content": "{question}"},
    ],
    model="openai/gpt-5-nano"  # The model your prompt runs on
)

optimizer = MetaPromptOptimizer(model="openai/gpt-5-nano")  # The model that improves your prompt
result = optimizer.optimize_prompt(
    prompt=prompt,
    dataset=dataset,
    metric=answer_quality,
    max_trials=3,
    n_samples=2,
)

result.display()
```
<Tip> **Using a different LLM provider?** The optimizer supports OpenAI, Anthropic, Gemini, Azure, Ollama, and 100+ other providers via LiteLLM. See the [Configure LLM Providers](/development/optimization-runs/optimization/configure_models) guide for setup instructions. </Tip>
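Conceptually, each trial renders the `ChatPrompt` against a dataset item by substituting the `{question}` placeholder before calling the model. A rough sketch of that substitution step (illustrative only; `render_messages` is a hypothetical helper, not part of the SDK):

```python
def render_messages(messages, item):
    """Fill {placeholder} fields in each message with dataset-item values."""
    return [
        {"role": m["role"], "content": m["content"].format(**item)}
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a precise assistant."},
    {"role": "user", "content": "{question}"},
]
rendered = render_messages(messages, {"question": "What is Opik?"})
print(rendered[1]["content"])  # What is Opik?
```

This is why the placeholder names in your prompt must match the keys of your dataset items exactly.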

4. Inspect results

  • Run `opik dashboard` or open https://www.comet.com/opik.
  • In the left nav, go to Evaluation → Optimization runs, then select your latest run.
  • Review the optimization-progress chart, trial table, and per-trial traces to decide whether to ship the new prompt.

Common first issues

<AccordionGroup> <Accordion title="Prompt must be a ChatPrompt object"> Import `ChatPrompt` from `opik_optimizer` and wrap your `messages` list before passing it to any optimizer. </Accordion> <Accordion title="Authentication failed"> Re-run `opik configure` and confirm the account has Agent Optimizer access. If you changed machines, copy the `~/.opik/config` file or re-enter the key. </Accordion> <Accordion title="liteLLM provider errors"> Ensure provider keys (e.g., `OPENAI_API_KEY`) are exported in the same shell running the script, and verify the model you selected is enabled for that key. </Accordion> </AccordionGroup>

Next steps