apps/opik-documentation/documentation/fern/docs/agent_optimization/overview.mdx
Opik Agent Optimizer is a turnkey, open-source SDK for agent and prompt optimization. It automatically tunes prompts, tools, and agent workflows using the datasets, metrics, and traces you already log to Opik. Instead of hand-editing instructions and re-running evaluations, you pick an optimizer (MetaPrompt, HRPO, Evolutionary, GEPA, etc.) and let it iterate for you, whether you use the hosted platform or self-host Opik with Docker or Kubernetes.
<Frame> </Frame>

The SDK implements both proprietary and open-source optimization algorithms. Each has its strengths and weaknesses; as a first step, we recommend trying either GEPA or HRPO (Hierarchical Reflective Prompt Optimizer):
| Algorithm | Description |
|---|---|
| MetaPrompt Optimization | Uses an LLM ("reasoning model") to critique and iteratively refine an initial instruction prompt. Good for general prompt wording, clarity, and structural improvements. Supports MCP tool calling optimization. |
| HRPO (Hierarchical Reflective Prompt Optimizer) | Uses hierarchical root cause analysis to systematically improve prompts by analyzing failures in batches, synthesizing findings, and addressing identified failure modes. Best for complex prompts requiring systematic refinement based on understanding why they fail. |
| Few-shot Bayesian Optimization | Specifically for chat models, this optimizer uses Bayesian optimization (Optuna) to find the optimal number and combination of few-shot examples (demonstrations) to accompany a system prompt. |
| Evolutionary Optimization | Employs genetic algorithms to evolve a population of prompts. Can discover novel prompt structures and supports multi-objective optimization (e.g., score vs. length). Can use LLMs for advanced mutation/crossover. |
| GEPA Optimization | Wraps the external GEPA package to optimize a single system prompt for single-turn tasks using a reflection model. Requires `pip install gepa`. |
| Parameter Optimization | Optimizes LLM call parameters (temperature, top_p, etc.) using Bayesian optimization. Uses Optuna for efficient parameter search with global and local search phases. Best for tuning model behavior without changing the prompt. |
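Despite their differences, all of these optimizers share the same inner loop: propose candidate prompts (or parameters), score each candidate against your dataset with a metric, and keep the best. The toy sketch below illustrates that loop in the style of the Evolutionary optimizer, with a stand-in keyword-coverage metric instead of an Opik dataset and LLM-based scoring; all names here (`score`, `mutate`, `optimize`) are illustrative, not part of the SDK's API.

```python
import random

# Toy illustration of the propose-score-select loop the optimizers build on.
# In the real SDK, candidates are scored against your Opik dataset and metric;
# here the "metric" simply rewards coverage of a set of target keywords.

TARGET_KEYWORDS = {"concise", "cite", "steps"}
WORD_POOL = ["concise", "cite", "steps", "verbose", "casual", "sources"]

def score(prompt_words):
    """Stand-in metric: fraction of target keywords the candidate covers."""
    return len(TARGET_KEYWORDS & set(prompt_words)) / len(TARGET_KEYWORDS)

def mutate(prompt_words, rng):
    """Randomly drop or add one word to form a new candidate prompt."""
    words = list(prompt_words)
    if words and rng.random() < 0.5:
        words.remove(rng.choice(words))
    else:
        words.append(rng.choice(WORD_POOL))
    return words

def optimize(generations=30, population=8, seed=0):
    """Evolve candidates; the incumbent is always kept, so the score never drops."""
    rng = random.Random(seed)
    best = ["verbose"]  # deliberately poor starting prompt
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(population)]
        best = max(candidates + [best], key=score)
    return best, score(best)

best_prompt, best_score = optimize()
```

Because the incumbent candidate is carried into every generation, the score is monotonically non-decreasing; the production optimizers add smarter proposal steps (LLM critique, Bayesian search, crossover) on top of the same skeleton.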