Understanding the core concepts of the Opik Agent Optimizer is essential for unlocking its full potential in LLM evaluation and optimization. This section explains the foundational terms, processes, and strategies that underpin effective agent and prompt optimization within Opik.
In Opik, Agent Optimization refers to the systematic process of refining and evaluating the prompts, configurations, and overall design of language model-based applications to maximize their performance. This is an iterative approach leveraging continuous testing, data-driven refinement, and advanced evaluation techniques.
Prompt Optimization is a crucial subset of Agent Optimization. It focuses specifically on improving the instructions (prompts) given to Large Language Models (LLMs) to achieve desired outputs more accurately, consistently, and efficiently. Since prompts are the primary way to interact with and guide LLMs, optimizing them is fundamental to enhancing any LLM-powered agent or application.
Opik Agent Optimizer provides tools for both: directly optimizing individual prompt strings,
and optimizing more complex agentic structures that may involve multiple prompts, few-shot
examples, or tool interactions.
It should return either a [ScoreResult](https://www.comet.com/docs/opik/python-sdk-reference/Objects/ScoreResult.html) object or a float.
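As a minimal sketch of such a metric function: the real `ScoreResult` class comes from the Opik Python SDK, so a stand-in dataclass with the same core fields (`name`, `value`, `reason`) is defined here to keep the example self-contained, and the `(dataset_item, llm_output)` signature is an illustrative assumption.

```python
from dataclasses import dataclass


# Stand-in for the Opik SDK's ScoreResult, used here only so the
# example runs on its own; in real code, import it from the Opik SDK.
@dataclass
class ScoreResult:
    name: str
    value: float
    reason: str = ""


def exact_match_metric(dataset_item: dict, llm_output: str) -> ScoreResult:
    # Compare the model's output against the expected answer stored
    # in the dataset item, ignoring surrounding whitespace.
    expected = dataset_item.get("expected_output", "")
    matched = llm_output.strip() == expected.strip()
    return ScoreResult(
        name="exact_match",
        value=1.0 if matched else 0.0,
        reason="exact match" if matched else "output differs from expected",
    )
```

Returning a plain `float` (here, the `value` field alone) is also accepted; the object form lets you attach a name and an explanation to each score.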
See [Configure LLM Providers](/development/optimization-runs/optimization/configure_models) for setup instructions.
</Accordion>
<Accordion title="Optimizer model">
The model that the optimizer uses to improve your prompt. You will typically get the best results
by using the most powerful model you have access to for optimization. Configure it via the optimizer's model parameter.
See [Configure LLM Providers](/development/optimization-runs/optimization/configure_models) for setup instructions.