The optimizer allows you to improve the performance of your agents without manual prompt
engineering. You can also use it to reduce the size of the prompts in your agents, cutting
cost and latency while maintaining performance.
Once you have these, check out the Quickstart Guide to run your first optimization.
</Accordion>
<Accordion title="Can you help me optimize my prompt?">
Yes, we would be more than happy to help you set up the Opik Optimizer for your use case! You can join our Slack community and ask for help there.
</Accordion>
</AccordionGroup>
If you would like us to add a new optimization algorithm, simply create an issue on our GitHub repository and we will be happy to add it!
</Accordion>
<Accordion title="How do I choose the right optimizer for my task?">
Which optimizer to use depends on your specific needs. As a rule of thumb, we recommend starting with HRPO (Hierarchical Reflective Prompt Optimizer), as it has been shown to be a strong baseline for most tasks.
You can also try to use:
Models use the LiteLLM format: `provider/model-name` (e.g., `gemini/gemini-2.0-flash`).
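To illustrate the `provider/model-name` convention, here is a minimal sketch that splits a model identifier into its two parts. The `parse_model_id` helper is hypothetical, written for illustration only; it is not part of the LiteLLM or Opik APIs.

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a LiteLLM-style 'provider/model-name' string into its parts.

    Hypothetical helper for illustration; LiteLLM performs its own
    parsing internally when you pass the model string to it.
    """
    provider, _, model_name = model_id.partition("/")
    if not provider or not model_name:
        raise ValueError(f"Expected 'provider/model-name', got {model_id!r}")
    return provider, model_name


provider, model_name = parse_model_id("gemini/gemini-2.0-flash")
print(provider, model_name)  # gemini gemini-2.0-flash
```

Note that only the first `/` separates the provider, so model names that themselves contain slashes are preserved intact.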
For frequent setup and usage errors, head to Known Issues. We keep the error catalog there so fixes and version notes stay in one place.
```python
result = optimizer.optimize_prompt(
    prompt=my_prompt,
    dataset=training_dataset,  # Used for analysis and improvements
    validation_dataset=validation_dataset,  # Used for ranking candidates
    metric=my_metric,
    max_trials=5,
)
```
If you don't provide a validation dataset, the optimizer uses the same dataset for both training and validation, which can lead to overfitting to that specific dataset. For best results, split your data 70/30 or 80/20 between training and validation sets.
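The 80/20 split above can be sketched with only the standard library. This is an illustrative example, not an Opik utility: `records` stands in for your own dataset items, and the `split_dataset` helper is hypothetical.

```python
import random


def split_dataset(items, train_fraction=0.8, seed=42):
    """Shuffle a copy of `items` and split into (train, validation) lists.

    Hypothetical helper for illustration; a fixed seed keeps the
    split reproducible across runs.
    """
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]


records = [{"id": i} for i in range(100)]
train, validation = split_dataset(records)
print(len(train), len(validation))  # 80 20
```

Shuffling before splitting matters: if your records are ordered (e.g., by difficulty or date), a straight slice would give the training and validation sets systematically different distributions.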