**Workaround**: pin `pyrate-limiter` to a 3.x release:
```bash
pip install "pyrate-limiter>=3.0.0,<4.0.0"
```
**Fixed in**: `3.0.0` (2026-01-26). Upgrade the SDK to remove the legacy flag entirely.
**Workaround**: pin `tqdm` to 4.70.0:
```bash
pip install tqdm==4.70.0
```
**Fixed in**: `3.0.0` (2026-01-26).
**Workaround**: avoid the affected LiteLLM builds:
```bash
pip install --upgrade "litellm<1.81.1"
```
**Fixed in**: `3.0.0` (2026-01-26).
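If you are unsure whether your installed build falls in the affected range, a quick numeric comparison of plain dotted version strings can tell you (this helper is illustrative, not part of any SDK, and does not handle suffixes like `.post1`):

```python
def parse_version(v):
    # Compare plain dotted versions numerically, e.g. "1.80.5" < "1.81.1"
    return tuple(int(part) for part in v.split("."))

def is_affected(installed, bad_floor="1.81.1"):
    # Any build at or above the bad floor is in the affected range
    return parse_version(installed) >= parse_version(bad_floor)
```

Pass in the version reported by `pip show litellm` to check your environment.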
**Workaround**: pin LiteLLM below the affected release:
```bash
pip install --upgrade "litellm<1.81.0"
```
**Fixed in**: `3.0.0` (2026-01-26).
**Solution**: Ensure you're using the `ChatPrompt` class to define your prompt:
```python
from opik_optimizer import ChatPrompt

prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "Your system prompt here"},
        {"role": "user", "content": "Your user prompt with {variable}"},
    ],
    model="gpt-4",
)
```
**Solution**: Use the `Dataset` class to create your dataset:
```python
import opik

client = opik.Opik()
dataset = client.get_or_create_dataset(name="your-dataset-name")
dataset.insert(
    [
        {"input": "example 1", "output": "expected 1"},
        {"input": "example 2", "output": "expected 2"},
    ]
)
```
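If inserts fail with field-related errors, a quick client-side check that every item exposes the same fields can narrow the problem down before it reaches the server. This helper is purely illustrative, not part of the SDK:

```python
def check_consistent_fields(items):
    # All dataset items should expose the same set of fields
    expected = set(items[0])
    for i, item in enumerate(items):
        if set(item) != expected:
            raise ValueError(
                f"item {i} has fields {sorted(item)}, expected {sorted(expected)}"
            )
    return expected
```

For example, `check_consistent_fields([{"input": "a", "output": "b"}, {"input": "c"}])` raises because the second item is missing `output`.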
**Solution**: Ensure your metric is a function that takes `dataset_item` and `llm_output` as arguments and returns a `ScoreResult`:
```python
from opik.evaluation.metrics import ScoreResult

def my_metric(dataset_item, llm_output):
    # Your scoring logic here
    score = calculate_score(dataset_item, llm_output)
    return ScoreResult(
        name="my-metric",
        value=score,
        reason="Explanation for the score",
    )
```
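The `calculate_score` helper above stands in for whatever scoring logic fits your task. A minimal sketch using case-insensitive exact match against the item's `output` field (the field name and normalisation are illustrative choices):

```python
def calculate_score(dataset_item, llm_output):
    # Case-insensitive exact match against the expected output
    expected = dataset_item["output"].strip().lower()
    return 1.0 if llm_output.strip().lower() == expected else 0.0
```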
**Solution**: Ensure all placeholders in your prompt match the keys in your dataset:
```python
# Prompt with {question} placeholder
prompt = ChatPrompt(
user="Answer: {question}",
model="gpt-4",
)
# Dataset must have 'question' field
dataset = Dataset.from_list(
[
{"question": "What is AI?", "output": "..."},
]
)
```
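To catch mismatches before an optimization run, you can extract the named placeholders from a template with the standard library's `string.Formatter` and compare them against a dataset item's keys. This check is illustrative, not an SDK feature:

```python
from string import Formatter

def missing_placeholders(template, dataset_item):
    # Named fields like {question} that the dataset item does not provide
    fields = {name for _, name, _, _ in Formatter().parse(template) if name}
    return fields - set(dataset_item)
```

An empty result means every placeholder in the template is covered by the item.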
**Solution**: Install the gepa package:
```bash
pip install gepa
```
**Solution**: Set the appropriate environment variable for your LLM provider:
```bash
# For OpenAI
export OPENAI_API_KEY="your-api-key"
# For Anthropic
export ANTHROPIC_API_KEY="your-api-key"
# For other providers, check the LiteLLM documentation
```
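In scripts, it can help to fail fast with an actionable message when the key is missing, rather than erroring deep inside an LLM call. A small sketch (the helper name is illustrative):

```python
import os

def require_env(var_name):
    # Fail fast with a clear message if a provider key is missing
    value = os.environ.get(var_name)
    if not value:
        raise RuntimeError(
            f"{var_name} is not set; export it before running the optimizer"
        )
    return value
```

Call it once at startup, e.g. `require_env("OPENAI_API_KEY")`.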