apps/opik-documentation/documentation/fern/docs/quickstart.mdx
This guide helps you integrate Opik with your existing LLM application and log your first LLM calls and chains to the platform.
Before you begin, you'll need to choose how you want to use Opik: either Opik Cloud or a self-hosted deployment.

Opik makes it easy to integrate with your existing LLM application. Here are some of our most popular integrations:
<Tabs>
<Tab title="Python SDK" value="python-function-decorator">
If you are using the Python function decorator, you can integrate as follows:
<Steps>
<Step>
Install the Opik Python SDK:
```bash
pip install opik
```
</Step>
<Step>
Configure the Opik Python SDK:
```bash
opik configure
```
</Step>
<Step>
Wrap your function with the `@track` decorator:
```python
from opik import track

@track
def my_function(input: str) -> str:
    return input
```
All calls to `my_function` will now be logged to Opik. This works for any function, even nested ones, and is also supported by most integrations: just wrap any parent function with the `@track` decorator, as in the sketch below.
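For example, a minimal sketch of tracking nested calls (the function names and bodies here are illustrative):
```python
from opik import track

@track
def retrieve_context(question: str) -> str:
    # Nested call: logged as a child span of answer_question
    return f"Some context about: {question}"

@track
def answer_question(question: str) -> str:
    # Parent call: logged as the root of the trace
    context = retrieve_context(question)
    return f"Answer based on: {context}"

answer_question("What is Opik?")
```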
</Step>
</Steps>
</Tab>
<Tab title="TypeScript SDK">
<Steps>
<Step>
Install the Opik TypeScript SDK:
```bash
npm install opik
```
</Step>
<Step>
Configure the Opik TypeScript SDK by running the interactive CLI tool:
```bash
npx opik-ts configure
```
This will detect your project setup, install required dependencies, and help you configure environment variables.
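Under typical defaults, the CLI stores your settings as Opik environment variables; an illustrative `.env` (the values below are placeholders, not real credentials):
```bash
# Placeholder values — use the API key and workspace from your Opik account
OPIK_API_KEY=your-api-key
OPIK_WORKSPACE=your-workspace-name
```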
</Step>
<Step>
Log a trace using the Opik client:
```typescript
import { Opik } from "opik";

const client = new Opik();

const trace = client.trace({
  name: "My LLM Application",
  input: { prompt: "What is the capital of France?" },
  output: { response: "The capital of France is Paris." },
});

trace.end();
await client.flush();
```
All traces will now be logged to Opik. You can also log spans within traces for more detailed observability.
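For example, you can attach a span before calling `trace.end()`; a minimal sketch (the span name and type here are illustrative):
```typescript
// Log an individual LLM call as a span within the trace
const span = trace.span({
  name: "llm-call",
  type: "llm",
  input: { prompt: "What is the capital of France?" },
  output: { response: "The capital of France is Paris." },
});
span.end();
```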
</Step>
</Steps>
</Tab>
<Tab title="OpenAI (Python)">
<Steps>
<Step>
Install the Opik Python SDK:
```bash
pip install opik
```
</Step>
<Step>
Configure the Opik Python SDK. This will prompt you for your API key if you are using Opik
Cloud, or for your Opik server address if you are self-hosting:
```bash
opik configure
```
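If you are self-hosting and want to skip the interactive prompts, recent SDK versions also accept a flag for local deployments (check `opik configure --help` for your version):
```bash
# Point the SDK at a local self-hosted Opik instance without prompts
opik configure --use_local
```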
</Step>
<Step>
Wrap your OpenAI client with the `track_openai` function:
```python
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
client = OpenAI()
client = track_openai(client)

# Use the client as normal
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
)
print(completion.choices[0].message.content)
```
All OpenAI calls made using this client will now be logged to Opik. You can combine
this with the `@track` decorator to log a trace for each step of your agent, as in the sketch below.
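For example, a minimal sketch combining the two (the function name and prompt are illustrative):
```python
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

client = track_openai(OpenAI())

@track
def generate_story(prompt: str) -> str:
    # The OpenAI call below is logged as a child span of this traced function
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

generate_story("Tell me a short story.")
```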
</Step>
</Steps>
</Tab>
<Tab title="OpenAI (TypeScript)">
<Steps>
<Step>
Install the Opik OpenAI integration for TypeScript:
```bash
npm install opik-openai
```
</Step>
<Step>
Configure the Opik TypeScript SDK by running the interactive CLI tool:
```bash
npx opik-ts configure
```
This will detect your project setup, install required dependencies, and help you configure environment variables.
</Step>
<Step>
Wrap your OpenAI client with the `trackOpenAI` function:
```typescript
import OpenAI from "openai";
import { trackOpenAI } from "opik-openai";

// Initialize the original OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap the client with Opik tracking
const trackedOpenAI = trackOpenAI(openai);

// Use the tracked client just like the original
const completion = await trackedOpenAI.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello, how can you help me today?" }],
});
console.log(completion.choices[0].message.content);

// Ensure all traces are sent before your app terminates
await trackedOpenAI.flush();
```
All OpenAI calls made through `trackedOpenAI` will now be logged to Opik.
</Step>
</Steps>
</Tab>
<Tab title="Vercel AI SDK">
<Steps>
<Step>
Install the Opik Vercel integration:
```bash
npm install opik-vercel
```
</Step>
<Step>
Configure the Opik Vercel AI SDK integration by running the interactive CLI tool:
```bash
npx opik-ts configure
```
This will detect your project setup, install required dependencies, and help you configure environment variables.
</Step>
<Step>
Initialize the OpikExporter with your AI SDK:
```ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OpikExporter } from "opik-vercel";

// Set up OpenTelemetry with Opik
const sdk = new NodeSDK({
  traceExporter: new OpikExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

// Your AI SDK calls with telemetry enabled
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "What is love?",
  experimental_telemetry: { isEnabled: true },
});
console.log(result.text);
```
All AI SDK calls with `experimental_telemetry: { isEnabled: true }` will now be logged to Opik.
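If your script exits right after the call, some spans may still be buffered; a minimal sketch, assuming the standard OpenTelemetry `NodeSDK` API:
```ts
// Flush any pending spans before the process exits
await sdk.shutdown();
```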
</Step>
</Steps>
</Tab>
<Tab title="Ollama">
<Steps>
<Step>
Install the Opik Python SDK:
```bash
pip install opik
```
</Step>
<Step>
Configure the Opik Python SDK:
```bash
opik configure
```
</Step>
<Step>
Integrate Opik with your Ollama calls:
<Tabs>
<Tab title="Ollama Python Package">
Wrap your Ollama calls with the `@track` decorator:
```python
import ollama
from opik import track

@track
def ollama_call(user_message: str):
    response = ollama.chat(
        model='llama3.1',
        messages=[{'role': 'user', 'content': user_message}]
    )
    return response['message']

# Call your function
result = ollama_call("Say this is a test")
print(result)
```
</Tab>
<Tab title="OpenAI SDK">
Use Opik's OpenAI integration with Ollama's OpenAI-compatible API:
```python
from openai import OpenAI
from opik.integrations.openai import track_openai

# Create an OpenAI client pointing to Ollama
client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama'  # required but ignored
)

# Wrap the client with Opik tracking
client = track_openai(client)

# Call the local Ollama model
response = client.chat.completions.create(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Say this is a test'}]
)
print(response.choices[0].message.content)
```
</Tab>
<Tab title="LangChain">
Use Opik's LangChain integration with Ollama:
```python
from langchain_ollama import ChatOllama
from opik.integrations.langchain import OpikTracer

# Create the Opik tracer
opik_tracer = OpikTracer()

# Create the Ollama model with Opik tracing
llm = ChatOllama(
    model="llama3.1",
    temperature=0,
).with_config({"callbacks": [opik_tracer]})

# Call the Ollama model
messages = [
    ("system", "You are a helpful assistant."),
    ("human", "Say this is a test")
]
response = llm.invoke(messages)
print(response)
```
</Tab>
</Tabs>
All Ollama calls will now be logged to Opik. See the [full Ollama guide](/v1/integrations/ollama) for more advanced usage.
</Step>
</Steps>
</Tab>
<Tab title="Google ADK">
<Steps>
<Step>
Install the Opik SDK:
```bash
pip install opik google-adk
```
</Step>
<Step>
Configure the Opik SDK by running the `opik configure` command in your terminal:
```bash
opik configure
```
</Step>
<Step>
Wrap your ADK agent with the `OpikTracer`:
```python
from google.adk.agents import Agent
from opik.integrations.adk import OpikTracer, track_adk_agent_recursive

# Create your ADK agent
agent = Agent(
    name="helpful_assistant",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant that answers user questions."
)

# Wrap your ADK agent with the OpikTracer
opik_tracer = OpikTracer()
track_adk_agent_recursive(agent, opik_tracer)
```
All ADK agent calls will now be logged to Opik.
</Step>
</Steps>
</Tab>
<Tab title="LangGraph">
<Steps>
<Step>
Install the Opik SDK:
```bash
pip install opik
```
</Step>
<Step>
Configure the Opik SDK by running the `opik configure` command in your terminal:
```bash
opik configure
```
</Step>
<Step>
Track your LangGraph graph with `track_langgraph`:
```python
from langchain_core.messages import HumanMessage
from opik.integrations.langchain import OpikTracer, track_langgraph

# Create your LangGraph graph
graph = ...
app = graph.compile(...)

# Create OpikTracer and track the graph once
# The graph visualization is automatically extracted by track_langgraph
opik_tracer = OpikTracer()
app = track_langgraph(app, opik_tracer)

# Now all invocations are automatically tracked!
result = app.invoke({"messages": [HumanMessage(content="How to use LangGraph?")]})
```
All LangGraph calls will now be logged to Opik. No need to pass callbacks on every invocation!
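If you need a concrete graph to try this with, here is a minimal sketch using LangGraph's prebuilt `MessagesState` (the node logic is hypothetical):
```python
from langgraph.graph import StateGraph, MessagesState, START, END

def chatbot(state: MessagesState):
    # Hypothetical node: echo the last user message back
    return {"messages": [("ai", "You said: " + state["messages"][-1].content)]}

graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()
```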
</Step>
</Steps>
</Tab>
</Tabs>
The pre-built prompt will guide you through the integration process, install the Opik SDK, and
instrument your code. It supports both Python and TypeScript codebases; if you are using
another language, just let us know and we can help you out.
Once the integration is complete, simply run your application and you will start seeing traces
in your Opik dashboard.
<CardGroup cols={3}>
<Card title="LangChain" href="/v1/integrations/langchain" icon={} iconPosition="left"/>
<Card title="LlamaIndex" href="/v1/integrations/llama_index" icon={} iconPosition="left"/>
<Card title="Anthropic" href="/v1/integrations/anthropic" icon={} iconPosition="left"/>
<Card title="AWS Bedrock" href="/v1/integrations/bedrock" icon={} iconPosition="left"/>
<Card title="Google Gemini" href="/v1/integrations/gemini" icon={} iconPosition="left"/>
<Card title="CrewAI" href="/v1/integrations/crewai" icon={} iconPosition="left"/>
</CardGroup>
**[View all 30+ integrations →](/v1/integrations/overview)**
After running your application, your traces will appear in your Opik dashboard:
<video src="/img/tracing/quickstart.mp4" width="854" height="480" autoPlay muted loop playsInline preload="auto" />
If you don't see traces appearing, reach out to us on Slack or raise an issue on GitHub and we'll help you troubleshoot.
Now that you have logged your first LLM calls and chains to Opik, why not check out: