Quickstart

This guide helps you integrate Opik with your existing agent, so you can log your first traces and start tracking your prompts and agent configuration in Opik.


Prerequisites

Before you begin, you'll need to choose how you want to use Opik: the managed Opik Cloud platform (you'll be prompted for an API key) or a self-hosted Opik server.

Logging your first LLM calls

Opik makes it easy to integrate with your existing LLM application. Pick the tab that matches your stack and follow the steps to log your first trace:

<Tabs> <Tab title="Python SDK" value="python-function-decorator"> If you are using the Python SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik Python SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik Python SDK:

    ```bash
    opik configure
    ```
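
    If you prefer to configure the SDK from code instead of the interactive CLI, you can call
    `opik.configure()` with the same settings (a minimal sketch; the values shown are placeholders):

    ```python
    import opik

    # For Opik Cloud (placeholder values)
    opik.configure(api_key="your-api-key", workspace="your-workspace")

    # Or, if you are self-hosting:
    # opik.configure(use_local=True)
    ```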
  </Step>
  <Step>
    Wrap your function with the `@track` decorator:

    ```python
    from opik import track

    @track
    def my_function(input: str) -> str:
        return input
    ```

    All calls to `my_function` will now be logged to Opik. This works for any function, even
    nested ones, and is also supported by most integrations (just wrap any parent function
    with the `@track` decorator).
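
    For example, a minimal sketch of a nested pipeline (the function names here are
    illustrative, not part of the Opik API):

    ```python
    from opik import track

    @track
    def retrieve_context(question: str) -> str:
        # In a real agent this would query a vector store or an API
        return "France is a country in Europe."

    @track
    def answer_question(question: str) -> str:
        # The nested call to retrieve_context is logged as a child span
        context = retrieve_context(question)
        return f"Based on: {context}"

    answer_question("What is the capital of France?")
    ```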
  </Step>
</Steps>
</Tab> <Tab title="TypeScript SDK" value="typescript-sdk"> If you want to use the TypeScript SDK to log traces directly:
<Steps>
  <Step>
    Install the Opik TypeScript SDK:

    ```bash
    npm install opik
    ```
  </Step>
  <Step>
    Configure the Opik TypeScript SDK by running the interactive CLI tool:

    ```bash
    npx opik-ts configure
    ```

    This will detect your project setup, install required dependencies, and help you configure environment variables.
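
    If you prefer to set the environment variables yourself, a sketch assuming the standard
    Opik variables (the values shown are placeholders):

    ```bash
    export OPIK_API_KEY="your-api-key"                          # Opik Cloud only
    export OPIK_URL_OVERRIDE="https://www.comet.com/opik/api"   # or your self-hosted server URL
    export OPIK_PROJECT_NAME="my-project"
    ```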
  </Step>
  <Step>
    Log a trace using the Opik client:

    ```typescript
    import { Opik } from "opik";

    const client = new Opik();

    const trace = client.trace({
      name: "My LLM Application",
      input: { prompt: "What is the capital of France?" },
      output: { response: "The capital of France is Paris." },
    });

    trace.end();
    await client.flush();
    ```

    All traces will now be logged to Opik. You can also log spans within traces for more detailed observability.
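
    For example, a minimal sketch of recording an LLM call as a span (create spans before
    calling `trace.end()`; the names and values here are illustrative):

    ```typescript
    const span = trace.span({
      name: "llm-call",
      type: "llm",
      input: { prompt: "What is the capital of France?" },
      output: { response: "The capital of France is Paris." },
    });
    span.end();
    ```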
  </Step>
</Steps>
</Tab> <Tab title="OpenAI (Python)" value="openai-python-sdk"> If you are using the OpenAI Python SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik Python SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik Python SDK. This will prompt you for your API key if you are using Opik
    Cloud, or for your Opik server address if you are self-hosting:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Wrap your OpenAI client with the `track_openai` function:

    ```python
    from opik.integrations.openai import track_openai
    from openai import OpenAI

    # Wrap your OpenAI client
    client = OpenAI()
    client = track_openai(client)

    # Use the client as normal
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "Hello, how are you?",
            },
        ],
    )
    print(completion.choices[0].message.content)
    ```

    All OpenAI calls made using `client` will now be logged to Opik. You can combine this
    with the `@track` decorator to log a trace for each step of your agent, as in the sketch below.
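
    A minimal sketch of an agent step (the `summarize` function is illustrative):

    ```python
    from opik import track
    from opik.integrations.openai import track_openai
    from openai import OpenAI

    client = track_openai(OpenAI())

    @track
    def summarize(text: str) -> str:
        # The OpenAI call is logged as a child span of the summarize trace
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
        )
        return completion.choices[0].message.content
    ```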

  </Step>
</Steps>
</Tab> <Tab title="OpenAI (TS)" value="openai-ts-sdk"> If you are using the OpenAI TypeScript SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik TypeScript SDK:

    ```bash
    npm install opik-openai
    ```
  </Step>
  <Step>
    Configure the Opik TypeScript SDK by running the interactive CLI tool:

    ```bash
    npx opik-ts configure
    ```

    This will detect your project setup, install required dependencies, and help you configure environment variables.
  </Step>
  <Step>
    Wrap your OpenAI client with the `trackOpenAI` function:

    ```typescript
    import OpenAI from "openai";
    import { trackOpenAI } from "opik-openai";

    // Initialize the original OpenAI client
    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });

    // Wrap the client with Opik tracking
    const trackedOpenAI = trackOpenAI(openai);

    // Use the tracked client just like the original
    const completion = await trackedOpenAI.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: "Hello, how can you help me today?" }],
    });
    console.log(completion.choices[0].message.content);

    // Ensure all traces are sent before your app terminates
    await trackedOpenAI.flush();
    ```

    All OpenAI calls made using `trackedOpenAI` will now be logged to Opik.

  </Step>
</Steps>
</Tab> <Tab title="LangGraph" value="langgraph"> If you are using LangGraph, you can integrate by:
<Steps>
  <Step>
    Install the Opik SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik SDK by running the `opik configure` command in your terminal:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Track your LangGraph graph with `track_langgraph`:

    ```python
    from langchain_core.messages import HumanMessage

    from opik.integrations.langchain import OpikTracer, track_langgraph

    # Create your LangGraph graph
    graph = ...
    app = graph.compile(...)

    # Create OpikTracer and track the graph once
    # The graph visualization is automatically extracted by track_langgraph
    opik_tracer = OpikTracer()
    app = track_langgraph(app, opik_tracer)

    # Now all invocations are automatically tracked!
    result = app.invoke({"messages": [HumanMessage(content="How to use LangGraph?")]})
    ```

    All LangGraph calls will now be logged to Opik. No need to pass callbacks on every invocation!
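
    If you don't have a graph yet, a minimal sketch you could use to try this out (assumes the
    `langgraph` and `langchain-core` packages; the echo node is illustrative):

    ```python
    from langchain_core.messages import AIMessage
    from langgraph.graph import END, START, MessagesState, StateGraph

    def respond(state: MessagesState):
        # A real node would call an LLM here; this one returns a canned reply
        return {"messages": [AIMessage(content="LangGraph builds stateful agent graphs.")]}

    graph = StateGraph(MessagesState)
    graph.add_node("respond", respond)
    graph.add_edge(START, "respond")
    graph.add_edge("respond", END)
    app = graph.compile()
    ```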
  </Step>
</Steps>
</Tab> <Tab title="AI integration"> If you already use a coding agent (Claude Code, Codex, Cursor, OpenCode, etc.), you can let it instrument your app for you with the Opik Skill. Requires Node.js installed.
<Steps>
  <Step title="Install the Opik skill">
    ```bash
    npx skills add comet-ml/opik-skills
    ```
  </Step>
  <Step title="Run the integration">
    Once the skill is installed, you can integrate with Opik using the following prompt:
    ```
    Instrument my agent with Opik using the /instrument command.
    ```
  </Step>
</Steps>
</Tab> <Tab title="All integrations"> Opik has **30+ integrations** with popular frameworks and model providers:
<CardGroup cols={3}>
  <Card title="LangChain" href="/integrations/langchain" icon={} iconPosition="left"/>
  <Card title="LlamaIndex" href="/integrations/llama_index" icon={} iconPosition="left"/>
  <Card title="Anthropic" href="/integrations/anthropic" icon={} iconPosition="left"/>
  <Card title="AWS Bedrock" href="/integrations/bedrock" icon={} iconPosition="left"/>
  <Card title="Google Gemini" href="/integrations/gemini" icon={} iconPosition="left"/>
  <Card title="CrewAI" href="/integrations/crewai" icon={} iconPosition="left"/>
</CardGroup>

**[View all 30+ integrations →](/integrations/overview)**
</Tab> </Tabs>

Analyze your traces

After running your application, you will start seeing your traces in Opik, and you can use Ollie to analyze them and improve your agent.

<video src="/img/tracing/quickstart.mp4" width="854" height="480" autoPlay muted loop playsInline preload="auto" />

If you don't see traces appearing, reach out to us on Slack or raise an issue on GitHub and we'll help you troubleshoot.

Next steps

Now that you have logged your first agent calls to Opik, why not check out:

  1. In-depth guide on agent observability: Learn how to customize the data that is logged to Opik and how to log conversations.
  2. Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.
  3. Opik's evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses.