# Log traces

<Tip> If you are just getting started with Opik, we recommend first checking out the [Quickstart](/quickstart) guide that will walk you through the process of logging your first LLM call. </Tip>

LLM applications are complex systems that do more than just call an LLM API; they often involve retrieval, pre-processing, and post-processing steps. Tracing helps you understand the flow of your application and identify the specific points that may be causing issues.

Opik's tracing functionality allows you to track not just all the LLM calls made by your application but also any of the other steps involved.


Opik supports agent observability through our TypeScript SDK, Python SDK, first-class OpenTelemetry support, and our REST API.

<Tip> We recommend starting with one of our integrations to get up and running quickly; you can find a full list of our integrations in the [integrations overview](/integrations/overview) page. </Tip>

We won't cover how to track chat conversations in this guide; you can learn more about this in the Logging conversations guide.

## Enable agent observability

### 1. Installing the SDK

Before adding observability to your application, you will first need to install and configure the Opik SDK.

<Tabs> <Tab value="Typescript SDK" title="Typescript SDK" language="typescript">
```bash
npm install opik
```

You can then set the Opik environment variables in your `.env` file:

```bash
# Set OPIK_API_KEY and OPIK_WORKSPACE in your .env file
OPIK_API_KEY=your_api_key_here
OPIK_WORKSPACE=your_workspace_name

# Optional if you are using Opik Cloud:
OPIK_URL_OVERRIDE=https://www.comet.com/opik/api
```

</Tab>
<Tab value="Python SDK" title="Python SDK" language="python">

```bash
# Install the SDK
pip install opik
```

You can then configure the SDK using the `opik configure` CLI command or by calling
[`opik.configure`](https://www.comet.com/docs/opik/python-sdk-reference/configure.html) from
your Jupyter Notebook.

</Tab>
<Tab value="OpenTelemetry" title="OpenTelemetry">

You will need to set the following environment variables for your OpenTelemetry setup:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=https://www.comet.com/opik/api/v1/private/otel
export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default'

# If you are using self-hosted instance:
# export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5173/api/v1/private/otel
```

</Tab>
</Tabs> <Tip> Opik is open-source and can be hosted locally using Docker; please refer to the [self-hosting guide](/self-host/overview) to get started. Alternatively, you can use our hosted platform by creating an account on [Comet](https://www.comet.com/signup?from=llm). </Tip>

### 2. Using an integration

Once you have installed and configured the Opik SDK, you can start using it to track your agent calls:

<Tabs> <Tab title="OpenAI (TS)" value="openai-ts-sdk" language="typescript"> If you are using the OpenAI TypeScript SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik TypeScript SDK:

    ```bash
    npm install opik-openai
    ```
  </Step>
  <Step>
    Configure the Opik TypeScript SDK using environment variables:

    ```bash
    export OPIK_API_KEY="<your-api-key>" # Only required if you are using the Opik Cloud version
    export OPIK_URL_OVERRIDE="https://www.comet.com/opik/api" # Cloud version
    # export OPIK_URL_OVERRIDE="http://localhost:5173/api" # Self-hosting
    ```
  </Step>
  <Step>
    Wrap your OpenAI client with the `trackOpenAI` function:

    ```typescript
    import OpenAI from "openai";
    import { trackOpenAI } from "opik-openai";

    // Initialize the original OpenAI client
    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });

    // Wrap the client with Opik tracking
    const trackedOpenAI = trackOpenAI(openai);

    // Use the tracked client just like the original
    const completion = await trackedOpenAI.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: "Hello, how can you help me today?" }],
    });
    console.log(completion.choices[0].message.content);

    // Ensure all traces are sent before your app terminates
    await trackedOpenAI.flush();
    ```

    All OpenAI calls made using the `trackedOpenAI` client will now be logged to Opik.

  </Step>
</Steps>
</Tab> <Tab title="OpenAI (Python)" value="openai-python-sdk" language="python"> If you are using the OpenAI Python SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik Python SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik Python SDK; this will prompt you for your API key if you are using Opik
    Cloud, or for your Opik server address if you are self-hosting:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Wrap your OpenAI client with the `track_openai` function:

    ```python
    from opik.integrations.openai import track_openai
    from openai import OpenAI

    # Wrap your OpenAI client
    openai_client = OpenAI()
    openai_client = track_openai(openai_client)
    ```

    All OpenAI calls made using the `openai_client` will now be logged to Opik.

  </Step>
</Steps>
</Tab> <Tab title="Vercel AI SDK" value="ai-vercel-sdk" language="typescript"> If you are using the Vercel AI SDK, you can integrate by:
<Steps>
  <Step>
    Install the Opik Vercel integration:

    ```bash
    npm install opik-vercel
    ```
  </Step>
  <Step>
    Configure the Opik Vercel integration using environment variables and set your Opik API key:

    ```bash
    export OPIK_API_KEY="<your-api-key>"
    export OPIK_URL_OVERRIDE="https://www.comet.com/opik/api" # Cloud version
    # export OPIK_URL_OVERRIDE="http://localhost:5173/api" # Self-hosting
    ```
  </Step>
  <Step>
    Initialize the OpikExporter with your AI SDK:

    ```ts
    import { openai } from "@ai-sdk/openai";
    import { generateText } from "ai";
    import { NodeSDK } from "@opentelemetry/sdk-node";
    import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
    import { OpikExporter } from "opik-vercel";

    // Set up OpenTelemetry with Opik
    const sdk = new NodeSDK({
      traceExporter: new OpikExporter(),
      instrumentations: [getNodeAutoInstrumentations()],
    });
    sdk.start();

    // Your AI SDK calls with telemetry enabled
    const result = await generateText({
      model: openai("gpt-4o"),
      prompt: "What is love?",
      experimental_telemetry: { isEnabled: true },
    });

    console.log(result.text);
    ```

    All AI SDK calls with `experimental_telemetry: { isEnabled: true }` will now be logged to Opik.
  </Step>
</Steps>
</Tab> <Tab title="ADK" value="adk-python" language="python"> If you are using the ADK, you can integrate by:
<Steps>
  <Step>
    Install the Opik SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik SDK by running the `opik configure` command in your terminal:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Wrap your ADK agent with the `OpikTracer`:

    ```python
    from opik.integrations.adk import OpikTracer, track_adk_agent_recursive

    opik_tracer = OpikTracer()

    # Define your ADK agent
    agent = ...  # e.g., a google.adk agent instance

    # Wrap your ADK agent (and its sub-agents) with the OpikTracer
    track_adk_agent_recursive(agent, opik_tracer)
    ```

    All ADK agent calls will now be logged to Opik.
  </Step>
</Steps>
</Tab> <Tab title="LangGraph" value="langgraph" language="python"> If you are using LangGraph, you can integrate by:
<Steps>
  <Step>
    Install the Opik SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik SDK by running the `opik configure` command in your terminal:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Wrap your LangGraph graph with the `OpikTracer` callback:

    ```python
    from opik.integrations.langchain import OpikTracer
    from langchain_core.messages import HumanMessage

    # Create your LangGraph graph
    graph = ...
    app = graph.compile(...)

    # Wrap your LangGraph graph with the OpikTracer
    opik_tracer = OpikTracer(graph=app.get_graph(xray=True))

    # Pass the OpikTracer callback to the invoke functions
    result = app.invoke(
        {"messages": [HumanMessage(content="How to use LangGraph?")]},
        config={"callbacks": [opik_tracer]},
    )
    ```

    All LangGraph calls will now be logged to Opik.
  </Step>
</Steps>
</Tab> <Tab title="Function Decorators" value="python-function-decorator" language="python"> If you are using the Python function decorator, you can integrate by:
<Steps>
  <Step>
    Install the Opik Python SDK:

    ```bash
    pip install opik
    ```
  </Step>
  <Step>
    Configure the Opik Python SDK:

    ```bash
    opik configure
    ```
  </Step>
  <Step>
    Wrap your function with the `@track` decorator:

    ```python
    from opik import track

    @track
    def my_function(input: str) -> str:
        return input
    ```

    All calls to `my_function` will now be logged to Opik. This works for any function,
    even nested ones, and is also supported by most integrations (just wrap any parent
    function with the `@track` decorator).
  </Step>
</Steps>
</Tab> <Tab title="AI Wizard" value="ai-installation"> <div style={{"display": "flex", "flexDirection": "row", "gap": "1rem", "alignItems": "center", "justifyContent": "space-between"}}> <span style={{"& p": {"margin": "0rem"}}}> <p style={{"margin": "0rem", "fontStyle": "italic"}}>Integrate with Opik faster using this pre-built prompt</p> </span> <Button intent="primary" href="cursor:////anysphere.cursor-deeplink/prompt?text=%23+OPIK+Agentic+Onboarding%0A%0A%23%23+Goals%0A%0AYou+must+help+me%3A%0A%0A1.+Integrate+the+Opik+client+with+my+existing+LLM+application%0A2.+Set+up+tracing+for+my+LLM+calls+and+chains%0A%0A%23%23+Rules%0A%0ABefore+you+begin%2C+you+must+understand+and+strictly+adhere+to+these+core+principles%3A%0A%0A1.+Code+Preservation+%26+Integration+Guidelines%3A%0A%0A+++-+Existing+business+logic+must+remain+untouched+and+unmodified%0A+++-+Only+add+Opik-specific+code+%28decorators%2C+imports%2C+handlers%2C+env+vars%29%0A+++-+Integration+must+be+non-invasive+and+backwards+compatible%0A%0A2.+Process+Requirements%3A%0A%0A+++-+Follow+the+workflow+steps+sequentially+without+deviation%0A+++-+Validate+completion+of+each+step+before+proceeding%0A+++-+Request+explicit+approval+for+any+workflow+modifications%0A%0A3.+Documentation+%26+Resources%3A%0A%0A+++-+Reference+official+Opik+documentation+at+https%3A%2F%2Fwww.comet.com%2Fdocs%2Fopik%2Fquickstart.md%0A+++-+Follow+Opik+best+practices+and+recommended+patterns%0A+++-+Maintain+detailed+integration+notes+and+configuration+details%0A%0A4.+Testing+%26+Validation%3A%0A+++-+Verify+Opik+integration+without+impacting+existing+functionality%0A+++-+Validate+tracing+works+correctly+for+all+LLM+interactions%0A+++-+Ensure+proper+error+handling+and+logging%0A%0A%23%23+Integration+Workflow%0A%0A%23%23%23+Step+1%3A+Language+and+Compatibility+Check%0A%0AFirst%2C+analyze+the+codebase+to+identify%3A%0A%0A1.+Primary+programming+language+and+frameworks%0A2.+Existing+LLM+integrations+and+patterns%0A%0ACompatibility+Requirements%3A%0A%0A-
+Supported+Languages%3A+Python%2C+JavaScript%2FTypeScript%0A%0AIf+the+codebase+uses+unsupported+languages%3A%0A%0A-+Stop+immediately%0A-+Inform+me+that+the+codebase+is+unsupported+for+AI+integration%0A%0AOnly+proceed+to+Step+2+if%3A%0A%0A-+Language+is+Python+or+JavaScript%2FTypeScript%0A%0A%23%23%23+Step+2%3A+Codebase+Discovery+%26+Entrypoint+Confirmation%0A%0AAfter+verifying+language+compatibility%2C+perform+a+full+codebase+scan+with+the+following+objectives%3A%0A%0A-+LLM+Touchpoints%3A+Locate+all+files+and+functions+that+invoke+or+interface+with+LLMs+or+can+be+a+candidates+for+tracing.%0A-+Entrypoint+Detection%3A+Identify+the+primary+application+entry+point%28s%29+%28e.g.%2C+main+script%2C+API+route%2C+CLI+handler%29.+If+ambiguous%2C+pause+and+request+clarification+on+which+component%28s%29+are+most+important+to+trace+before+proceeding.%0A++%E2%9A%A0%EF%B8%8F+Do+not+proceed+to+Step+3+without+explicit+confirmation+if+the+entrypoint+is+unclear.%0A-+Return+the+LLM+Touchpoints+to+me%0A%0A%23%23%23+Step+3%3A+Discover+Available+Integrations%0A%0AAfter+I+confirm+the+LLM+Touchpoints+and+entry+point%2C+find+the+list+of+supported+integrations+at+https%3A%2F%2Fwww.comet.com%2Fdocs%2Fopik%2Fintegrations%2Foverview.md%0A%0A%23%23%23+Step+4%3A+Deep+Analysis+Confirmed+files+for+LLM+Frameworks+%26+SDKs%0A%0AUsing+the+files+confirmed+in+Step+2%2C+perform+targeted+inspection+to+detect+specific+LLM-related+technologies+in+use%2C+such+as%3A%0ASDKs%3A+openai%2C+anthropic%2C+huggingface%2C+etc.%0AFrameworks%3A+LangChain%2C+LlamaIndex%2C+Haystack%2C+etc.%0A%0A%23%23%23+Step+5%3A+Pre-Implementation+Development+Plan+%28Approval+Required%29%0A%0ADo+not+write+or+modify+code+yet.+You+must+propose+me+a+step-by-step+plan+including%3A%0A%0A-+Opik+packages+to+install%0A-+Files+to+be+modified%0A-+Code+snippets+for+insertion%2C+clearly+scoped+and+annotated%0A-+Where+to+place+Opik+API+keys%2C+with+placeholder+comments+%28Visit+https%3A%2F%2Fcomet.com%2Fopik%2Fyour-workspace-name%2Fget-started+to+co
py+your+API+key%29%0A++Wait+for+approval+before+proceeding%21%0A%0A%23%23%23+Step+6%3A+Execute+the+Integration+Plan%0A%0AAfter+approval%3A%0A%0A-+Run+the+package+installation+command+via+terminal+%28pip+install+opik%2C+npm+install+opik%2C+etc.%29.%0A-+Apply+code+modifications+exactly+as+described+in+Step+5.%0A-+Keep+all+additions+minimal+and+non-invasive.%0A++Upon+completion%2C+review+the+changes+made+and+confirm+installation+success.%0A%0A%23%23%23+Step+7%3A+Request+User+Review+and+Wait%0A%0ANotify+me+that+all+integration+steps+are+complete.%0A%22Please+run+the+application+and+verify+if+Opik+is+capturing+traces+as+expected.+Let+me+know+if+you+need+adjustments.%22%0A%0A%23%23%23+Step+8%3A+Debugging+Loop+%28If+Needed%29%0A%0AIf+issues+are+reported%3A%0A%0A1.+Parse+the+error+or+unexpected+behavior+from+feedback.%0A2.+Re-query+the+Opik+docs+using+https%3A%2F%2Fwww.comet.com%2Fdocs%2Fopik%2Fquickstart.md+if+needed.%0A3.+Propose+a+minimal+fix+and+await+approval.%0A4.+Apply+and+revalidate.%0A"> <div style={{"display": "flex", "flexDirection": "row", "gap": "1rem", "alignItems": "center"}}> <svg xmlns="http://www.w3.org/2000/svg" id="Ebene_1" version="1.1" viewBox="0 0 466.73 532.09"> <path style={{"fill": "#edecec"}} class="st0" d="M457.43,125.94L244.42,2.96c-6.84-3.95-15.28-3.95-22.12,0L9.3,125.94c-5.75,3.32-9.3,9.46-9.3,16.11v247.99c0,6.65,3.55,12.79,9.3,16.11l213.01,122.98c6.84,3.95,15.28,3.95,22.12,0l213.01-122.98c5.75-3.32,9.3-9.46,9.3-16.11v-247.99c0-6.65-3.55-12.79-9.3-16.11h-.01ZM444.05,151.99l-205.63,356.16c-1.39,2.4-5.06,1.42-5.06-1.36v-233.21c0-4.66-2.49-8.97-6.53-11.31L24.87,145.67c-2.4-1.39-1.42-5.06,1.36-5.06h411.26c5.84,0,9.49,6.33,6.57,11.39h-.01Z"/> </svg> Open in Cursor </div> </Button> </div>
The pre-built prompt will guide you through the integration process, install the Opik SDK and
instrument your code. It supports both Python and TypeScript codebases; if you are using
another language, just let us know and we can help you out.

Once the integration is complete, simply run your application and you will start seeing traces
in your Opik dashboard.
</Tab> <Tab title="Other" value="other" language="other"> Opik has more than 30 integrations with the most popular frameworks and libraries; you can find a full list of integrations [here](/integrations/overview). For example:
- [Dify](/integrations/dify)
- [Agno](/integrations/agno)
- [Ollama](/integrations/ollama)

If you are using a framework or library that is not listed, you can still log your traces
using either the function decorator or the Opik client, check out the
[Log Traces](/tracing/advanced/log_traces) guide for more information.
</Tab> </Tabs> <Tip> Opik has more than 40 integrations with the majority of the popular frameworks and libraries. You can find a full list of integrations in the integrations [overview page](/integrations/overview). </Tip>

If you would like more control over the logging process, you can use the low-level SDKs to log your traces and spans.

### 3. Analyzing your agents

Now that you have observability enabled for your agents, you can review and analyze their calls in Opik. In the Opik UI, you can inspect each agent call, see the agent graph, and review all the tool calls made by the agent.


As a next step, you can create an offline evaluation to evaluate your agent's performance on a fixed set of samples.

## Advanced usage

### Using function decorators

Function decorators are a great way to add Opik logging to your existing application. When you add the `@track` decorator to a function, Opik creates a span for that function call and logs its input parameters and output. If a decorated function is called within another decorated function, a nested span is created for the inner function.

While decorators are most popular in Python, we also support them in our TypeScript SDK:

<Tabs> <Tab title="Typescript" value="typescript" language="typescript"> TypeScript has supported decorators since version 5, but their use is still not widespread. The Opik TypeScript SDK also supports decorators, but they are currently considered experimental.
    ```typescript maxLines=100
    import { track } from "opik";

    class TranslationService {
        @track({ type: "llm" })
        async generateText() {
            // Your LLM call here
            return "Generated text";
        }

        @track({ name: "translate" })
        async translate(text: string) {
            // Your translation logic here
            return `Translated: ${text}`;
        }

        @track({ name: "process", projectName: "translation-service" })
        async process() {
            const text = await this.generateText();
            return this.translate(text);
        }
    }
    ```

<Info>
    You can also specify custom `tags`, `metadata`, and/or a `thread_id` for each trace and/or
    span logged for the decorated function. For more information, see
    [Logging additional data using the opik_args parameter](#logging-additional-data)
</Info>

</Tab>
<Tab title="Python" value="python" language="python">
    You can add the `@track` decorator to any function in your application and track not just
    LLM calls but also any other steps in your application:

    ```python maxLines=100
    import opik
    import openai

    client = openai.OpenAI()

    @opik.track
    def retrieve_context(input_text):
        # Your retrieval logic here, here we are just returning a
        # hardcoded list of strings
        context = [
            "What specific information are you looking for?",
            "How can I assist you with your interests today?",
            "Are there any topics you'd like to explore?",
        ]
        return context

    @opik.track
    def generate_response(input_text, context):
        full_prompt = (
            f"If the user asks a non-specific question, use the context to provide a relevant response.\n"
            f"Context: {', '.join(context)}\n"
            f"User: {input_text}\n"
            f"AI:"
        )

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": full_prompt}]
        )
        return response.choices[0].message.content

    @opik.track(name="my_llm_application")
    def llm_chain(input_text):
        context = retrieve_context(input_text)
        response = generate_response(input_text, context)

        return response

    # Use the LLM chain
    result = llm_chain("Hello, how are you?")
    print(result)
    ```

    When using the track decorator, you can customize the data associated with both the trace
    and the span using either the `opik_args` parameter or the
    [`opik_context`](https://www.comet.com/docs/opik/python-sdk-reference/opik_context/index.html)
    module. This is particularly useful if you want to specify, for example, the
    conversation thread ID, tags, and metadata.

    <CodeBlocks>
        ```python title="opik_context module"
        import opik
        from opik import opik_context

        @opik.track
        def llm_chain(text: str) -> str:
            opik_context.update_current_trace(
                tags=["llm_chatbot"],
                metadata={"version": "1.0", "method": "simple"},
                thread_id="conversation-123",
                feedback_scores=[
                    {
                        "name": "user_feedback",
                        "value": 1
                    }
                ],
            )
            opik_context.update_current_span(
                metadata={"model": "gpt-4o"},
            )
            return f"Processed: {text}"
        ```

        ```python title="opik_args parameter"
        import opik

        @opik.track
        def llm_chain(text: str) -> str:
            # LLM chain code
            # ...
            return f"Processed: {text}"

        # Call with opik_args - it won't be passed to the function
        result = llm_chain(
            "hello world",
            opik_args={
                "span": {
                    "tags": ["llm", "agent"],
                    "metadata": {"version": "1.0", "method": "simple"}
                },
                "trace": {
                    "thread_id": "conversation-123",
                    "tags": ["user-session"],
                    "metadata": {"user_id": "user-456"}
                }
            }
        )

        print(result)
        ```
    </CodeBlocks>

    <Tip>
        If you pass the `opik_args` parameter as part of your function call, the configuration
        is propagated to any nested decorated functions.
    </Tip>
</Tab>
</Tabs>

### Using the low-level SDKs

If you need full control over the logging process, you can use the low-level SDKs to log your traces and spans:

<Tabs> <Tab title="Typescript" value="typescript" language="typescript"> You can use the [`Opik`](/reference/typescript-sdk/overview) client to log your traces and spans:
    ```typescript
    import { Opik } from "opik";

    const client = new Opik({
        apiUrl: "https://www.comet.com/opik/api",
        apiKey: "your-api-key", // Only required if you are using Opik Cloud
        projectName: "your-project-name",
        workspaceName: "your-workspace-name", // Optional
    });

    // Log a trace with an LLM span
    const trace = client.trace({
        name: `Trace`,
        input: {
            prompt: `Hello!`,
        },
        output: {
            response: `Hello, world!`,
        },
    });

    const span = trace.span({
        name: `Span`,
        type: "llm",
        input: {
            prompt: `Hello, world!`,
        },
        output: {
            response: `Hello, world!`,
        },
    });

    // Flush the client to send all traces and spans
    await client.flush();
    ```

    <Tip>
        Make sure you define the environment variables for the Opik client in your `.env` file;
        you can find more information about the configuration [here](/tracing/advanced/sdk_configuration).
    </Tip>
</Tab>
<Tab title="Python" value="python" language="python">
    If you want full control over the data logged to Opik, you can use the
    [`Opik`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html) client.


    Logging traces and spans can be achieved by first creating a trace using
    [`Opik.trace`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.trace)
    and then adding spans to the trace using the
    [`Trace.span`](https://www.comet.com/docs/opik/python-sdk-reference/Objects/Trace.html#opik.api_objects.trace.Trace.span)
    method:

    ```python
    from opik import Opik

    client = Opik(project_name="Opik client demo")

    # Create a trace
    trace = client.trace(
        name="my_trace",
        input={"user_question": "Hello, how are you?"},
        output={"response": "Comment ça va?"}
    )

    # Add a span
    trace.span(
        name="Add prompt template",
        input={"text": "Hello, how are you?", "prompt_template": "Translate the following text to French: {text}"},
        output={"text": "Translate the following text to French: hello, how are you?"}
    )

    # Add an LLM call
    trace.span(
        name="llm_call",
        type="llm",
        input={"prompt": "Translate the following text to French: hello, how are you?"},
        output={"response": "Comment ça va?"}
    )

    # End the trace
    trace.end()
    ```

    <Note>
    It is recommended to call `trace.end()` and `span.end()` when you are finished with the trace and span to ensure that
    the end time is logged correctly.
    </Note>

    Opik's logging functionality is designed with production environments in mind. To optimize
    performance, all logging operations are executed in a background thread.

    If you want to ensure all traces are logged to Opik before exiting your program, you can use the `opik.Opik.flush` method:

    ```python
    from opik import Opik

    client = Opik()

    # Log some traces
    trace = client.trace(name="my_trace", input={"prompt": "Hello"})
    trace.end()

    # Block until all pending traces have been uploaded
    client.flush()
    ```

</Tab>
</Tabs>

### Logging traces/spans using context managers

If you are using the low-level SDKs, you can use context managers to log traces and spans. Context managers provide a clean and Pythonic way to manage the lifecycle of traces and spans, ensuring proper cleanup and error handling.

<Tabs> <Tab title="Python" value="python" language="python"> Opik provides two main context managers for logging:
    #### `opik.start_as_current_trace()`

    Use this context manager to create and manage a trace. A trace represents the overall execution flow of your application.

    For detailed API reference, see [`opik.start_as_current_trace`](https://www.comet.com/docs/opik/python-sdk-reference/context_manager/start_as_current_trace.html).

    ```python
    import opik

    # Basic trace creation
    with opik.start_as_current_trace("my-trace", project_name="my-project") as trace:
        # Your application logic here
        trace.input = {"user_query": "What is the weather?"}
        trace.output = {"response": "It's sunny today!"}
        trace.tags = ["weather", "api-call"]
        trace.metadata = {"model": "gpt-4", "temperature": 0.7}
    ```

    **Parameters:**
    - `name` (str): The name of the trace
    - `input` (Dict[str, Any], optional): Input data for the trace
    - `output` (Dict[str, Any], optional): Output data for the trace
    - `tags` (List[str], optional): Tags to categorize the trace
    - `metadata` (Dict[str, Any], optional): Additional metadata
    - `project_name` (str, optional): Project name (falls back to active project context, then client configuration)
    - `thread_id` (str, optional): Thread identifier for multi-threaded applications
    - `flush` (bool, optional): Whether to flush data immediately (default: False)

    #### `opik.start_as_current_span()`

    Use this context manager to create and manage a span within a trace. Spans represent individual operations or function calls.

    For detailed API reference, see [`opik.start_as_current_span`](https://www.comet.com/docs/opik/python-sdk-reference/context_manager/start_as_current_span.html).

    ```python
    import opik

    # Basic span creation
    with opik.start_as_current_span("llm-call", type="llm", project_name="my-project") as span:
        # Your LLM call here
        span.input = {"prompt": "Explain quantum computing"}
        span.output = {"response": "Quantum computing is..."}
        span.model = "gpt-4"
        span.provider = "openai"
        span.usage = {
            "prompt_tokens": 10,
            "completion_tokens": 50,
            "total_tokens": 60
        }
    ```

    **Parameters:**
    - `name` (str): The name of the span
    - `type` (SpanType, optional): Type of span ("general", "tool", "llm", "guardrail", etc.)
    - `input` (Dict[str, Any], optional): Input data for the span
    - `output` (Dict[str, Any], optional): Output data for the span
    - `tags` (List[str], optional): Tags to categorize the span
    - `metadata` (Dict[str, Any], optional): Additional metadata
    - `project_name` (str, optional): Project name
    - `model` (str, optional): Model name for LLM spans
    - `provider` (str, optional): Provider name for LLM spans
    - `flush` (bool, optional): Whether to flush data immediately

    #### Nested Context Managers

    You can nest spans within traces to create hierarchical structures:

    ```python
    import opik

    with opik.start_as_current_trace("chatbot-conversation", project_name="chatbot") as trace:
        trace.input = {"user_message": "Help me with Python"}
        
        # First span: Process user input
        with opik.start_as_current_span("process-input", type="general") as span:
            span.input = {"raw_input": "Help me with Python"}
            span.output = {"processed_input": "Python programming help request"}
        
        # Second span: Generate response
        with opik.start_as_current_span("generate-response", type="llm") as span:
            span.input = {"prompt": "Python programming help request"}
            span.output = {"response": "I'd be happy to help with Python!"}
            span.model = "gpt-4"
            span.provider = "openai"
        
        trace.output = {"final_response": "I'd be happy to help with Python!"}
    ```

    #### Error Handling

    Context managers automatically handle errors and ensure proper cleanup:

    ```python
    import opik

    try:
        with opik.start_as_current_trace("risky-operation", project_name="my-project") as trace:
            trace.input = {"data": "important data"}
            # This will raise an exception
            result = 1 / 0
            trace.output = {"result": result}
    except ZeroDivisionError:
        # The trace is still properly closed and logged
        print("Error occurred, but trace was logged")
    ```

    #### Dynamic Parameter Updates

    You can modify trace and span parameters both inside and outside the context manager:

    ```python
    import opik

    # Parameters set outside the context manager
    with opik.start_as_current_trace(
        "dynamic-trace",
        input={"initial": "data"},
        tags=["initial-tag"],
        project_name="my-project"
    ) as trace:
        # Override parameters inside the context manager
        trace.input = {"updated": "data"}
        trace.tags = ["updated-tag", "new-tag"]
        trace.metadata = {"custom": "metadata"}
        
        # The final trace will use the updated values
    ```

    #### Flush Control

    Control when data is sent to Opik:

    ```python
    import opik

    # Immediate flush
    with opik.start_as_current_trace("immediate-trace", flush=True) as trace:
        trace.input = {"data": "important"}
        # Data is sent immediately when exiting the context

    # Deferred flush (default)
    with opik.start_as_current_trace("deferred-trace", flush=False) as trace:
        trace.input = {"data": "less urgent"}
        # Data will be sent asynchronously later or when the program exits
    ```

</Tab>
</Tabs>

### Best Practices

  1. Use descriptive names: Choose clear, descriptive names for your traces and spans that explain what they represent.

  2. Set appropriate types: Use the correct span types (`"llm"`, `"retrieval"`, `"general"`, etc.) to help with filtering and analysis.

  3. Include relevant metadata: Add metadata that will be useful for debugging and analysis, such as model names, parameters, and custom metrics.

  4. Handle errors gracefully: Let the context manager handle cleanup, but ensure your application logic handles errors appropriately.

  5. Use project organization: Organize your traces by project to keep your Opik dashboard clean and organized.

  6. Consider performance: Use `flush=True` only when immediate data availability is required, as it triggers a synchronous upload that can slow down your application.

## Logging to a specific project

By default, traces are logged to a project named `Default Project`. You can change the project a trace is logged to in a couple of ways:

<Tabs> <Tab title="Typescript" value="typescript" language="typescript"> You can use the `OPIK_PROJECT_NAME` environment variable to set the project you want traces to be logged to, or pass the `projectName` parameter to the `Opik` client constructor:
```typescript
import { Opik } from "opik";

const client = new Opik({
    projectName: "my_project",
    // apiKey: "my_api_key",
    // apiUrl: "https://www.comet.com/opik/api",
    // workspaceName: "my_workspace",
});
```
</Tab>
<Tab title="Python" value="python" language="python">
    You can use the `OPIK_PROJECT_NAME` environment variable to set the project you want traces
    to be logged to.

    If you are using function decorators, you can set the project as part of the decorator parameters:

    ```python
    @track(project_name="my_project")
    def my_function():
        pass
    ```

    If you are using the low level SDK, you can set the project as part of the `Opik` client constructor:

    ```python
    from opik import Opik

    client = Opik(project_name="my_project")
    ```
</Tab>
</Tabs>

## Project name resolution (Python SDK)

The project name is determined differently depending on whether an active project context already exists.

### When no project context is active

This applies to the top-level `@track`-decorated function call, the `Opik()` client, or a native integration (e.g., `track_openai`, `OpikTracer`) used outside any traced context. The project name is resolved in this order:

  1. Explicit `project_name` argument — passed directly to `@track(project_name="...")`, `Opik(project_name="...")`, `OpikTracer(project_name="...")`, or a client method like `client.trace(project_name="...")`
  2. Client configuration — from the `OPIK_PROJECT_NAME` environment variable or the `~/.opik.config` file
  3. Default — falls back to `"Default Project"` (a warning is logged once to remind you to configure a project name)

The first `@track(project_name="...")` or `opik.project_context("...")` call that runs establishes the active project context for all nested operations.
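To make the precedence concrete, here is a small, self-contained sketch of the resolution order in plain Python. This is illustrative only, not the SDK's actual implementation, and the helper name `resolve_project_name` is hypothetical:

```python
import os

DEFAULT_PROJECT = "Default Project"

def resolve_project_name(explicit_name=None):
    # Hypothetical helper mirroring the documented precedence:
    # 1. an explicit project_name argument always wins
    if explicit_name is not None:
        return explicit_name
    # 2. otherwise fall back to client configuration
    #    (the OPIK_PROJECT_NAME environment variable or ~/.opik.config)
    configured = os.environ.get("OPIK_PROJECT_NAME")
    if configured:
        return configured
    # 3. finally, the default project name
    return DEFAULT_PROJECT
```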

### When a project context is active

Once a project context is established (by a parent `@track(project_name="...")` or `opik.project_context("...")`), all nested operations use the context project name. This includes:

  - Nested `@track`-decorated functions — even if they pass a different `project_name`, the outer context wins (a warning is logged)
  - Native integrations (e.g., `OpikTracer`, `track_openai`) — if initialized inside an active context, the context project overrides the integration's `project_name` argument (a warning is logged)
  - `Opik()` client methods — if a method like `client.trace(project_name="...")` is called with an explicit `project_name`, the explicit argument wins; if `project_name` is omitted, the context project is used

This ensures that all traces and spans within a single execution flow are logged to the same project.

### `@track` context propagation

When `@track(project_name="...")` is used on the top-level function, it sets the project context for the entire call tree:

```python
from opik import track

@track(project_name="my-agent")
def agent(query):
    context = retrieve(query)
    return generate(context)

@track
def retrieve(query):
    # Inherits "my-agent" from the parent context
    ...

@track
def generate(context):
    # Also inherits "my-agent" from the parent context
    ...
```

If a nested function specifies a different `project_name`, it is ignored and the outer project is preserved:

```python
@track(project_name="my-agent")
def agent(query):
    helper(query)  # Still logs to "my-agent", NOT "other-project"

@track(project_name="other-project")
def helper(query):
    # Warning is logged: outer project "my-agent" will be used
    ...
```

### `opik.project_context()`

The `opik.project_context()` context manager sets the project name for all Opik operations within a block — `@track`-decorated functions, native integrations, and `Opik()` client calls (when `project_name` is not passed explicitly):

```python
import opik

with opik.project_context("customer-support"):
    # @track-decorated functions and native integrations
    # all use "customer-support" as the project name
    my_agent(query)
```

Nesting rules are the same: the first `opik.project_context()` or `@track(project_name=...)` call to run owns the context. Inner calls with a different project name are ignored (a warning is logged).

<Warning> When a script combines `@track` tracing with other Opik API calls — such as `evaluate()`, `get_or_create_dataset()`, or `Prompt()` — traces and API objects can land in different projects if the project name is not set consistently. Make sure the value passed to `opik.configure(project_name=...)` (which controls where `@track` traces go) matches the `project_name` argument passed explicitly to each API call:
```python
import opik
from opik import Opik
from opik.evaluation import evaluate

opik.configure(project_name="my-project")

client = Opik(project_name="my-project")
dataset = client.get_or_create_dataset(name="my-dataset", project_name="my-project")

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    project_name="my-project",  # must match opik.configure value above
    ...
)
```
</Warning>

## Flushing traces and spans

This process is optional and is only needed if you are running a short-lived script or if you are debugging why traces and spans are not being logged to Opik.

<Tabs> <Tab title="Typescript" value="typescript" language="typescript"> As the Typescript SDK has been designed to be used in production environments, we batch traces and spans and send them to Opik in the background.
    If you are running a short-lived script, you can flush the traces to Opik by using the
    `flush` method of the `Opik` client.

    ```typescript
    import { Opik } from "opik";

    const client = new Opik();
    client.flush();
    ```
</Tab>
<Tab title="Python" value="python" language="python">
    As the Python SDK has been designed to be used in production environments, we batch traces
    and spans and send them to Opik in the background.

    If you are running a short-lived script, you can flush the traces to Opik by using the
    `flush` method of the `Opik` client.

    ```python maxLines=100
    from opik import Opik

    client = Opik()
    client.flush()
    ```

    You can also set the `flush` parameter to `True` when you are using the `@track` decorator to make sure
    the traces are flushed to Opik before the program exits.

    ```python
    from opik import track

    @track(flush=True)
    def llm_chain(input_text):
        # LLM chain code
        # ...
        return f"Processed: {input_text}"
    ```
</Tab>
</Tabs>

## Disabling the logging process

<Tabs> <Tab title="Typescript" value="typescript" language="typescript"> Disabling the logging process is currently not supported in the Typescript SDK. </Tab> <Tab title="Python" value="python" language="python"> You can disable the logging process globally by setting the `OPIK_TRACK_DISABLE` environment variable.
    If you are looking for more control, you can also use the `set_tracing_active` function to
    dynamically disable the logging process.

    ```python
    import opik

    # Check the current state of the tracing flag
    print(opik.is_tracing_active())

    # Disable the logging process
    opik.set_tracing_active(False)

    # Re-enable the logging process
    opik.set_tracing_active(True)
    ```
</Tab>
</Tabs>

## Next steps

Once you have the observability set up for your agent, you can go one step further and: