Using Opik with Agent Spec

Agent Spec is a portable configuration language for defining agentic systems (agents, tools, and structured workflows).

In this notebook, we will build a simple Agent Spec agent and use Opik's AgentSpecInstrumentor to capture a trace of the agent's tool and LLM execution.

Creating an account on Comet.com

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API key.

You can also run the Opik platform locally, see the installation guide for more information.

```python
%pip install --upgrade opik "pyagentspec[langgraph]" opentelemetry-sdk opentelemetry-instrumentation
```

```python
import opik

opik.configure(use_local=False)
```

Preparing our environment

This demo uses OpenAI as the LLM provider. Set your OpenAI API key as an environment variable:

```python
import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```

Define an Agent Spec agent

We'll define a small calculator agent with a couple of tools:

```python
from pyagentspec.agent import Agent
from pyagentspec.llms import OpenAiConfig
from pyagentspec.property import FloatProperty
from pyagentspec.tools import ServerTool


def build_agentspec_agent() -> Agent:
    tools = [
        ServerTool(
            name="sum",
            description="Sum two numbers",
            inputs=[FloatProperty(title="a"), FloatProperty(title="b")],
            outputs=[FloatProperty(title="result")],
        ),
        ServerTool(
            name="subtract",
            description="Subtract two numbers",
            inputs=[FloatProperty(title="a"), FloatProperty(title="b")],
            outputs=[FloatProperty(title="result")],
        ),
    ]

    return Agent(
        name="calculator_agent",
        description="An agent that provides assistance with tool use.",
        llm_config=OpenAiConfig(name="openai-gpt-5-mini", model_id="gpt-5-mini"),
        system_prompt=(
            "You are a helpful calculator agent.\n"
            "Your duty is to compute the result of the given operation using tools, "
            "and to output the result.\n"
            "It's important that you reply with the result only.\n"
        ),
        tools=tools,
    )
```
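Because Agent Spec is a portable configuration format, the agent above corresponds conceptually to a declarative document along these lines. This is an illustrative sketch as a plain Python dict; the field names mirror the constructor arguments, but the exact serialization produced by pyagentspec may differ.

```python
# Illustrative sketch of the agent as portable configuration data.
# Field names mirror the constructor arguments above; the real
# Agent Spec serialization produced by pyagentspec may differ.
agent_config = {
    "name": "calculator_agent",
    "description": "An agent that provides assistance with tool use.",
    "llm_config": {"name": "openai-gpt-5-mini", "model_id": "gpt-5-mini"},
    "tools": [
        {"name": "sum", "inputs": ["a", "b"], "outputs": ["result"]},
        {"name": "subtract", "inputs": ["a", "b"], "outputs": ["result"]},
    ],
}
```

The point of such a document is portability: the same declarative definition can be loaded into different runtimes, as the LangGraph adapter below demonstrates.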

Run the agent with Opik tracing enabled

Wrap the agent execution in `AgentSpecInstrumentor().instrument_context(...)` to capture traces in Opik.

Agent traces can include prompts, tool inputs/outputs, and messages. If you need to avoid logging sensitive information, set `mask_sensitive_information=True`.

```python
from opik.integrations.agentspec import AgentSpecInstrumentor
from pyagentspec.adapters.langgraph import AgentSpecLoader

agent = build_agentspec_agent()

tool_registry = {
    "sum": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
}

langgraph_agent = AgentSpecLoader(tool_registry=tool_registry).load_component(agent)

with AgentSpecInstrumentor().instrument_context(
    project_name="agentspec-demo",
    mask_sensitive_information=False,
):
    messages = []

    messages.append({"role": "user", "content": "Compute 13.5 + 2.25 using the sum tool."})
    response = langgraph_agent.invoke(
        input={"messages": messages},
        config={"configurable": {"thread_id": "1"}},
    )
    agent_answer = response["messages"][-1].content.strip()
    print("AGENT >>>", agent_answer)
    messages.append({"role": "assistant", "content": agent_answer})

    messages.append({"role": "user", "content": "Now compute 10 - 3.5 using the subtract tool."})
    response = langgraph_agent.invoke(
        input={"messages": messages},
        config={"configurable": {"thread_id": "1"}},
    )
    agent_answer = response["messages"][-1].content.strip()
    print("AGENT >>>", agent_answer)
```

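Since the tool registry implementations are deterministic, you can compute the expected answers for the two prompts directly and compare them with the agent's replies. This is a sanity-check sketch; the agent's exact string formatting may vary.

```python
# Deterministic implementations matching the tool_registry above.
registry = {
    "sum": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
}

expected_first = registry["sum"](13.5, 2.25)     # 13.5 + 2.25 = 15.75
expected_second = registry["subtract"](10, 3.5)  # 10 - 3.5 = 6.5
print(expected_first, expected_second)
```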
After running the cell above, open Opik and navigate to the `agentspec-demo` project to inspect the trace tree and debug tool usage and LLM generations.