docs/docs/genai/tracing/integrations/listing/openhands.mdx
import ImageBox from "@site/src/components/ImageBox"; import StepHeader from "@site/src/components/StepHeader"; import TilesGrid from "@site/src/components/TilesGrid"; import TileCard from "@site/src/components/TileCard"; import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; import { Users, BookOpen, Scale } from "lucide-react";
MLflow Tracing provides automatic tracing for OpenHands, a leading open-source AI agent framework for autonomous software development. OpenHands agents interact with code, terminals, file systems, and the web, and support multiple LLM providers including Claude, OpenAI, and open-source models.
<ImageBox src="/images/llms/openhands/openhands-trace.png" alt="OpenHands Tracing" />

OpenHands emits OpenTelemetry traces natively, and MLflow accepts them out of the box. Whether you use OpenHands through the SDK or the CLI, once the connection is set up, MLflow automatically captures traces of your OpenHands agent runs, including every LLM call, tool invocation, and agent step, along with latency and token usage.
OpenHands tracing is configured using OpenTelemetry environment variables that point to your MLflow server. Select the SDK or CLI tab below depending on how you use OpenHands.
<StepHeader number={1} title="Install OpenHands" />Install the OpenHands SDK or CLI.
<Tabs>
<TabItem value="sdk" label="SDK" default>

```bash
pip install openhands-sdk
```

</TabItem>
<TabItem value="cli" label="CLI">

```bash
uv tool install openhands --python 3.12
```

</TabItem>
</Tabs>
Start an MLflow server if you haven't already. This step is common to both the SDK and CLI.
```bash
mlflow server
```
Or using Docker Compose:
```bash
docker compose up -d
```
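If you don't already have a compose file, a minimal sketch looks like the following (the image tag and port mapping are assumptions; adjust them to your deployment):

```yaml
services:
  mlflow:
    image: ghcr.io/mlflow/mlflow:latest   # official MLflow image
    command: mlflow server --host 0.0.0.0 --port 5000
    ports:
      - "5000:5000"
```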
Set the following environment variables to connect OpenHands traces to your MLflow server:
```python
import os

# Point OpenTelemetry traces to your MLflow server
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:5000"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "x-mlflow-experiment-id=123"  # Replace "123" with your MLflow experiment ID
)
os.environ["OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"] = "http/protobuf"
```
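The `OTEL_EXPORTER_OTLP_HEADERS` value is a comma-separated list of `key=value` pairs, so additional headers (for example, an authentication token) can be appended to the experiment ID. A small illustrative helper (the `build_otlp_headers` name is our own, not part of any SDK):

```python
def build_otlp_headers(headers: dict[str, str]) -> str:
    """Join header key/value pairs into the OTLP environment-variable format."""
    return ",".join(f"{key}={value}" for key, value in headers.items())


# Single header, as in the example above
print(build_otlp_headers({"x-mlflow-experiment-id": "123"}))
# -> x-mlflow-experiment-id=123

# Multiple headers are comma-separated per the OpenTelemetry spec
print(build_otlp_headers({
    "x-mlflow-experiment-id": "123",
    "authorization": "Bearer <token>",
}))
# -> x-mlflow-experiment-id=123,authorization=Bearer <token>
```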
Set the following environment variables before running the CLI:
```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:5000"
export OTEL_EXPORTER_OTLP_HEADERS="x-mlflow-experiment-id=123"  # Replace "123" with your MLflow experiment ID
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
```
With the environment variables set, run your OpenHands agent. Every LLM call, tool invocation, and agent step will be automatically traced.
```python
import os

from openhands.sdk import LLM, Agent, Conversation, Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.task_tracker import TaskTrackerTool
from openhands.tools.terminal import TerminalTool

llm = LLM("openai/gpt-5")

agent = Agent(
    llm=llm,
    tools=[
        Tool(name=TerminalTool.name),
        Tool(name=FileEditorTool.name),
        Tool(name=TaskTrackerTool.name),
    ],
)

cwd = os.getcwd()
conversation = Conversation(agent=agent, workspace=cwd)
conversation.send_message("Write 3 facts about the current project into FACTS.txt.")
conversation.run()

print("All done!")
```
With the environment variables set, run OpenHands from the command line. Every LLM call, tool invocation, and agent step will be automatically traced.
```bash
openhands -t "Write 3 facts about the current project into FACTS.txt."
```
Once the agent finishes, navigate to the MLflow UI (e.g. http://localhost:5000), select the experiment, and open the "Traces" tab to view the recorded traces.
MLflow automatically tracks token usage for each LLM call within OpenHands agent runs. The token usage and cost will be displayed in the Overview dashboard and the trace detail page.
<ImageBox src="/images/llms/openhands/openhands-token-usage.png" alt="OpenHands Token Usage" />

:::tip Governing OpenHands Agents with AI Gateway
AI Gateway provides centralized governance for all LLM traffic from OpenHands, including budget control, usage tracking, and secret management.
To route OpenHands LLM calls through MLflow AI Gateway, set the `base_url` to the AI Gateway endpoint URL:
```python
llm = LLM(
    base_url="http://localhost:5000/gateway/mlflow/v1",  # MLflow AI Gateway endpoint URL
    model="my-openai-endpoint",  # Name of the endpoint configured in AI Gateway
)
```
This gives you centralized budget control, usage tracking, and secret management for all LLM traffic from your OpenHands agents.
:::