docs/index.md
The OpenAI Agents SDK enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions. It's a production-ready upgrade of our previous experimentation for agents, Swarm. The Agents SDK has a very small set of primitives:

- **Agents**: LLMs equipped with instructions and tools
- **Handoffs**: let agents delegate to other agents for specific tasks
- **Guardrails**: validate agent inputs and outputs
- **Sessions**: automatically maintain conversation history across agent runs
In combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real-world applications without a steep learning curve. In addition, the SDK comes with built-in tracing that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.
The SDK has two driving design principles:

1. Enough features to be worth using, but few enough primitives to make it quick to learn.
2. Works great out of the box, but you can customize exactly what happens.
Here are the main features of the SDK:

- **Agent loop**: a built-in loop that handles calling tools, sending results to the LLM, and looping until the LLM is done
- **Python-first**: use built-in language features to orchestrate and chain agents, rather than learning new abstractions
- **Handoffs**: coordinate and delegate between multiple agents
- **Guardrails**: run input validations and checks in parallel to your agents, breaking early if a check fails
- **Sessions**: automatic conversation history management across agent runs
- **Function tools**: turn any Python function into a tool, with automatic schema generation
- **Tracing**: built-in tracing that lets you visualize and debug your workflows
- **Realtime agents**: build low-latency voice agents with gpt-realtime-1.5, automatic interruption detection, context management, guardrails, and more

The SDK uses the Responses API by default for OpenAI models, but it adds a higher-level runtime around model calls.
Use the Responses API directly when:

- You need a single model call with full control over the raw request and response.
- You are building your own orchestration, tool-execution, or retry logic.
Use the Agents SDK when:

- You want the agent loop, handoffs, guardrails, sessions, and tracing managed for you.
- You are building multi-step or multi-agent workflows.
You do not need to choose one globally. Many applications use the SDK for managed workflows and call the Responses API directly for lower-level paths.
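To make the distinction concrete, here is a rough sketch in plain Python of the kind of loop the SDK runs for you on top of each model call. The model is stubbed out; this is a conceptual illustration, not the SDK's actual implementation:

```python
import json

# Stubbed "model": first turn requests a tool call, second turn answers.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "arguments": json.dumps({"a": 2, "b": 3})}}
    return {"content": "The answer is 5."}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # no tool requested: final output, loop ends
        # Execute the requested tool and feed its result back to the model.
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "tool", "content": str(result)})

print(run_agent("What is 2 + 3?"))  # prints "The answer is 5."
```

If you call the Responses API directly, you own this loop yourself; with the SDK, `Runner` handles it, along with handoffs, guardrails, and tracing.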
```bash
pip install openai-agents
```
```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```
(_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)

```bash
export OPENAI_API_KEY=sk-...
```
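If you'd rather set the key from Python before the SDK reads it (for example in a notebook), a plain-stdlib equivalent of the shell export above looks like this; the key value shown is an illustrative placeholder:

```python
import os

# Only sets the variable if it was not already exported in the shell.
os.environ.setdefault("OPENAI_API_KEY", "sk-example-placeholder")
```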
Use this table when you know the job you want to do, but not which page explains it.
| Goal | Start here |
|---|---|
| Build the first text agent and see one complete run | Quickstart |
| Add function tools, hosted tools, or agents as tools | Tools |
| Run a coding, review, or document agent inside a real isolated workspace | Sandbox agents quickstart and Sandbox clients |
| Decide between handoffs and manager-style orchestration | Agent orchestration |
| Keep memory across turns | Running agents and Sessions |
| Use OpenAI models, websocket transport, or non-OpenAI providers | Models |
| Review outputs, run items, interruptions, and resume state | Results |
| Build a low-latency voice agent with gpt-realtime-1.5 | Realtime agents quickstart and Realtime transport |
| Build a speech-to-text / agent / text-to-speech pipeline | Voice pipeline quickstart |