Build agents in any framework. Run as a service. Ship to real users.
<div align="center">
<a href="https://docs.agno.com">Docs</a> • <a href="https://github.com/agno-agi/agno/tree/main/cookbook">Cookbook</a> • <a href="https://docs.agno.com/first-agent">Quickstart</a>
</div>

Agno is the runtime for agentic software. Use it to serve agents as production services.
Build agents using the Agno SDK, Claude Agent SDK, LangGraph, DSPy, or your own framework. Run them as production services with sessions, tracing, scheduling, and RBAC. Manage them from a single control plane.
| Layer | What it does |
|---|---|
| SDK | Build agents, teams, and workflows with memory, knowledge, guardrails, and 100+ integrations. |
| Runtime | Serve agents in production via a stateless, session-scoped FastAPI backend. |
| Control Plane | Test, monitor, and manage your system from the AgentOS UI. |
Wrap a coding agent and serve it as a production API. Same shape across every framework.
Save as `workbench.py`:
```python
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.os import AgentOS
from agno.tools.workspace import Workspace

workbench = Agent(
    name="Workbench",
    model="openai:gpt-5.4",
    tools=[Workspace(
        ".",
        allowed=["read", "list", "search"],
        confirm=["write", "edit", "delete", "shell"],
    )],
    enable_agentic_memory=True,
    add_history_to_context=True,
    num_history_runs=3,
)

# Serve via AgentOS → streaming, auth, session isolation, API endpoints
agent_os = AgentOS(agents=[workbench], tracing=True, db=SqliteDb(db_file="agno.db"))
app = agent_os.get_app()
```
`Workspace(".")` scopes the agent to the current directory. `read`, `list`, and `search` run freely; `write`, `edit`, `delete`, and `shell` require human approval.
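The permission model boils down to a simple gate: tools in `allowed` execute immediately, tools in `confirm` pause the run for approval, and anything else is out of scope. A standalone sketch of that idea (an illustration, not Agno's internals):

```python
# Illustrative sketch of the allowed/confirm gate (not Agno's implementation).
ALLOWED = {"read", "list", "search"}
CONFIRM = {"write", "edit", "delete", "shell"}

def gate(tool_name: str) -> str:
    """Classify a tool call before it runs."""
    if tool_name in ALLOWED:
        return "run"    # executes immediately
    if tool_name in CONFIRM:
        return "pause"  # run pauses until a human approves
    return "deny"       # outside the workspace scope entirely

print(gate("read"))   # run
print(gate("shell"))  # pause
```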
Wrap an agent built with the Claude Agent SDK the same way:

```python
from agno.agents.claude import ClaudeAgent
from agno.db.sqlite import SqliteDb
from agno.os import AgentOS

agent = ClaudeAgent(
    name="Claude Agent",
    model="claude-opus-4-7",
    allowed_tools=["Read", "Bash"],
    permission_mode="acceptEdits",
)

agent_os = AgentOS(agents=[agent], db=SqliteDb(db_file="agno.db"), tracing=True)
app = agent_os.get_app()
```
The same wrapping pattern works for LangGraph and DSPy.
<details>
<summary><strong>LangGraph</strong></summary>

```python
from agno.agents.langgraph import LangGraphAgent
from agno.db.sqlite import SqliteDb
from agno.os import AgentOS
from langchain_openai import ChatOpenAI
from langgraph.graph import MessagesState, StateGraph

def chatbot(state: MessagesState):
    return {"messages": [ChatOpenAI(model="gpt-5.4").invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.set_entry_point("chatbot")

agent = LangGraphAgent(name="LangGraph Chatbot", graph=graph.compile())
agent_os = AgentOS(agents=[agent], db=SqliteDb(db_file="agno.db"), tracing=True)
app = agent_os.get_app()
```

</details>
<details>
<summary><strong>DSPy</strong></summary>

```python
import dspy
from agno.agents.dspy import DSPyAgent
from agno.db.sqlite import SqliteDb
from agno.os import AgentOS

dspy.configure(lm=dspy.LM("openai/gpt-5.4"))

agent = DSPyAgent(
    name="DSPy Assistant",
    program=dspy.ChainOfThought("question -> answer"),
)

agent_os = AgentOS(agents=[agent], db=SqliteDb(db_file="agno.db"), tracing=True)
app = agent_os.get_app()
```

</details>
Install dependencies, set your API key, and start the server:

```shell
uv pip install -U 'agno[os]' openai
export OPENAI_API_KEY=sk-***
fastapi dev workbench.py
```
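Since `get_app()` returns a standard FastAPI (ASGI) app, `fastapi dev` is just for development; any ASGI server can serve it in production. For example, with uvicorn:

```shell
uvicorn workbench:app --host 0.0.0.0 --port 8000
```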
In ~20 lines, you get:

- An API at http://localhost:8000
- An OpenAPI spec at http://localhost:8000/docs
The AgentOS UI is your control plane. Use it to chat with your agents, inspect runs, view traces, manage sessions, and operate the system.
Open the AgentOS UI and connect it to your local runtime (http://localhost:8000). Open Chat, select your agent, and ask:

```
Tell me more about the project and the key files
```
The agent reads your workspace and answers grounded in what it actually finds. Try a follow-up like "create a `NOTES.md` with three key takeaways". The run pauses for your approval before the file is written, since `write_file` is a confirm-required tool by default.
https://github.com/user-attachments/assets/adb38f55-1d9d-463e-8ca9-966bb6bdc37a
Three reference agents, all open source, all built on the same primitives:
Add Agno docs as a source in your coding tools:
Cursor: Settings → Indexing & Docs → Add https://docs.agno.com/llms-full.txt
Also works with VSCode, Windsurf, and similar tools.
See the contributing guide.
Agno logs which model providers are used to prioritize updates. Disable with `AGNO_TELEMETRY=false`.
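To opt out, set the variable in the environment before starting the service; for example:

```shell
export AGNO_TELEMETRY=false
fastapi dev workbench.py
```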