.. _agents:

Agents
======
Agents are autonomous systems that handle wide-ranging, open-ended tasks by calling models in a loop until the work is complete. Unlike deterministic :ref:`prompt targets <prompt_target>`, agents have access to tools, reason about which actions to take, and adapt their behavior based on intermediate results—making them ideal for complex workflows that require multi-step reasoning, external API calls, and dynamic decision-making.
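The "model in a loop" pattern above can be sketched in a few lines. This is an illustrative toy, not Plano's API: the model, tools, and message format here are all hypothetical stand-ins for whatever framework your agent uses.

```python
def run_agent(task, tools, model, max_steps=5):
    """Drive a model in a loop: call it, execute any tool it requests,
    and stop when it declares the work complete."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(messages)
        if action["type"] == "final":          # model says the task is done
            return action["content"]
        result = tools[action["tool"]](**action["args"])  # run requested tool
        messages.append({"role": "tool", "name": action["tool"],
                         "content": str(result)})
    raise RuntimeError("agent did not finish within max_steps")

# Toy deterministic "model": asks for one lookup, then answers.
def toy_model(messages):
    if messages[-1]["role"] == "user":
        return {"type": "tool", "tool": "lookup", "args": {"key": "status"}}
    return {"type": "final", "content": "status=" + messages[-1]["content"]}

tools = {"lookup": lambda key: {"status": "ok"}[key]}
print(run_agent("check status", tools, toy_model))  # -> status=ok
```

The loop body (choose an action, execute it, feed the result back) is the inner loop described below; deciding which agent receives the request in the first place is the outer loop.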
Plano helps developers build and scale multi-agent systems by managing the orchestration layer—deciding which agent(s) or LLM(s) should handle each request, and in what sequence—while developers focus on implementing agent logic in any language or framework they choose.
Plano-Orchestrator is a family of state-of-the-art routing and orchestration models that decide which agent(s) should handle each request, and in what sequence. Built for real-world multi-agent deployments, it analyzes user intent and conversation context to make precise routing and orchestration decisions while remaining efficient enough for low-latency production use across general chat, coding, and long-context multi-turn conversations.
This allows development teams to focus on agent logic while Plano makes the routing and sequencing decisions.
Plano distinguishes between the inner loop (agent implementation logic) and the outer loop (orchestration and routing):
Inner Loop (Agent Logic)
^^^^^^^^^^^^^^^^^^^^^^^^
The inner loop is where your agent lives: the business logic that decides which tools to call, how to interpret results, and when the task is complete. You implement this in any language or framework you choose. Your agent controls which tools run, how their results are interpreted, and when to declare the work finished.
.. note:: **Making LLM Calls from Agents**

   When your agent needs to call an LLM for reasoning, summarization, or completion, route those calls through Plano's Model Proxy rather than calling LLM providers directly. This gives you:

   - A unified interface to :ref:`LLM providers <llm_providers>`, whether you're using OpenAI, Anthropic, Azure OpenAI, or any OpenAI-compatible provider.
   - :ref:`Model-based, alias-based, or preference-aligned routing <llm_providers>` to dynamically select the best model for each task based on cost, performance, or custom policies.

   By routing LLM calls through the Model Proxy, your agents remain decoupled from specific providers and benefit from centralized policy enforcement, observability, and intelligent routing, all managed in the outer loop. For a step-by-step guide, see the :ref:`LLM Router <llm_router>` guide.
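Because the Model Proxy exposes an OpenAI-compatible chat API, an agent's LLM call is an ordinary HTTP request to the proxy instead of to a provider. The sketch below uses only the standard library; the proxy address and model name are placeholders, so substitute the values from your own Plano deployment.

```python
import json
import urllib.request

# Hypothetical proxy address and model name; adjust to your deployment.
PROXY_URL = "http://localhost:12000/v1/chat/completions"

payload = {
    "model": "gpt-4o",  # or a routing alias/preference configured in Plano
    "messages": [{"role": "user", "content": "Summarize the tool results."}],
}
request = urllib.request.Request(
    PROXY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(request) as resp:   # requires a running proxy
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape is the standard chat-completions format, swapping providers or routing policies is a proxy-side configuration change; the agent code does not change.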
Outer Loop (Orchestration)
^^^^^^^^^^^^^^^^^^^^^^^^^^
The outer loop is Plano's orchestration layer—it manages the lifecycle of requests across agents and LLMs:
By managing the outer loop, Plano allows you to:

- Apply :ref:`filter chains <filter_chain>` (guardrails, context enrichment) before requests reach agents.