docs/a2a.md
The Agent2Agent (A2A) Protocol is an open standard introduced by Google that enables communication and interoperability between AI agents, regardless of the framework or vendor they are built on.
At Pydantic, we built the FastA2A library to make it easier to implement the A2A protocol in Python.
We also built a convenience method that exposes Pydantic AI agents as A2A servers - let's have a quick look at how to use it:
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2', instructions='Be fun!')
app = agent.to_a2a()
```
You can run the example with:

```bash
uvicorn agent_to_a2a:app --host 0.0.0.0 --port 8000
```
This will expose the agent as an A2A server, and you can start sending requests to it.
See more about exposing Pydantic AI agents as A2A servers.
FastA2A is a framework-agnostic implementation of the A2A protocol in Python. The library is designed to be used with any agentic framework, and is not exclusive to Pydantic AI.
FastA2A is built on top of Starlette, which means it's fully compatible with any ASGI server.
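An ASGI application is just an async callable that takes `scope`, `receive`, and `send` arguments; because the app FastA2A produces follows this interface, any ASGI server (uvicorn, hypercorn, etc.) can host it. As a minimal, self-contained illustration of that interface (a stand-in, not FastA2A itself), here the test harness plays the role of the server:

```python
import asyncio

# A stand-in ASGI app with the same calling convention as the app
# returned by to_a2a(); any ASGI server can host a callable like this.
async def asgi_app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"ok"})

# Drive the app by hand, playing the role of the ASGI server:
async def main():
    sent = []
    scope = {"type": "http", "method": "GET", "path": "/"}

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await asgi_app(scope, receive, send)
    return sent

messages = asyncio.run(main())
print(messages[0]["status"])  # 200
print(messages[1]["body"])    # b'ok'
```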
Given the nature of the A2A protocol, it's important to understand its design before using it. As a developer, you'll need to provide some components:
- [Storage][fasta2a.Storage]: to save and load tasks, as well as store context for conversations
- [Broker][fasta2a.Broker]: to schedule tasks
- [Worker][fasta2a.Worker]: to execute tasks

Let's have a look at how those components fit together:
```mermaid
flowchart TB
    Server["HTTP Server"] <--> |Sends Requests/<br>Receives Results| TM

    subgraph CC[Core Components]
        direction RL
        TM["TaskManager<br>(coordinates)"] --> |Schedules Tasks| Broker
        TM <--> Storage
        Broker["Broker<br>(queues & schedules)"] <--> Storage["Storage<br>(persistence)"]
        Broker --> |Delegates Execution| Worker
    end

    Worker["Worker<br>(implementation)"]
```
FastA2A allows you to bring your own [Storage][fasta2a.Storage], [Broker][fasta2a.Broker] and [Worker][fasta2a.Worker].
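To make the division of labor in the diagram concrete, here is a rough, self-contained sketch of how the four pieces cooperate. The class and method names here are illustrative, not the fasta2a API: the TaskManager persists and schedules, the Broker queues, and the Worker executes and persists results.

```python
import asyncio
from dataclasses import dataclass, field

# Illustrative in-memory stand-ins for the pluggable components.
@dataclass
class Storage:
    tasks: dict = field(default_factory=dict)

    def save(self, task_id: str, state: str) -> None:
        self.tasks[task_id] = state

    def load(self, task_id: str) -> str:
        return self.tasks[task_id]

@dataclass
class Broker:
    queue: asyncio.Queue = field(default_factory=asyncio.Queue)

    async def schedule(self, task_id: str) -> None:
        await self.queue.put(task_id)

@dataclass
class Worker:
    storage: Storage

    async def run(self, task_id: str) -> None:
        # "Execute" the task and persist the result.
        self.storage.save(task_id, "completed")

@dataclass
class TaskManager:
    storage: Storage
    broker: Broker

    async def submit(self, task_id: str) -> None:
        self.storage.save(task_id, "submitted")
        await self.broker.schedule(task_id)

async def main() -> str:
    storage = Storage()
    broker = Broker()
    worker = Worker(storage)
    tm = TaskManager(storage, broker)

    await tm.submit("task-1")           # HTTP server -> TaskManager
    task_id = await broker.queue.get()  # Broker hands off the task
    await worker.run(task_id)           # Worker executes and persists
    return storage.load("task-1")

print(asyncio.run(main()))  # completed
```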
In the A2A protocol:
Task: Represents one complete execution of an agent. When a client sends a message to the agent, a new task is created. The agent runs until completion (or failure), and this entire execution is considered one task. The final output is stored as a task artifact.
Context: Represents a conversation thread that can span multiple tasks. The A2A protocol uses a `context_id` to maintain conversation continuity:

- If the client doesn't provide a `context_id`, the server generates a new one
- Clients can send the same `context_id` to continue the conversation
- All tasks sharing the same `context_id` have access to the complete message history

The [Storage][fasta2a.Storage] component serves two purposes:

- Task storage: holding tasks in A2A format, including their history and artifacts
- Context storage: holding conversation context shared across tasks with the same `context_id`
This design allows for agents to store rich internal state (e.g., tool calls, reasoning traces) as well as store task-specific A2A-formatted messages and artifacts.
For example, a Pydantic AI agent might store its complete internal message format (including tool calls and responses) in the context storage, while storing only the A2A-compliant messages in the task history.
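As a plain-Python illustration of that split (dicts standing in for the fasta2a Storage API, with hypothetical message shapes): the context level keeps the framework's rich internal messages, including tool calls, while the task level records only A2A-shaped history.

```python
# Illustrative only: two storage levels, keyed by context_id and task_id.
context_storage: dict = {}
task_storage: dict = {}

def record_run(context_id: str, task_id: str) -> None:
    # Rich internal state: user message, tool call, tool result, final answer.
    context_storage.setdefault(context_id, []).extend([
        {"role": "user", "content": "What's 2+2?"},
        {"kind": "tool-call", "tool": "calculator", "args": {"expr": "2+2"}},
        {"kind": "tool-return", "tool": "calculator", "result": 4},
        {"role": "assistant", "content": "4"},
    ])
    # Task history: only the A2A-compliant user/agent messages.
    task_storage[task_id] = {
        "context_id": context_id,
        "history": [
            {"role": "user", "parts": [{"kind": "text", "text": "What's 2+2?"}]},
            {"role": "agent", "parts": [{"kind": "text", "text": "4"}]},
        ],
    }

record_run("ctx-1", "task-1")
print(len(context_storage["ctx-1"]))           # 4 internal messages
print(len(task_storage["task-1"]["history"]))  # 2 A2A messages
```

A second task sent with `context_id` "ctx-1" would append to the same context list, which is how later tasks see the full conversation.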
FastA2A is available on PyPI as `fasta2a`, so installation is as simple as:

```bash
pip install fasta2a  # or: uv add fasta2a
```
FastA2A keeps its dependency footprint small (it builds on Starlette, as noted above).
You can install Pydantic AI with the `a2a` extra to include FastA2A:

```bash
pip install 'pydantic-ai-slim[a2a]'  # or: uv add 'pydantic-ai-slim[a2a]'
```
To expose a Pydantic AI agent as an A2A server, you can use the `to_a2a` method:

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-5.2', instructions='Be fun!')
app = agent.to_a2a()
```
Since `app` is an ASGI application, it can be used with any ASGI server:

```bash
uvicorn agent_to_a2a:app --host 0.0.0.0 --port 8000
```
Since the goal of `to_a2a` is to be a convenience method, it accepts the same arguments as the [FastA2A][fasta2a.FastA2A] constructor.
When using `to_a2a()`, Pydantic AI automatically:

- Ensures that subsequent messages with the same `context_id` have access to the full conversation history
- Turns string results into `TextPart` artifacts, which also appear in the message history
- Turns structured results into `DataPart` artifacts with the data wrapped as `{"result": <your_data>}`
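A small sketch of that result-to-artifact mapping (the function name and part shapes here are illustrative, not FastA2A internals):

```python
from dataclasses import asdict, dataclass, is_dataclass

@dataclass
class Forecast:
    # Hypothetical structured agent output.
    city: str
    temp_c: float

def result_to_artifact(result):
    """Illustrative mapping of an agent result to an A2A artifact part."""
    if isinstance(result, str):
        # String output becomes a text part.
        return {"kind": "text", "text": result}
    # Structured output is wrapped as {"result": <your_data>}.
    data = asdict(result) if is_dataclass(result) else result
    return {"kind": "data", "data": {"result": data}}

print(result_to_artifact("Be fun!"))
# {'kind': 'text', 'text': 'Be fun!'}
print(result_to_artifact(Forecast("Paris", 21.5)))
# {'kind': 'data', 'data': {'result': {'city': 'Paris', 'temp_c': 21.5}}}
```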