# A2A Agent Delegation
CrewAI treats the A2A protocol as a first-class delegation primitive: agents can delegate tasks, request information, and collaborate with remote agents, and can also act as A2A-compliant server agents themselves. In client mode, an agent autonomously chooses between local execution and remote delegation based on the task's requirements.
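Delegation starts from the remote agent's card: the endpoint you configure points at a `/.well-known/agent-card.json` document describing what the remote agent can do. A minimal illustrative card (field names follow the A2A spec; the values here are made up):

```python
import json

# Illustrative A2A agent card (values are hypothetical).
agent_card = {
    "name": "Quantum Research Agent",
    "description": "Performs literature research on quantum computing",
    "url": "https://example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {"id": "research", "name": "Research", "description": "Deep-dive research reports"}
    ],
}

# The client reads the card to learn which skills the remote agent
# advertises and which update mechanisms it supports.
print(json.dumps(agent_card["capabilities"]))
```

The LLM uses the card's skills and description when deciding whether a task should be delegated.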
When an agent is configured with A2A capabilities, it discovers the remote agent's skills from its agent card and decides, per task, whether to execute locally or delegate to a remote agent.

## Client Configuration

Configure an agent for A2A delegation by setting the `a2a` parameter:
```python a2a_client_agent.py lines
from crewai import Agent, Crew, Task
from crewai.a2a import A2AClientConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10,
    ),
)

task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
The `A2AClientConfig` class accepts parameters including `endpoint`, `auth`, `timeout`, `max_turns`, `fail_fast`, `updates`, and `response_model`; each is covered in the sections below.
### Authentication

For A2A agents that require authentication, use one of the provided auth schemes:
<Tabs>
<Tab title="Bearer Token">
```python bearer_token_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token="your-bearer-token"),
        timeout=120,
    ),
)
```
</Tab>
<Tab title="API Key">
```python api_key_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import APIKeyAuth

agent = Agent(
    role="API Coordinator",
    goal="Coordinate with API-based agents",
    backstory="Manages API-authenticated communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://api-agent.example.com/.well-known/agent-card.json",
        auth=APIKeyAuth(
            api_key="your-api-key",
            location="header",  # or "query" or "cookie"
            name="X-API-Key",
        ),
        timeout=120,
    ),
)
```
</Tab>
<Tab title="OAuth2">
```python oauth2_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import OAuth2ClientCredentials

agent = Agent(
    role="OAuth Coordinator",
    goal="Coordinate with OAuth-secured agents",
    backstory="Manages OAuth-authenticated communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
        auth=OAuth2ClientCredentials(
            token_url="https://auth.example.com/oauth/token",
            client_id="your-client-id",
            client_secret="your-client-secret",
            scopes=["read", "write"],
        ),
        timeout=120,
    ),
)
```
</Tab>
<Tab title="HTTP Basic">
```python http_basic_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import HTTPBasicAuth

agent = Agent(
    role="Basic Auth Coordinator",
    goal="Coordinate with basic auth agents",
    backstory="Manages basic authentication communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
        auth=HTTPBasicAuth(
            username="your-username",
            password="your-password",
        ),
        timeout=120,
    ),
)
```
</Tab>
</Tabs>
### Multiple A2A Agents

Configure multiple A2A agents for delegation by passing a list:
```python multiple_a2a_agents.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple specialized agents",
    backstory="Expert at delegating to the right specialist",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://research.example.com/.well-known/agent-card.json",
            timeout=120,
        ),
        A2AClientConfig(
            endpoint="https://data.example.com/.well-known/agent-card.json",
            auth=BearerTokenAuth(token="data-token"),
            timeout=90,
        ),
    ],
)
```
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
### Failure Handling

Control how agent connection failures are handled using the `fail_fast` parameter:
```python fail_fast_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig

# Fail immediately on connection errors (default)
agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        fail_fast=True,
    ),
)

# Continue with available agents
agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple agents",
    backstory="Expert at working with available resources",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://primary.example.com/.well-known/agent-card.json",
            fail_fast=False,
        ),
        A2AClientConfig(
            endpoint="https://backup.example.com/.well-known/agent-card.json",
            fail_fast=False,
        ),
    ],
)
```
When `fail_fast=False`, connection errors are not raised immediately: unreachable endpoints are skipped, and the agent continues delegating to whichever configured A2A agents are available.
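The behavior can be pictured with a small stdlib sketch (not CrewAI's actual internals; `fetch_agent_card` is a stand-in for the real connection step):

```python
def fetch_agent_card(endpoint: str) -> dict:
    # Stand-in for the real HTTP fetch; pretend the backup endpoint is down.
    if "backup" in endpoint:
        raise ConnectionError(f"cannot reach {endpoint}")
    return {"endpoint": endpoint}

def connect_all(endpoints: list[str], fail_fast: bool = True) -> list[dict]:
    """Connect to each endpoint; skip failures when fail_fast is False."""
    connected = []
    for ep in endpoints:
        try:
            connected.append(fetch_agent_card(ep))
        except ConnectionError:
            if fail_fast:
                raise  # fail_fast=True: surface the error immediately
            # fail_fast=False: skip this endpoint and keep going
    return connected

agents = connect_all(
    ["https://primary.example.com", "https://backup.example.com"],
    fail_fast=False,
)
print(len(agents))  # → 1 (only the reachable endpoint remains)
```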
### Update Mechanisms

Control how your agent receives task status updates from remote A2A agents:
<Tabs>
<Tab title="Streaming (Default)">
```python streaming_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import StreamingConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=StreamingConfig(),
    ),
)
```
</Tab>
<Tab title="Polling">
```python polling_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import PollingConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=PollingConfig(
            interval=2.0,
            timeout=300.0,
            max_polls=100,
        ),
    ),
)
```
</Tab>
<Tab title="Push Notifications">
```python push_notification_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import PushNotificationConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=PushNotificationConfig(
            url="{base_url}/a2a/callback",
            token="your-validation-token",
            timeout=300.0,
        ),
    ),
)
```
</Tab>
</Tabs>
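Conceptually, polling repeatedly fetches the remote task's status until it reaches a terminal state or one of the limits (`interval`, `timeout`, `max_polls`) is hit. A simplified stdlib sketch of that loop, not CrewAI's implementation (`get_status` is a stand-in for the real status call):

```python
import time

def poll_until_done(get_status, interval=2.0, timeout=300.0, max_polls=100):
    """Poll get_status() until a terminal state, the timeout, or max_polls."""
    deadline = time.monotonic() + timeout
    for _ in range(max_polls):
        status = get_status()
        if status in ("completed", "failed", "canceled"):
            return status  # terminal state reached
        if time.monotonic() >= deadline:
            break  # overall timeout exceeded
        time.sleep(interval)
    return "timed-out"

# Stand-in status source: reports "working" twice, then "completed".
states = iter(["working", "working", "completed"])
result = poll_until_done(lambda: next(states), interval=0.01, timeout=1.0)
print(result)  # → completed
```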
## Exposing Agents as A2A Servers
You can expose your CrewAI agents as A2A-compliant servers, allowing other A2A clients to delegate tasks to them.
### Server Configuration
Add an `A2AServerConfig` to your agent to enable server capabilities:
```python a2a_server_agent.py lines
from crewai import Agent
from crewai.a2a import A2AServerConfig
agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(url="https://your-server.com"),
)
```
### Combined Client and Server

An agent can act as both client and server by providing both configurations:
```python client_and_server.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig, A2AServerConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research and serve analysis requests",
    backstory="Expert at delegation and analysis",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://specialist.example.com/.well-known/agent-card.json",
            timeout=120,
        ),
        A2AServerConfig(url="https://your-server.com"),
    ],
)
```
## Files and Structured Output

A2A supports passing files and requesting structured output in both directions.
Client side: When delegating to a remote A2A agent, files from the task's `input_files` are sent as `FilePart`s in the outgoing message. If `response_model` is set on the `A2AClientConfig`, the Pydantic model's JSON schema is embedded in the message metadata, requesting structured output from the remote agent.
Server side: Incoming `FilePart`s are extracted and passed to the agent's task as `input_files`. If the client included a JSON schema, the server creates a response model from it and applies it to the task. When the agent returns structured data, the response is sent back as a `DataPart` rather than plain text.
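As a concrete picture of the client side, the outgoing message carries files as parts and the response model's JSON schema in metadata. A stdlib-only sketch; the exact wire keys are an assumption, and in CrewAI you would simply set `response_model` on `A2AClientConfig` and `input_files` on the `Task`:

```python
import json

# Hypothetical JSON schema, as produced from a Pydantic response_model.
response_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["summary"],
}

# Illustrative outgoing A2A message: task files become file parts, and
# the schema rides along in metadata (key names are assumptions).
message = {
    "parts": [
        {"kind": "text", "text": "Summarize the attached report."},
        {"kind": "file", "file": {"name": "report.pdf", "uri": "file:///tmp/report.pdf"}},
    ],
    "metadata": {"response_schema": response_schema},
}

# A compliant server would validate its structured reply against the
# schema and return it as a DataPart instead of plain text.
print(json.dumps(message["metadata"]["response_schema"]["required"]))
```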
OAuth2 authentication requires the `httpx-auth` package.

## Learn More

For more information about the A2A protocol and reference implementations, see the official A2A protocol documentation.