agent_sdks/python/agent_development.md
This guide explains how to build AI agents that generate A2UI interfaces using
agent_sdk. The SDK simplifies schema management, prompt engineering, and
message validation for A2A (Agent-to-Agent/Agent-to-Client) communication.
The agent_sdk revolves around three main classes:

- CatalogConfig: Defines the metadata for a component catalog (name, schema path, examples path).
- A2uiCatalog: Represents a processed catalog, providing methods for validation and LLM instruction rendering.
- A2uiSchemaManager: The central coordinator that loads catalogs, manages versioning, and generates system prompts.

The first step in any A2UI-enabled agent is initializing the A2uiSchemaManager.
```python
from a2ui.core.schema.constants import VERSION_0_8
from a2ui.core.schema.manager import A2uiSchemaManager, CatalogConfig
from a2ui.basic_catalog.provider import BasicCatalog

schema_manager = A2uiSchemaManager(
    version=VERSION_0_8,
    catalogs=[
        BasicCatalog.get_config(
            version=VERSION_0_8,
            examples_path="examples",
        ),
        CatalogConfig.from_path(
            name="my_custom_catalog",
            catalog_path="path/to/catalog.json",
            examples_path="path/to/examples",
        ),
    ],
)
```
Notes:

- The catalogs parameter is optional. If it is not provided, the schema manager will use the basic catalog maintained by the A2UI team.

Use the generate_system_prompt method to assemble the LLM's system instructions. This method takes your high-level descriptions (role, workflow, UI goals) and automatically injects the relevant A2UI JSON Schema and few-shot examples from your catalog configuration.
```python
instruction = schema_manager.generate_system_prompt(
    role_description="You are a helpful assistant...",
    workflow_description="Analyze the request and return UI...",
    ui_description="Use the following components...",
    include_schema=True,    # Injects the raw JSON schema
    include_examples=True,  # Injects few-shot examples
    # Optional: prune the schema to save tokens
    allowed_components=["Heading", "Text", "Button"],
)
```
Configure your LlmAgent using the generated system instructions. This agent
serves as the core logic for interpreting user queries and deciding when to
generate rich UI responses.
```python
from google.adk.agents.llm_agent import LlmAgent
from google.adk.models.lite_llm import LiteLlm

agent = LlmAgent(
    model=LiteLlm(model=LITELLM_MODEL),
    name="your_agent_name",
    description="Your agent description.",
    instruction=instruction,
    tools=[],  # Add your tools here
)
```
The final step is to build an executor (or a custom streaming handler) that manages the runtime lifecycle of a request: running the LLM, validating the generated JSON, and streaming parts to the client.
Build an agent executor that uses the agent to process requests.
```python
from a2a.server.agent_execution import AgentExecutor


class MyAgentExecutor(AgentExecutor):
    def __init__(self, agent: LlmAgent, ...):
        self.agent = agent
        ...


agent_executor = MyAgentExecutor(
    agent=agent,
    ...
)
```
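For orientation, the sketch below shows one way the executor's execute method might be structured. It is a minimal sketch, not a definitive implementation: it assumes the a2a-sdk execute(context, event_queue) interface, an agent wrapper exposing the stream method used later in this guide, and the new_agent_parts_message helper from a2a.utils for wrapping parts into an agent message. Adapt the names to your own setup.

```python
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue
from a2a.utils import new_agent_parts_message  # assumed a2a-sdk helper


class StreamingAgentExecutor(AgentExecutor):
    """Illustrative executor: runs the agent and streams its parts to the client."""

    def __init__(self, agent):
        self.agent = agent  # wrapper exposing an async `stream` method (see below)

    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        query = context.get_user_input()
        # Drive the LLM; each yielded item is assumed to carry `is_task_complete`
        # and `parts` (the shape produced by parse_response_to_parts later in this guide).
        async for item in self.agent.stream(query, context.context_id):
            if item["is_task_complete"]:
                await event_queue.enqueue_event(
                    new_agent_parts_message(
                        item["parts"], context.context_id, context.task_id
                    )
                )

    async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None:
        raise NotImplementedError()
```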
To ensure reliability, always validate the LLM's JSON output before returning
it. The SDK's A2uiCatalog provides a validator that checks the payload against
the A2UI schema. If the payload is invalid, the validator will attempt to fix
it.
```python
from a2ui.core.parser.parser import parse_response

# Get the catalog for the current request
selected_catalog = schema_manager.get_selected_catalog()

# Parse the LLM's response into parts, applying simple fixers such as removing trailing commas
response_parts = parse_response(text)

for part in response_parts:
    if part.a2ui_json:
        # Validate the JSON part against the schema
        selected_catalog.validator.validate(part.a2ui_json)
```
After parsing and validating the A2UI JSON payloads, wrap them in an A2A DataPart and stream them to the client.
To ensure the A2UI Renderers on the frontend recognize the data, add
{"mimeType": "application/json+a2ui"} to the DataPart's metadata.
Recommendation: Use the create_a2ui_part helper method to convert A2UI JSON payloads into an A2A DataPart.
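For illustration only, a hand-rolled equivalent of that helper could look like the sketch below. It assumes the a2a.types Part and DataPart models; the wrap_a2ui_payload name is hypothetical, and create_a2ui_part remains the recommended path.

```python
from a2a.types import DataPart, Part


def wrap_a2ui_payload(a2ui_json: dict) -> Part:
    """Wrap an A2UI JSON payload in an A2A DataPart tagged for A2UI renderers."""
    return Part(
        root=DataPart(
            data=a2ui_json,
            # The mimeType entry lets frontend A2UI renderers recognize the payload.
            metadata={"mimeType": "application/json+a2ui"},
        )
    )
```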
The most efficient way to generate structured agent output is to use the
parse_response_to_parts helper. It handles splitting the text, extracting A2UI
JSON, optional validation, and wrapping everything into A2A Part objects.
```python
from a2ui.a2a import parse_response_to_parts
from a2ui.core.schema.constants import A2UI_OPEN_TAG, A2UI_CLOSE_TAG

# Inside your agent's stream method:
final_response_content = f"{text_segment}\n{A2UI_OPEN_TAG}\n{json_payload}\n{A2UI_CLOSE_TAG}"

yield {
    "is_task_complete": True,
    "parts": parse_response_to_parts(final_response_content, fallback_text="OK."),
}
```
For agents with a fixed set of UI capabilities, simply use the schema_manager
to generate the system instruction.
Example Samples: contact_lookup, restaurant_finder
```python
# Generate system prompt
instruction = schema_manager.generate_system_prompt(
    role_description="You are a helpful assistant...",
    workflow_description="Analyze the request and return UI...",
    ui_description="Use the following components...",
    include_schema=True,
    include_examples=True,
)

# Use with your LLM framework (e.g., ADK)
agent = LlmAgent(instruction=instruction, ...)
```
Some agents may need to attach different catalogs or examples depending on the user's request, client capabilities, or conversational context. This is common for dashboard-style agents that support multiple distinct visualization types (e.g., Charts vs. Maps).
Example Sample: rizzcharts
In a dynamic scenario, you don't provide a static catalog to the agent. Instead, you resolve the selected catalog at runtime (e.g., during session preparation) and store it in the session state.
```python
from google.adk.events import Event, EventActions


# In your AgentExecutor subclass
async def _prepare_session(self, context, run_request, runner):
    session = await super()._prepare_session(context, run_request, runner)

    # 1. Determine client capabilities from metadata
    capabilities = context.message.metadata.get("a2ui_client_capabilities")

    # 2. Get selected catalog and load examples
    a2ui_catalog = self.schema_manager.get_selected_catalog(
        client_ui_capabilities=capabilities
    )
    examples = self.schema_manager.load_examples(a2ui_catalog, validate=True)

    # 3. Store in session state for tool access
    await runner.session_service.append_event(
        session,
        Event(
            actions=EventActions(
                state_delta={
                    "system:a2ui_enabled": True,
                    "system:a2ui_catalog": a2ui_catalog,
                    "system:a2ui_examples": examples,
                }
            ),
        ),
    )
    return session
```
The SendA2uiToClientToolset can use Providers—callables that retrieve the
catalog and examples from the current context state at runtime.
```python
# Providers that read from context state
def get_a2ui_catalog(ctx: ReadonlyContext):
    return ctx.state.get("system:a2ui_catalog")


def get_a2ui_examples(ctx: ReadonlyContext):
    return ctx.state.get("system:a2ui_examples")


# Initialize the toolset with providers
ui_toolset = SendA2uiToClientToolset(
    a2ui_enabled=True,
    a2ui_catalog=get_a2ui_catalog,
    a2ui_examples=get_a2ui_examples,
)
```
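The toolset can then be attached to the agent like any other ADK tool. The sketch below reuses the LlmAgent configuration from earlier in this guide; the agent name and description are placeholders.

```python
agent = LlmAgent(
    model=LiteLlm(model=LITELLM_MODEL),
    name="dashboard_agent",
    description="Generates dashboard UIs with dynamically selected catalogs.",
    instruction=instruction,
    # The toolset resolves the catalog and examples from session state at call time.
    tools=[ui_toolset],
)
```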
When the LLM calls the UI tool, the toolset uses the dynamically selected catalog to validate the generated payload against that A2uiCatalog object's validator.

Orchestrator agents delegate work to sub-agents. They often need to propagate UI capabilities and handle cross-agent UI state.
Example Sample: orchestrator
The orchestrator inspects sub-agent capabilities and aggregates their supported
catalog IDs into its own AgentCard.
```python
# Aggregating capabilities from sub-agents
supported_catalog_ids = set()
for subagent in subagents:
    # ... fetch subagent_card ...
    for extension in subagent_card.capabilities.extensions:
        if extension.uri == A2UI_EXTENSION_URI:
            supported_catalog_ids.update(
                extension.params.get("supportedCatalogIds") or []
            )

# Creating the orchestrator's AgentCard
agent_card = AgentCard(
    capabilities=AgentCapabilities(
        extensions=[
            get_a2ui_agent_extension(
                supported_catalog_ids=list(supported_catalog_ids)
            )
        ]
    )
)
```