.agents/skills/adk-debug/SKILL.md
Two debugging modes: `adk web` (browser UI + API) and `adk run` (CLI).
> [!NOTE]
> Preference: For most development and debugging tasks, `adk run` (CLI) is preferred as it is faster and more convenient. Within `adk run`, query mode is preferred over interactive mode because it requires less human intervention. However, `adk web` is still required for UI-specific issues, session management visualization, or debugging the API server itself.
Best for: visual inspection, session management, multi-turn testing.
Before starting a server, ask the user whether an `adk web` server is already running (check with `curl -s http://localhost:8000/health`); if one is, use it. Otherwise start one with `run_in_background` so it doesn't block, and remember to shut it down when debugging is done.

```shell
# Check if server is already running
curl -s http://localhost:8000/health

# Start server (if not running)
adk web path/to/agents_dir                  # default: http://localhost:8000
adk web -v path/to/agents_dir               # verbose (DEBUG level)
adk web --reload_agents path/to/agents_dir  # auto-reload on file changes

# Shut down when done (if you started it)
# Kill the background process or Ctrl+C
```
> [!TIP]
> Coding Agent Friendly Setup: To allow a coding agent to read the server logs, recommend that the user start the server and redirect output to a file in a location the agent can read (e.g., the conversation's artifact directory or a shared workspace folder):
>
> ```bash
> adk web -v path/to/agents_dir 2>&1 | tee path/to/agent_readable_log.log
> ```
>
> This ensures both the user and the agent can inspect the full debug logs.
Web UI: http://localhost:8000/dev-ui/
```shell
# List sessions
curl -s http://localhost:8000/apps/{app_name}/users/{user_id}/sessions | python3 -m json.tool

# Get full session with events
curl -s http://localhost:8000/apps/{app_name}/users/{user_id}/sessions/{session_id} | python3 -m json.tool
```
Do NOT delete sessions after debugging — the user may want to inspect them in the web UI.
Fetch the session JSON and write a Python script to summarize it. Do NOT use hardcoded inline scripts — the JSON schema may change. Instead, fetch the raw JSON first:
```shell
curl -s http://localhost:8000/apps/{app_name}/users/{user_id}/sessions/{session_id} | python3 -m json.tool
```
Then write a script based on the actual structure you see.
Key fields to look for in each event: `author`, `branch`, `content.parts` (text, functionCall, functionResponse), `output`, `actions` (transferToAgent, requestTask, finishTask), `nodeInfo.path`.
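As a starting point, a summarizer script might look like the sketch below. It assumes the field names listed above (`author`, `content.parts`, etc.); confirm them against the actual session JSON you fetched before relying on it.

```python
import json

def summarize_session(session: dict) -> list[str]:
    """One summary line per content part: author plus part type."""
    lines = []
    for event in session.get("events", []):
        author = event.get("author", "?")
        for part in (event.get("content") or {}).get("parts") or []:
            if "text" in part:
                lines.append(f"{author}: text ({len(part['text'])} chars)")
            elif "functionCall" in part:
                lines.append(f"{author}: functionCall {part['functionCall'].get('name')}")
            elif "functionResponse" in part:
                lines.append(f"{author}: functionResponse {part['functionResponse'].get('name')}")
    return lines

# With the curl output saved to a file: session = json.load(open("session.json"))
sample = {"events": [
    {"author": "user", "content": {"parts": [{"text": "hi"}]}},
    {"author": "root_agent", "content": {"parts": [{"functionCall": {"name": "lookup"}}]}},
]}
print("\n".join(summarize_session(sample)))
```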
```shell
# Create a session, then send a message via /run_sse
SESSION=$(curl -s -X POST http://localhost:8000/apps/{app_name}/users/test/sessions \
  -H "Content-Type: application/json" -d '{}' | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")

curl -N -X POST http://localhost:8000/run_sse \
  -H "Content-Type: application/json" \
  -d "{\"app_name\":\"{app_name}\",\"user_id\":\"test\",\"session_id\":\"$SESSION\",
       \"new_message\":{\"role\":\"user\",\"parts\":[{\"text\":\"your message here\"}]},
       \"streaming\":false}"
```
```shell
# Trace for a specific event
curl -s http://localhost:8000/debug/trace/{event_id} | python3 -m json.tool

# All traces for a session
curl -s http://localhost:8000/debug/trace/session/{session_id} | python3 -m json.tool

# Health check
curl -s http://localhost:8000/health
```
Fetch trace data and inspect the `call_llm` spans. The LLM request/response are in span attributes:

```shell
curl -s http://localhost:8000/debug/trace/session/{session_id} | python3 -m json.tool
```

Look for spans with name `"call_llm"` and inspect their attributes, e.g. `gcp.vertex.agent.llm_request` (JSON string of the full request including contents, config, model).
| Attribute | Description |
|---|---|
| `gcp.vertex.agent.llm_request` | Full LLM request JSON (contents, config, model) |
| `gcp.vertex.agent.llm_response` | Full LLM response JSON |
| `gcp.vertex.agent.event_id` | Event ID — correlate with session events |
| `gen_ai.request.model` | Model name |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| `gen_ai.response.finish_reasons` | Stop reason |
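A small extraction sketch using the attribute keys from the table above. It assumes the trace JSON is a list of span dicts with `name` and `attributes` keys; span shapes may vary across ADK versions, so inspect the raw trace output first.

```python
import json

def llm_spans(trace: list[dict]) -> list[dict]:
    """Pick out call_llm spans and surface the interesting attributes."""
    out = []
    for span in trace:
        if span.get("name") != "call_llm":
            continue
        attrs = span.get("attributes", {})
        out.append({
            "model": attrs.get("gen_ai.request.model"),
            "input_tokens": attrs.get("gen_ai.usage.input_tokens"),
            "output_tokens": attrs.get("gen_ai.usage.output_tokens"),
            # llm_request is a JSON *string*, so decode it when present
            "request": json.loads(attrs["gcp.vertex.agent.llm_request"])
                       if "gcp.vertex.agent.llm_request" in attrs else None,
        })
    return out

sample = [{"name": "call_llm", "attributes": {
    "gen_ai.request.model": "gemini-2.5-flash",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
}}]
print(llm_spans(sample))
```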
Best for: quick testing, scripting, CI/CD, headless debugging.
```shell
adk run path/to/my_agent                  # interactive prompts
adk run -v path/to/my_agent               # verbose logging
adk run path/to/my_agent "query"          # run with query
adk run --jsonl path/to/my_agent "query"  # output structured JSONL (noise reduced)
```
- No `adk web` dev server needed.
- Pipe `--jsonl` output to standard tools like `jq`, `grep`, or `diff`.
- Use `--in_memory` for fast, side-effect-free testing (no database updates).

> [!TIP]
> Always read the sample's `README.md` first to understand expected inputs and behaviors!
Choosing the right testing strategy is crucial for efficiency and coverage:

- Use Unit Tests when testing isolated logic; they live in `tests/unittests/`.
- Use Sample Agents (Integration Testing) when testing end-to-end agent behavior; samples live in `contributing/agent_samples/` (refer to adk-sample-creator).

> [!IMPORTANT]
> AI Assistant Reminder: If you create a temporary sample agent for testing, you MUST delete it after verification is complete, unless the user explicitly asks to keep it.
For more options and flags, run:
```shell
adk run --help
```
```python
from google.adk.utils._debug_output import print_event

print_event(event, verbose=False)  # text responses only
print_event(event, verbose=True)   # tool calls, code execution, inline data
```

Location: `src/google/adk/utils/_debug_output.py`
```python
from google.adk import Agent, Runner
from google.adk.sessions import InMemorySessionService

agent = Agent(name="test", model="gemini-2.5-flash", instruction="...")
runner = Runner(app_name="test", agent=agent, session_service=InMemorySessionService())
session = runner.session_service.create_session_sync(app_name="test", user_id="u")

for event in runner.run(user_id="u", session_id=session.id, new_message="hello"):
    print(f"{event.author}: {event.content}")
    if event.actions.transfer_to_agent:
        print(f"  -> transfer to {event.actions.transfer_to_agent}")
    if event.output:
        print(f"  -> output: {event.output}")
```
Shared across both modes.
Set log level with `--log_level` (DEBUG, INFO, WARNING, ERROR, CRITICAL) or `-v` for DEBUG.
Logs write to `/tmp/agents_log/`. Tail the latest: `tail -F /tmp/agents_log/agent.latest.log`
Logger name: `google_adk`. Setup: `src/google/adk/cli/utils/logs.py`
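In your own scripts you can raise just the ADK logger to DEBUG using the `google_adk` logger name noted above, keeping the rest of the output quiet:

```python
import logging

# Quiet root logger, verbose ADK logger (name "google_adk" per the note above)
logging.basicConfig(level=logging.WARNING)
logging.getLogger("google_adk").setLevel(logging.DEBUG)

print(logging.getLogger("google_adk").level == logging.DEBUG)  # True
```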
| Env Variable | Effect |
|---|---|
| `ADK_CAPTURE_MESSAGE_CONTENT_IN_SPANS` | Include prompt/response in traces (default: true) |
| `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` | Enable prompt/response in OTEL spans |
| `GOOGLE_CLOUD_PROJECT` | Required for `--trace_to_cloud` |
Symptom: Agent with `output_schema` dumps JSON text instead of calling tools.
Cause: `output_schema` sets `response_schema` on the LLM config, activating controlled generation (JSON-only mode).
Check: Look for `response_mime_type: "application/json"` in the LLM request.
Location: `src/google/adk/flows/llm_flows/basic.py`
Symptom: Events from sub-agents don't appear in plugin callbacks or runner event stream.
Cause: Direct append_event calls inside components bypass the runner's event loop.
Check: Only the runner (runners.py) should call append_event. Components should yield events.
`NameError: name 'X' is not defined` at runtime

Symptom: `{"error": "name 'SomeClass' is not defined"}`
Cause: Class imported under TYPE_CHECKING but used at runtime (e.g., isinstance()).
Fix: Move import outside TYPE_CHECKING or use a local import.
Symptom: Sub-agent only sees its own input, not the parent's history.
Cause: Branch isolation — sub-agents on a branch only see events on that branch.
Fix: Write the sub-agent's description to prompt the parent to include context in delegation input.
Symptom: ValueError on agent construction.
Common causes:
- "All tools must be set via LlmAgent.tools." — Don't pass tools via `generate_content_config`
- "System instruction must be set via LlmAgent.instruction." — Don't set via `generate_content_config`
- "Response schema must be set via LlmAgent.output_schema." — Don't set via `generate_content_config`

Location: `src/google/adk/agents/llm_agent.py` — `validate_generate_content_config`

Symptom: `LlmCallsLimitExceededError: Max number of llm calls limit of N exceeded`
Cause: run_config.max_llm_calls limit reached.
Fix: Increase max_llm_calls in RunConfig, or investigate why the agent is looping.
Location: src/google/adk/agents/invocation_context.py
Symptom: Tool call fails but agent continues without expected result.
Cause: Errors are caught and returned as function response text. Set on_tool_error_callback to customize.
Check: Look for error text in function response events.
Symptom: adk web doesn't list the agent, or returns 404.
Cause: Agent directory must follow convention:
```
my_agent/
  __init__.py   # MUST contain: from . import agent
  agent.py      # MUST define: root_agent = Agent(...) OR app = App(...)
```
Symptom: Agent hangs or becomes very slow.
Cause: Sync tools run in a thread pool (max 4 workers). All workers busy → new tool calls block.
Fix: Make tools async if they do I/O.
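The sync-vs-async distinction can be sketched with plain Python (this is a generic asyncio pattern, not ADK-specific API): a blocking call ties up a worker for its full duration, while an async wrapper lets other work proceed concurrently.

```python
import asyncio
import time

def fetch_sync(url: str) -> str:
    """Blocking tool: occupies a thread-pool worker for the whole call."""
    time.sleep(0.1)  # stand-in for blocking network I/O
    return f"body of {url}"

async def fetch_async(url: str) -> str:
    # Stopgap: run the blocking call in a thread so the event loop stays free.
    # A real fix would use a natively async client (e.g. aiohttp, httpx).
    return await asyncio.to_thread(fetch_sync, url)

async def main():
    # Ten concurrent "requests" finish in roughly one sleep, not ten.
    results = await asyncio.gather(*(fetch_async(f"u{i}") for i in range(10)))
    print(len(results))  # 10

asyncio.run(main())
```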
- `STOP` — normal completion
- `MAX_TOKENS` — output truncated (increase `max_output_tokens`)
- `SAFETY` — blocked by safety filters
- `RECITATION` — blocked for recitation

```
User message
-> Runner.run_async()
   -> Runner._exec_with_plugin()        # persists events, runs plugins
      -> agent.run_async()              # yields events
         -> LlmAgent._run_async_impl()
            -> BaseLlmFlow.run_async()  # Execution flow
               -> _AutoFlow or _SingleFlow   # Flow implementations
                  -> call_llm           # LLM request + response
                  -> execute_tools      # tool dispatch (functions.py)
```
Before model call: PluginManager `run_before_model_callback()` → agent `canonical_before_model_callbacks`
After model call: PluginManager `run_after_model_callback()` → agent `canonical_after_model_callbacks`
Before/after tool call: PluginManager `run_before_tool_callback()` / `run_after_tool_callback()` → agent callbacks
| Area | File |
|---|---|
| Runner event loop | src/google/adk/runners.py |
| LLM request building | src/google/adk/flows/llm_flows/basic.py |
| Tool dispatch | src/google/adk/flows/llm_flows/functions.py |
| Multi-agent orchestration | src/google/adk/workflow/ |
| Content/context building | src/google/adk/flows/llm_flows/contents.py |
| Task support | src/google/adk/agents/llm/task/ |
| Agent config + validation | src/google/adk/agents/llm_agent.py |
| Event model | src/google/adk/events/event.py |
| Session services | src/google/adk/sessions/ |
| Invocation context | src/google/adk/agents/invocation_context.py |
| Web server + debug endpoints | src/google/adk/cli/adk_web_server.py |
| Debug output printer | src/google/adk/utils/_debug_output.py |
- Run with the `-v` flag; check `/tmp/agents_log/agent.latest.log`
- Inspect sessions (`adk web`) or print events (`adk run`)
- Watch for `transfer_to_agent`, `request_task`, `finish_task`, `escalate`
- Query `/debug/trace/session/{id}` for model/token usage
- Verify `__init__.py` imports and that `root_agent` or `app` is defined
- Check finish reasons: `STOP`, `MAX_TOKENS`, `SAFETY`