docs/hooks/reference.md
This document provides the technical specification for Gemini CLI hooks, including JSON schemas and API details.
## Hook I/O

Hooks communicate with the CLI over the standard streams: stdin for input (JSON), stdout for output (JSON), and stderr for logs and feedback.

Exit codes:

- `0`: Success. stdout is parsed as JSON. Preferred for all logic.
- `2`: System block. The action is blocked; stderr is used as the rejection reason.
- Other: Warning. A non-fatal failure occurred; the CLI continues with a warning.

Write nothing to stdout other than the final JSON.

## Configuration

Hooks are defined in settings.json within the hooks object. Each event (for
example, BeforeTool) contains an array of hook definitions.
| Field | Type | Required | Description |
|---|---|---|---|
| `matcher` | string | No | A regex (for tools) or exact string (for lifecycle events) to filter when the hook runs. |
| `sequential` | boolean | No | If `true`, hooks in this group run one after another. If `false`, they run in parallel. |
| `hooks` | array | Yes | An array of hook configurations. |
Each item in the `hooks` array has these fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | Yes | The execution engine. Currently only `"command"` is supported. |
| `command` | string | Yes* | The shell command to execute. (*Required when `type` is `"command"`.) |
| `name` | string | No | A friendly name for identifying the hook in logs and CLI commands. |
| `timeout` | number | No | Execution timeout in milliseconds (default: 60000). |
| `description` | string | No | A brief explanation of the hook's purpose. |
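As a concrete sketch, a `hooks` block in settings.json combining the fields above might look like this (the hook name and script path are hypothetical):

```json
{
  "hooks": {
    "BeforeTool": [
      {
        "matcher": "run_shell_command",
        "sequential": false,
        "hooks": [
          {
            "type": "command",
            "name": "shell-guard",
            "command": "python3 .gemini/hooks/shell_guard.py",
            "timeout": 5000,
            "description": "Vets shell commands before they run."
          }
        ]
      }
    ]
  }
}
```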
## Common input fields

All hooks receive these common fields via stdin:

```
{
  "session_id": string,       // Unique ID for the current session
  "transcript_path": string,  // Absolute path to session transcript JSON
  "cwd": string,              // Current working directory
  "hook_event_name": string,  // The firing event (for example "BeforeTool")
  "timestamp": string         // ISO 8601 execution time
}
```
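Since every event shares this envelope, a hook script can start by parsing it from stdin. A minimal sketch in Python (a real hook would pass `sys.stdin` rather than the simulated stream used here):

```python
import io
import json

def read_payload(stream):
    """Parse the common hook input fields from a JSON stream (normally sys.stdin)."""
    payload = json.load(stream)
    # Every event carries these fields; event-specific fields arrive alongside them.
    return payload["hook_event_name"], payload["cwd"], payload

# Simulated stdin for illustration; in a real hook: read_payload(sys.stdin).
sample = io.StringIO(json.dumps({
    "session_id": "abc123",
    "transcript_path": "/tmp/transcript.json",
    "cwd": "/home/user/project",
    "hook_event_name": "BeforeTool",
    "timestamp": "2025-01-01T00:00:00Z",
}))
event, cwd, _ = read_payload(sample)
print(event, cwd)  # -> BeforeTool /home/user/project
```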
## Common output fields

Most hooks support these fields in their stdout JSON:

| Field | Type | Description |
|---|---|---|
| `systemMessage` | string | Displayed immediately to the user in the terminal. |
| `suppressOutput` | boolean | If `true`, hides internal hook metadata from logs/telemetry. |
| `continue` | boolean | If `false`, stops the entire agent loop immediately. |
| `stopReason` | string | Displayed to the user when `continue` is `false`. |
| `decision` | string | `"allow"` or `"deny"` (alias `"block"`). Specific impact depends on the event. |
| `reason` | string | The feedback/error message provided when a decision is `"deny"`. |
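To make the reply shape concrete, here is a hedged sketch of building the stdout JSON from these common fields (the policy itself is a placeholder):

```python
import json

def make_reply(allow: bool, reason: str = "") -> str:
    """Build the JSON a hook prints to stdout, using the common output fields."""
    reply = {}
    if not allow:
        reply["decision"] = "deny"   # alias: "block"
        reply["reason"] = reason     # feedback/error text for the deny
    return json.dumps(reply)

# An "allow" is simply a reply without a deny decision.
print(make_reply(True))                                  # -> {}
print(make_reply(False, "Writes to /etc are blocked."))
```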
## Tool matchers

For BeforeTool and AfterTool events, the matcher field in your settings is
compared against the name of the tool being executed (for example, `read_file`,
`run_shell_command`). See the Tools Reference for a full list of available tool
names. MCP-based tools are named `mcp_<server_name>_<tool_name>`. For tool
events the matcher is a regex, so `matcher: "read_.*"` matches all file-reading
tools.

## BeforeTool

Fires before a tool is invoked. Used for argument validation, security checks, and parameter rewriting.
Input fields:

- `tool_name` (string): The name of the tool being called.
- `tool_input` (object): The raw arguments generated by the model.
- `mcp_context` (object): Optional metadata for MCP-based tools.
- `original_request_name` (string): The original name of the tool being called, if this is a tail tool call.

Output fields:

- `decision`: Set to `"deny"` (or `"block"`) to prevent the tool from executing.
- `reason`: Required if denied. This text is sent to the agent as a tool error, allowing it to respond or retry.
- `hookSpecificOutput.tool_input`: An object that merges with and overrides the model's arguments before execution.
- `continue`: Set to `false` to kill the entire agent loop immediately.

On exit code `2`, stderr is used as the reason sent to the agent. The turn continues.

## AfterTool

Fires after a tool executes. Used for result auditing, context injection, or hiding sensitive output from the agent.
Input fields:

- `tool_name` (string)
- `tool_input` (object): The original arguments.
- `tool_response` (object): The result containing `llmContent`, `returnDisplay`, and an optional `error`.
- `mcp_context` (object)
- `original_request_name` (string): The original name of the tool being called, if this is a tail tool call.

Output fields:

- `decision`: Set to `"deny"` to hide the real tool output from the agent.
- `reason`: Required if denied. This text replaces the tool result sent back to the model.
- `hookSpecificOutput.additionalContext`: Text that is appended to the tool result for the agent.
- `hookSpecificOutput.tailToolCallRequest` (`{ name: string, args: object }`): A request to execute another tool immediately after this one. The result of this "tail call" will replace the original tool's response. Ideal for programmatic tool routing.
- `continue`: Set to `false` to kill the entire agent loop immediately.

On exit code `2`, stderr is used as the replacement content sent to the agent. The turn continues.

## BeforeAgent

Fires after a user submits a prompt, but before the agent begins planning. Used for prompt validation or injecting dynamic context.
Input fields:

- `prompt` (string): The original text submitted by the user.

Output fields:

- `hookSpecificOutput.additionalContext`: Text that is appended to the prompt for this turn only.
- `decision`: Set to `"deny"` to block the turn and discard the user's message (it will not appear in history).
- `continue`: Set to `false` to block the turn but save the message to history.
- `reason`: Required if denied or stopped.

Exit code `2` is treated as `decision: "deny"`.

## AfterAgent

Fires once per turn after the model generates its final response. The primary use case is response validation and automatic retries.
Input fields:

- `prompt` (string): The user's original request.
- `prompt_response` (string): The final text generated by the agent.
- `stop_hook_active` (boolean): Indicates whether this hook is already running as part of a retry sequence.

Output fields:

- `decision`: Set to `"deny"` to reject the response and force a retry.
- `reason`: Required if denied. This text is sent to the agent as a new prompt to request a correction.
- `continue`: Set to `false` to stop the session without retrying.
- `hookSpecificOutput.clearContext`: If `true`, clears conversation history (LLM memory) while preserving the UI display.

On exit code `2`, stderr is used as the feedback prompt.

## BeforeModel

Fires before sending a request to the LLM. Operates on a stable, SDK-agnostic request format.
Input fields:

- `llm_request` (object): Contains `model`, `messages`, and `config` (generation params).

Output fields:

- `hookSpecificOutput.llm_request`: An object that overrides parts of the outgoing request (for example, changing models or temperature).
- `hookSpecificOutput.llm_response`: A synthetic response object. If provided, the CLI skips the LLM call entirely and uses this as the response.
- `decision`: Set to `"deny"` to block the request and abort the turn.

On exit code `2`, stderr is used as the error message.

## BeforeToolSelection

Fires before the LLM decides which tools to call. Used to filter the available toolset or force specific tool modes.
Input fields:

- `llm_request` (object): Same format as BeforeModel.

Output fields:

- `hookSpecificOutput.toolConfig.mode` (`"AUTO" | "ANY" | "NONE"`): `"NONE"` disables all tools (wins over other hooks); `"ANY"` forces at least one tool call.
- `hookSpecificOutput.toolConfig.allowedFunctionNames` (string[]): Whitelist of tool names.

See also the common output fields `decision`, `continue`, or `systemMessage`.

## AfterModel

Fires immediately after an LLM response chunk is received. Used for real-time redaction or PII filtering.
Input fields:

- `llm_request` (object): The original request.
- `llm_response` (object): The model's response (or a single chunk during streaming).

Output fields:

- `hookSpecificOutput.llm_response`: An object that replaces the model's response chunk.
- `decision`: Set to `"deny"` to discard the response chunk and block the turn.
- `continue`: Set to `false` to kill the entire agent loop immediately.

On exit code `2`, stderr is used as the error message.

## SessionStart

Fires on application startup, resuming a session, or after a /clear command.
Used for loading initial context.

Input fields:

- `source` (`"startup" | "resume" | "clear"`)

Output fields:

- `hookSpecificOutput.additionalContext` (string)
- `systemMessage`: Shown at the start of the session.

The `continue` and `decision` fields are ignored. Startup is never blocked.

## SessionEnd

Fires when the CLI exits or a session is cleared. Used for cleanup or final telemetry.
Input fields:

- `reason` (`"exit" | "clear" | "logout" | "prompt_input_exit" | "other"`)

Output fields:

- `systemMessage`: Displayed to the user during shutdown.

Blocking fields (`continue`, `decision`) are ignored.

## Notification

Fires when the CLI emits a system alert (for example, Tool Permissions). Used for external logging or cross-platform alerts.
Input fields:

- `notification_type` (`"ToolPermission"`)
- `message`: Summary of the alert.
- `details`: JSON object with alert-specific metadata (for example, tool name, file path).

Output fields:

- `systemMessage`: Displayed alongside the system alert.

## PreCompress

Fires before the CLI summarizes history to save tokens. Used for logging or state saving.
Input fields:

- `trigger` (`"auto" | "manual"`)

Output fields:

- `systemMessage`: Displayed to the user before compression.

## Stable schemas

Gemini CLI uses these structures to ensure hooks don't break across SDK updates.
LLMRequest:

```
{
  "model": string,
  "messages": Array<{
    "role": "user" | "model" | "system",
    "content": string // Non-text parts are filtered out for hooks
  }>,
  "config": { "temperature": number, ... },
  "toolConfig": { "mode": string, "allowedFunctionNames": string[] }
}
```
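For example, a BeforeModel hook could cap sampling temperature by returning a partial `llm_request` override built from this shape. A sketch (the 0.3 cap is an arbitrary assumption):

```python
import json

def clamp_temperature(llm_request: dict, max_temp: float = 0.3) -> dict:
    """Return a BeforeModel reply lowering temperature when it exceeds max_temp."""
    temp = llm_request.get("config", {}).get("temperature", 0.0)
    if temp <= max_temp:
        return {}  # empty reply: leave the request untouched
    # Only the overridden parts need to appear in the reply.
    return {"hookSpecificOutput": {"llm_request": {"config": {"temperature": max_temp}}}}

print(json.dumps(clamp_temperature({"config": {"temperature": 0.9}})))
```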
LLMResponse:

```
{
  "candidates": Array<{
    "content": { "role": "model", "parts": string[] },
    "finishReason": string
  }>,
  "usageMetadata": { "totalTokenCount": number }
}
```
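As an illustration of this response shape, an AfterModel hook could walk `candidates[].content.parts` and redact matches, returning a replacement chunk only when something changed. A sketch (the e-mail regex is an assumption):

```python
import copy
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_response(llm_response: dict) -> dict:
    """AfterModel sketch: mask e-mail addresses in each text part.

    Returns {} when nothing matched, otherwise a reply whose
    hookSpecificOutput.llm_response replaces the original chunk.
    """
    redacted = copy.deepcopy(llm_response)
    changed = False
    for candidate in redacted.get("candidates", []):
        parts = candidate.get("content", {}).get("parts", [])
        for i, part in enumerate(parts):
            masked = EMAIL.sub("[redacted-email]", part)
            if masked != part:
                parts[i] = masked
                changed = True
    return {"hookSpecificOutput": {"llm_response": redacted}} if changed else {}
```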