showcase/shell-docs/src/content/docs/shared-state/streaming.mdx
By default, agent state only updates between LangGraph node transitions, so a long-running tool call (writing a full document, drafting an email) appears to the UI as one big burst at the end. For agent-native apps, that feels broken: users expect to watch the output materialise.
State streaming forwards the value of a specific tool argument straight into an agent state key while that argument is still being generated. The UI, subscribed via `useAgent`, re-renders on every token.
Use state streaming whenever a tool's output is long-form text or a growing structured value and you want the user to see it assemble in real time. Common shapes: a document being written, an email being drafted, or a structured value that grows item by item.
Without streaming, the user stares at a spinner. With streaming, they see the answer grow token-by-token.
The canonical pattern for prebuilt agents is `StateStreamingMiddleware`. It takes one or more `StateItem(...)` entries, each mapping a tool argument to a state key. When the LLM streams that argument, CopilotKit writes every partial value into shared state before the tool even finishes executing.
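Wiring this up might look like the sketch below. Only the `StateItem` mapping of `state_key`, `tool`, and `tool_argument` comes from this page; the import path, the middleware's constructor shape, and the tool/argument names (`write_document`, `content`) are illustrative assumptions, so check them against your CopilotKit version.

```python
# Hedged sketch: import path, constructor shape, and the tool / argument
# names are assumptions, not verified against a specific release.
from copilotkit import StateStreamingMiddleware, StateItem  # assumed path

middleware = StateStreamingMiddleware(
    StateItem(
        state_key="document",     # key on AgentState (document: str in this demo)
        tool="write_document",    # hypothetical LLM-facing tool name
        tool_argument="content",  # hypothetical argument whose partial values stream
    ),
)
```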
A few things to note:

- `state_key` must exist on your `AgentState` schema (`document: str` in this demo).
- `tool` and `tool_argument` name the exact LLM-facing tool and argument to forward.
- The UI side is identical to any other shared-state subscription: `useAgent` with `OnStateChanged` gives you a reactive `agent.state`. Add `OnRunStatusChanged` if you want a "LIVE" / "done" indicator.
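To make "writes every partial value into shared state" concrete, here is a minimal, library-free sketch of the mechanic. Every name in it is illustrative, not a CopilotKit API: the handler stands in for the middleware, and the snapshot list stands in for UI re-renders.

```python
# Library-free sketch of state streaming: as the LLM emits tokens for a
# mapped tool argument, each partial value is written into shared agent
# state immediately, instead of once when the tool call completes.

state = {"document": ""}  # mirrors an AgentState with `document: str`
ui_snapshots = []         # stands in for UI re-renders via useAgent

def on_partial_tool_argument(partial_value: str) -> None:
    """Called for every streamed chunk of the mapped tool argument."""
    state["document"] = partial_value       # forward straight into the state key
    ui_snapshots.append(state["document"])  # the UI re-renders on each write

# Simulate the LLM streaming the argument token by token.
tokens = ["Dear", " team", ",", " the", " launch", " is", " on."]
buffer = ""
for token in tokens:
    buffer += token
    on_partial_tool_argument(buffer)

print(state["document"])  # "Dear team, the launch is on."
print(len(ui_snapshots))  # 7 — one UI update per token
```

Without the per-chunk handler, `state["document"]` would only be written once, after the loop: exactly the end-of-run burst described at the top of this page.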
From there, `agent.state.document` is just a string that grows on every token, and `agent.isRunning` tells you whether to show a streaming indicator.
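A minimal UI component might look like the sketch below. Treat it as a shape, not a reference: the package path and the exact return value of `useAgent` are assumptions here; only `agent.state.document` and `agent.isRunning` come from the text above.

```tsx
// Hedged sketch: import path and hook signature are assumptions.
import { useAgent } from "@copilotkit/react-core"; // assumed package

export function DocumentView() {
  // Reactive agent: state updates on every forwarded token.
  const { agent } = useAgent();

  return (
    <div>
      {agent.isRunning && <span>LIVE</span>}
      <pre>{agent.state.document}</pre>
    </div>
  );
}
```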