
# Predictive state updates


{/* <IframeSwitcher id="predictive-state-updates-example" exampleUrl="https://feature-viewer.copilotkit.ai/adk-middleware/feature/predictive_state_updates?sidebar=false&chatDefaultOpen=false" codeUrl="https://feature-viewer.copilotkit.ai/adk-middleware/feature/predictive_state_updates?view=code&sidebar=false&codeLayout=tabs" exampleLabel="Demo" codeLabel="Code" height="700px" />

<Callout type="info"> This example demonstrates predictive state updates in the [CopilotKit Feature Viewer](https://feature-viewer.copilotkit.ai/adk-middleware/feature/predictive_state_updates). </Callout> */}

## What is this?

An ADK agent's state updates discontinuously: it changes only when the agent explicitly modifies it. Yet even a single operation often takes many seconds to run and contains sub-steps of interest to the user.

Agent-native applications reflect to the end-user what the agent is doing as continuously as possible.

CopilotKit enables this through its concept of predictive state updates.

## When should I use this?

Use predictive state updates when you want to:

- Keep users engaged by avoiding long loading indicators
- Build trust by demonstrating what the agent is working on
- Enable agent steering, allowing users to course-correct the agent if needed

## Important note

When your agent finishes executing, its final state becomes the single source of truth. While intermediate state updates are great for real-time feedback, any changes you want to persist must be explicitly included in the final state. Otherwise, they will be overwritten when the operation completes.
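The overwrite behavior can be sketched in a few lines of plain Python (the dicts below are illustrative, not part of the ADK API): intermediate snapshots are shown to the user while the agent runs, but whatever the agent returns as its final state replaces them wholesale.

```python
# Illustrative only: these dicts stand in for the state payloads the
# middleware emits; they are not ADK API objects.
intermediate_updates = [
    {"observed_steps": ["step 1"]},
    {"observed_steps": ["step 1", "step 2"]},
]

# What the agent returns when it finishes:
final_state = {"observed_steps": ["step 1", "step 2", "step 3"]}

state = {}
for update in intermediate_updates:
    state.update(update)  # transient: rendered while the agent is running

state = final_state  # on completion, the final state overwrites everything
```

Any intermediate value worth keeping must therefore also appear in `final_state`.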

## Implementation

<Steps> <Step> ### Define the state We'll be defining an `observed_steps` field in the state, which will be updated as the agent performs different steps of a task.
```python title="agent.py"
from typing import Dict, List
from fastapi import FastAPI
from pydantic import BaseModel
from ag_ui_adk import ADKAgent, add_adk_fastapi_endpoint
from google.adk.agents import LlmAgent
from google.adk.tools import ToolContext


class AgentState(BaseModel):
    """State for the agent."""
    observed_steps: List[str] = []
```
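Since `AgentState` is a plain Pydantic model, you can sanity-check its defaults and serialization before wiring it into the agent. This sketch assumes Pydantic v2 (where `model_dump` replaces v1's `.dict()`):

```python
from typing import List
from pydantic import BaseModel


class AgentState(BaseModel):
    """State for the agent."""
    observed_steps: List[str] = []


# A fresh state starts with no observed steps (Pydantic copies the
# default list per instance, so instances don't share it).
state = AgentState()
print(state.observed_steps)  # → []

# Populated state serializes to a plain dict, which is the shape the
# frontend receives in state updates.
state = AgentState(observed_steps=["Parsed the input", "Fetched data"])
print(state.model_dump())
```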
</Step> <Step> ### Emit the intermediate state
<TailoredContent
    id="state-emission"
    header={
        <div>
            <p className="text-xl font-semibold">How would you like to emit state updates?</p>
            <p className="text-base">
                You can either manually emit state updates or configure specific tool calls to emit updates.
            </p>
        </div>
    }
>
    <TailoredContentOption
        id="tool-emission"
        title="Tool-Based Predictive State Updates"
        description="Configure specific tool calls to automatically emit intermediate state updates."
        icon={<FaWrench />}
    >
        For long-running tasks, you can create a tool that updates state and emits it to the frontend. In this example, we'll create a step progress tool that the LLM calls to report its progress.

        ```python title="agent.py"
        import uvicorn
        from typing import Dict, List
        from fastapi import FastAPI
        from pydantic import BaseModel
        from ag_ui_adk import ADKAgent, add_adk_fastapi_endpoint
        from google.adk.agents import LlmAgent
        from google.adk.tools import ToolContext


        class AgentState(BaseModel):
            """State for the agent."""
            observed_steps: List[str] = []


        def step_progress(tool_context: ToolContext, steps: List[str]) -> Dict[str, str]:
            """Reports the current progress steps.

            Args:
                tool_context (ToolContext): The tool context for accessing state.
                steps (List[str]): The list of steps completed so far.

            Returns:
                Dict[str, str]: A dictionary indicating the progress was received.
            """
            tool_context.state["observed_steps"] = steps
            return {"status": "success", "message": "Progress received."}


        agent = LlmAgent(
            name="my_agent",
            model="gemini-2.5-flash",
            instruction="""
            You are a task performer. When given a task, break it down into steps
            and report your progress using the step_progress tool after completing each step.
            """,
            tools=[step_progress],
        )

        adk_agent = ADKAgent(
            adk_agent=agent,
            app_name="demo_app",
            user_id="demo_user",
            session_timeout_seconds=3600,
            use_in_memory_services=True,
        )

        app = FastAPI()
        add_adk_fastapi_endpoint(app, adk_agent, path="/")

        if __name__ == "__main__":
            uvicorn.run(app, host="0.0.0.0", port=8000)
        ```

        <Callout>
          With this configuration, the agent emits state updates each time it calls the `step_progress` tool, giving the frontend real-time visibility into progress.
        </Callout>
    </TailoredContentOption>
</TailoredContent>
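Because `step_progress` only touches `tool_context.state`, you can sanity-check it without running the agent at all. The `FakeToolContext` below is a hypothetical stand-in for `google.adk.tools.ToolContext`, assuming only that the real context exposes a dict-like `state`:

```python
from typing import Dict, List


class FakeToolContext:
    """Hypothetical stand-in for google.adk.tools.ToolContext.

    Assumes only that the real context exposes a dict-like `state`.
    """

    def __init__(self) -> None:
        self.state: Dict[str, List[str]] = {}


def step_progress(tool_context, steps: List[str]) -> Dict[str, str]:
    """Same body as the tool in agent.py above."""
    tool_context.state["observed_steps"] = steps
    return {"status": "success", "message": "Progress received."}


ctx = FakeToolContext()
result = step_progress(ctx, ["Parsed the request", "Drafted the outline"])

# The tool writes the steps into state (which is what gets emitted to
# the frontend) and returns a simple acknowledgement for the LLM.
print(ctx.state["observed_steps"])
print(result["status"])
```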
</Step> <Step> ### Observe the predictions These predictions will be emitted as the agent runs, allowing you to track its progress before the final state is determined.
```tsx title="ui/app/page.tsx"
"use client";


// ...
type AgentState = {
    observed_steps: string[];
};

const YourMainContent = () => {
    // Get access to both predicted and final states
    const { agent } = useAgent({ agentId: "my_agent" });

    // Add a state renderer to observe predictions
    useAgent({
        agentId: "my_agent",
        render: ({ state }) => {
            if (!state.observed_steps?.length) return null;
            return (
                <div>
                    <h3>Current Progress:</h3>
                    <ul>
                        {state.observed_steps.map((step, i) => (
                            <li key={i}>{step}</li>
                        ))}
                    </ul>
                </div>
            );
        },
    });

    return (
        <div>
            <h1>Agent Progress</h1>
            {(agent.state?.observed_steps?.length ?? 0) > 0 && (
                <div>
                    <h3>Final Steps:</h3>
                    <ul>
                        {agent.state.observed_steps.map((step, i) => (
                            <li key={i}>{step}</li>
                        ))}
                    </ul>
                </div>
            )}
        </div>
    )
}
```

<Callout type="warn" title="Important">
  The `agentId` parameter must exactly match the agent name you defined in your CopilotRuntime configuration (e.g., `my_agent` from the quickstart).
</Callout>
</Step> <Step> ### Give it a try! Now you'll notice that the state predictions are emitted as the agent makes progress, giving you insight into its work before the final state is determined. You can apply this pattern to any long-running task in your agent. </Step> </Steps>