<video
src="https://cdn.copilotkit.ai/docs/copilotkit/images/coagents/intermediate-state-render.mp4"
className="rounded-lg shadow-xl"
loop
playsInline
controls
autoPlay
muted
/>
<Callout>
This video shows the result of `npx copilotkit@latest init` with the
implementation section applied to it!
</Callout>
A CrewAI Flow's state updates discontinuously: only across function transitions in the flow. But even a single function in a flow often takes many seconds to run and contains sub-steps of interest to the user.
Agent-native applications reflect to the end user what the agent is doing as continuously as possible.
CopilotKit enables this through its concept of predictive state updates.
You can use this whenever you want to give the user feedback about what your agent is doing before a function finishes running.
<Callout>
When a function in your CrewAI Flow finishes executing, its returned state becomes the single source of truth. Intermediate state updates are great for real-time feedback, but any change you want to persist must be explicitly included in the function's final returned state; otherwise it will be overwritten when the function completes.
</Callout>
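This persistence rule can be sketched in plain Python. Note that `flow_function`, the `emitted` list, and the dict-based state below are illustrative stand-ins, not CopilotKit or CrewAI APIs:

```python
def flow_function(state):
    """Illustrative stand-in for a flow step (not a real CrewAI Flow)."""
    emitted = []  # stands in for copilotkit_emit_state calls

    # Intermediate prediction: the UI can render this immediately...
    emitted.append({**state, "observed_steps": ["Analyzing input data..."]})

    # ...but only the returned state persists. Because observed_steps is not
    # carried into the return value, the predicted update is overwritten.
    final_state = {**state, "result": "done"}
    return final_state, emitted

persisted, predictions = flow_function({"observed_steps": []})
```

The prediction shows `observed_steps` mid-run, but `persisted["observed_steps"]` is still empty afterward, because the function's return value did not include the change.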
<Tabs groupId="language_crewai-flows_agent" items={["Python"]} persist>
<Tab value="Python">
```python title="agent.py"
from copilotkit.crewai import CopilotKitState

class AgentState(CopilotKitState):
    observed_steps: list[str]  # List of completed steps
```
</Tab>
</Tabs>
</Step>
<Step>
### Emit the intermediate state
<TailoredContent
className="step"
id="state-emission"
header={
<div>
<p className="text-xl font-semibold">How would you like to emit state updates?</p>
<p className="text-base">
You can either manually emit state updates or configure specific tool calls to emit updates.
</p>
</div>
}
>
<TailoredContentOption
id="manual-emission"
title="Manual Predictive State Updates"
description="Manually emit state updates for maximum control over when updates occur."
icon={<FaArrowUp />}
>
For long-running tasks, you can emit state updates progressively as predictions of the final state. In this example, we simulate a long-running task by executing a series of steps with a one-second delay between each update.
<Tabs groupId="language_crewai-flows_agent" items={['Python']} default="Python" persist>
<Tab value="Python">
```python title="agent.py"
import asyncio

from copilotkit.crewai import copilotkit_emit_state # [!code highlight]
from crewai.flow.flow import Flow, start

class SampleAgentFlow(Flow):
    # ...
    @start()
    async def start_flow(self):
        # ...
        # Simulate executing steps one by one
        steps = [
            "Analyzing input data...",
            "Identifying key patterns...",
            "Generating recommendations...",
            "Formatting final output..."
        ]
        for step in steps:
            self.state["observed_steps"] = self.state.get("observed_steps", []) + [step]
            await copilotkit_emit_state(self.state) # [!code highlight]
            await asyncio.sleep(1)
        # ...
```
</Tab>
</Tabs>
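As a sanity check, the append-then-emit loop can be modeled in plain Python (no CopilotKit or CrewAI APIs; `emit_progressively` is an illustrative stand-in). Each emitted snapshot is a growing prefix of the final list, which is what lets the UI render progress incrementally:

```python
def emit_progressively(steps):
    """Model of the loop above: append a step, then snapshot the state."""
    emitted = []  # stands in for copilotkit_emit_state calls
    state = {"observed_steps": []}
    for step in steps:
        state = {"observed_steps": state["observed_steps"] + [step]}
        emitted.append(state)  # one snapshot per completed step
    return emitted

snapshots = emit_progressively(["Analyzing input data...", "Formatting final output..."])
```

Building a new dict per iteration (rather than mutating in place) keeps earlier snapshots intact, mirroring how each emitted state is an independent prediction.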
</TailoredContentOption>
<TailoredContentOption
id="tool-emission"
title="Tool-Based Predictive State Updates"
description="Configure specific tool calls to automatically emit intermediate state updates."
icon={<FaWrench />}
>
For long-running tasks, you can configure CopilotKit to automatically predict state updates when specific tool calls are made. In this example, we'll configure CopilotKit to predict state updates whenever the LLM calls the `StepProgressTool`.
<Tabs groupId="language_crewai-flows_agent" items={['Python']} default="Python" persist>
<Tab value="Python">
```python
from copilotkit.crewai import copilotkit_predict_state, copilotkit_stream
from crewai.flow.flow import Flow, start
from litellm import completion

class SampleAgentFlow(Flow):
    @start()
    async def start_flow(self):
        # Tell CopilotKit to treat step progress tool calls as predictive of the final state
        await copilotkit_predict_state({
            "observed_steps": {
                "tool": "StepProgressTool",
                "tool_argument": "steps"
            }
        })

        step_progress_tool = {
            "type": "function",
            "function": {
                "name": "StepProgressTool",
                "description": "Records progress by updating the steps array",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "steps": {
                            "type": "array",
                            "items": {"type": "string"},
                            "description": "Array of completed steps"
                        }
                    },
                    "required": ["steps"]
                }
            }
        }

        # Provide the tool to the LLM and call the model
        response = await copilotkit_stream(
            completion(
                model="openai/gpt-5.4",
                messages=[
                    {
                        "role": "system",
                        "content": "You are a task performer. Pretend to do the tasks you are given, and report the steps using StepProgressTool." # [!code highlight]
                    },
                    *self.state.get("messages", [])
                ],
                tools=[step_progress_tool],
                stream=True
            )
        )
```
</Tab>
</Tabs>
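To make the mapping concrete, here is a plain-Python model of how a predict-state config projects a tool call's arguments onto state fields. `predict_from_tool_call` and `PREDICT_CONFIG` are illustrative stand-ins for what CopilotKit does internally, not library APIs:

```python
import json

# Same shape as the config passed to copilotkit_predict_state
PREDICT_CONFIG = {
    "observed_steps": {"tool": "StepProgressTool", "tool_argument": "steps"},
}

def predict_from_tool_call(state, tool_name, arguments_json):
    """Project a tool call's matching argument onto the predicted state."""
    args = json.loads(arguments_json)
    predicted = dict(state)
    for state_key, rule in PREDICT_CONFIG.items():
        if rule["tool"] == tool_name and rule["tool_argument"] in args:
            predicted[state_key] = args[rule["tool_argument"]]
    return predicted

state = predict_from_tool_call(
    {"observed_steps": []},
    "StepProgressTool",
    json.dumps({"steps": ["Analyzing input data...", "Identifying key patterns..."]}),
)
```

In other words, whenever the LLM streams a `StepProgressTool` call, the value of its `steps` argument is surfaced as the predicted value of `observed_steps`, before the function's final state is determined.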
</TailoredContentOption>
</TailoredContent>
</Step>
<Step>
### Observe the predictions
These predictions will be emitted as the agent runs, allowing you to track its progress before the final state is determined.
```tsx title="ui/app/page.tsx"
// ...
const YourMainContent = () => {
  // Get access to both predicted and final states
  const { agent } = useAgent({ agentId: "sample_agent" });

  // Add a state renderer to observe predictions
  useAgent({
    agentId: "sample_agent",
    render: ({ state }) => {
      if (!state.observed_steps?.length) return null;
      return (
        <div>
          <h3>Current Progress:</h3>
          <ul>
            {state.observed_steps.map((step, i) => (
              <li key={i}>{step}</li>
            ))}
          </ul>
        </div>
      );
    },
  });

  return (
    <div>
      <h1>Agent Progress</h1>
      {agent.state?.observed_steps?.length > 0 && (
        <div>
          <h3>Final Steps:</h3>
          <ul>
            {agent.state.observed_steps.map((step, i) => (
              <li key={i}>{step}</li>
            ))}
          </ul>
        </div>
      )}
    </div>
  );
};
```
</Step>
<Step>
### Give it a try!
Now you'll notice that the state predictions are emitted as the agent makes progress, giving you insight into its work before the final state is determined.
You can apply this pattern to any long-running task in your agent.
</Step>