Not all state properties are relevant for frontend-backend sharing. This guide shows how to ensure only the right portion of state is communicated back and forth.
This guide is based on LangGraph's Input/Output Schema feature.
<Callout type="info">
This pattern uses **LangGraph's input/output schema split**, which applies when you're building a custom LangGraph graph. `createDeepAgent` uses middleware with a single state schema and doesn't expose separate input/output schemas. The Python example below shows the LangGraph custom-graph approach. For the Deep Agents TypeScript equivalent of "shared state between agent and frontend", see the [shared state guides](/deepagents/shared-state/in-app-agent-read).
</Callout>

Depending on your implementation, some state properties are meant to be processed internally, while others are how the UI communicates user input. In addition, some state properties contain a lot of data: syncing them back and forth between the agent and the UI can be costly, while providing no practical benefit.
```python
class AgentState(CopilotKitState):
    question: str
    answer: str
    resources: list[str]
```
```python title="agent.py"
from typing import List

from copilotkit import CopilotKitState
from langchain_core.messages import SystemMessage
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

# Divide the state into 3 parts

# Input schema for inputs you are willing to accept from the frontend
class InputState(CopilotKitState):
    question: str

# Output schema for output you are willing to pass to the frontend
class OutputState(CopilotKitState):
    answer: str

# The full schema, including the inputs, outputs and internal state ("resources" in our case)
class OverallState(InputState, OutputState):
    resources: List[str]

async def answer_node(state: OverallState, config: RunnableConfig):
    """
    Standard chat node, meant to answer general questions.
    """
    model = ChatOpenAI()

    # Add the input question to the system prompt so it's passed to the LLM
    system_message = SystemMessage(
        content=f"You are a helpful assistant. Answer the question: {state.get('question')}"
    )

    response = await model.ainvoke([
        system_message,
        *state["messages"],
    ], config)

    # ...add the rest of the agent implementation

    # Extract the answer, which will be written to the state below
    answer = response.content

    return {
        "messages": response,
        # Include the answer in the returned state
        "answer": answer,
    }

# Finally, before compiling the graph, we pass all 3 state schemas
builder = StateGraph(OverallState, input=InputState, output=OutputState)

# Add the nodes and edges, then compile the graph
builder.add_node("answer_node", answer_node)
builder.add_edge(START, "answer_node")
builder.add_edge("answer_node", END)
graph = builder.compile()
```
On the frontend, only the output-schema properties appear in the agent's state:

```tsx
import { useAgent } from "@copilotkit/react-core/v2"; // [!code highlight]

const { agent } = useAgent({
  agentId: "sample_agent",
});

const answer = agent.state.answer as string;
console.log(answer); // You can expect "answer" to update, while the other properties are never returned from the agent
```