This is a starter template for building AI-powered canvas applications using PydanticAI and CopilotKit. It provides a modern Next.js application with an integrated PydanticAI agent that manages a visual canvas of interactive cards with real-time AI synchronization.
https://github.com/user-attachments/assets/2a4ec718-b83b-4968-9cbe-7c1fe082e958
Note: This repository ignores lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, bun.lockb) to avoid conflicts between different package managers. Each developer should generate their own lock file using their preferred package manager. Once you have generated one, remove its entry from .gitignore so it can be committed.
Install the frontend dependencies:

```sh
# Using pnpm (recommended)
pnpm install

# Using npm
npm install

# Using yarn
yarn install

# Using bun
bun install
```
Then install the agent's Python dependencies:

```sh
# Using pnpm
pnpm install:agent

# Using npm
npm run install:agent

# Using yarn
yarn install:agent

# Using bun
bun run install:agent
```
Note: This will automatically set up a .venv (virtual environment) inside the agent directory. To activate the virtual environment manually, you can run:

```sh
source agent/.venv/bin/activate
```
Set your OpenAI API key:

```sh
export OPENAI_API_KEY="your-openai-api-key-here"
```
Start the development servers:

```sh
# Using pnpm
pnpm dev

# Using npm
npm run dev

# Using yarn
yarn dev

# Using bun
bun run dev
```
This will start both the UI and agent servers concurrently.
Once the application is running, you can:
- Create Cards: Use the "New Item" button or ask the AI to create cards
- Edit Cards: Click on any field to edit directly, or ask the AI
- Execute Plans: Give the AI multi-step instructions
- View JSON: Toggle between the visual canvas and JSON view using the button at the bottom
The following scripts can also be run using your preferred package manager:
- `dev` - Starts both UI and agent servers in development mode
- `dev:debug` - Starts development servers with debug logging enabled
- `dev:ui` - Starts only the Next.js UI server
- `dev:agent` - Starts only the PydanticAI agent server
- `build` - Builds the Next.js application for production
- `start` - Starts the production server
- `lint` - Runs ESLint for code linting
- `install:agent` - Installs Python dependencies for the agent

```mermaid
graph TB
    subgraph "Frontend (Next.js)"
        UI[Canvas UI<br/>page.tsx]
        Actions[Frontend Actions<br/>useCopilotAction]
        State[State Management<br/>useCoAgent]
        Chat[CopilotChat]
    end

    subgraph "Backend (Python)"
        Agent[PydanticAI Agent<br/>agent.py]
        Tools[Backend Tools<br/>- set_plan<br/>- update_plan_progress<br/>- complete_plan]
        AgentState[Canvas State<br/>StateDeps]
        Model[LLM<br/>GPT-4o]
    end

    subgraph "Communication"
        Runtime[CopilotKit Runtime<br/>:8000]
    end

    UI <--> State
    State <--> Runtime
    Chat <--> Runtime
    Actions --> Runtime
    Runtime <--> Agent
    Agent --> Tools
    Agent --> AgentState
    Agent --> Model

    style UI fill:#e1f5fe
    style Agent fill:#fff3e0
    style Runtime fill:#f3e5f5

    click UI "https://github.com/CopilotKit/CopilotKit/blob/main/examples/canvas/pydantic-ai/src/app/page.tsx"
    click Agent "https://github.com/CopilotKit/CopilotKit/blob/main/examples/canvas/pydantic-ai/agent/agent.py"
```
The main UI component is in src/app/page.tsx. It includes:

- `useCoAgent` hook for real-time state sync with the agent
- `useCopilotAction` with `renderAndWaitForResponse` for disambiguation prompts (e.g., choosing an item or card type)

The agent logic is in agent/agent.py. It features:

- `StateDeps[CanvasState]` for typed state management
- `@agent.tool` for planning and state updates
- `@agent.instructions` to provide context-aware guidance
- `agent.to_ag_ui()` for seamless integration

Each card type has specific fields defined in the agent.
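The three planning tools (`set_plan`, `update_plan_progress`, `complete_plan`) follow a simple lifecycle: set a plan, tick off steps, then mark it done. A minimal, framework-free sketch of that lifecycle (plain dataclasses stand in for the real Pydantic `CanvasState` and `StateDeps` wiring, and the field names here are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    description: str
    done: bool = False

@dataclass
class CanvasState:
    """Simplified stand-in for the agent's real CanvasState model."""
    plan: list[PlanStep] = field(default_factory=list)
    plan_complete: bool = False

def set_plan(state: CanvasState, steps: list[str]) -> None:
    """Replace the current plan with a fresh list of pending steps."""
    state.plan = [PlanStep(s) for s in steps]
    state.plan_complete = False

def update_plan_progress(state: CanvasState, step_index: int) -> None:
    """Mark a single step as done."""
    state.plan[step_index].done = True

def complete_plan(state: CanvasState) -> None:
    """Close out the plan once every step is done."""
    assert all(step.done for step in state.plan)
    state.plan_complete = True

state = CanvasState()
set_plan(state, ["create card", "fill fields"])
update_plan_progress(state, 0)
update_plan_progress(state, 1)
complete_plan(state)
print(state.plan_complete)  # True
```

In the real agent these functions are registered with `@agent.tool` so the model can call them while streaming progress back to the canvas.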
```mermaid
sequenceDiagram
    participant User
    participant UI as Canvas UI
    participant CK as CopilotKit
    participant Agent as PydanticAI Agent
    participant Tools

    User->>UI: Interact with canvas
    UI->>CK: Update state via useCoAgent
    CK->>Agent: Send state + message
    Agent->>Agent: Process with GPT-4o
    Agent->>Tools: Execute tools
    Tools-->>Agent: Return results
    Agent->>CK: Return updated state
    CK->>UI: Sync state changes
    UI->>User: Display updates

    Note over Agent: Maintains ground truth
    Note over UI,CK: Real-time bidirectional sync
```
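The essential contract of this round trip is that the agent receives the current state plus a message and returns the new ground truth, which then replaces the state on both sides. A toy stand-in makes the shape concrete (the item structure and `"create ..."` message format are invented for illustration; the real agent runs GPT-4o and tools):

```python
import copy

def agent_turn(state: dict, message: str) -> dict:
    """Toy stand-in for the agent: read the incoming state, apply the
    requested change, and return the updated state without mutating
    the caller's copy."""
    new_state = copy.deepcopy(state)
    if message.startswith("create "):
        new_state["items"].append({"name": message[len("create "):]})
    return new_state

# One round trip: the runtime forwards state + message to the agent,
# and the agent's reply becomes the new shared state.
frontend_state = {"items": []}
frontend_state = agent_turn(frontend_state, "create Project Alpha")
print(frontend_state["items"][0]["name"])  # Project Alpha
```

Because the agent's output fully replaces the shared state, the UI never has to merge partial updates; it just re-renders from the latest snapshot.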
To add or modify card types, the relevant files are:

- src/lib/canvas/types.ts - the `CardType` union and per-type data interfaces (e.g., `ProjectData`, `EntityData`)
- src/components/canvas/CardRenderer.tsx - visual rendering for each card type
- agent/agent.py - per-type field definitions and field tools (e.g., `set[Type]Field[Number]`)
- src/app/page.tsx - frontend actions

PydanticAI makes it easy to extend the agent with:

- New tools via `@agent.tool`
- Custom guidance via an `@agent.instructions` function
- Additional state fields in the `CanvasState` model
- Custom styling in src/app/globals.css

Feel free to submit issues and enhancement requests! This starter is designed to be easily extensible.
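Registering a new tool via `@agent.tool` boils down to decorating a function so the agent can call it by name. The registration pattern can be mimicked with a plain decorator (the real `@agent.tool` from pydantic_ai also passes a `RunContext` carrying the typed state; the `clear_canvas` tool here is hypothetical):

```python
# Toy tool registry mimicking the @agent.tool decorator pattern.
TOOLS: dict = {}

def tool(fn):
    """Register a function under its name so it can be invoked later."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def clear_canvas(state: dict) -> dict:
    """Hypothetical new tool: remove every card from the canvas."""
    state["items"] = []
    return state

state = {"items": [{"name": "old card"}]}
TOOLS["clear_canvas"](state)
print(len(state["items"]))  # 0
```

In the real agent, the docstring and type hints of a decorated function become the tool's schema, which is what lets the model decide when to call it.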
This project is licensed under the MIT License - see the LICENSE file for details.
If you see "I'm having trouble connecting to my tools", make sure:

- the agent server is running (the `dev` script starts it alongside the UI)
- your `OPENAI_API_KEY` is set in the agent's environment
If you see "[Errno 48] Address already in use", free the ports used by the agent and the UI:

```sh
lsof -ti:8000 | xargs kill -9
lsof -ti:3000 | xargs kill -9
```

If the canvas and AI seem out of sync, restart the development servers so both sides rebuild their shared state.
If you encounter Python import errors:

```sh
cd agent
pip install -r requirements.txt
```
If the virtual environment is not activated properly:

```sh
cd agent
source .venv/bin/activate  # On macOS/Linux
# or
.venv\Scripts\activate     # On Windows
```
If issues persist, recreate the virtual environment:

```sh
cd agent
rm -rf .venv
python -m venv .venv
.venv/bin/pip install --upgrade pip
.venv/bin/pip install -r requirements.txt
```
> [!IMPORTANT]
> Some features are still under active development and may not yet work as expected. If you encounter a problem using this template, please report an issue in this repository.