Start by setting up a project for your agent:
<Tabs groupId="agent_language" items={['Python', 'TypeScript']} persist>
<Tab value="Python">
If you don't already have a Python project set up, create one using `uv`:
```bash
uv init my-agent
cd my-agent
```
</Tab>
<Tab value="TypeScript">
If you don't already have a Node.js project set up, create one using `npm`:
```bash
mkdir my-agent
cd my-agent
npm init -y
```
</Tab>
</Tabs>
</Step>
<Step>
### Add necessary dependencies
<Tabs groupId="agent_language" items={['Python', 'TypeScript']} persist>
<Tab value="Python">
Add the `deepagents`, `langchain-openai`, and `copilotkit` packages:
```bash
uv add deepagents copilotkit langchain-openai
```
</Tab>
<Tab value="TypeScript">
Add the `deepagents`, `@langchain/langgraph`, `@copilotkit/sdk-js`, `@langchain/openai`, `langchain`, and `zod` packages (the agent code below imports `tool` from `langchain` and `z` from `zod` directly):
```bash
npm install deepagents @langchain/langgraph @copilotkit/sdk-js @langchain/openai langchain zod
```
</Tab>
</Tabs>
</Step>
<Step>
### Create your Deep Agent
<Tabs groupId="agent_setup" items={['Python', 'TypeScript', 'FastAPI']}>
<Tab value="Python">
Create a simple Deep Agent:
```python title="main.py"
from deepagents import create_deep_agent
from copilotkit import CopilotKitMiddleware
from langgraph.checkpoint.memory import MemorySaver

def get_weather(location: str):
    """Get weather for a location."""
    return f"The weather in {location} is sunny."

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    middleware=[CopilotKitMiddleware()],  # for frontend tools and context
    system_prompt="You are a helpful research assistant.",
    checkpointer=MemorySaver(),
)
```
Then, to test the agent locally and deploy it with the LangGraph CLI, create a `langgraph.json`:
```sh
touch langgraph.json
```
```json title="langgraph.json"
{
  "python_version": "3.12",
  "dockerfile_lines": [],
  "dependencies": ["."],
  "package_manager": "uv",
  "graphs": {
    "sample_agent": "./main.py:agent"
  },
  "env": ".env"
}
```
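Tools are plain Python functions, so you can sanity-check them before handing them to the agent. A minimal sketch, calling the tool directly outside the agent loop:

```python
def get_weather(location: str):
    """Get weather for a location."""
    return f"The weather in {location} is sunny."

# Call the tool directly, exactly as the agent would with these arguments.
print(get_weather("Paris"))  # → The weather in Paris is sunny.
```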
</Tab>
<Tab value="TypeScript">
Create a simple Deep Agent:
```ts title="agent.ts"
import { createDeepAgent } from "deepagents";
import { copilotkitMiddleware } from "@copilotkit/sdk-js/langgraph";
import { tool } from "langchain";
import { z } from "zod";

const getWeather = tool(
  async ({ location }) => `The weather in ${location} is sunny.`,
  {
    name: "get_weather",
    description: "Get the weather for a given location.",
    schema: z.object({
      location: z.string().describe("The location to get the weather for"),
    }),
  }
);

export const agent = createDeepAgent({
  model: "openai:gpt-4o",
  tools: [getWeather],
  middleware: [copilotkitMiddleware],
  systemPrompt: "You are a helpful research assistant.",
});
```
Then, to test the agent locally and deploy it with the LangGraph CLI, create a `langgraph.json`:
```sh
touch langgraph.json
```
```json title="langgraph.json"
{
  "node_version": "20",
  "dependencies": ["."],
  "package_manager": "npm",
  "graphs": {
    "sample_agent": "./agent.ts:agent"
  },
  "env": ".env"
}
```
<Callout type="info">
When setting up the Copilot Runtime in the next steps, select the **Deep Agent** tab.
</Callout>
</Tab>
<Tab value="FastAPI">
Add the `ag-ui-langgraph`, `fastapi`, and `uvicorn` packages:
```bash
uv add ag-ui-langgraph fastapi uvicorn
```
Create a Deep Agent and expose it as an AG-UI endpoint:
```python title="main.py"
from ag_ui_langgraph import add_langgraph_fastapi_endpoint
from copilotkit import CopilotKitMiddleware, LangGraphAGUIAgent
from deepagents import create_deep_agent
from fastapi import FastAPI
from langgraph.checkpoint.memory import MemorySaver

app = FastAPI()

def get_weather(location: str):
    """Get weather for a location."""
    return f"The weather in {location} is sunny."

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    middleware=[CopilotKitMiddleware()],  # for frontend tools and context
    system_prompt="You are a helpful research assistant.",
    checkpointer=MemorySaver(),
)

add_langgraph_fastapi_endpoint(
    app=app,
    agent=LangGraphAGUIAgent(
        name="sample_agent",
        description="An example agent to use as a starting point for your own agent.",
        graph=agent,
    ),
    path="/",
)

def main():
    """Run the uvicorn server."""
    import uvicorn

    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=8123,
        reload=True,
    )

if __name__ == "__main__":
    main()
```
</Tab>
</Tabs>
<Callout type="info" title="What is AG-UI?">
AG-UI is an open protocol for frontend-agent communication. Deep Agents use it to stream state and tool calls to your frontend in real time.
</Callout>
</Step>
<Step>
### Configure your environment
Create a `.env` file in your agent directory and add your OpenAI API key:
```plaintext title=".env"
OPENAI_API_KEY=your_openai_api_key
```
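The LangGraph CLI reads this file through the `env` key in `langgraph.json`. If you run a script directly instead, make sure the variable is actually set; a small helper (a sketch, not part of any SDK) lets you fail fast with a clear message:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or raise a clear error if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to .env or export it in your shell.")
    return value

# Example: call require_env("OPENAI_API_KEY") before constructing the agent.
```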
<Callout type="info" title="Other models">
Deep Agents support any model available via LangChain. Change the `model` parameter in `create_deep_agent` to switch providers.
</Callout>
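Deep Agents accept LangChain-style `provider:model` strings. A couple of illustrative values (the model names are examples, and each assumes the matching integration package is installed):

```python
# "provider:model" strings resolve through LangChain's provider registry.
openai_model = "openai:gpt-4o"                          # requires langchain-openai
anthropic_model = "anthropic:claude-3-5-sonnet-latest"  # requires langchain-anthropic

# Pass one of these as the `model` argument to create_deep_agent(...).
```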
</Step>
<Step>
### Create your frontend
CopilotKit works with any React-based frontend. We'll use Next.js for this example.
```bash
npx create-next-app@latest frontend
cd frontend
```
</Step>
<Step>
### Install CopilotKit packages
```bash
npm install @copilotkit/react-ui @copilotkit/react-core @copilotkit/runtime
```
</Step>
<Step>
### Setup Copilot Runtime
Create an API route to connect CopilotKit to your Deep Agent:
```sh
mkdir -p app/api/copilotkit && touch app/api/copilotkit/route.ts
```
<Callout type="info" title="Using Next.js is optional">
If you'd rather skip the Next.js API proxy, see LangChain's
[CopilotKit integration guide](https://docs.langchain.com/oss/python/langchain/frontend/integrations/copilotkit)
for how to add a custom CopilotKit route directly to your LangGraph deployment.
</Callout>
<Tabs groupId="deployment_method" items={['Deep Agent', 'FastAPI']}>
<Tab value="Deep Agent">
```tsx title="app/api/copilotkit/route.ts"
import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { LangGraphAgent } from "@copilotkit/runtime/langgraph";
import { NextRequest } from "next/server";

const serviceAdapter = new ExperimentalEmptyAdapter();

const runtime = new CopilotRuntime({
  agents: {
    sample_agent: new LangGraphAgent({
      deploymentUrl: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
      graphId: "sample_agent",
      langsmithApiKey: process.env.LANGSMITH_API_KEY || "",
    }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};
```
</Tab>
<Tab value="FastAPI">
```tsx title="app/api/copilotkit/route.ts"
import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { LangGraphHttpAgent } from "@copilotkit/runtime/langgraph";
import { NextRequest } from "next/server";

const serviceAdapter = new ExperimentalEmptyAdapter();

const runtime = new CopilotRuntime({
  agents: {
    sample_agent: new LangGraphHttpAgent({
      url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
    }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};
```
</Tab>
</Tabs>
</Step>
<Step>
### Configure CopilotKit Provider
Wrap your application with the CopilotKit provider:
```tsx title="app/layout.tsx"
import { CopilotKit } from "@copilotkit/react-core";
import "@copilotkit/react-ui/v2/styles.css";

// ...

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <CopilotKit runtimeUrl="/api/copilotkit" agent="sample_agent">
          {children}
        </CopilotKit>
      </body>
    </html>
  );
}
```
</Step>
<Step>
### Add the chat interface
Add the `CopilotSidebar` component to your page:
```tsx title="app/page.tsx"
"use client";
import { CopilotSidebar } from "@copilotkit/react-core/v2";
import { useDefaultRenderTool } from "@copilotkit/react-core/v2";
export default function Page() {
useDefaultRenderTool({
render: ({ name, status, parameters, result }) => (
<details>
<summary>
{status === "complete" ? `Called ${name}` : `Calling ${name}`}
</summary>
<p>Status: {status}</p>
<p>Args: {JSON.stringify(parameters)}</p>
<p>Result: {JSON.stringify(result)}</p>
</details>
),
});
return (
<main>
<h1>Your App</h1>
<CopilotSidebar />
</main>
);
}
```
</Step>
<Step>
### Start your agent
<Tabs groupId="deployment_method" items={['Deep Agent', 'FastAPI']}>
<Tab value="Deep Agent">
Use the LangGraph CLI that matches your agent's language:
```bash
# TypeScript
npx @langchain/langgraph-cli dev --port 8123 --no-browser

# Python (requires the langgraph-cli package with the in-memory extra)
uvx --from "langgraph-cli[inmem]" langgraph dev --port 8123 --no-browser
```
</Tab>
<Tab value="FastAPI">
```bash
uv run main.py
```
</Tab>
</Tabs>
Your agent will be available at `http://localhost:8123`.
<Callout type="info">
If port 8123 is already in use, change the `--port` flag (or the `port=` argument
in the FastAPI `main.py`) and update `LANGGRAPH_DEPLOYMENT_URL` in your Copilot Runtime route to match.
</Callout>
</Step>
<Step>
### Start your UI
In a separate terminal, navigate to your frontend directory and start the development server:
<Tabs groupId="package-manager" items={['npm', 'pnpm', 'yarn', 'bun']}>
<Tab value="npm">
```bash
cd frontend && npm run dev
```
</Tab>
<Tab value="pnpm">
```bash
cd frontend && pnpm dev
```
</Tab>
<Tab value="yarn">
```bash
cd frontend && yarn dev
```
</Tab>
<Tab value="bun">
```bash
cd frontend && bun dev
```
</Tab>
</Tabs>
</Step>