A client implementation lets you build conversational applications on top of AG-UI's event-driven protocol, giving your users a direct interface to AI agents. Building your own client is a good way to explore and hack on the AG-UI protocol; for production use, reach for a full-featured client like CopilotKit.
In this guide, we'll create a CLI client that:
- Uses `MastraAgent` from `@ag-ui/mastra`

Let's get started!
Before we begin, make sure you have:

- Node.js installed
- pnpm (installation steps below)
- An OpenAI API key
First, let's set up your API key:
# Set your OpenAI API key
export OPENAI_API_KEY=your-api-key-here
If you don't have pnpm installed:
# Install pnpm
npm install -g pnpm
Create a new directory for your AG-UI client:
mkdir my-ag-ui-client
cd my-ag-ui-client
Initialize a new Node.js project:
pnpm init
Install TypeScript and essential development dependencies:
pnpm add -D typescript @types/node tsx
Create a tsconfig.json file:
{
"compilerOptions": {
"target": "ES2022",
"module": "commonjs",
"lib": ["ES2022"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}
Update your package.json scripts:
{
"scripts": {
"start": "tsx src/index.ts",
"dev": "tsx --watch src/index.ts",
"build": "tsc",
"clean": "rm -rf dist"
}
}
Install the core AG-UI packages and dependencies:
# Core AG-UI packages
pnpm add @ag-ui/client @ag-ui/core @ag-ui/mastra
# Mastra ecosystem packages
pnpm add @mastra/core @mastra/client-js @mastra/memory @mastra/libsql
# Mastra peer dependencies
pnpm add zod
Let's create a basic conversational agent. Create src/agent.ts:
import { MastraAgent } from "@ag-ui/mastra";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore } from "@mastra/libsql";
import { Memory } from "@mastra/memory";

export const agent = new MastraAgent({
resourceId: "cliExample",
agent: new Agent({
id: "ag-ui-assistant",
name: "AG-UI Assistant",
instructions: `
You are a helpful AI assistant. Be friendly, conversational, and helpful.
Answer questions to the best of your ability and engage in natural conversation.
`,
model: "openai/gpt-4o",
memory: new Memory({
storage: new LibSQLStore({
id: "storage-memory",
url: "file:./assistant.db",
}),
}),
}),
threadId: "main-conversation",
});
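The agent above uses an OpenAI model, so it relies on `OPENAI_API_KEY` being set in your environment. A fail-fast check at startup makes a missing key obvious immediately instead of surfacing as a cryptic API error mid-conversation. This is a sketch; `requireEnv` is an illustrative helper, not part of AG-UI or Mastra:

```typescript
// Fail fast if a required environment variable is missing.
// `requireEnv` is an illustrative helper, not part of AG-UI or Mastra.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: call this once at startup, before constructing the agent.
// requireEnv("OPENAI_API_KEY");
```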
Now let's create the interactive chat interface. Create src/index.ts:
import { randomUUID } from "node:crypto";
import * as readline from "node:readline";

import { agent } from "./agent";

const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
async function chatLoop() {
console.log("🤖 AG-UI Assistant started!");
console.log("Type your messages and press Enter. Press Ctrl+D to quit.\n");
return new Promise<void>((resolve) => {
const promptUser = () => {
rl.question("> ", async (input) => {
if (input.trim() === "") {
promptUser();
return;
}
console.log("");
// Pause input while processing
rl.pause();
// Add user message to conversation
agent.messages.push({
id: randomUUID(),
role: "user",
content: input.trim(),
});
try {
// Run the agent with event handlers
await agent.runAgent(
{}, // No additional configuration needed
{
onTextMessageStartEvent() {
process.stdout.write("🤖 Assistant: ");
},
onTextMessageContentEvent({ event }) {
process.stdout.write(event.delta);
},
onTextMessageEndEvent() {
console.log("\n");
},
},
);
} catch (error) {
console.error("❌ Error:", error);
}
// Resume input
rl.resume();
promptUser();
});
};
// Handle Ctrl+D to quit
rl.on("close", () => {
console.log("\n👋 Thanks for using AG-UI Assistant!");
resolve();
});
promptUser();
});
}
async function main() {
await chatLoop();
}
main().catch(console.error);
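Before each run, the loop appends the user's turn to the agent's message history. That step can be sketched in isolation; the `ChatMessage` shape here is simplified for illustration (the real client uses the message types from `@ag-ui/core`):

```typescript
import { randomUUID } from "node:crypto";

// Simplified message shape for illustration; the real client uses
// the message types from @ag-ui/core.
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
}

// Append a trimmed user message with a fresh id, skipping blank input
// (mirroring the empty-input check in the prompt loop above).
function appendUserMessage(history: ChatMessage[], input: string): boolean {
  const content = input.trim();
  if (content === "") return false; // nothing to send; re-prompt instead
  history.push({ id: randomUUID(), role: "user", content });
  return true;
}
```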
Let's run your new AG-UI client:
pnpm dev
You should see:
🤖 AG-UI Assistant started!
Type your messages and press Enter. Press Ctrl+D to quit.
>
Try asking a few questions — you'll see the agent respond with streaming text in real time!
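Under the hood, the streamed reply is just concatenated deltas. A dependency-free sketch of the lifecycle — the start event opens a buffer, each content event appends its delta, and the end event yields the finished message:

```typescript
// Mirror of the streaming lifecycle: onTextMessageStartEvent opens a
// buffer, each onTextMessageContentEvent appends its delta, and
// onTextMessageEndEvent yields the finished message. Sketch only.
function assembleMessage(deltas: string[]): string {
  let buffer = "";
  for (const delta of deltas) {
    buffer += delta;
  }
  return buffer;
}
```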
Let's break down what happens when you send a message:
1. `onTextMessageStartEvent` – the agent starts responding
2. `onTextMessageContentEvent` – each chunk of the response arrives
3. `onTextMessageEndEvent` – the response is complete

Now that you have a working chat interface, let's add some real-world capabilities by creating tools. We'll start with a weather tool.
Let's create a weather tool that your agent can use. Create the directory structure:
mkdir -p src/tools
Create src/tools/weather.tool.ts:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

interface GeocodingResponse {
results: {
latitude: number;
longitude: number;
name: string;
}[];
}
interface WeatherResponse {
current: {
time: string;
temperature_2m: number;
apparent_temperature: number;
relative_humidity_2m: number;
wind_speed_10m: number;
wind_gusts_10m: number;
weather_code: number;
};
}
export const weatherTool = createTool({
id: "get-weather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City name"),
}),
outputSchema: z.object({
temperature: z.number(),
feelsLike: z.number(),
humidity: z.number(),
windSpeed: z.number(),
windGust: z.number(),
conditions: z.string(),
location: z.string(),
}),
execute: async (inputData) => {
return await getWeather(inputData.location);
},
});
const getWeather = async (location: string) => {
const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(
location,
)}&count=1`;
const geocodingResponse = await fetch(geocodingUrl);
const geocodingData = (await geocodingResponse.json()) as GeocodingResponse;
if (!geocodingData.results?.[0]) {
throw new Error(`Location '${location}' not found`);
}
const { latitude, longitude, name } = geocodingData.results[0];
const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`;
const response = await fetch(weatherUrl);
const data = (await response.json()) as WeatherResponse;
return {
temperature: data.current.temperature_2m,
feelsLike: data.current.apparent_temperature,
humidity: data.current.relative_humidity_2m,
windSpeed: data.current.wind_speed_10m,
windGust: data.current.wind_gusts_10m,
conditions: getWeatherCondition(data.current.weather_code),
location: name,
};
};
function getWeatherCondition(code: number): string {
const conditions: Record<number, string> = {
0: "Clear sky",
1: "Mainly clear",
2: "Partly cloudy",
3: "Overcast",
45: "Foggy",
48: "Depositing rime fog",
51: "Light drizzle",
53: "Moderate drizzle",
55: "Dense drizzle",
56: "Light freezing drizzle",
57: "Dense freezing drizzle",
61: "Slight rain",
63: "Moderate rain",
65: "Heavy rain",
66: "Light freezing rain",
67: "Heavy freezing rain",
71: "Slight snow fall",
73: "Moderate snow fall",
75: "Heavy snow fall",
77: "Snow grains",
80: "Slight rain showers",
81: "Moderate rain showers",
82: "Violent rain showers",
85: "Slight snow showers",
86: "Heavy snow showers",
95: "Thunderstorm",
96: "Thunderstorm with slight hail",
99: "Thunderstorm with heavy hail",
};
return conditions[code] || "Unknown";
}
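Hand-assembling a long query string is easy to get wrong. As an alternative sketch (not what the tool above does), `URLSearchParams` builds the same forecast URL without manual escaping mistakes; `buildForecastUrl` is an illustrative name:

```typescript
// Build the Open-Meteo forecast URL with URLSearchParams instead of
// string concatenation, avoiding manual escaping mistakes. Sketch only;
// `buildForecastUrl` is an illustrative helper name.
function buildForecastUrl(latitude: number, longitude: number): string {
  const params = new URLSearchParams({
    latitude: String(latitude),
    longitude: String(longitude),
    current: [
      "temperature_2m",
      "apparent_temperature",
      "relative_humidity_2m",
      "wind_speed_10m",
      "wind_gusts_10m",
      "weather_code",
    ].join(","),
  });
  return `https://api.open-meteo.com/v1/forecast?${params.toString()}`;
}
```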
This tool uses `createTool` from Mastra to define the tool's interface.

Now let's update our agent to use the weather tool. Update src/agent.ts:
export const agent = new MastraAgent({
agent: new Agent({
// ...
tools: { weatherTool }, // <--- Add the tool to the agent
// ...
}),
threadId: "main-conversation",
});
Update your CLI interface in src/index.ts to handle tool events:
// Add these new event handlers to your agent.runAgent call:
await agent.runAgent(
{}, // No additional configuration needed
{
// ... existing event handlers ...
onToolCallStartEvent({ event }) {
console.log("🔧 Tool call:", event.toolCallName);
},
onToolCallArgsEvent({ event }) {
process.stdout.write(event.delta);
},
onToolCallEndEvent() {
console.log("");
},
onToolCallResultEvent({ event }) {
if (event.content) {
console.log("🔍 Tool call result:", event.content);
}
},
},
);
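Tool-call arguments also stream as text deltas: the chunks concatenate into a JSON document that is safe to parse once the end event fires. In miniature (a sketch; `parseToolCallArgs` is an illustrative name):

```typescript
// Tool-call arguments arrive as string deltas that, joined together,
// form a JSON document describing the call's input. Parsing is safe
// only after the end event signals the stream is complete. Sketch only.
function parseToolCallArgs(deltas: string[]): Record<string, unknown> {
  const json = deltas.join(""); // onToolCallArgsEvent chunks
  return JSON.parse(json) as Record<string, unknown>; // after onToolCallEndEvent
}
```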
Now restart your application and try asking about weather:
pnpm dev
Try a few weather questions — you'll see the agent use the weather tool to fetch real data and provide detailed responses!
Let's add a web browsing capability. First install the open package:
pnpm add open
Create src/tools/browser.tool.ts:
import { createTool } from "@mastra/core/tools";
import open from "open";
import { z } from "zod";

export const browserTool = createTool({
id: "open-browser",
description: "Open a URL in the default web browser",
inputSchema: z.object({
url: z.url().describe("The URL to open"),
}),
outputSchema: z.object({
success: z.boolean(),
message: z.string(),
}),
execute: async (inputData) => {
try {
await open(inputData.url);
return {
success: true,
message: `Opened ${inputData.url} in your default browser`,
};
} catch (error) {
return {
success: false,
message: `Failed to open browser: ${error}`,
};
}
},
});
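Notice that `browserTool` reports failure as data (`{ success, message }`) rather than throwing, so the agent can relay the error to the user conversationally instead of aborting the run. The pattern in isolation (a sketch; `toToolResult` is an illustrative helper name):

```typescript
interface ToolResult {
  success: boolean;
  message: string;
}

// Wrap a side-effecting action so failures become data the agent can
// relay to the user, rather than an exception that aborts the run.
// Illustrative helper; not part of AG-UI or Mastra.
async function toToolResult(
  action: () => Promise<void>,
  successMessage: string,
): Promise<ToolResult> {
  try {
    await action();
    return { success: true, message: successMessage };
  } catch (error) {
    return { success: false, message: `Failed: ${error}` };
  }
}
```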
Update src/agent.ts to include both tools:
import { MastraAgent } from "@ag-ui/mastra";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore } from "@mastra/libsql";
import { Memory } from "@mastra/memory";

import { browserTool } from "./tools/browser.tool";
import { weatherTool } from "./tools/weather.tool";

export const agent = new MastraAgent({
resourceId: "cliExample",
agent: new Agent({
id: "ag-ui-assistant",
name: "AG-UI Assistant",
instructions: `
You are a helpful assistant with weather and web browsing capabilities.
For weather queries:
- Always ask for a location if none is provided
- Use the weatherTool to fetch current weather data
For web browsing:
- Always use full URLs (e.g., "https://www.google.com")
- Use the browserTool to open web pages
Be friendly and helpful in all interactions!
`,
model: "openai/gpt-4o",
tools: { weatherTool, browserTool }, // Add both tools
memory: new Memory({
storage: new LibSQLStore({
id: "storage-memory",
url: "file:./assistant.db",
}),
}),
}),
threadId: "main-conversation",
});
Now you can ask your assistant to open websites: "Open Google for me" or "Show me the weather website".
Create a production build:
pnpm build
Add to your package.json:
{
"bin": {
"weather-assistant": "./dist/index.js"
}
}
Add a shebang to your built dist/index.js:
#!/usr/bin/env node
// ... rest of your compiled code
Make it executable:
chmod +x dist/index.js
Install your CLI globally:
pnpm link --global
Now you can run weather-assistant from anywhere!
Your AG-UI client is now a solid foundation. Here are some ideas for enhancement:
- Add `chalk` for colored output

Built something useful? Consider sharing it with the community:
- Publish it to npm so others can install it with `npm install`

You've built a complete AG-UI client from scratch! Your weather assistant demonstrates the core concepts: streaming responses, tool calls, and persistent memory.
From here, you can extend your client to support any use case – from simple CLI tools to complex conversational applications. The AG-UI protocol provides the foundation, and your creativity provides the possibilities.
Happy building! 🚀