
# Custom Agent

docs/snippets/shared/backend/custom-agent.mdx


import { Tabs, Tab } from "fumadocs-ui/components/tabs";

`BuiltInAgent`'s factory mode gives you full control over the LLM call. You provide a factory function that talks to any backend — CopilotKit handles converting the stream to AG-UI events, managing lifecycle, and wiring it into the runtime.

## When to use Simple Mode vs Factory Mode

|  | Simple Mode | Factory Mode |
| --- | --- | --- |
| Setup | Minimal — pass a model string | You own the LLM call and stream |
| Model resolution | Built-in (`"openai/gpt-4o"`) | You set up the model yourself |
| Tools, MCP, state tools | Automatically wired | You wire them in your factory |
| Backend support | Vercel AI SDK only | Any backend: AI SDK, TanStack AI, or custom |
| Best for | Quick setup, standard use cases | Full control, non-standard backends |
<Callout type="info"> If simple mode covers your needs, stick with it — it's simpler. Use factory mode when you need control that simple mode doesn't offer. </Callout>

## Quick Start

You have an existing LLM backend and want a CopilotKit copilot on top of it. Pick your backend:

<Tabs items={["AI SDK", "TanStack AI", "Custom"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  CopilotRuntime,
  createCopilotEndpoint,
  InMemoryAgentRunner,
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});

const runtime = new CopilotRuntime({
  agents: { default: agent },
  runner: new InMemoryAgentRunner(),
});

const copilotEndpoint = createCopilotEndpoint({
  runtime,
  basePath: "/api/copilotkit",
});
export default copilotEndpoint;
```

</Tab>
<Tab value="TanStack AI">
```typescript title="src/copilotkit.ts"
import {
  CopilotRuntime,
  createCopilotEndpoint,
  InMemoryAgentRunner,
  BuiltInAgent,
  convertInputToTanStackAI,
} from "@copilotkit/runtime/v2";
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const { messages, systemPrompts } = convertInputToTanStackAI(input);
    return chat({
      adapter: openaiText("gpt-4o"),
      messages,
      systemPrompts,
      abortController,
    });
  },
});

const runtime = new CopilotRuntime({
  agents: { default: agent },
  runner: new InMemoryAgentRunner(),
});

const copilotEndpoint = createCopilotEndpoint({
  runtime,
  basePath: "/api/copilotkit",
});
export default copilotEndpoint;
```

</Tab>
<Tab value="Custom">
```typescript title="src/copilotkit.ts"
import {
  CopilotRuntime,
  createCopilotEndpoint,
  InMemoryAgentRunner,
  BuiltInAgent,
} from "@copilotkit/runtime/v2";
import { EventType, type BaseEvent } from "@ag-ui/client";

const agent = new BuiltInAgent({
  type: "custom",
  factory: async function* ({ input, abortSignal }) {
    const response = await fetch("https://your-llm-api.com/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: input.messages }),
      signal: abortSignal,
    });

    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    const messageId = crypto.randomUUID();

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield {
        type: EventType.TEXT_MESSAGE_CHUNK,
        role: "assistant",
        messageId,
        delta: decoder.decode(value),
      } as BaseEvent;
    }
  },
});

const runtime = new CopilotRuntime({
  agents: { default: agent },
  runner: new InMemoryAgentRunner(),
});

const copilotEndpoint = createCopilotEndpoint({
  runtime,
  basePath: "/api/copilotkit",
});
export default copilotEndpoint;
```

</Tab>
</Tabs>

The frontend setup is the same as in simple mode — wrap your app with `<CopilotKit>` and add a chat component.
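As a minimal wiring sketch (assuming the standard `@copilotkit/react-core` and `@copilotkit/react-ui` packages; adjust the runtime URL to your setup):

```tsx title="app/page.tsx"
"use client";
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotChat } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

export default function Page() {
  return (
    // runtimeUrl must match the basePath passed to createCopilotEndpoint
    <CopilotKit runtimeUrl="/api/copilotkit">
      <CopilotChat />
    </CopilotKit>
  );
}
```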

## How It Works

Factory mode accepts a config with two fields:

- `type` — which backend you're using: `"aisdk"`, `"tanstack"`, or `"custom"`
- `factory` — a function that receives the raw request and returns a backend-native stream

The factory receives an `AgentFactoryContext` (from `@copilotkit/runtime/v2`):

```typescript
interface AgentFactoryContext {
  input: RunAgentInput; // messages, tools, state, context, threadId, runId, forwardedProps
  abortController: AbortController; // for TanStack AI (requires an AbortController)
  abortSignal: AbortSignal; // preferred for AI SDK, fetch, and custom backends
}
```

CopilotKit handles everything else: `RUN_STARTED` and `RUN_FINISHED` lifecycle events, stream-to-AG-UI conversion, error handling, and abort/cancellation. Your factory never needs to emit lifecycle events.
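To make the division of labor concrete, here is an illustrative sketch (event names from AG-UI; not actual runtime code) of the stream the frontend ends up seeing:

```typescript
// Illustrative only: your factory yields (or is converted into) the middle
// events; the runtime brackets them with the run lifecycle on its own.
const yieldedByFactory = [
  "TEXT_MESSAGE_START",
  "TEXT_MESSAGE_CONTENT",
  "TEXT_MESSAGE_END",
];

const seenByFrontend = ["RUN_STARTED", ...yieldedByFactory, "RUN_FINISHED"];

console.log(seenByFrontend);
// ["RUN_STARTED", "TEXT_MESSAGE_START", "TEXT_MESSAGE_CONTENT",
//  "TEXT_MESSAGE_END", "RUN_FINISHED"]
```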

The factory can be async — return a Promise if you need to do setup before streaming:

```typescript
factory: async ({ input, abortSignal }) => {
  const apiKey = await getApiKeyFromVault();
  // createOpenAI (from "@ai-sdk/openai") builds a provider with the resolved key
  const openai = createOpenAI({ apiKey });
  return streamText({ model: openai("gpt-4o"), ... });
}
```

## Examples

### With Tools

<Tabs items={["AI SDK", "TanStack AI", "Custom"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  convertToolsToVercelAITools,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const tools = convertToolsToVercelAITools(input.tools);
    return streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      tools,
      abortSignal,
    });
  },
});
```

`convertToolsToVercelAITools` converts the frontend-defined tools (from `useCopilotAction`) into AI SDK's `ToolSet` format automatically.

</Tab>
<Tab value="TanStack AI">

```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat, toolDefinition } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { z } from "zod";

const getWeather = toolDefinition({
  name: "getWeather",
  description: "Get the weather for a city",
  inputSchema: z.object({ city: z.string() }),
}).server(async ({ city }) => ({ temp: 72, city }));

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const { messages, systemPrompts } = convertInputToTanStackAI(input);
    return chat({
      adapter: openaiText("gpt-4o"),
      messages,
      systemPrompts,
      tools: [getWeather],
      abortController,
    });
  },
});
```

</Tab>
<Tab value="Custom">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent } from "@copilotkit/runtime/v2";
import { EventType, type BaseEvent } from "@ag-ui/client";

const agent = new BuiltInAgent({
  type: "custom",
  factory: async function* ({ input }) {
    const messageId = crypto.randomUUID();
    const toolCallId = crypto.randomUUID();

    // The LLM decides to call a tool
    yield {
      type: EventType.TOOL_CALL_START,
      parentMessageId: messageId,
      toolCallId,
      toolCallName: "getWeather",
    } as BaseEvent;

    yield {
      type: EventType.TOOL_CALL_ARGS,
      toolCallId,
      delta: JSON.stringify({ city: "San Francisco" }),
    } as BaseEvent;

    yield {
      type: EventType.TOOL_CALL_END,
      toolCallId,
    } as BaseEvent;

    // Execute the tool and return the result
    yield {
      type: EventType.TOOL_CALL_RESULT,
      role: "tool",
      messageId: crypto.randomUUID(),
      toolCallId,
      content: JSON.stringify({ temp: 72, city: "San Francisco" }),
    } as BaseEvent;

    // Text response after the tool call
    yield {
      type: EventType.TEXT_MESSAGE_CHUNK,
      role: "assistant",
      messageId,
      delta: "The weather in San Francisco is 72°F.",
    } as BaseEvent;
  },
});
```

With `type: "custom"`, you yield AG-UI events directly. See the [AG-UI event reference](/built-in-agent/ag-ui) for all available event types.
</Tab>
</Tabs>

### With Reasoning (Thinking Models)

<Tabs items={["AI SDK", "TanStack AI"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertMessagesToVercelAISDKMessages } from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: anthropic("claude-sonnet-4", {
        thinking: { type: "enabled", budgetTokens: 10000 },
      }),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});
```

Reasoning events (`REASONING_START`, `REASONING_MESSAGE_CONTENT`, `REASONING_END`) are automatically extracted from the AI SDK stream.

</Tab>
<Tab value="TanStack AI">
<Callout type="warn">
The TanStack AI converter does not surface reasoning events (`REASONING_START`, `REASONING_MESSAGE_CONTENT`, `REASONING_END`). Even if the underlying model supports thinking/reasoning, those events will not be forwarded to the frontend. Use the AI SDK backend if you need reasoning events.
</Callout>

```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat } from "@tanstack/ai";
import { anthropicText } from "@tanstack/ai-anthropic";

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const { messages, systemPrompts } = convertInputToTanStackAI(input);
    return chat({
      adapter: anthropicText("claude-sonnet-4"),
      messages,
      systemPrompts,
      modelOptions: { thinking: { type: "enabled", budgetTokens: 10000 } },
      abortController,
    });
  },
});
```

</Tab>
</Tabs>

### With System Prompt, Context, and State

<Tabs items={["AI SDK", "TanStack AI"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const systemParts: string[] = ["You are a helpful assistant."];

    // Add context from the frontend (useCopilotReadable)
    if (input.context?.length) {
      for (const ctx of input.context) {
        systemParts.push(`${ctx.description}:\n${ctx.value}`);
      }
    }

    // Add shared application state (useCoAgent, etc.)
    if (input.state && Object.keys(input.state).length > 0) {
      systemParts.push(
        `Application State:\n${JSON.stringify(input.state, null, 2)}`,
      );
    }

    const messages = convertMessagesToVercelAISDKMessages(input.messages);
    messages.unshift({ role: "system", content: systemParts.join("\n\n") });

    return streamText({
      model: openai("gpt-4o"),
      messages,
      abortSignal,
    });
  },
});
```

</Tab>
<Tab value="TanStack AI">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    // convertInputToTanStackAI automatically extracts system/developer messages,
    // context, and state into the systemPrompts array
    const { messages, systemPrompts } = convertInputToTanStackAI(input);

    // Add your own system prompt at the beginning
    systemPrompts.unshift("You are a helpful assistant.");

    return chat({
      adapter: openaiText("gpt-4o"),
      messages,
      systemPrompts,
      abortController,
    });
  },
});
```

`convertInputToTanStackAI` handles system/developer messages, `input.context`, and `input.state` automatically. Prepend your own prompt if needed.
</Tab>
</Tabs>

### With forwardedProps

Let the frontend override model, temperature, or other settings at runtime:

<Tabs items={["AI SDK", "TanStack AI"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  resolveModel,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const props = (input.forwardedProps ?? {}) as Record<string, unknown>;

    const model =
      typeof props.model === "string"
        ? resolveModel(props.model)
        : openai("gpt-4o");

    const temperature =
      typeof props.temperature === "number" ? props.temperature : 0.7;

    return streamText({
      model,
      temperature,
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    });
  },
});
```

</Tab>
<Tab value="TanStack AI">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { anthropicText } from "@tanstack/ai-anthropic";

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const props = (input.forwardedProps ?? {}) as Record<string, unknown>;
    const { messages, systemPrompts } = convertInputToTanStackAI(input);

    const adapter =
      props.model === "anthropic/claude-sonnet-4"
        ? anthropicText("claude-sonnet-4")
        : openaiText((props.model as string) ?? "gpt-4o");

    const modelOptions: Record<string, unknown> = {};
    if (typeof props.temperature === "number")
      modelOptions.temperature = props.temperature;

    return chat({
      adapter,
      messages,
      systemPrompts,
      modelOptions,
      abortController,
    });
  },
});
```

</Tab>
</Tabs>

Forward props from the frontend using the `CopilotKit` provider's `properties` prop:

```tsx title="app/page.tsx"
<CopilotKit properties={{ model: "anthropic/claude-sonnet-4", temperature: 0.3 }}>
  <CopilotChat />
</CopilotKit>
```

### With State Tools

Factory mode does not inject state management tools automatically. If your app uses shared state (`useCoAgent` with state), add the tools yourself:

<Tabs items={["AI SDK", "TanStack AI"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  convertToolsToVercelAITools,
} from "@copilotkit/runtime/v2";
import { streamText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const inputTools = convertToolsToVercelAITools(input.tools);
    const stateTools = {
      AGUISendStateSnapshot: tool({
        description: "Replace the entire application state",
        parameters: z.object({ snapshot: z.any() }),
        execute: async ({ snapshot }) => ({ success: true, snapshot }),
      }),
      AGUISendStateDelta: tool({
        description: "Apply incremental state updates via JSON Patch",
        parameters: z.object({
          delta: z.array(
            z.object({
              op: z.enum(["add", "replace", "remove"]),
              path: z.string(),
              value: z.any().optional(),
            }),
          ),
        }),
        execute: async ({ delta }) => ({ success: true, delta }),
      }),
    };

    return streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      tools: { ...inputTools, ...stateTools },
      abortSignal,
    });
  },
});
```

</Tab>
<Tab value="TanStack AI">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat, toolDefinition } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { z } from "zod";

const sendStateSnapshot = toolDefinition({
  name: "AGUISendStateSnapshot",
  description: "Replace the entire application state",
  inputSchema: z.object({ snapshot: z.any() }),
}).server(async ({ snapshot }) => ({ success: true, snapshot }));

const sendStateDelta = toolDefinition({
  name: "AGUISendStateDelta",
  description: "Apply incremental state updates via JSON Patch",
  inputSchema: z.object({
    delta: z.array(
      z.object({
        op: z.enum(["add", "replace", "remove"]),
        path: z.string(),
        value: z.any().optional(),
      }),
    ),
  }),
}).server(async ({ delta }) => ({ success: true, delta }));

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const { messages, systemPrompts } = convertInputToTanStackAI(input);
    return chat({
      adapter: openaiText("gpt-4o"),
      messages,
      systemPrompts,
      tools: [sendStateSnapshot, sendStateDelta],
      abortController,
    });
  },
});
```

</Tab>
</Tabs>
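The `AGUISendStateDelta` tool above receives JSON Patch-style operations. As a rough sketch of what applying such a delta to state looks like (hand-rolled for illustration; `applyDelta` is not a CopilotKit API, and real code would typically use a JSON Patch library):

```typescript
// Hand-rolled illustration of the JSON Patch ops AGUISendStateDelta receives.
// Only single-segment paths ("/key") are handled here.
type PatchOp = { op: "add" | "replace" | "remove"; path: string; value?: unknown };

function applyDelta(
  state: Record<string, unknown>,
  delta: PatchOp[],
): Record<string, unknown> {
  const next = { ...state };
  for (const { op, path, value } of delta) {
    const key = path.replace(/^\//, ""); // "/temp" -> "temp"
    if (op === "remove") delete next[key];
    else next[key] = value; // "add" and "replace" behave alike on plain objects
  }
  return next;
}

const updated = applyDelta({ city: "San Francisco", temp: 70 }, [
  { op: "replace", path: "/temp", value: 72 },
  { op: "add", path: "/unit", value: "F" },
]);
// updated: { city: "San Francisco", temp: 72, unit: "F" }
```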

### With Structured Output

<Tabs items={["AI SDK", "TanStack AI"]}>
<Tab value="AI SDK">
```typescript title="src/copilotkit.ts"
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: openai("gpt-4o", { structuredOutputs: true }),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      tools: {
        getWeather: tool({
          description: "Get weather for a location",
          parameters: z.object({ city: z.string() }),
          execute: async ({ city }) => ({ temp: 72, city }),
        }),
      },
      toolChoice: "required",
      abortSignal,
    }),
});
```

</Tab>
<Tab value="TanStack AI">
```typescript title="src/copilotkit.ts"
import { BuiltInAgent, convertInputToTanStackAI } from "@copilotkit/runtime/v2";
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { z } from "zod";

const agent = new BuiltInAgent({
  type: "tanstack",
  factory: ({ input, abortController }) => {
    const { messages, systemPrompts } = convertInputToTanStackAI(input);
    return chat({
      adapter: openaiText("gpt-4o"),
      messages,
      systemPrompts,
      outputSchema: z.object({
        summary: z.string(),
        keyPoints: z.array(z.string()),
        sentiment: z.enum(["positive", "neutral", "negative"]),
      }),
      abortController,
    });
  },
});
```

</Tab>
</Tabs>

## Helper Utilities

These utilities are exported from `@copilotkit/runtime/v2` to help convert between CopilotKit's input format and your backend's expected format:

| Utility | Description |
|---|---|
| `convertInputToTanStackAI(input)` | Converts `RunAgentInput` to `{ messages, systemPrompts }` for TanStack AI's `chat()`. Handles system/developer messages, context, and state. |
| `convertMessagesToVercelAISDKMessages(messages)` | Converts AG-UI messages to Vercel AI SDK's `ModelMessage[]` format. |
| `convertToolsToVercelAITools(tools)` | Converts frontend-defined tools (JSON Schema) to AI SDK's `ToolSet`. |
| `convertToolDefinitionsToVercelAITools(tools)` | Converts `defineTool()` definitions (Standard Schema) to AI SDK's `ToolSet`. |
| `resolveModel(spec)` | Resolves `"openai/gpt-4o"` strings to AI SDK `LanguageModel` instances. |