LangChain

content/providers/04-adapters/01-langchain.mdx

LangChain is a framework for building applications powered by large language models. It provides tools and abstractions for working with AI models, prompts, chains, vector stores, and other data sources for retrieval-augmented generation (RAG).

LangGraph is a library built on top of LangChain for creating stateful, multi-actor applications. It enables you to define complex agent workflows as graphs, with support for cycles, persistence, and human-in-the-loop patterns.

The @ai-sdk/langchain adapter provides seamless integration between LangChain, LangGraph, and the AI SDK, enabling you to use LangChain models and LangGraph agents with AI SDK UI components.

Installation

<Tabs items={['pnpm', 'npm', 'yarn']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/langchain @langchain/core" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/langchain @langchain/core" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/langchain @langchain/core" dark />
  </Tab>
</Tabs>

<Note>@langchain/core is a required peer dependency.</Note>

Features

  • Convert AI SDK UIMessage to LangChain BaseMessage format using toBaseMessages
  • Transform LangChain/LangGraph streams to AI SDK UIMessageStream using toUIMessageStream
  • Support for streamEvents() output for granular event streaming and observability
  • LangSmithDeploymentTransport for connecting directly to a deployed LangGraph graph
  • Full support for text, tool calls, tool results, and multimodal content
  • Custom data streaming with typed events (data-{type})

Example: Basic Chat

Here is a basic example that uses the AI SDK and LangChain together with the Next.js App Router.

```tsx
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { createUIMessageStreamResponse, UIMessage } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const model = new ChatOpenAI({
    model: 'gpt-4o-mini',
    temperature: 0,
  });

  // Convert AI SDK UIMessages to LangChain messages
  const langchainMessages = await toBaseMessages(messages);

  // Stream the response from the model
  const stream = await model.stream(langchainMessages);

  // Convert the LangChain stream to a UI message stream
  return createUIMessageStreamResponse({
    stream: toUIMessageStream(stream),
  });
}
```

Then, use the AI SDK's useChat hook in the page component:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem(
            'message',
          ) as HTMLInputElement;
          sendMessage({ text: input.value });
          input.value = '';
        }}
      >
        <input name="message" placeholder="Say something..." />
        <button type="submit" disabled={status === 'streaming'}>
          Send
        </button>
      </form>
    </div>
  );
}
```

Example: LangChain Agent with Tools

Create agents with tools using LangChain's createAgent:

```tsx
import { createUIMessageStreamResponse, UIMessage } from 'ai';
import { createAgent } from 'langchain';
import { ChatOpenAI, tools } from '@langchain/openai';
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';

export const maxDuration = 60;

const model = new ChatOpenAI({
  model: 'gpt-4o',
  temperature: 0.7,
});

// Image generation tool configuration
const imageGenerationTool = tools.imageGeneration({
  size: '1024x1024',
  quality: 'high',
  outputFormat: 'png',
});

// Create a LangChain agent with tools
const agent = createAgent({
  model,
  tools: [imageGenerationTool],
  systemPrompt: 'You are a creative AI artist assistant.',
});

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const langchainMessages = await toBaseMessages(messages);

  const stream = await agent.stream(
    { messages: langchainMessages },
    { streamMode: ['values', 'messages'] },
  );

  return createUIMessageStreamResponse({
    stream: toUIMessageStream(stream),
  });
}
```

Example: LangGraph

Use the adapter with LangGraph to build agent workflows:

```tsx
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { createUIMessageStreamResponse, UIMessage } from 'ai';
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph';

export const maxDuration = 30;

const model = new ChatOpenAI({
  model: 'gpt-4o-mini',
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // Create the LangGraph agent
  const graph = new StateGraph(MessagesAnnotation)
    .addNode('agent', callModel)
    .addEdge('__start__', 'agent')
    .addEdge('agent', '__end__')
    .compile();

  // Convert AI SDK UIMessages to LangChain messages
  const langchainMessages = await toBaseMessages(messages);

  // Stream from the graph using LangGraph's streaming format
  const stream = await graph.stream(
    { messages: langchainMessages },
    { streamMode: ['values', 'messages'] },
  );

  // Convert the LangGraph stream to UI message stream
  return createUIMessageStreamResponse({
    stream: toUIMessageStream(stream),
  });
}
```

Example: Streaming with streamEvents

LangChain's streamEvents() method provides granular, semantic events with metadata. This is useful for debugging, observability, and migrating existing LCEL applications:

```tsx
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { createUIMessageStreamResponse, UIMessage } from 'ai';

export const maxDuration = 30;

const model = new ChatOpenAI({
  model: 'gpt-4o-mini',
  temperature: 0,
});

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const langchainMessages = await toBaseMessages(messages);

  // Use streamEvents() for granular event streaming
  // Produces events like on_chat_model_stream, on_tool_start, on_tool_end
  const streamEvents = model.streamEvents(langchainMessages, {
    version: 'v2',
  });

  // The adapter automatically detects and handles streamEvents format
  return createUIMessageStreamResponse({
    stream: toUIMessageStream(streamEvents),
  });
}
```

<Note>
**When to use `streamEvents()` vs `graph.stream()`:**

- **`streamEvents()`**: Best for debugging, observability, filtering by event type, agents created with `createAgent`, and migrating existing LCEL applications that rely on callbacks
- **`graph.stream()` with `streamMode`**: Best for LangGraph applications where you need structured state updates via `values`, `messages`, or `custom` modes
</Note>

Example: Custom Data Streaming

LangChain tools can emit custom data events using config.writer(). The adapter converts these to typed data-{type} parts that can be rendered in the UI or handled via the onData callback:

```tsx
import { createUIMessageStreamResponse, UIMessage } from 'ai';
import { createAgent, tool, type ToolRuntime } from 'langchain';
import { ChatOpenAI } from '@langchain/openai';
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { z } from 'zod';

export const maxDuration = 60;

const model = new ChatOpenAI({ model: 'gpt-4o-mini' });

// Tool that emits progress updates during execution
const analyzeDataTool = tool(
  async ({ dataSource, analysisType }, config: ToolRuntime) => {
    const steps = ['connecting', 'fetching', 'processing', 'generating'];

    for (let i = 0; i < steps.length; i++) {
      // Emit progress event - becomes 'data-progress' in the UI
      // Include 'id' to persist in message.parts for rendering
      config.writer?.({
        type: 'progress',
        id: `analysis-${Date.now()}`,
        step: steps[i],
        message: `${steps[i]}...`,
        progress: Math.round(((i + 1) / steps.length) * 100),
      });

      await new Promise(resolve => setTimeout(resolve, 500));
    }

    // Emit completion status
    config.writer?.({
      type: 'status',
      id: `status-${Date.now()}`,
      status: 'complete',
      message: 'Analysis finished',
    });

    return JSON.stringify({ result: 'Analysis complete', confidence: 0.94 });
  },
  {
    name: 'analyze_data',
    description: 'Analyze data with progress updates',
    schema: z.object({
      dataSource: z.enum(['sales', 'inventory', 'customers']),
      analysisType: z.enum(['trends', 'anomalies', 'summary']),
    }),
  },
);

const agent = createAgent({
  model,
  tools: [analyzeDataTool],
});

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const langchainMessages = await toBaseMessages(messages);

  // Enable 'custom' stream mode to receive custom data events
  const stream = await agent.stream(
    { messages: langchainMessages },
    { streamMode: ['values', 'messages', 'custom'] },
  );

  return createUIMessageStreamResponse({
    stream: toUIMessageStream(stream),
  });
}
```

Handle custom data on the client with the onData callback or render persistent data parts:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, sendMessage } = useChat({
    onData: dataPart => {
      // Handle transient data events (without 'id')
      console.log('Received:', dataPart.type, dataPart.data);
    },
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.parts.map((part, i) => {
            if (part.type === 'text') {
              return <span key={i}>{part.text}</span>;
            }
            // Render persistent custom data parts (with 'id')
            if (part.type === 'data-progress') {
              return (
                <div key={i}>
                  Progress: {part.data.progress}% - {part.data.message}
                </div>
              );
            }
            if (part.type === 'data-status') {
              return <div key={i}>Status: {part.data.message}</div>;
            }
            return null;
          })}
        </div>
      ))}
    </div>
  );
}
```

<Note>
**Custom data behavior:**

- Data with an `id` field is **persistent** (added to `message.parts` for rendering)
- Data without an `id` is **transient** (only delivered via the `onData` callback)
- The `type` field determines the event name: `{ type: 'progress' }` → `data-progress`
</Note>
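The persistence rule above can be sketched as a small helper. This is illustrative only: `routeDataEvent` and the `WriterEvent` shape are assumptions for the sketch, not the adapter's actual internals.

```typescript
// Hypothetical sketch of the routing rule: the 'type' field names the
// data part ('progress' -> 'data-progress'), and the presence of an
// 'id' decides whether the event persists in message.parts.
type WriterEvent = { type: string; id?: string } & Record<string, unknown>;

function routeDataEvent(event: WriterEvent): {
  partType: string;
  persistent: boolean;
} {
  return {
    // Event name becomes a typed data part
    partType: `data-${event.type}`,
    // With an 'id' the part is stored for rendering; without one it is
    // only delivered via the onData callback
    persistent: event.id !== undefined,
  };
}
```

Under this rule, the `progress` events in the tool above (which include an `id`) would show up as renderable `data-progress` parts, while an id-less event would only reach `onData`.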

Example: LangSmith Deployment Transport

Connect directly to a LangGraph deployment from the browser using LangSmithDeploymentTransport, bypassing the need for a backend API route:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { LangSmithDeploymentTransport } from '@ai-sdk/langchain';
import { useMemo } from 'react';

export default function LangSmithChat() {
  const transport = useMemo(
    () =>
      new LangSmithDeploymentTransport({
        // Local development server
        url: 'http://localhost:2024',
        // Or for LangSmith deployment:
        // url: 'https://your-deployment.us.langgraph.app',
        // apiKey: process.env.NEXT_PUBLIC_LANGSMITH_API_KEY,
      }),
    [],
  );

  const { messages, sendMessage, status } = useChat({
    transport,
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem(
            'message',
          ) as HTMLInputElement;
          sendMessage({ text: input.value });
          input.value = '';
        }}
      >
        <input name="message" placeholder="Send a message..." />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

The LangSmithDeploymentTransport constructor accepts the following options:

  • url: The LangSmith deployment URL or local server URL (required)
  • apiKey: API key for authentication (optional for local development)
  • graphId: The ID of the graph to connect to (defaults to 'agent')

API Reference

toBaseMessages(messages)

Converts AI SDK UIMessage objects to LangChain BaseMessage objects.

```ts
import { toBaseMessages } from '@ai-sdk/langchain';

const langchainMessages = await toBaseMessages(uiMessages);
```

Parameters:

  • messages: UIMessage[] - Array of AI SDK UI messages

Returns: Promise<BaseMessage[]>
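To make the conversion concrete, here is a plain-object sketch of what happens to text parts. This is illustrative only: the real toBaseMessages returns LangChain BaseMessage instances and also handles tool calls, tool results, and file parts; `UIMessageLike` and `toPlainMessages` are simplified assumptions for the sketch.

```typescript
// Minimal stand-in for the UIMessage shape used by this sketch
type UIMessageLike = {
  role: 'user' | 'assistant' | 'system';
  parts: Array<{ type: string; text?: string }>;
};

// Roughly: each UI message's role selects the LangChain message class
// (HumanMessage / AIMessage / SystemMessage), and its text parts are
// concatenated into the message content.
function toPlainMessages(messages: UIMessageLike[]) {
  return messages.map(m => ({
    role: m.role,
    content: m.parts
      .filter(p => p.type === 'text')
      .map(p => p.text ?? '')
      .join(''),
  }));
}
```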

convertModelMessages(modelMessages)

Converts AI SDK ModelMessage objects to LangChain BaseMessage objects. Useful when you already have model messages from convertToModelMessages.

```ts
import { convertModelMessages } from '@ai-sdk/langchain';

const langchainMessages = convertModelMessages(modelMessages);
```

Parameters:

  • modelMessages: ModelMessage[] - Array of model messages

Returns: BaseMessage[]

toUIMessageStream(stream)

Converts a LangChain/LangGraph stream to an AI SDK UIMessageStream. Automatically detects the stream type and handles direct model streams, LangGraph streams, and streamEvents() output.

```ts
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';

// Works with direct model streams
const modelStream = await model.stream(messages);
return createUIMessageStreamResponse({
  stream: toUIMessageStream(modelStream),
});

// Works with LangGraph streams
const graphStream = await graph.stream(
  { messages },
  { streamMode: ['values', 'messages'] },
);
return createUIMessageStreamResponse({
  stream: toUIMessageStream(graphStream),
});

// Works with streamEvents() output
const streamEvents = model.streamEvents(messages, { version: 'v2' });
return createUIMessageStreamResponse({
  stream: toUIMessageStream(streamEvents),
});
```

Parameters:

  • stream: AsyncIterable<AIMessageChunk> | ReadableStream - LangChain model stream, LangGraph stream, or streamEvents() output

Returns: ReadableStream<UIMessageChunk>

LangSmithDeploymentTransport

A ChatTransport implementation for LangSmith/LangGraph deployments. Use this with the useChat hook's transport option.

```ts
import { LangSmithDeploymentTransport } from '@ai-sdk/langchain';
import { useChat } from '@ai-sdk/react';
import { useMemo } from 'react';

const transport = useMemo(
  () =>
    new LangSmithDeploymentTransport({
      url: 'https://your-deployment.us.langgraph.app',
      apiKey: 'your-api-key',
    }),
  [],
);

const { messages, sendMessage } = useChat({
  transport,
});
```

Constructor Parameters:

  • options: LangSmithDeploymentTransportOptions
    • url: string - LangSmith deployment URL or local server URL (required)
    • apiKey?: string - API key for authentication (optional)
    • graphId?: string - The ID of the graph to connect to (defaults to 'agent')

Implements: ChatTransport

More Examples

You can find additional examples in the AI SDK examples/next-langchain folder.