# LangGraph Server
This is a simple LangGraph agent for local development and testing with the `@ai-sdk/langchain` adapter.
Install dependencies:

```bash
pnpm install
```
Create a `.env` file with your OpenAI API key:

```bash
OPENAI_API_KEY=your-openai-api-key
```
Start the development server:

```bash
pnpm dev
# Or directly:
npx @langchain/langgraph-cli dev
```
The server will start at `http://localhost:2024`.
Note: When running the full example with `pnpm dev` from the parent directory, both Next.js and this LangGraph server start automatically.
The agent includes two tools.
This example uses `createAgent` from LangChain for simplicity. However, the LangGraph CLI can serve any LangGraph application, including:

- `createAgent` (like this one)
- `StateGraph`

For more advanced use cases, you can use the low-level LangGraph APIs:
```ts
import {
  StateGraph,
  MessagesAnnotation,
  START,
  END,
} from '@langchain/langgraph';
import { ToolNode } from '@langchain/langgraph/prebuilt';

// callModel, shouldContinue, and tools are defined elsewhere in the agent
const workflow = new StateGraph(MessagesAnnotation)
  .addNode('agent', callModel)
  .addNode('tools', new ToolNode(tools))
  .addEdge(START, 'agent')
  .addConditionalEdges('agent', shouldContinue)
  .addEdge('tools', 'agent');

export const graph = workflow.compile();
```
See the LangGraph documentation for more examples.
Connect to this server from the frontend using `LangSmithDeploymentTransport`:

```tsx
import { LangSmithDeploymentTransport } from '@ai-sdk/langchain';
import { useChat } from '@ai-sdk/react';

const transport = new LangSmithDeploymentTransport({
  url: 'http://localhost:2024',
});

function Chat() {
  const { messages, sendMessage } = useChat({ transport });
  // ...
}
```
The `langgraph.json` file configures the LangGraph CLI:

```json
{
  "graphs": {
    "agent": "./src/agent.ts:graph"
  },
  "env": ".env"
}
```