Prisma ORM streamlines database access with type-safe queries, and when paired with Next.js and AI SDK, it creates a powerful foundation for building AI-powered chat applications with persistent storage.
In this guide, you'll learn to build a chat application using AI SDK with Next.js and Prisma ORM to store chat sessions and messages in a Prisma Postgres database. You can find a complete example of this guide on GitHub.
To get started, you'll need to create a new Next.js project.
npx create-next-app@latest ai-sdk-prisma
It will prompt you to customize your setup. Choose the defaults:
:::info
- Would you like to use TypeScript? **Yes**
- Would you like to use ESLint? **Yes**
- Would you like to use Tailwind CSS? **Yes**
- Would you like your code inside a `src/` directory? **No**
- Would you like to use App Router? **Yes**
- Would you like to use Turbopack for `next dev`? **Yes**
- Would you like to customize the import alias (`@/*` by default)? **No**
:::
Navigate to the project directory:
cd ai-sdk-prisma
To get started with Prisma, you'll need to install a few dependencies:
npm install prisma tsx @types/pg --save-dev
npm install @prisma/client @prisma/adapter-pg dotenv pg
:::info
If you are using a different database provider (MySQL, SQL Server, SQLite), install the corresponding driver adapter package instead of @prisma/adapter-pg. For more information, see Database drivers.
:::
Once installed, initialize Prisma in your project:
npx prisma init --db --output ../app/generated/prisma
:::info
You'll need to answer a few questions while setting up your Prisma Postgres database. Select the region closest to your location and enter a memorable name for your database, like "My Next.js AI SDK Project".
:::
This will create:
- A `prisma` directory with a `schema.prisma` file.
- A `prisma.config.ts` file for configuring Prisma.
- A `.env` file containing the `DATABASE_URL` at the project root.

The `output` field specifies where the generated Prisma Client will be stored.

In the `prisma/schema.prisma` file, add the following models:
generator client {
provider = "prisma-client"
output = "../app/generated/prisma"
}
datasource db {
provider = "postgresql"
}
model Session { // [!code ++]
id String @id // [!code ++]
createdAt DateTime @default(now()) // [!code ++]
updatedAt DateTime @updatedAt // [!code ++]
messages Message[] // [!code ++]
} // [!code ++]
// [!code ++]
model Message { // [!code ++]
id String @id @default(cuid()) // [!code ++]
role MessageRole // [!code ++]
content String // [!code ++]
createdAt DateTime @default(now()) // [!code ++]
sessionId String // [!code ++]
session Session @relation(fields: [sessionId], references: [id], onDelete: Cascade) // [!code ++]
} // [!code ++]
// [!code ++]
enum MessageRole { // [!code ++]
USER // [!code ++]
ASSISTANT // [!code ++]
} // [!code ++]
This creates two models, Session and Message, along with a MessageRole enum.
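As a rough illustration of what this schema describes, the records you'll work with are shaped like the following. These are hypothetical hand-written types for clarity only; the real types are produced by the generated client in `app/generated/prisma`:

```typescript
// Hypothetical shapes mirroring the schema above, for illustration only.
type MessageRole = "USER" | "ASSISTANT";

interface Message {
  id: string;
  role: MessageRole;
  content: string;
  createdAt: Date;
  sessionId: string;
}

interface Session {
  id: string;
  createdAt: Date;
  updatedAt: Date;
  messages: Message[];
}

// A session with one user message, as it might look once persisted.
const session: Session = {
  id: "session-1",
  createdAt: new Date(),
  updatedAt: new Date(),
  messages: [
    {
      id: "message-1",
      role: "USER",
      content: '[{"type":"text","text":"Hello"}]',
      createdAt: new Date(),
      sessionId: "session-1",
    },
  ],
};
```

Note that `content` holds a JSON string: as you'll see later, message parts are serialized before being stored.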
To access the variables in the `.env` file, they can either be loaded by your runtime or with `dotenv`.

Add an import for `dotenv` at the top of `prisma.config.ts`:
import "dotenv/config"; // [!code ++]
import { defineConfig, env } from "prisma/config";
export default defineConfig({
schema: "prisma/schema.prisma",
migrations: {
path: "prisma/migrations",
},
datasource: {
url: env("DATABASE_URL"),
},
});
Now, run the following command to create the database tables and generate the Prisma Client:
npx prisma migrate dev --name init
npx prisma generate
Create a /lib directory and a prisma.ts file inside it. This file will be used to create and export your Prisma Client instance.
mkdir lib
touch lib/prisma.ts
Set up the Prisma client like this:
import { PrismaClient } from "../app/generated/prisma/client";
import { PrismaPg } from "@prisma/adapter-pg";
const adapter = new PrismaPg({
connectionString: process.env.DATABASE_URL!,
});
const globalForPrisma = global as unknown as {
prisma: PrismaClient;
};
const prisma =
globalForPrisma.prisma ||
new PrismaClient({
adapter,
});
if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma;
export default prisma;
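The `globalForPrisma` trick above caches the client on the global object so that hot reloads in development reuse one instance instead of opening a new connection pool on every reload. A dependency-free sketch of the same pattern, using a stand-in class instead of the real `PrismaClient`:

```typescript
// Stand-in class so the pattern can run without a database (hypothetical).
class FakeDbClient {
  readonly createdAt = Date.now();
}

// Cache the instance on globalThis, mirroring the globalForPrisma pattern above.
const globalCache = globalThis as unknown as { fakeDbClient?: FakeDbClient };

function getDbClient(): FakeDbClient {
  if (!globalCache.fakeDbClient) {
    globalCache.fakeDbClient = new FakeDbClient();
  }
  return globalCache.fakeDbClient;
}
```

Every call to `getDbClient()` returns the same instance, because module state survives on `globalThis` even when the module itself is re-evaluated.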
:::warning We recommend using a connection pooler (like Prisma Accelerate) to manage database connections efficiently.
If you choose not to use one, avoid instantiating PrismaClient globally in long-lived environments. Instead, create and dispose of the client per request to prevent exhausting your database connections.
:::
Install the AI SDK package:
npm install ai @ai-sdk/react @ai-sdk/openai zod
To use AI SDK, you'll need to obtain an API key from OpenAI.
In the OpenAI dashboard, click **Create new secret key**, give it a name like "Next.js AI SDK Project", leave the permissions set to **All access**, and click **Create secret key**.

Copy the key and add it to your `.env` file alongside your database URL:

DATABASE_URL=<YOUR_DATABASE_URL_HERE>
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY_HERE>
You need to create a route handler to handle the AI SDK requests. This handler will process chat messages and stream AI responses back to the client.
mkdir -p app/api/chat
touch app/api/chat/route.ts
Set up the basic route handler:
import { openai } from "@ai-sdk/openai";
import { streamText, UIMessage, convertToModelMessages } from "ai";
export const maxDuration = 300;
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: openai("gpt-4o"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
This route handler parses the chat messages from the request body, streams a response from OpenAI's gpt-4o model, and returns it to the client as a UI message stream.
To save chat sessions and messages to the database, we need to:

- Add an `id` parameter to the request
- Add an `onFinish` callback to the response
- Pass the `id` and `messages` parameters to the `saveChat` function (which we'll build next)

import { openai } from "@ai-sdk/openai";
import { streamText, UIMessage, convertToModelMessages } from "ai";
import { saveChat } from "@/lib/save-chat"; // [!code ++]
export const maxDuration = 300;
export async function POST(req: Request) {
const { messages, id }: { messages: UIMessage[]; id: string } = await req.json(); // [!code highlight]
const result = streamText({
model: openai("gpt-4o"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse({
originalMessages: messages, // [!code ++]
onFinish: async ({ messages }) => {
// [!code ++]
await saveChat(messages, id); // [!code ++]
}, // [!code ++]
});
}
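Since the route now trusts the client to send both `messages` and an `id`, you may want to validate the body shape before using it. A minimal, hypothetical type guard (names are illustrative, and it only checks the fields this guide touches):

```typescript
// Hypothetical runtime check for the request body shape the route expects.
// AI SDK's UIMessage is reduced here to the fields we actually use.
interface ChatRequestBody {
  id: string;
  messages: { role: string; parts: unknown[] }[];
}

function isChatRequestBody(value: unknown): value is ChatRequestBody {
  if (typeof value !== "object" || value === null) return false;
  const body = value as Record<string, unknown>;
  return typeof body.id === "string" && Array.isArray(body.messages);
}
```

In the route, you could return a 400 response when the guard fails rather than letting `saveChat` fail later with an invalid `id`.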
Create a new file at `lib/save-chat.ts` to save the chat sessions and messages to the database:
touch lib/save-chat.ts
To start, create a basic function called saveChat that will be used to save the chat sessions and messages to the database.
Pass into it the messages and id parameters typed as UIMessage[] and string respectively:
import { UIMessage } from "ai";
export async function saveChat(messages: UIMessage[], id: string) {}
Now, add the logic to create a session with the given id:
import prisma from "./prisma"; // [!code ++]
import { UIMessage } from "ai";
export async function saveChat(messages: UIMessage[], id: string) {
const session = await prisma.session.upsert({
// [!code ++]
where: { id }, // [!code ++]
update: {}, // [!code ++]
create: { id }, // [!code ++]
}); // [!code ++]
// [!code ++]
if (!session) throw new Error("Session not found"); // [!code ++]
}
Add the logic to save the messages to the database. Only the last two messages (the user's latest message and the assistant's reply) are saved, to avoid writing duplicate messages on each request.
import prisma from "./prisma";
import { UIMessage } from "ai";
export async function saveChat(messages: UIMessage[], id: string) {
const session = await prisma.session.upsert({
where: { id },
update: {},
create: { id },
});
if (!session) throw new Error("Session not found");
const lastTwoMessages = messages.slice(-2); // [!code ++]
// [!code ++]
for (const msg of lastTwoMessages) {
// [!code ++]
let content = JSON.stringify(msg.parts); // [!code ++]
if (msg.role === "assistant") {
// [!code ++]
const textParts = msg.parts.filter((part) => part.type === "text"); // [!code ++]
content = JSON.stringify(textParts); // [!code ++]
} // [!code ++]
// [!code ++]
await prisma.message.create({
// [!code ++]
data: {
// [!code ++]
role: msg.role === "user" ? "USER" : "ASSISTANT", // [!code ++]
content: content, // [!code ++]
sessionId: session.id, // [!code ++]
}, // [!code ++]
}); // [!code ++]
} // [!code ++]
}
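The persistence logic above can be exercised without a database. This hypothetical `toRows` helper applies the same transformation: keep the last two messages and, for assistant messages, store only the text parts:

```typescript
// A runnable sketch (no database) of the transformation saveChat applies.
type Part = { type: string; text?: string };
type ChatMessage = { role: "user" | "assistant"; parts: Part[] };

function toRows(messages: ChatMessage[]) {
  // Keep only the last two messages: the user's prompt and the assistant's reply.
  return messages.slice(-2).map((msg) => {
    // Assistant messages may include non-text parts (e.g. step markers);
    // only the text parts are persisted.
    const parts =
      msg.role === "assistant"
        ? msg.parts.filter((part) => part.type === "text")
        : msg.parts;
    return {
      role: msg.role === "user" ? "USER" : "ASSISTANT",
      content: JSON.stringify(parts),
    };
  });
}
```

Running it on a three-message history produces two rows, with the assistant's content stripped down to its text parts.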
This function:

- Uses the `id` to create a session if it doesn't exist
- Saves each message with the session's `sessionId`

Create a new file at `app/api/messages/route.ts` to fetch the messages from the database:
mkdir -p app/api/messages
touch app/api/messages/route.ts
Create a basic API route to fetch the messages from the database.
import { NextResponse } from "next/server";
import prisma from "@/lib/prisma";
export async function GET() {
try {
const messages = await prisma.message.findMany({
orderBy: { createdAt: "asc" },
});
const uiMessages = messages.map((msg) => ({
id: msg.id,
role: msg.role.toLowerCase(),
parts: JSON.parse(msg.content),
}));
return NextResponse.json({ messages: uiMessages });
} catch (error) {
console.error("Error fetching messages:", error);
return NextResponse.json({ messages: [] });
}
}
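The row-to-UIMessage mapping in this route can be sketched as a standalone function, using plain object types in place of Prisma's generated ones (a hypothetical helper for illustration):

```typescript
// Hypothetical standalone version of the mapping the GET route performs.
type MessageRow = { id: string; role: "USER" | "ASSISTANT"; content: string };

function toUIMessages(rows: MessageRow[]) {
  return rows.map((row) => ({
    id: row.id,
    role: row.role.toLowerCase(), // "user" | "assistant", as useChat expects
    parts: JSON.parse(row.content) as unknown[], // stored as a JSON string
  }));
}
```

The enum values are lowercased and the serialized `content` column is parsed back into message parts, so the result can be passed straight to `setMessages` on the client.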
Replace the content of the app/page.tsx file with the following:
"use client";
export default function Page() {}
Start by importing the required dependencies and setting up the state variables that will manage the chat interface:
"use client";
import { useChat } from "@ai-sdk/react"; // [!code ++]
import { useState, useEffect } from "react"; // [!code ++]
export default function Chat() {
const [input, setInput] = useState(""); // [!code ++]
const [isLoading, setIsLoading] = useState(true); // [!code ++]
// [!code ++]
const { messages, sendMessage, setMessages } = useChat(); // [!code ++]
}
Create a useEffect hook that will automatically fetch and display any previously saved messages when the chat component loads:
"use client";
import { useChat } from "@ai-sdk/react";
import { useState, useEffect } from "react";
export default function Chat() {
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(true);
const { messages, sendMessage, setMessages } = useChat();
useEffect(() => {
// [!code ++]
fetch("/api/messages") // [!code ++]
.then((res) => res.json()) // [!code ++]
.then((data) => {
// [!code ++]
if (data.messages && data.messages.length > 0) {
// [!code ++]
setMessages(data.messages); // [!code ++]
} // [!code ++]
setIsLoading(false); // [!code ++]
}) // [!code ++]
.catch(() => setIsLoading(false)); // [!code ++]
}, [setMessages]); // [!code ++]
}
This loads any existing messages from your database when the component first mounts, so users can see their previous conversation history.
Build the UI components that will show a loading indicator while fetching data and render the chat messages with proper styling:
'use client';
import { useChat } from '@ai-sdk/react';
import { useState, useEffect } from 'react';
export default function Chat() {
const [input, setInput] = useState('');
const [isLoading, setIsLoading] = useState(true);
const { messages, sendMessage, setMessages } = useChat();
useEffect(() => {
fetch('/api/messages')
.then(res => res.json())
.then(data => {
if (data.messages && data.messages.length > 0) {
setMessages(data.messages);
}
setIsLoading(false);
})
.catch(() => setIsLoading(false));
}, [setMessages]);
if (isLoading) { // [!code ++]
return <div className="flex justify-center items-center h-screen">Loading...</div>; // [!code ++]
} // [!code ++]
// [!code ++]
return ( // [!code ++]
<div className="flex flex-col w-full max-w-md py-24 mx-auto stretch"> // [!code ++]
{messages.map(message => ( // [!code ++]
<div key={message.id} className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'} mb-4`}> // [!code ++]
<div className={`max-w-[80%] rounded-lg px-4 py-3 ${ // [!code ++]
message.role === 'user' // [!code ++]
? 'bg-neutral-600 text-white' // [!code ++]
: 'bg-neutral-200 dark:bg-neutral-800 text-neutral-900 dark:text-neutral-100' // [!code ++]
}`}> // [!code ++]
<div className="whitespace-pre-wrap"> // [!code ++]
<p className="text-xs font-extralight mb-1 opacity-70">{message.role === 'user' ? 'YOU ' : 'AI '}</p> // [!code ++]
{message.parts.map((part, i) => { // [!code ++]
switch (part.type) { // [!code ++]
case 'text': // [!code ++]
return <div key={`${message.id}-${i}`}>{part.text}</div>; // [!code ++]
} // [!code ++]
})} // [!code ++]
</div> // [!code ++]
</div> // [!code ++]
</div> // [!code ++]
))} // [!code ++]
</div> // [!code ++]
); // [!code ++]
}
The message rendering logic handles different message types and applies appropriate styling: user messages appear on the right with a dark background, while AI responses appear on the left with a light background.
Now we need to create the input interface that allows users to type and send messages to the AI:
"use client";
import { useChat } from "@ai-sdk/react";
import { useState, useEffect } from "react";
export default function Chat() {
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(true);
const { messages, sendMessage, setMessages } = useChat();
useEffect(() => {
fetch("/api/messages")
.then((res) => res.json())
.then((data) => {
if (data.messages && data.messages.length > 0) {
setMessages(data.messages);
}
setIsLoading(false);
})
.catch(() => setIsLoading(false));
}, [setMessages]);
if (isLoading) {
return <div className="flex justify-center items-center h-screen">Loading...</div>;
}
return (
<div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
{messages.map((message) => (
<div
key={message.id}
className={`flex ${message.role === "user" ? "justify-end" : "justify-start"} mb-4`}
>
<div
className={`max-w-[80%] rounded-lg px-4 py-3 ${
message.role === "user"
? "bg-neutral-600 text-white"
: "bg-neutral-200 dark:bg-neutral-800 text-neutral-900 dark:text-neutral-100"
}`}
>
<div className="whitespace-pre-wrap">
<p className="text-xs font-extralight mb-1 opacity-70">
{message.role === "user" ? "YOU " : "AI "}
</p>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return <div key={`${message.id}-${i}`}>{part.text}</div>;
}
})}
</div>
</div>
</div>
))}
<form // [!code ++]
onSubmit={(e) => { // [!code ++]
e.preventDefault(); // [!code ++]
sendMessage({ text: input }); // [!code ++]
setInput(""); // [!code ++]
}} // [!code ++]
> // [!code ++]
<input // [!code ++]
className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl" // [!code ++]
value={input} // [!code ++]
placeholder="Say something..." // [!code ++]
onChange={(e) => setInput(e.currentTarget.value)} // [!code ++]
/> // [!code ++]
</form> // [!code ++]
</div>
);
}
To test your application, run the following command:
npm run dev
Open your browser and navigate to http://localhost:3000 to see your application in action.
Test it by sending a message to the AI and see if it's saved to the database. Check Prisma Studio to see the messages in the database.
npx prisma studio
You're done! You've just created an AI SDK chat application with Next.js and Prisma. Below are some next steps to explore, along with more resources to help you expand your project.
Now that you have a working AI SDK chat application connected to a Prisma Postgres database, you can: