# Ollama
The AI SDK supports Ollama through two community providers:

- `ollama-ai-provider-v2` by nordwestt
- `ai-sdk-ollama` by jagreehal

Both provide language model support for the AI SDK with different approaches and feature sets.
The AI SDK ecosystem offers multiple Ollama providers, each optimized for different use cases:
`nordwestt/ollama-ai-provider-v2` provides straightforward access to Ollama models with direct HTTP API calls, making it ideal for basic text generation and getting started quickly.
`ai-sdk-ollama` by jagreehal is recommended when you need:

- `mirostat`, `repeat_penalty`, and `num_ctx` for fine-tuned control

Key technical advantages:

- Built on the official Ollama JavaScript client library

Both providers implement the AI SDK specification and offer excellent TypeScript support. Choose based on your project's complexity and feature requirements.
Choose and install your preferred Ollama provider:
For `ollama-ai-provider-v2`:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add ollama-ai-provider-v2" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install ollama-ai-provider-v2" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add ollama-ai-provider-v2" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add ollama-ai-provider-v2" dark />
  </Tab>
</Tabs>
For `ai-sdk-ollama`:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add ai-sdk-ollama" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install ai-sdk-ollama" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add ai-sdk-ollama" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add ai-sdk-ollama" dark />
  </Tab>
</Tabs>
You can import the default provider instance `ollama` from `ollama-ai-provider-v2`:

```ts
import { ollama } from 'ollama-ai-provider-v2';
```
If you need a customized setup, you can import `createOllama` from `ollama-ai-provider-v2` and create a provider instance with your settings:

```ts
import { createOllama } from 'ollama-ai-provider-v2';

const ollama = createOllama({
  // optional settings, e.g.
  baseURL: 'https://api.ollama.com',
});
```
You can use the following optional settings to customize the Ollama provider instance:

- **baseURL** `string`

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `http://localhost:11434/api`.

- **headers** `Record<string,string>`

  Custom headers to include in the requests.
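Combining these settings, a provider instance that targets a remote Ollama host behind a proxy might look like the following sketch (the URL and token are placeholders, not real endpoints):

```ts
import { createOllama } from 'ollama-ai-provider-v2';

// Hypothetical values: replace with your own proxy URL and credentials.
const ollama = createOllama({
  baseURL: 'https://ollama.internal.example.com/api',
  headers: {
    Authorization: 'Bearer <your-token>',
  },
});
```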
You can create models that call the Ollama Chat Completion API using the provider instance.
The first argument is the model id, e.g. `phi3`. Some models have multi-modal capabilities.

```ts
const model = ollama('phi3');
```
You can find more models on the Ollama Library homepage.
This provider supports hybrid reasoning models such as `qwen3`, letting you toggle reasoning on or off between messages:

```ts
import { ollama } from 'ollama-ai-provider-v2';
import { generateText } from 'ai';

const { text } = await generateText({
  model: ollama('qwen3:4b'),
  providerOptions: { ollama: { think: true } },
  prompt:
    'Write a vegetarian lasagna recipe for 4 people, but really think about it',
});
```
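For longer generations you may prefer to stream the output instead of waiting for the full response. A minimal sketch using `streamText` from the AI SDK, assuming the same `think` provider option applies when streaming (requires a running Ollama server):

```ts
import { ollama } from 'ollama-ai-provider-v2';
import { streamText } from 'ai';

// Streams tokens as they are generated instead of buffering the
// whole completion; `think: true` is assumed to behave as in the
// generateText example above.
const { textStream } = streamText({
  model: ollama('qwen3:4b'),
  providerOptions: { ollama: { think: true } },
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```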
You can create models that call the Ollama embeddings API using the `.embeddingModel()` factory method:

```ts
import { ollama } from 'ollama-ai-provider-v2';
import { embedMany, cosineSimilarity } from 'ai';

const model = ollama.embeddingModel('nomic-embed-text');

const { embeddings } = await embedMany({
  model: model,
  values: ['sunny day at the beach', 'rainy afternoon in the city'],
});

console.log(
  `cosine similarity: ${cosineSimilarity(embeddings[0], embeddings[1])}`,
);
```
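Cosine similarity scores range from -1 to 1, with values near 1 indicating semantically similar texts. For illustration, here is a standalone implementation of the same formula the AI SDK's `cosineSimilarity` helper computes (dot product divided by the product of the vector magnitudes):

```ts
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Standalone illustration; not the AI SDK's internal code.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('vectors must have the same length');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical directions score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```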