content/providers/03-community-providers/09-browser-ai.mdx
`jakobhoeg/browser-ai` is a community provider that serves as the base AI SDK provider for client-side, in-browser AI models. It currently includes a model provider for Chrome and Edge's native browser AI models via the JavaScript Prompt API, as well as model providers for running open-source in-browser models with 🤗 Transformers.js and WebLLM.
<Note>We support both v5 and v6 of the AI SDK.</Note>
## Setup

The `@browser-ai/core` package (formerly `@built-in-ai`) is the AI SDK provider for Chrome and Edge's built-in AI models. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @browser-ai/core" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @browser-ai/core" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @browser-ai/core" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @browser-ai/core" dark />
  </Tab>
</Tabs>
The `@browser-ai/web-llm` package is the AI SDK provider for popular open-source models using the WebLLM inference engine. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @browser-ai/web-llm" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @browser-ai/web-llm" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @browser-ai/web-llm" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @browser-ai/web-llm" dark />
  </Tab>
</Tabs>
The `@browser-ai/transformers-js` package is the AI SDK provider for popular open-source models using Transformers.js. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @browser-ai/transformers-js" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @browser-ai/transformers-js" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @browser-ai/transformers-js" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @browser-ai/transformers-js" dark />
  </Tab>
</Tabs>
## Provider Instance

### `@browser-ai/core`

You can import the default provider instance `browserAI` from `@browser-ai/core`:
```ts
import { browserAI } from '@browser-ai/core';

const model = browserAI();
```
You can use the following optional settings to customize the model (see the example after this list):

- **temperature** _number_

  Controls randomness in the model's responses. For most models, `0` means near-deterministic results, and higher values mean more randomness.

- **topK** _number_

  Controls the diversity and coherence of the generated text by limiting the selection of the next token.
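For example, the settings above can be applied when creating the model instance. This is a minimal sketch that assumes `browserAI()` accepts them as an options object; check the package's own documentation for the exact call signature:

```ts
import { browserAI } from '@browser-ai/core';

// Assumed settings object; temperature and topK are the options listed above.
const model = browserAI({
  temperature: 0.7, // more creative than the default
  topK: 40, // sample from the 40 most likely next tokens
});
```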
### `@browser-ai/web-llm`

You can import the default provider instance `webLLM` from `@browser-ai/web-llm`:
```ts
import { webLLM } from '@browser-ai/web-llm';

const model = webLLM();
```
### `@browser-ai/transformers-js`

You can import the default provider instance `transformersJS` from `@browser-ai/transformers-js`:
```ts
import { transformersJS } from '@browser-ai/transformers-js';

const model = transformersJS();
```
## Language Models

### `@browser-ai/core`

The provider works automatically in any browser that supports the Prompt API, since the browser handles model orchestration. For instance, if your client runs Edge, it will use Phi-4-mini; on Chrome, it will use Gemini Nano.
### `@browser-ai/web-llm`

The provider supports a wide range of popular open-source models such as Llama 3 and Qwen 3. For a complete list, refer to the official WebLLM documentation.
### `@browser-ai/transformers-js`

The provider supports a wide range of popular open-source models from Hugging Face via the Transformers.js library.
## Examples

### `@browser-ai/core`

```ts
import { streamText } from 'ai';
import { browserAI } from '@browser-ai/core';

const result = streamText({
  model: browserAI(), // will default to the specific browser model
  prompt: 'Hello, how are you',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
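If you don't need streaming, the same model should also work with the AI SDK's `generateText` call. A minimal sketch (the prompt is only an illustration):

```ts
import { generateText } from 'ai';
import { browserAI } from '@browser-ai/core';

// Awaits the full completion from the on-device model instead of streaming it.
const { text } = await generateText({
  model: browserAI(),
  prompt: 'Write a haiku about browsers.',
});

console.log(text);
```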
### `@browser-ai/web-llm`

```ts
import { streamText } from 'ai';
import { webLLM } from '@browser-ai/web-llm';

const result = streamText({
  model: webLLM('Qwen3-0.6B-q0f16-MLC'),
  prompt: 'Hello, how are you',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
### `@browser-ai/transformers-js`

```ts
import { streamText } from 'ai';
import { transformersJS } from '@browser-ai/transformers-js';

const result = streamText({
  model: transformersJS('HuggingFaceTB/SmolLM2-360M-Instruct'),
  prompt: 'Hello, how are you',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
For more examples and API reference, check out the documentation.