# Together.ai Provider
The Together.ai provider contains support for 200+ open-source models through the Together.ai API.
## Setup

The Together.ai provider is available via the `@ai-sdk/togetherai` module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/togetherai" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/togetherai" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/togetherai" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @ai-sdk/togetherai" dark />
  </Tab>
</Tabs>

## Provider Instance

You can import the default provider instance `togetherai` from `@ai-sdk/togetherai`:
```ts
import { togetherai } from '@ai-sdk/togetherai';
```
If you need a customized setup, you can import `createTogetherAI` from `@ai-sdk/togetherai` and create a provider instance with your settings:
```ts
import { createTogetherAI } from '@ai-sdk/togetherai';

const togetherai = createTogetherAI({
  apiKey: process.env.TOGETHER_API_KEY ?? '',
});
```
You can use the following optional settings to customize the Together.ai provider instance:

- **baseURL** `string`

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://api.together.xyz/v1`.

- **apiKey** `string`

  API key that is sent using the `Authorization` header.
  It defaults to the `TOGETHER_API_KEY` environment variable.

- **headers** `Record<string,string>`

  Custom headers to include in the requests.

- **fetch** `(input: RequestInfo, init?: RequestInit) => Promise<Response>`

  Custom fetch implementation. Defaults to the global `fetch` function.
  You can use it as a middleware to intercept requests,
  or to provide a custom fetch implementation for e.g. testing.
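As a sketch of the `fetch` option, the wrapper below logs each outgoing request before delegating to the global `fetch` (the logging behavior is illustrative, not part of the SDK):

```ts
import { createTogetherAI } from '@ai-sdk/togetherai';

const togetherai = createTogetherAI({
  apiKey: process.env.TOGETHER_API_KEY ?? '',
  // Log every outgoing request, then delegate to the global fetch.
  fetch: async (input, init) => {
    console.log('Together.ai request:', input);
    return fetch(input, init);
  },
});
```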
## Language Models

You can create Together.ai models using a provider instance. The first argument is the model id, e.g. `google/gemma-2-9b-it`.

```ts
const model = togetherai('google/gemma-2-9b-it');
```
Together.ai exposes the thinking of `deepseek-ai/DeepSeek-R1` in the generated text using the `<think>` tag.
You can use the `extractReasoningMiddleware` to extract this reasoning and expose it as a `reasoning` property on the result:
```ts
import { togetherai } from '@ai-sdk/togetherai';
import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

const enhancedModel = wrapLanguageModel({
  model: togetherai('deepseek-ai/DeepSeek-R1'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});
```
You can then use that enhanced model in functions like `generateText` and `streamText`.
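For example, a sketch of calling the enhanced model and reading the extracted reasoning (the prompt is illustrative):

```ts
import { togetherai } from '@ai-sdk/togetherai';
import { wrapLanguageModel, extractReasoningMiddleware, generateText } from 'ai';

const enhancedModel = wrapLanguageModel({
  model: togetherai('deepseek-ai/DeepSeek-R1'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

// `reasoning` contains the text extracted from the <think> tag;
// `text` contains the answer with the tag removed.
const { text, reasoning } = await generateText({
  model: enhancedModel,
  prompt: 'How many prime numbers are there below 20?',
});
```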
You can use Together.ai language models to generate text with the `generateText` function:
```ts
import { togetherai } from '@ai-sdk/togetherai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: togetherai('meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
Together.ai language models can also be used in the `streamText` function (see AI SDK Core).

The Together.ai provider also supports completion models via `togetherai.completionModel()` and embedding models via `togetherai.embeddingModel()`.
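As a sketch, both factory methods are called on the same provider instance (the model ids are illustrative; choose any completion- or embedding-capable model from Together.ai's catalog):

```ts
import { togetherai } from '@ai-sdk/togetherai';

// Completion (non-chat) API model — the model id is illustrative.
const completionModel = togetherai.completionModel(
  'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo',
);

// Embedding model.
const embeddingModel = togetherai.embeddingModel(
  'togethercomputer/m2-bert-80M-2k-retrieval',
);
```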
### Model Capabilities

| Model                                               | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| --------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `moonshotai/Kimi-K2.5`                              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `Qwen/Qwen3.5-397B-A17B`                            | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `MiniMaxAI/MiniMax-M2.5`                            | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `zai-org/GLM-5`                                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `deepseek-ai/DeepSeek-V3.1`                         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `openai/gpt-oss-120b`                               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `openai/gpt-oss-20b`                                | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
## Image Models

You can create Together.ai image models using the `.image()` factory method.
For more on image generation with the AI SDK see `generateImage()`.
```ts
import { togetherai } from '@ai-sdk/togetherai';
import { generateImage } from 'ai';

const { images } = await generateImage({
  model: togetherai.image('black-forest-labs/FLUX.1-dev'),
  prompt: 'A delighted resplendent quetzal mid flight amidst raindrops',
});
```
You can pass optional provider-specific request parameters using the `providerOptions` argument.
```ts
import {
  togetherai,
  type TogetherAIImageModelOptions,
} from '@ai-sdk/togetherai';
import { generateImage } from 'ai';

const { images } = await generateImage({
  model: togetherai.image('black-forest-labs/FLUX.1-dev'),
  prompt: 'A delighted resplendent quetzal mid flight amidst raindrops',
  size: '512x512',
  // Optional additional provider-specific request parameters
  providerOptions: {
    togetherai: {
      steps: 40,
    } satisfies TogetherAIImageModelOptions,
  },
});
```
The following provider options are available:

- **steps** `number`

  Number of generation steps. Higher values can improve quality.

- **guidance** `number`

  Guidance scale for image generation.

- **negative_prompt** `string`

  Negative prompt to guide what to avoid.

- **disable_safety_checker** `boolean`

  Disable the safety checker for image generation. When `true`, the API will not reject images flagged as potentially NSFW. Not available for Flux Schnell Free and Flux Pro models.
### Image Editing

Together.ai supports image editing through FLUX Kontext models. Pass input images via `prompt.images` to transform or edit existing images.
Transform an existing image using text prompts:
```ts
import {
  togetherai,
  type TogetherAIImageModelOptions,
} from '@ai-sdk/togetherai';
import { generateImage } from 'ai';
import { readFileSync } from 'node:fs';

const imageBuffer = readFileSync('./input-image.png');

const { images } = await generateImage({
  model: togetherai.image('black-forest-labs/FLUX.1-kontext-pro'),
  prompt: {
    text: 'Turn the cat into a golden retriever dog',
    images: [imageBuffer],
  },
  size: '1024x1024',
  providerOptions: {
    togetherai: {
      steps: 28,
    } satisfies TogetherAIImageModelOptions,
  },
});
```
You can also pass image URLs directly:
```ts
const { images } = await generateImage({
  model: togetherai.image('black-forest-labs/FLUX.1-kontext-pro'),
  prompt: {
    text: 'Make the background a lush rainforest',
    images: ['https://example.com/photo.png'],
  },
  size: '1024x1024',
  providerOptions: {
    togetherai: {
      steps: 28,
    } satisfies TogetherAIImageModelOptions,
  },
});
```
| Model                                  | Description                         |
| -------------------------------------- | ----------------------------------- |
| `black-forest-labs/FLUX.1-kontext-pro` | Production quality, balanced speed  |
| `black-forest-labs/FLUX.1-kontext-max` | Maximum image fidelity              |
| `black-forest-labs/FLUX.1-kontext-dev` | Development and experimentation     |
Supported image dimensions vary by model. Common sizes include `512x512`, `768x768`, and `1024x1024`, with some models supporting up to `1792x1792`. The default size is `1024x1024`.
| Available Models                           |
| ------------------------------------------ |
| `stabilityai/stable-diffusion-xl-base-1.0` |
| `black-forest-labs/FLUX.1-dev`             |
| `black-forest-labs/FLUX.1-dev-lora`        |
| `black-forest-labs/FLUX.1-schnell`         |
| `black-forest-labs/FLUX.1-canny`           |
| `black-forest-labs/FLUX.1-depth`           |
| `black-forest-labs/FLUX.1-redux`           |
| `black-forest-labs/FLUX.1.1-pro`           |
| `black-forest-labs/FLUX.1-pro`             |
| `black-forest-labs/FLUX.1-schnell-Free`    |
| `black-forest-labs/FLUX.1-kontext-pro`     |
| `black-forest-labs/FLUX.1-kontext-max`     |
| `black-forest-labs/FLUX.1-kontext-dev`     |
## Embedding Models

You can create Together.ai embedding models using the `.embeddingModel()` factory method.
For more on embedding models with the AI SDK see `embed()`.
```ts
import { togetherai } from '@ai-sdk/togetherai';
import { embed } from 'ai';

const { embedding } = await embed({
  model: togetherai.embeddingModel('togethercomputer/m2-bert-80M-2k-retrieval'),
  value: 'sunny day at the beach',
});
```
| Model                                     | Dimensions | Max Tokens |
| ----------------------------------------- | ---------- | ---------- |
| `BAAI/bge-large-en-v1.5`                  | 1024       | 512        |
| `Alibaba-NLP/gte-modernbert-base`         | 768        | 8192       |
| `intfloat/multilingual-e5-large-instruct` | 1024       | 514        |
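To embed several values in one call, the AI SDK's `embedMany` function works with the same models (a sketch; the inputs are illustrative):

```ts
import { togetherai } from '@ai-sdk/togetherai';
import { embedMany } from 'ai';

// Embed a batch of documents in a single call.
const { embeddings } = await embedMany({
  model: togetherai.embeddingModel('BAAI/bge-large-en-v1.5'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
```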
## Reranking Models

You can create Together.ai reranking models using the `.reranking()` factory method.
For more on reranking with the AI SDK see `rerank()`.
```ts
import { togetherai } from '@ai-sdk/togetherai';
import { rerank } from 'ai';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { ranking } = await rerank({
  model: togetherai.reranking('mixedbread-ai/Mxbai-Rerank-Large-V2'),
  documents,
  query: 'talk about rain',
  topN: 2,
});

console.log(ranking);
// [
//   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
//   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
// ]
```
Together.ai reranking models support additional provider options for object documents. You can specify which fields to use for ranking:
```ts
import {
  togetherai,
  type TogetherAIRerankingModelOptions,
} from '@ai-sdk/togetherai';
import { rerank } from 'ai';

const documents = [
  {
    from: 'Paul Doe',
    subject: 'Follow-up',
    text: 'We are happy to give you a discount of 20%.',
  },
  {
    from: 'John McGill',
    subject: 'Missing Info',
    text: 'Here is the pricing from Oracle: $5000/month',
  },
];

const { ranking } = await rerank({
  model: togetherai.reranking('mixedbread-ai/Mxbai-Rerank-Large-V2'),
  documents,
  query: 'Which pricing did we get from Oracle?',
  providerOptions: {
    togetherai: {
      rankFields: ['from', 'subject', 'text'], // Specify which fields to rank by
    } satisfies TogetherAIRerankingModelOptions,
  },
});
```
The following provider options are available:

- **rankFields** `string[]`

  Array of field names to use for ranking when documents are JSON objects. If not specified, all fields are used.
| Model                                 |
| ------------------------------------- |
| `mixedbread-ai/Mxbai-Rerank-Large-V2` |