Mastra's model router supports embedding models using the same provider/model string format as language models. This provides a unified interface for both chat and embedding models with TypeScript autocomplete support.
```typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { embedMany } from "ai";

// Generate embeddings
const { embeddings } = await embedMany({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  values: ["Hello world", "Semantic search is powerful"],
});
```
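To make "semantic search" concrete: each value above becomes a vector, and similar texts produce vectors that point in similar directions. A minimal sketch of comparing two such vectors follows; `cosineSimilarity` here is an illustrative helper, not part of Mastra's API — vector stores typically compute this for you.

```typescript
// Cosine similarity between two embedding vectors (hypothetical helper).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// e.g. cosineSimilarity(embeddings[0], embeddings[1]) scores how close
// "Hello world" and "Semantic search is powerful" are in embedding space.
```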
### OpenAI

- `text-embedding-3-small` - 1536 dimensions, 8191 max tokens
- `text-embedding-3-large` - 3072 dimensions, 8191 max tokens
- `text-embedding-ada-002` - 1536 dimensions, 8191 max tokens

```typescript
const embedder = new ModelRouterEmbeddingModel("openai/text-embedding-3-small");
```
### Google

- `gemini-embedding-001` - 768 dimensions, 2048 max tokens

```typescript
const embedder = new ModelRouterEmbeddingModel("google/gemini-embedding-001");
```
The model router automatically detects API keys from environment variables:
- `OPENAI_API_KEY`
- `GOOGLE_GENERATIVE_AI_API_KEY`

```bash
# .env
OPENAI_API_KEY=sk-...
GOOGLE_GENERATIVE_AI_API_KEY=...
```
You can use any OpenAI-compatible embedding endpoint with a custom URL:
```typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

const embedder = new ModelRouterEmbeddingModel({
  providerId: "ollama",
  modelId: "nomic-embed-text",
  url: "http://localhost:11434/v1",
  apiKey: "not-needed", // Some providers don't require API keys
});
```
The embedding model router integrates seamlessly with Mastra's memory system:
```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

const agent = new Agent({
  id: "my-agent",
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  memory: new Memory({
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  }),
});
```
:::info
The `embedder` field accepts:

- `EmbeddingModelId` (string with autocomplete)
- `EmbeddingModel<string>` (AI SDK v1)
- `EmbeddingModelV2<string>` (AI SDK v2)

:::
Use embedding models for document chunking and retrieval:
```typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { embedMany } from "ai";

// Embed document chunks
const { embeddings } = await embedMany({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  values: chunks.map((chunk) => chunk.text),
});

// Store embeddings in your vector database
await vectorStore.upsert(
  chunks.map((chunk, i) => ({
    id: chunk.id,
    vector: embeddings[i],
    metadata: chunk.metadata,
  })),
);
```
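The retrieval half of this workflow — embedding a query, then finding the nearest stored chunks — is usually handled by the vector store's own query method. For illustration only, here is a minimal in-memory sketch; `StoredChunk`, `cosine`, and `topK` are hypothetical names, not Mastra APIs.

```typescript
// Assumed shape for a stored chunk (illustrative, not a Mastra type).
type StoredChunk = { id: string; vector: number[]; text: string };

// Cosine similarity between two vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks whose vectors are most similar to the query vector.
function topK(query: number[], chunks: StoredChunk[], k = 3): StoredChunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

In practice you would embed the user's query with the same embedding model used for the chunks, then pass that vector to your vector database's query API rather than scanning in memory.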
The model router provides full TypeScript autocomplete for embedding model IDs:
```typescript
import type { EmbeddingModelId } from "@mastra/core";

// Type-safe embedding model selection
const modelId: EmbeddingModelId = "openai/text-embedding-3-small";
//    ^ Autocomplete shows all supported models

const embedder = new ModelRouterEmbeddingModel(modelId);
```
The model router validates provider and model IDs at construction time:
```typescript
try {
  const embedder = new ModelRouterEmbeddingModel("invalid/model");
} catch (error) {
  console.error(error.message);
  // "Unknown provider: invalid. Available providers: openai, google"
}
```
Missing API keys are also caught early:
```typescript
try {
  const embedder = new ModelRouterEmbeddingModel(
    "openai/text-embedding-3-small",
  );
  // Throws if OPENAI_API_KEY is not set
} catch (error) {
  console.error(error.message);
  // "API key not found for provider openai. Set OPENAI_API_KEY environment variable."
}
```