
Embedding models

Mastra's model router supports embedding models using the same provider/model string format as language models. This provides a unified interface for both chat and embedding models with TypeScript autocomplete support.

Quickstart

typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { embedMany } from "ai";

// Generate embeddings
const { embeddings } = await embedMany({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  values: ["Hello world", "Semantic search is powerful"],
});
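
For a single value, the AI SDK's embed helper works the same way:

typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { embed } from "ai";

// Embed a single string; the result is one vector
const { embedding } = await embed({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  value: "Hello world",
});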

Supported models

OpenAI

  • text-embedding-3-small - 1536 dimensions, 8191 max tokens
  • text-embedding-3-large - 3072 dimensions, 8191 max tokens
  • text-embedding-ada-002 - 1536 dimensions, 8191 max tokens
typescript
const embedder = new ModelRouterEmbeddingModel("openai/text-embedding-3-small");

Google

  • gemini-embedding-001 - 768 dimensions, 2048 max tokens
typescript
const embedder = new ModelRouterEmbeddingModel("google/gemini-embedding-001");

Authentication

The model router automatically detects API keys from environment variables:

  • OpenAI: OPENAI_API_KEY
  • Google: GOOGLE_GENERATIVE_AI_API_KEY
bash
# .env
OPENAI_API_KEY=sk-...
GOOGLE_GENERATIVE_AI_API_KEY=...
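
In a Node.js script you can load these variables before constructing a model, for example with dotenv (a minimal sketch, assuming dotenv is installed; any env-loading approach works):

typescript
import "dotenv/config"; // loads .env into process.env
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

// Picks up OPENAI_API_KEY from the environment automatically
const embedder = new ModelRouterEmbeddingModel("openai/text-embedding-3-small");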

Custom Providers

You can use any OpenAI-compatible embedding endpoint with a custom URL:

typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

const embedder = new ModelRouterEmbeddingModel({
  providerId: "ollama",
  modelId: "nomic-embed-text",
  url: "http://localhost:11434/v1",
  apiKey: "not-needed", // Some providers don't require API keys
});
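
The custom embedder plugs into the same AI SDK helpers. For example, against the local Ollama instance above (assuming Ollama is running and the nomic-embed-text model is pulled):

typescript
import { embedMany } from "ai";

// Works exactly like the hosted providers
const { embeddings } = await embedMany({
  model: embedder,
  values: ["local-first embeddings", "no API key required"],
});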

Usage with Memory

The embedding model router integrates seamlessly with Mastra's memory system:

typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

const agent = new Agent({
  id: "my-agent",
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  memory: new Memory({
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  }),
});

:::info

The embedder field accepts:

  • EmbeddingModelId (string with autocomplete)
  • EmbeddingModel<string> (AI SDK v1)
  • EmbeddingModelV2<string> (AI SDK v2)

:::
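
Since the embedder field also accepts a plain model ID string, the memory setup above can be shortened (a sketch, assuming the string form described in the note):

typescript
import { Memory } from "@mastra/memory";

// Equivalent setup using the autocompleted string form
const memory = new Memory({
  embedder: "openai/text-embedding-3-small",
});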

Usage with RAG

Use embedding models for document chunking and retrieval:

typescript
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { embedMany } from "ai";

// Embed document chunks ('chunks' comes from your document-chunking step)
const { embeddings } = await embedMany({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  values: chunks.map((chunk) => chunk.text),
});

// Store embeddings in your vector database
await vectorStore.upsert(
  chunks.map((chunk, i) => ({
    id: chunk.id,
    vector: embeddings[i],
    metadata: chunk.metadata,
  })),
);
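
At query time, embed the user's question with the same model used for the chunks, then search the store. A sketch, where vectorStore.query stands in for whichever query method your vector database exposes:

typescript
import { embed } from "ai";

// Embed the query with the SAME model used for the document chunks
const { embedding } = await embed({
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  value: "How do embeddings work?",
});

// Retrieve the nearest chunks (method name is illustrative)
const results = await vectorStore.query({
  vector: embedding,
  topK: 5,
});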

TypeScript Support

The model router provides full TypeScript autocomplete for embedding model IDs:

typescript
import type { EmbeddingModelId } from "@mastra/core";

// Type-safe embedding model selection
const modelId: EmbeddingModelId = "openai/text-embedding-3-small";
//                                  ^ Autocomplete shows all supported models

const embedder = new ModelRouterEmbeddingModel(modelId);
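
The exported type also lets you constrain your own helpers, for example (a hypothetical utility, not part of Mastra):

typescript
import type { EmbeddingModelId } from "@mastra/core";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

// Only compiles when called with a supported model ID
function createEmbedder(id: EmbeddingModelId) {
  return new ModelRouterEmbeddingModel(id);
}

const embedder = createEmbedder("openai/text-embedding-3-small");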

Error handling

The model router validates provider and model IDs at construction time:

typescript
try {
  const embedder = new ModelRouterEmbeddingModel("invalid/model");
} catch (error) {
  console.error((error as Error).message);
  // "Unknown provider: invalid. Available providers: openai, google"
}

Missing API keys are also caught early:

typescript
try {
  const embedder = new ModelRouterEmbeddingModel(
    "openai/text-embedding-3-small",
  );
  // Throws if OPENAI_API_KEY is not set
} catch (error) {
  console.error((error as Error).message);
  // "API key not found for provider openai. Set OPENAI_API_KEY environment variable."
}
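
Because both failure modes surface at construction time, a fallback is straightforward to express (a sketch; the fallback policy is up to you):

typescript
// Try the preferred model, fall back if its provider is unavailable
function embedderWithFallback() {
  try {
    return new ModelRouterEmbeddingModel("openai/text-embedding-3-small");
  } catch {
    return new ModelRouterEmbeddingModel("google/gemini-embedding-001");
  }
}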

Next Steps