
# Inference

Access 9 Inference models through Mastra's model router. Authentication is handled automatically using the INFERENCE_API_KEY environment variable.

Learn more in the Inference documentation.

```bash
INFERENCE_API_KEY=your-api-key
```
```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "inference/google/gemma-3"
});

// Generate a response
const response = await agent.generate("Hello!");

// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream) {
  console.log(chunk);
}
```

:::info

Mastra uses the OpenAI-compatible /chat/completions endpoint. Some provider-specific features may not be available. Check the Inference documentation for details.

:::
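Since the router speaks the standard OpenAI-compatible protocol, the request Mastra issues on your behalf can be sketched directly. This is an illustrative reconstruction, not Mastra internals: the base URL reuses the `https://inference.net/v1` value shown on this page, and the assumption that the `inference/` routing prefix is stripped before the upstream call should be checked against the Inference documentation.

```typescript
// Sketch of the raw OpenAI-compatible request the model router sends.
// BASE_URL and the prefix-stripping behavior are assumptions; consult
// the Inference documentation for authoritative values.
const BASE_URL = "https://inference.net/v1";

function buildChatRequest(model: string, prompt: string) {
  // Mastra model IDs carry a provider prefix ("inference/");
  // the upstream API expects the bare model name.
  const upstreamModel = model.replace(/^inference\//, "");
  return {
    url: `${BASE_URL}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.INFERENCE_API_KEY}`,
      },
      body: JSON.stringify({
        model: upstreamModel,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
// const { url, init } = buildChatRequest("inference/google/gemma-3", "Hello!");
// const res = await fetch(url, init);
```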

## Models

<ProviderModelsTable
  models={[
    { "model": "inference/google/gemma-3", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 125000, "maxOutput": 4096, "inputCost": 0.15, "outputCost": 0.3 },
    { "model": "inference/meta/llama-3.1-8b-instruct", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 16000, "maxOutput": 4096, "inputCost": 0.025, "outputCost": 0.025 },
    { "model": "inference/meta/llama-3.2-11b-vision-instruct", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 16000, "maxOutput": 4096, "inputCost": 0.055, "outputCost": 0.055 },
    { "model": "inference/meta/llama-3.2-1b-instruct", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 16000, "maxOutput": 4096, "inputCost": 0.01, "outputCost": 0.01 },
    { "model": "inference/meta/llama-3.2-3b-instruct", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 16000, "maxOutput": 4096, "inputCost": 0.02, "outputCost": 0.02 },
    { "model": "inference/mistral/mistral-nemo-12b-instruct", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 16000, "maxOutput": 4096, "inputCost": 0.038, "outputCost": 0.1 },
    { "model": "inference/osmosis/osmosis-structure-0.6b", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 4000, "maxOutput": 2048, "inputCost": 0.1, "outputCost": 0.5 },
    { "model": "inference/qwen/qwen-2.5-7b-vision-instruct", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": false, "contextWindow": 125000, "maxOutput": 4096, "inputCost": 0.2, "outputCost": 0.2 },
    { "model": "inference/qwen/qwen3-embedding-4b", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": false, "reasoning": false, "contextWindow": 32000, "maxOutput": 2048, "inputCost": 0.01, "outputCost": null }
  ]}
/>
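The capability flags and pricing in the table can also drive model selection programmatically. As a sketch, the hypothetical helper below (not part of Mastra) picks the cheapest model that satisfies a set of requirements, using a few rows copied from the table above:

```typescript
// Hypothetical helper: choose the cheapest model meeting the given
// requirements. Entries mirror rows of the capability table above.
interface ModelInfo {
  model: string;
  imageInput: boolean;
  toolUsage: boolean;
  contextWindow: number;
  inputCost: number; // USD per million input tokens
}

const MODELS: ModelInfo[] = [
  { model: "inference/google/gemma-3", imageInput: true, toolUsage: true, contextWindow: 125000, inputCost: 0.15 },
  { model: "inference/meta/llama-3.1-8b-instruct", imageInput: false, toolUsage: true, contextWindow: 16000, inputCost: 0.025 },
  { model: "inference/qwen/qwen-2.5-7b-vision-instruct", imageInput: true, toolUsage: true, contextWindow: 125000, inputCost: 0.2 },
];

function cheapestModel(req: { imageInput?: boolean; minContext?: number }): string | undefined {
  return MODELS
    .filter(m => (!req.imageInput || m.imageInput) && m.contextWindow >= (req.minContext ?? 0))
    .sort((a, b) => a.inputCost - b.inputCost)[0]?.model;
}
```

The returned string can be passed straight to the `model` field of an `Agent`.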

## Advanced configuration

### Custom headers

```typescript
const agent = new Agent({
  id: "custom-agent",
  name: "custom-agent",
  model: {
    url: "https://inference.net/v1",
    id: "inference/google/gemma-3",
    apiKey: process.env.INFERENCE_API_KEY,
    headers: {
      "X-Custom-Header": "value"
    }
  }
});
```
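When several agents share the same endpoint and headers, the config object can be factored into a plain function so the endpoint details live in one place. The `inferenceModel` helper below is hypothetical, not a Mastra API:

```typescript
// Hypothetical factory so every agent shares one endpoint configuration.
function inferenceModel(id: string, extraHeaders: Record<string, string> = {}) {
  return {
    url: "https://inference.net/v1",
    id,
    apiKey: process.env.INFERENCE_API_KEY,
    headers: { "X-Custom-Header": "value", ...extraHeaders },
  };
}

// Usage: model: inferenceModel("inference/google/gemma-3", { "X-Trace": "1" })
```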

### Dynamic model selection

```typescript
const agent = new Agent({
  id: "dynamic-agent",
  name: "Dynamic Agent",
  model: ({ requestContext }) => {
    const useAdvanced = requestContext.task === "complex";
    return useAdvanced
      // qwen3-embedding-4b is an embedding model, so complex chat tasks
      // are routed to a chat-capable model instead
      ? "inference/qwen/qwen-2.5-7b-vision-instruct"
      : "inference/google/gemma-3";
  }
});
```
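Because the selector is just a function of the request context, the routing logic can be pulled out as a pure function and unit-tested without constructing an `Agent`. The task names and model choices below are illustrative:

```typescript
// Illustrative pure selector: routing logic tested in isolation.
type Task = "complex" | "simple";

function pickModel(task: Task): string {
  return task === "complex"
    ? "inference/qwen/qwen-2.5-7b-vision-instruct" // larger 125k context window
    : "inference/meta/llama-3.2-1b-instruct";      // cheapest chat model in the table
}

// Inside the Agent config:
// model: ({ requestContext }) => pickModel(requestContext.task as Task)
```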