

docs/src/content/en/models/providers/vivgrid.mdx · 2025-12-18 · 4.2 KB

# Vivgrid

Access 8 Vivgrid models through Mastra's model router. Authentication is handled automatically using the `VIVGRID_API_KEY` environment variable.

Learn more in the Vivgrid documentation.

```bash
VIVGRID_API_KEY=your-api-key
```
```typescript
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "vivgrid/deepseek-v3.2"
});

// Generate a response
const response = await agent.generate("Hello!");

// Stream a response
const stream = await agent.stream("Tell me a story");
for await (const chunk of stream) {
  console.log(chunk);
}
```

:::info

Mastra uses the OpenAI-compatible /chat/completions endpoint. Some provider-specific features may not be available. Check the Vivgrid documentation for details.

:::

## Models

<ProviderModelsTable
  models={[
    { "model": "vivgrid/deepseek-v3.2", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 128000, "maxOutput": 128000, "inputCost": 0.28, "outputCost": 0.42 },
    { "model": "vivgrid/gemini-3-flash-preview", "imageInput": true, "audioInput": true, "videoInput": true, "toolUsage": true, "reasoning": true, "contextWindow": 1048576, "maxOutput": 65536, "inputCost": 0.5, "outputCost": 3 },
    { "model": "vivgrid/gemini-3-pro-preview", "imageInput": true, "audioInput": true, "videoInput": true, "toolUsage": true, "reasoning": true, "contextWindow": 1048576, "maxOutput": 65536, "inputCost": 2, "outputCost": 12 },
    { "model": "vivgrid/glm-5", "imageInput": false, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 202752, "maxOutput": 131000, "inputCost": 1, "outputCost": 3.2 },
    { "model": "vivgrid/gpt-5-mini", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 272000, "maxOutput": 128000, "inputCost": 0.25, "outputCost": 2 },
    { "model": "vivgrid/gpt-5.1-codex", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 400000, "maxOutput": 128000, "inputCost": 1.25, "outputCost": 10 },
    { "model": "vivgrid/gpt-5.1-codex-max", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 400000, "maxOutput": 128000, "inputCost": 1.25, "outputCost": 10 },
    { "model": "vivgrid/gpt-5.2-codex", "imageInput": true, "audioInput": false, "videoInput": false, "toolUsage": true, "reasoning": true, "contextWindow": 400000, "maxOutput": 128000, "inputCost": 1.75, "outputCost": 14 }
  ]}
/>
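The pricing columns above are quoted in USD per million tokens, so per-request cost can be estimated directly from them. A minimal sketch (the rows are copied from the table; `estimateCostUSD` is a hypothetical helper, not part of Mastra):

```typescript
// A few pricing rows from the table above (USD per 1M tokens).
type ModelRow = {
  model: string;
  inputCost: number;  // USD per 1M input tokens
  outputCost: number; // USD per 1M output tokens
};

const rows: ModelRow[] = [
  { model: "vivgrid/deepseek-v3.2", inputCost: 0.28, outputCost: 0.42 },
  { model: "vivgrid/gpt-5-mini", inputCost: 0.25, outputCost: 2 },
  { model: "vivgrid/gpt-5.2-codex", inputCost: 1.75, outputCost: 14 },
];

// Hypothetical helper: estimate the USD cost of one request.
function estimateCostUSD(
  row: ModelRow,
  inputTokens: number,
  outputTokens: number
): number {
  return (
    (inputTokens / 1_000_000) * row.inputCost +
    (outputTokens / 1_000_000) * row.outputCost
  );
}

// 10k input + 2k output tokens on deepseek-v3.2:
console.log(estimateCostUSD(rows[0], 10_000, 2_000).toFixed(4)); // "0.0036"
```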

## Advanced configuration

### Custom headers

```typescript
const agent = new Agent({
  id: "custom-agent",
  name: "custom-agent",
  model: {
    url: "https://api.vivgrid.com/v1",
    id: "vivgrid/deepseek-v3.2",
    apiKey: process.env.VIVGRID_API_KEY,
    headers: {
      "X-Custom-Header": "value"
    }
  }
});
```

### Dynamic model selection

```typescript
const agent = new Agent({
  id: "dynamic-agent",
  name: "Dynamic Agent",
  model: ({ requestContext }) => {
    const useAdvanced = requestContext.task === "complex";
    return useAdvanced
      ? "vivgrid/gpt-5.2-codex"
      : "vivgrid/deepseek-v3.2";
  }
});
```
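Because the selector above is just a function of the request context, the routing rule can be factored out and tested on its own. A sketch under that assumption (`pickModel` is a hypothetical helper, not a Mastra API):

```typescript
// Hypothetical helper mirroring the selector above: route "complex"
// tasks to the larger codex model, everything else to deepseek.
function pickModel(task: string | undefined): string {
  return task === "complex"
    ? "vivgrid/gpt-5.2-codex"
    : "vivgrid/deepseek-v3.2";
}

console.log(pickModel("complex")); // "vivgrid/gpt-5.2-codex"
console.log(pickModel("chat"));    // "vivgrid/deepseek-v3.2"
```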

## Direct provider installation

This provider is also available as a standalone AI SDK package, which can be used in place of the Mastra model router string. See the package documentation for more details.

```bash
npm install @ai-sdk/openai
```
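With the standalone package, a provider instance can be pointed at Vivgrid's OpenAI-compatible endpoint via the AI SDK's `createOpenAI` factory. A configuration sketch, assuming the `https://api.vivgrid.com/v1` base URL shown in the custom-headers example above:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

// Point the OpenAI-compatible provider at Vivgrid's endpoint.
const vivgrid = createOpenAI({
  baseURL: "https://api.vivgrid.com/v1",
  apiKey: process.env.VIVGRID_API_KEY,
});

// The model instance is passed directly, without the router string prefix.
const agent = new Agent({
  id: "sdk-agent",
  name: "SDK Agent",
  instructions: "You are a helpful assistant",
  model: vivgrid("deepseek-v3.2"),
});
```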