packages/web/src/content/docs/go.mdx
import config from "../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`
OpenCode Go is a low-cost subscription ($5 for your first month, then $10/month) that gives you reliable access to popular open coding models.
:::note
OpenCode Go is currently in beta.
:::
Go works like any other provider in OpenCode: you subscribe to OpenCode Go and get an API key. It's completely optional, and you don't need it to use OpenCode.
It is designed primarily for international users, with models hosted in the US, EU, and Singapore for stable global access.
Open models have gotten really good. They now reach performance close to proprietary models for coding tasks. And because many providers can serve them competitively, they are usually far cheaper.
However, getting reliable, low latency access to them can be difficult. Providers vary in quality and availability.
:::tip
We tested a select group of models and providers that work well with OpenCode.
:::
To fix this, we tested and selected a group of models and providers that work well with OpenCode, and bundled reliable access to them into a single plan.
OpenCode Go gives you access to these models for $5 for your first month, then $10/month.
OpenCode Go works like any other provider in OpenCode.

1. Run the `/connect` command in the TUI, select OpenCode Go, and paste your API key.
2. Run `/models` in the TUI to see the list of models available through Go.

:::note
Only one member per workspace can subscribe to OpenCode Go.
:::
The current list of models is shown in the tables below.
The list of models may change as we test and add new ones.
OpenCode Go includes usage limits measured over 5-hour, weekly, and monthly windows.
Limits are defined in dollar value. This means your actual request count depends on the model you use. Cheaper models like Qwen3.5 Plus allow for more requests, while higher-cost models like GLM-5.1 allow for fewer.
The table below provides an estimated request count based on typical Go usage patterns:
| Model | Requests per 5 hours | Requests per week | Requests per month |
|---|---|---|---|
| GLM-5.1 | 880 | 2,150 | 4,300 |
| GLM-5 | 1,150 | 2,880 | 5,750 |
| Kimi K2.5 | 1,850 | 4,630 | 9,250 |
| Kimi K2.6 | 1,150 | 2,880 | 5,750 |
| MiMo-V2-Pro | 1,290 | 3,225 | 6,450 |
| MiMo-V2-Omni | 2,150 | 5,450 | 10,900 |
| MiMo-V2.5-Pro | 1,290 | 3,225 | 6,450 |
| MiMo-V2.5 (≤ 256K) | 2,150 | 5,450 | 10,900 |
| MiniMax M2.7 | 3,400 | 8,500 | 17,000 |
| MiniMax M2.5 | 6,300 | 15,900 | 31,800 |
| Qwen3.6 Plus | 3,300 | 8,200 | 16,300 |
| Qwen3.5 Plus | 10,200 | 25,200 | 50,500 |
| DeepSeek V4 Pro | 3,450 | 8,550 | 17,150 |
| DeepSeek V4 Flash | 31,650 | 79,050 | 158,150 |
Estimates are based on observed average request patterns.
You can track your current usage in the <a href={console}>console</a>.
:::tip
If you reach the usage limit, you can continue using the free models.
:::
Usage limits may change as we learn from early usage and feedback.
If you also have credits on your Zen balance, you can enable the Use balance option in the console. When enabled, Go will fall back to your Zen balance after you've reached your usage limits instead of blocking requests.
You can also access Go models through the following API endpoints.
| Model | Model ID | Endpoint | AI SDK Package |
|---|---|---|---|
| GLM-5.1 | glm-5.1 | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| GLM-5 | glm-5 | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2.5 | kimi-k2.5 | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2.6 | kimi-k2.6 | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| DeepSeek V4 Pro | deepseek-v4-pro | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| DeepSeek V4 Flash | deepseek-v4-flash | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| MiMo-V2-Pro | mimo-v2-pro | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| MiMo-V2-Omni | mimo-v2-omni | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| MiMo-V2.5-Pro | mimo-v2.5-pro | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| MiMo-V2.5 | mimo-v2.5 | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/openai-compatible |
| MiniMax M2.7 | minimax-m2.7 | https://opencode.ai/zen/go/v1/messages | @ai-sdk/anthropic |
| MiniMax M2.5 | minimax-m2.5 | https://opencode.ai/zen/go/v1/messages | @ai-sdk/anthropic |
| Qwen3.6 Plus | qwen3.6-plus | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/alibaba |
| Qwen3.5 Plus | qwen3.5-plus | https://opencode.ai/zen/go/v1/chat/completions | @ai-sdk/alibaba |
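For example, here's a minimal sketch of calling Kimi K2.6 through the chat completions endpoint with the AI SDK. It assumes the base URL is the endpoint above without the `/chat/completions` suffix, and that your Go API key is available in an `OPENCODE_GO_API_KEY` environment variable; adjust both to match your setup.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// Point the OpenAI-compatible provider at the Go endpoint.
// Assumption: the base URL is the chat completions URL minus its path suffix.
const go = createOpenAICompatible({
  name: "opencode-go",
  baseURL: "https://opencode.ai/zen/go/v1",
  apiKey: process.env.OPENCODE_GO_API_KEY, // assumed env var holding your Go API key
});

const { text } = await generateText({
  model: go("kimi-k2.6"),
  prompt: "Write a small function that parses a URL query string.",
});

console.log(text);
```

Models served from the `/v1/messages` endpoint, like MiniMax M2.7 and M2.5, would instead use `@ai-sdk/anthropic` with its `baseURL` option pointed at that endpoint.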
The model ID in your OpenCode config uses the format `opencode-go/<model-id>`. For example, for Kimi K2.6, you would use `opencode-go/kimi-k2.6` in your config.
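As a sketch, a minimal `opencode.json` that defaults to Kimi K2.6 through Go might look like this; keep whatever other settings you already have:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode-go/kimi-k2.6"
}
```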
You can fetch the full list of available models and their metadata from:
https://opencode.ai/zen/go/v1/models
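As a quick sketch, you could fetch it with something like the following. The `Authorization` header and the exact response shape are assumptions, so inspect the output rather than relying on a specific schema:

```ts
// Sketch: list the models currently available through Go.
const res = await fetch("https://opencode.ai/zen/go/v1/models", {
  // Assumption: the endpoint accepts your Go API key as a bearer token.
  headers: { Authorization: `Bearer ${process.env.OPENCODE_GO_API_KEY}` },
});

console.log(await res.json());
```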
Our providers host models in the US, EU, and Singapore for stable global access, follow a zero-retention policy, and do not use your data for model training.
We created OpenCode Go to give you reliable, low-cost access to the best open coding models.