import config from "../../../config.mjs"

export const console = config.console
OpenCode uses the AI SDK and Models.dev to support 75+ LLM providers, and it also supports running local models.
To add a provider, run the /connect command. When you add a provider's API keys with the /connect command, they are stored in ~/.local/share/opencode/auth.json.
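Credentials in auth.json are keyed by provider ID. As a rough illustration only, since the exact fields vary by auth type and version (treat this shape as an assumption, not a spec):

{
  // Illustrative only - the real shape of auth.json may differ by version and auth type
  "anthropic": {
    "type": "api",
    "key": "sk-..."
  }
}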
You can customize the providers through the provider section in your OpenCode
config.
You can customize the base URL for any provider by setting the baseURL option. This is useful when using proxy services or custom endpoints.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"anthropic": {
"options": {
"baseURL": "https://api.anthropic.com/v1"
}
}
}
}
OpenCode Zen is a list of models provided by the OpenCode team that have been tested and verified to work well with OpenCode. Learn more.
:::tip If you are new, we recommend starting with OpenCode Zen. :::
Run the /connect command in the TUI, select OpenCode Zen, and head to opencode.ai/auth.
/connect
Sign in, add your billing details, and copy your API key.
Paste your API key.
┌ API key
│
│
└ enter
Run /models in the TUI to see the list of models we recommend.
/models
It works like any other provider in OpenCode and is completely optional to use.
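If you want OpenCode to always start with a specific Zen model, you can set it as the default in your config. A minimal sketch, assuming the Zen provider ID is opencode and using a placeholder model ID (run /models to see the real ones):

{
  "$schema": "https://opencode.ai/config.json",
  // The model ID below is a placeholder - pick one from /models
  "model": "opencode/some-model"
}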
OpenCode Go is a low cost subscription plan that provides reliable access to popular open coding models provided by the OpenCode team that have been tested and verified to work well with OpenCode.
Run the /connect command in the TUI, select OpenCode Go, and head to opencode.ai/auth.
/connect
Sign in, add your billing details, and copy your API key.
Paste your API key.
┌ API key
│
│
└ enter
Run /models in the TUI to see the list of models we recommend.
/models
It works like any other provider in OpenCode and is completely optional to use.
Let's look at some of the providers in detail. If you'd like to add a provider to the list, feel free to open a PR.
:::note Don't see a provider here? Submit a PR. :::
Head over to the 302.AI console, create an account, and generate an API key.
Run the /connect command and search for 302.AI.
/connect
Enter your 302.AI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
To use Amazon Bedrock with OpenCode:
Head over to the Model catalog in the Amazon Bedrock console and request access to the models you want.
:::tip You need to have access to the model you want in Amazon Bedrock. :::
Configure authentication using one of the following methods:
Set one of these environment variables while running opencode:
# Option 1: Using AWS access keys
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY opencode
# Option 2: Using named AWS profile
AWS_PROFILE=my-profile opencode
# Option 3: Using Bedrock bearer token
AWS_BEARER_TOKEN_BEDROCK=XXX opencode
Or add them to your bash profile:
export AWS_PROFILE=my-dev-profile
export AWS_REGION=us-east-1
For project-specific or persistent configuration, use opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"amazon-bedrock": {
"options": {
"region": "us-east-1",
"profile": "my-aws-profile"
}
}
}
}
Available options:
- region - AWS region (e.g., us-east-1, eu-west-1)
- profile - AWS named profile from ~/.aws/credentials
- endpoint - Custom endpoint URL for VPC endpoints (alias for the generic baseURL option)

:::tip Configuration file options take precedence over environment variables. :::
If you're using VPC endpoints for Bedrock:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"amazon-bedrock": {
"options": {
"region": "us-east-1",
"profile": "production",
"endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
}
}
}
}
:::note
The endpoint option is an alias for the generic baseURL option, using AWS-specific terminology. If both endpoint and baseURL are specified, endpoint takes precedence.
:::
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY: Create an IAM user and generate access keys in the AWS Console
- AWS_PROFILE: Use named profiles from ~/.aws/credentials. First configure with aws configure --profile my-profile or aws sso login
- AWS_BEARER_TOKEN_BEDROCK: Generate long-term API keys from the Amazon Bedrock console
- AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are automatically injected by Kubernetes when using service account annotations.

Amazon Bedrock uses the following authentication priority:

1. AWS_BEARER_TOKEN_BEDROCK environment variable or token from the /connect command

:::note
When a bearer token is set (via /connect or AWS_BEARER_TOKEN_BEDROCK), it takes precedence over all AWS credential methods including configured profiles.
:::
Run the /models command to select the model you want.
/models
:::note
For custom inference profiles, use the model and provider name in the key and set the id property to the ARN. This ensures correct caching.
:::
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"amazon-bedrock": {
// ...
"models": {
"anthropic-claude-sonnet-4.5": {
"id": "arn:aws:bedrock:us-east-1:xxx:application-inference-profile/yyy"
}
}
}
}
}
Once you've signed up, run the /connect command and select Anthropic.
/connect
Here you can select the Claude Pro/Max option and it'll open your browser and ask you to authenticate.
┌ Select auth method
│
│ Manually enter API Key
└
Now all the Anthropic models should be available when you use the /models command.
/models
:::info There are plugins that allow you to use your Claude Pro/Max models with OpenCode. Anthropic explicitly prohibits this.

Previous versions of OpenCode came bundled with these plugins, but that is no longer the case as of 1.3.0.
:::

Other companies support freedom of choice with developer tooling - you can use the following subscriptions in OpenCode with zero setup:
You can configure opencode to use local models through Atomic Chat, a desktop application that runs local LLMs behind an OpenAI-compatible API server (default endpoint http://127.0.0.1:1337/v1).
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"atomic-chat": {
"npm": "@ai-sdk/openai-compatible",
"name": "Atomic Chat (local)",
"options": {
"baseURL": "http://127.0.0.1:1337/v1"
},
"models": {
"<your-model-id>": {
"name": "<your-model-name>"
}
}
}
}
}
In this example:
- atomic-chat is the custom provider ID. This can be any string you want.
- npm specifies the package to use for this provider. Here, @ai-sdk/openai-compatible is used for any OpenAI-compatible API.
- name is the display name for the provider in the UI.
- options.baseURL is the endpoint for the local server. Change the host and port to match your Atomic Chat setup.
- models is a map of model IDs to their display names. Each ID must match the id returned by GET /v1/models; run curl http://127.0.0.1:1337/v1/models to list the ids currently loaded in Atomic Chat.

:::tip If tool calls aren't working well, pick a loaded model with strong tool-calling support (for example, a Qwen-Coder or DeepSeek-Coder variant). :::
:::note If you encounter "I'm sorry, but I cannot assist with that request" errors, try changing the content filter from DefaultV2 to Default in your Azure resource. :::
Head over to the Azure portal and create an Azure OpenAI resource. You'll need:
- Your endpoint URL (e.g., https://RESOURCE_NAME.openai.azure.com/)
- KEY 1 or KEY 2 from your resource

Go to Azure AI Foundry and deploy a model.
:::note The deployment name must match the model name for opencode to work properly. :::
Run the /connect command and search for Azure.
/connect
Enter your API key.
┌ API key
│
│
└ enter
Set your resource name as an environment variable:
AZURE_RESOURCE_NAME=XXX opencode
Or add it to your bash profile:
export AZURE_RESOURCE_NAME=XXX
Run the /models command to select your deployed model.
/models
Head over to the Azure portal and create an Azure OpenAI resource. You'll need:
- Your endpoint URL (e.g., https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/)
- KEY 1 or KEY 2 from your resource

Go to Azure AI Foundry and deploy a model.
:::note The deployment name must match the model name for opencode to work properly. :::
Run the /connect command and search for Azure Cognitive Services.
/connect
Enter your API key.
┌ API key
│
│
└ enter
Set your resource name as an environment variable:
AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX opencode
Or add it to your bash profile:
export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
Run the /models command to select your deployed model.
/models
Head over to Baseten, create an account, and generate an API key.
Run the /connect command and search for Baseten.
/connect
Enter your Baseten API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
Head over to the Cerebras console, create an account, and generate an API key.
Run the /connect command and search for Cerebras.
/connect
Enter your Cerebras API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Qwen 3 Coder 480B.
/models
Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With Unified Billing you don't need separate API keys for each provider.
Head over to the Cloudflare dashboard, navigate to AI > AI Gateway, and create a new gateway. Note your Account ID and Gateway ID.
Run the /connect command and search for Cloudflare AI Gateway.
/connect
Enter your Account ID when prompted.
┌ Enter your Cloudflare Account ID
│
│
└ enter
Enter your Gateway ID when prompted.
┌ Enter your Cloudflare AI Gateway ID
│
│
└ enter
Enter your Cloudflare API token.
┌ Gateway API token
│
│
└ enter
Run the /models command to select a model.
/models
You can also add models through your opencode config.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"cloudflare-ai-gateway": {
"models": {
"openai/gpt-4o": {},
"anthropic/claude-sonnet-4": {}
}
}
}
}
Alternatively, you can set environment variables instead of using /connect.
export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
export CLOUDFLARE_GATEWAY_ID=your-gateway-id
export CLOUDFLARE_API_TOKEN=your-api-token
Cloudflare Workers AI lets you run AI models on Cloudflare's global network directly via REST API, with no separate provider accounts needed for supported models.
Head over to the Cloudflare dashboard, navigate to Workers AI, and select Use REST API to get your Account ID and create an API token.
Run the /connect command and search for Cloudflare Workers AI.
/connect
Enter your Account ID when prompted.
┌ Enter your Cloudflare Account ID
│
│
└ enter
Enter your Cloudflare API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
Alternatively, you can set environment variables instead of using /connect.
export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
export CLOUDFLARE_API_KEY=your-api-token
Head over to the Cortecs console, create an account, and generate an API key.
Run the /connect command and search for Cortecs.
/connect
Enter your Cortecs API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Kimi K2 Instruct.
/models
Head over to the DeepSeek console, create an account, and click Create new API key.
Run the /connect command and search for DeepSeek.
/connect
Enter your DeepSeek API key.
┌ API key
│
│
└ enter
Run the /models command to select a DeepSeek model like DeepSeek V4 Pro.
/models
Head over to the Deep Infra dashboard, create an account, and generate an API key.
Run the /connect command and search for Deep Infra.
/connect
Enter your Deep Infra API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
Head over to the FrogBot dashboard, create an account, and generate an API key.
Run the /connect command and search for FrogBot.
/connect
Enter your FrogBot API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
Head over to the Fireworks AI console, create an account, and click Create API Key.
Run the /connect command and search for Fireworks AI.
/connect
Enter your Fireworks AI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Kimi K2 Instruct.
/models
:::caution[Experimental] GitLab Duo support in OpenCode is experimental. Features, configuration, and behavior may change in future releases. :::
OpenCode integrates with the GitLab Duo Agent Platform, providing AI-powered agentic chat with native tool calling capabilities.
:::note[License requirements] GitLab Duo Agent Platform requires a Premium or Ultimate GitLab subscription. It is available on GitLab.com and GitLab Self-Managed. See GitLab Duo Agent Platform prerequisites for full requirements. :::
Run the /connect command and select GitLab.
/connect
Choose your authentication method:
┌ Select auth method
│
│ OAuth (Recommended)
│ Personal Access Token
└
Select OAuth and your browser will open for authorization.
Authorize the OpenCode application (scopes: api). If you chose Personal Access Token instead, paste a token (prefixed glpat-).

Run the /models command to see available models.
/models
Three Claude-based models are available:
:::note You can also set the GITLAB_TOKEN environment variable if you don't want to store the token in opencode's auth storage. :::
:::note[compliance note]
OpenCode uses a small model for some AI tasks like generating the session title.
It is configured to use gpt-5-nano by default, hosted by Zen. To lock OpenCode
to only use your own GitLab-hosted instance, add the following to your
opencode.json file. It is also recommended to disable session sharing.
{
"$schema": "https://opencode.ai/config.json",
"small_model": "gitlab/duo-chat-haiku-4-5",
"share": "disabled"
}
:::
For self-hosted GitLab instances:
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_TOKEN=glpat-...
If your instance runs a custom AI Gateway:
GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
Or add to your bash profile:
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
export GITLAB_TOKEN=glpat-...
:::note Your GitLab administrator must:
To make OAuth work on your self-hosted instance, create a new application (Settings → Applications) with the callback URL http://127.0.0.1:8080/callback and the following scopes:
Then expose the application ID as an environment variable:
export GITLAB_OAUTH_CLIENT_ID=your_application_id_here
More documentation is available on the opencode-gitlab-auth homepage.
Customize through opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"gitlab": {
"options": {
"instanceUrl": "https://gitlab.com"
}
}
}
}
DAP workflow models provide an alternative execution path that routes tool calls
through GitLab's Duo Workflow Service (DWS) instead of the standard agentic chat.
When a duo-workflow-* model is selected, OpenCode will:
Available DAP workflow models follow the duo-workflow-* naming convention and
are dynamically discovered from your GitLab instance.
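For example, to default to one of these models, you could point the top-level model setting at it, the same way the compliance note above sets small_model. A sketch with a hypothetical model ID; check /models for what your instance actually exposes:

{
  "$schema": "https://opencode.ai/config.json",
  // Hypothetical ID - workflow models are discovered from your GitLab instance
  "model": "gitlab/duo-workflow-software-development"
}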
To access GitLab tools (merge requests, issues, pipelines, CI/CD, etc.):
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["opencode-gitlab-plugin"]
}
This plugin provides comprehensive GitLab repository management capabilities including MR reviews, issue tracking, pipeline monitoring, and more.
To use your GitHub Copilot subscription with opencode:
:::note Some models might need a Pro+ subscription to use. :::
Run the /connect command and search for GitHub Copilot.
/connect
Navigate to github.com/login/device and enter the code.
┌ Login with GitHub Copilot
│
│ https://github.com/login/device
│
│ Enter code: 8F43-6FCF
│
└ Waiting for authorization...
Now run the /models command to select the model you want.
/models
To use Google Vertex AI with OpenCode:
Head over to the Model Garden in the Google Cloud Console and check the models available in your region.
:::note You need to have a Google Cloud project with Vertex AI API enabled. :::
Set the required environment variables:
- GOOGLE_CLOUD_PROJECT: Your Google Cloud project ID
- VERTEX_LOCATION (optional): The region for Vertex AI (defaults to global)
- GOOGLE_APPLICATION_CREDENTIALS: Path to your service account JSON key file. Alternatively, authenticate with gcloud auth application-default login

Set them while running opencode.
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id opencode
Or add them to your bash profile.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
export GOOGLE_CLOUD_PROJECT=your-project-id
export VERTEX_LOCATION=global
:::tip
The global region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., us-central1) for data residency requirements. Learn more
:::
Run the /models command to select the model you want.
/models
Head over to the Groq console, click Create API Key, and copy the key.
Run the /connect command and search for Groq.
/connect
Enter the API key for the provider.
┌ API key
│
│
└ enter
Run the /models command to select the one you want.
/models
Hugging Face Inference Providers gives you access to open models served by 17+ providers.
Head over to Hugging Face settings to create a token with permission to make calls to Inference Providers.
Run the /connect command and search for Hugging Face.
/connect
Enter your Hugging Face token.
┌ API key
│
│
└ enter
Run the /models command to select a model like Kimi-K2-Instruct or GLM-4.6.
/models
Helicone is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model.
Head over to Helicone, create an account, and generate an API key from your dashboard.
Run the /connect command and search for Helicone.
/connect
Enter your Helicone API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
For more providers and advanced features like caching and rate limiting, check the Helicone documentation.
If you see a feature or model from Helicone that isn't configured automatically through opencode, you can always configure it yourself. Grab the IDs of the models you want to add from Helicone's Model Directory.
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai"
      },
      "models": {
        // Model IDs come from Helicone's model directory page
        "gpt-4o": {
          "name": "GPT-4o" // Your own custom name for the model
        },
        "claude-sonnet-4-20250514": {
          "name": "Claude Sonnet 4"
        }
      }
    }
  }
}
Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config using options.headers:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
        "headers": {
          "Helicone-Cache-Enabled": "true",
          "Helicone-User-Id": "opencode"
        }
      }
    }
  }
}
Helicone's Sessions feature lets you group related LLM requests together. Use the opencode-helicone-session plugin to automatically log each OpenCode conversation as a session in Helicone.
npm install -g opencode-helicone-session
Add it to your config.
{
"plugin": ["opencode-helicone-session"]
}
The plugin injects Helicone-Session-Id and Helicone-Session-Name headers into your requests. In Helicone's Sessions page, you'll see each OpenCode conversation listed as a separate session.
| Header | Description |
|---|---|
| Helicone-Cache-Enabled | Enable response caching (true/false) |
| Helicone-User-Id | Track metrics by user |
| Helicone-Property-[Name] | Add custom properties (e.g., Helicone-Property-Environment) |
| Helicone-Prompt-Id | Associate requests with prompt versions |
See the Helicone Header Directory for all available headers.
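For example, tagging every request with a custom property only takes one more entry in options.headers. A minimal sketch using the Helicone-Property-Environment header from the table above:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
        "headers": {
          // Custom properties show up as filterable metadata in Helicone
          "Helicone-Property-Environment": "development"
        }
      }
    }
  }
}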
You can configure opencode to use local models through llama.cpp's llama-server utility.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"llama.cpp": {
"npm": "@ai-sdk/openai-compatible",
"name": "llama-server (local)",
"options": {
"baseURL": "http://127.0.0.1:8080/v1"
},
"models": {
"qwen3-coder:a3b": {
"name": "Qwen3-Coder: a3b-30b (local)",
"limit": {
"context": 128000,
"output": 65536
}
}
}
}
}
}
In this example:
- llama.cpp is the custom provider ID. This can be any string you want.
- npm specifies the package to use for this provider. Here, @ai-sdk/openai-compatible is used for any OpenAI-compatible API.
- name is the display name for the provider in the UI.
- options.baseURL is the endpoint for the local server.
- models is a map of model IDs to their configurations. The model name will be displayed in the model selection list.

IO.NET offers 17 models optimized for various use cases:
Head over to the IO.NET console, create an account, and generate an API key.
Run the /connect command and search for IO.NET.
/connect
Enter your IO.NET API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
You can configure opencode to use local models through LM Studio.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"lmstudio": {
"npm": "@ai-sdk/openai-compatible",
"name": "LM Studio (local)",
"options": {
"baseURL": "http://127.0.0.1:1234/v1"
},
"models": {
"google/gemma-3n-e4b": {
"name": "Gemma 3n-e4b (local)"
}
}
}
}
}
In this example:
- lmstudio is the custom provider ID. This can be any string you want.
- npm specifies the package to use for this provider. Here, @ai-sdk/openai-compatible is used for any OpenAI-compatible API.
- name is the display name for the provider in the UI.
- options.baseURL is the endpoint for the local server.
- models is a map of model IDs to their configurations. The model name will be displayed in the model selection list.

To use Kimi K2 from Moonshot AI:
Head over to the Moonshot AI console, create an account, and click Create API key.
Run the /connect command and search for Moonshot AI.
/connect
Enter your Moonshot API key.
┌ API key
│
│
└ enter
Run the /models command to select Kimi K2.
/models
Head over to the MiniMax API Console, create an account, and generate an API key.
Run the /connect command and search for MiniMax.
/connect
Enter your MiniMax API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like M2.1.
/models
NVIDIA provides access to Nemotron models and many other open models through build.nvidia.com for free.
Head over to build.nvidia.com, create an account, and generate an API key.
Run the /connect command and search for NVIDIA.
/connect
Enter your NVIDIA API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like nemotron-3-super-120b-a12b.
/models
You can also use NVIDIA models locally via NVIDIA NIM by setting a custom base URL.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"nvidia": {
"options": {
"baseURL": "http://localhost:8000/v1"
}
}
}
}
Alternatively, set your API key as an environment variable.
export NVIDIA_API_KEY=nvapi-your-key-here
Head over to the Nebius Token Factory console, create an account, and click Add Key.
Run the /connect command and search for Nebius Token Factory.
/connect
Enter your Nebius Token Factory API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Kimi K2 Instruct.
/models
You can configure opencode to use local models through Ollama.
:::tip Ollama can automatically configure itself for OpenCode. See the Ollama integration docs for details. :::
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"ollama": {
"npm": "@ai-sdk/openai-compatible",
"name": "Ollama (local)",
"options": {
"baseURL": "http://localhost:11434/v1"
},
"models": {
"llama2": {
"name": "Llama 2"
}
}
}
}
}
In this example:
- ollama is the custom provider ID. This can be any string you want.
- npm specifies the package to use for this provider. Here, @ai-sdk/openai-compatible is used for any OpenAI-compatible API.
- name is the display name for the provider in the UI.
- options.baseURL is the endpoint for the local server.
- models is a map of model IDs to their configurations. The model name will be displayed in the model selection list.

:::tip
If tool calls aren't working, try increasing num_ctx in Ollama. Start around 16k - 32k.
:::
To use Ollama Cloud with OpenCode:
Head over to https://ollama.com/ and sign in or create an account.
Navigate to Settings > Keys and click Add API Key to generate a new API key.
Copy the API key for use in OpenCode.
Run the /connect command and search for Ollama Cloud.
/connect
Enter your Ollama Cloud API key.
┌ API key
│
│
└ enter
Important: Before using cloud models in OpenCode, you must pull the model information locally:
ollama pull gpt-oss:20b-cloud
Run the /models command to select your Ollama Cloud model.
/models
We recommend signing up for ChatGPT Plus or Pro.
Once you've signed up, run the /connect command and select OpenAI.
/connect
Here you can select the ChatGPT Plus/Pro option and it'll open your browser and ask you to authenticate.
┌ Select auth method
│
│ ChatGPT Plus/Pro
│ Manually enter API Key
└
Now all the OpenAI models should be available when you use the /models command.
/models
If you already have an API key, you can select Manually enter API Key and paste it in your terminal.
OpenCode Zen is a list of tested and verified models provided by the OpenCode team. Learn more.
Sign in to <a href={console}>OpenCode Zen</a> and click Create API Key.
Run the /connect command and search for OpenCode Zen.
/connect
Enter your OpenCode API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Qwen 3 Coder 480B.
/models
Head over to the OpenRouter dashboard, click Create API Key, and copy the key.
Run the /connect command and search for OpenRouter.
/connect
Enter the API key for the provider.
┌ API key
│
│
└ enter
Many OpenRouter models are preloaded by default; run the /models command to select the one you want.
/models
You can also add additional models through your opencode config.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"openrouter": {
"models": {
"somecoolnewmodel": {}
}
}
}
}
You can also customize them through your opencode config. Here's an example of specifying provider routing order.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"openrouter": {
"models": {
"moonshotai/kimi-k2": {
"options": {
"provider": {
"order": ["baseten"],
"allow_fallbacks": false
}
}
}
}
}
}
}
Head over to the LLM Gateway dashboard, click Create API Key, and copy the key.
Run the /connect command and search for LLM Gateway.
/connect
Enter the API key for the provider.
┌ API key
│
│
└ enter
Many LLM Gateway models are preloaded by default; run the /models command to select the one you want.
/models
You can also add additional models through your opencode config.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"llmgateway": {
"models": {
"somecoolnewmodel": {}
}
}
}
}
You can also customize them through your opencode config. Here's an example of customizing the model display names.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"llmgateway": {
"models": {
"glm-4.7": {
"name": "GLM 4.7"
},
"gpt-5.2": {
"name": "GPT-5.2"
},
"gemini-2.5-pro": {
"name": "Gemini 2.5 Pro"
},
"claude-3-5-sonnet-20241022": {
"name": "Claude 3.5 Sonnet"
}
}
}
}
}
SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.
Go to your SAP BTP Cockpit, navigate to your SAP AI Core service instance, and create a service key.
:::tip
The service key is a JSON object containing clientid, clientsecret, url, and serviceurls.AI_API_URL. You can find your AI Core instance under Services > Instances and Subscriptions in the BTP Cockpit.
:::
Run the /connect command and search for SAP AI Core.
/connect
Enter your service key JSON.
┌ Service key
│
│
└ enter
Or set the AICORE_SERVICE_KEY environment variable:
AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' opencode
Or add it to your bash profile:
export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
Optionally set deployment ID and resource group:
AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group opencode
:::note These settings are optional and should be configured according to your SAP AI Core setup. :::
Run the /models command to select from 40+ available models.
/models
STACKIT AI Model Serving provides a fully managed, sovereign hosting environment for AI models, focusing on LLMs like Llama, Mistral, and Qwen, with maximum data sovereignty on European infrastructure.
Head over to STACKIT Portal, navigate to AI Model Serving, and create an auth token for your project.
:::tip You need a STACKIT customer account, user account, and project before creating auth tokens. :::
Run the /connect command and search for STACKIT.
/connect
Enter your STACKIT AI Model Serving auth token.
┌ API key
│
│
└ enter
Run the /models command to select from available models like Qwen3-VL 235B or Llama 3.3 70B.
/models
Head over to the OVHcloud panel. Navigate to the Public Cloud section, go to AI & Machine Learning > AI Endpoints, and in the API Keys tab, click Create a new API key.
Run the /connect command and search for OVHcloud AI Endpoints.
/connect
Enter your OVHcloud AI Endpoints API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like gpt-oss-120b.
/models
To use Scaleway Generative APIs with OpenCode:
Head over to the Scaleway Console IAM settings to generate a new API key.
Run the /connect command and search for Scaleway.
/connect
Enter your Scaleway API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like devstral-2-123b-instruct-2512 or gpt-oss-120b.
/models
Head over to the Together AI console, create an account, and click Add Key.
Run the /connect command and search for Together AI.
/connect
Enter your Together AI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Kimi K2 Instruct.
/models
Head over to the Venice AI console, create an account, and generate an API key.
Run the /connect command and search for Venice AI.
/connect
Enter your Venice AI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Llama 3.3 70B.
/models
Vercel AI Gateway lets you access models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.
Head over to the Vercel dashboard, navigate to the AI Gateway tab, and click API keys to create a new API key.
Run the /connect command and search for Vercel AI Gateway.
/connect
Enter your Vercel AI Gateway API key.
┌ API key
│
│
└ enter
Run the /models command to select a model.
/models
You can also customize models through your opencode config. Here's an example of specifying provider routing order.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"vercel": {
"models": {
"anthropic/claude-sonnet-4": {
"options": {
"order": ["anthropic", "vertex"]
}
}
}
}
}
}
Some useful routing options:
| Option | Description |
|---|---|
| order | Provider sequence to try |
| only | Restrict to specific providers |
| zeroDataRetention | Only use providers with zero data retention policies |
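For example, to restrict a model to zero-data-retention providers, you could combine these options per model. A sketch assuming zeroDataRetention takes a boolean; check Vercel's AI Gateway docs to confirm:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "vercel": {
      "models": {
        "anthropic/claude-sonnet-4": {
          "options": {
            // Assumed boolean form for this routing option
            "zeroDataRetention": true
          }
        }
      }
    }
  }
}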
Head over to the xAI console, create an account, and generate an API key.
Run the /connect command and search for xAI.
/connect
Enter your xAI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like Grok Beta.
/models
Head over to the Z.AI API console, create an account, and click Create a new API key.
Run the /connect command and search for Z.AI.
/connect
If you are subscribed to the GLM Coding Plan, select Z.AI Coding Plan.
Enter your Z.AI API key.
┌ API key
│
│
└ enter
Run the /models command to select a model like GLM-4.7.
/models
Head over to the ZenMux dashboard, click Create API Key, and copy the key.
Run the /connect command and search for ZenMux.
/connect
Enter the API key for the provider.
┌ API key
│
│
└ enter
Many ZenMux models are preloaded by default; run the /models command to select the one you want.
/models
You can also add additional models through your opencode config.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"zenmux": {
"models": {
"somecoolnewmodel": {}
}
}
}
}
To add any OpenAI-compatible provider that's not listed in the /connect command:
:::tip You can use any OpenAI-compatible provider with opencode. Most modern AI providers offer OpenAI-compatible APIs. :::
Run the /connect command and scroll down to Other.
$ /connect
┌ Add credential
│
◆ Select provider
│ ...
│ ● Other
└
Enter a unique ID for the provider.
$ /connect
┌ Add credential
│
◇ Enter provider id
│ myprovider
└
:::note Choose a memorable ID; you'll use this in your config file. :::
Enter your API key for the provider.
$ /connect
┌ Add credential
│
▲ This only stores a credential for myprovider - you will need to configure it in opencode.json, check the docs for examples.
│
◇ Enter your API key
│ sk-...
└
Create or update your opencode.json file in your project directory:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"myprovider": {
"npm": "@ai-sdk/openai-compatible",
"name": "My AI ProviderDisplay Name",
"options": {
"baseURL": "https://api.myprovider.com/v1"
},
"models": {
"my-model-name": {
"name": "My Model Display Name"
}
}
}
}
}
Here are the configuration options:
- npm - the package to use: @ai-sdk/openai-compatible for OpenAI-compatible providers (for /v1/chat/completions). If your provider/model uses /v1/responses, use @ai-sdk/openai.

More on the advanced options in the example below.
Run the /models command and your custom provider and models will appear in the selection list.
Here's an example setting the apiKey, headers, and model limit options.
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"myprovider": {
"npm": "@ai-sdk/openai-compatible",
"name": "My AI ProviderDisplay Name",
"options": {
"baseURL": "https://api.myprovider.com/v1",
"apiKey": "{env:ANTHROPIC_API_KEY}",
"headers": {
"Authorization": "Bearer custom-token"
}
},
"models": {
"my-model-name": {
"name": "My Model Display Name",
"limit": {
"context": 200000,
"output": 65536
}
}
}
}
}
}
Configuration details:
- apiKey - supports the env variable syntax (like {env:ANTHROPIC_API_KEY} above), learn more.
- limit - the limit fields allow OpenCode to understand how much context you have left. Standard providers pull these from models.dev automatically.
If you are having trouble configuring a provider, check the following:

Check the auth setup: Run opencode auth list to see if the credentials for the provider are added to your config. This doesn't apply to providers like Amazon Bedrock, which rely on environment variables for their auth.

For custom providers, check the opencode config and make sure:

- The provider ID you entered in the /connect command matches the ID in your opencode config.
- The npm package fits the provider: @ai-sdk/cerebras for Cerebras. For all other OpenAI-compatible providers, use @ai-sdk/openai-compatible (for /v1/chat/completions); if a model uses /v1/responses, use @ai-sdk/openai. For mixed setups under one provider, you can override per model via npm (see the sketch after this list).
- The endpoint is set correctly in the options.baseURL field.
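Here's a sketch of that per-model override: a provider that defaults to @ai-sdk/openai-compatible, with one model opting into @ai-sdk/openai for /v1/responses. The provider and model IDs are hypothetical:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://api.myprovider.com/v1"
      },
      "models": {
        "chat-model": {},
        "responses-model": {
          // Hypothetical per-model override for a /v1/responses model
          "npm": "@ai-sdk/openai"
        }
      }
    }
  }
}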