docs/user-manual/en/2-providers/2.1-add.md
Click the + button in the top-right corner of the main interface to open the Add Provider panel.
The panel has two tabs:
Presets are pre-configured provider templates that only require an API Key to use.
Claude presets:

| Preset Name | Description |
|---|---|
| Claude Official | Log in with an Anthropic official account |
| DeepSeek | DeepSeek model |
| Zhipu GLM | Zhipu AI GLM model |
| Zhipu GLM en | Zhipu AI (English version) |
| Bailian | Alibaba Cloud Bailian (Qwen) |
| Kimi | Moonshot Kimi model |
| Kimi For Coding | Kimi coding-specific model |
| StepFun | StepFun model |
| ModelScope | ModelScope community |
| KAT-Coder | KAT-Coder model |
| Longcat | Longcat AI |
| MiniMax | MiniMax model |
| MiniMax en | MiniMax (English version) |
| DouBaoSeed | DouBao Seed model |
| BaiLing | BaiLing AI |
| AiHubMix | AiHubMix aggregation service |
| SiliconFlow | SiliconFlow |
| SiliconFlow en | SiliconFlow (English version) |
| DMXAPI | DMXAPI proxy service |
| PackyCode | PackyCode proxy service |
| Cubence | Cubence service |
| AIGoCode | AIGoCode service |
| RightCode | RightCode service |
| AICodeMirror | AICodeMirror service |
| OpenRouter | Aggregation routing service |
| Nvidia | Nvidia AI service |
| Xiaomi MiMo | Xiaomi MiMo model |
The preset list may be updated with new versions. Refer to the actual list shown in the app.
Codex presets:

| Preset Name | Description |
|---|---|
| OpenAI Official | Log in with an OpenAI official account |
| Azure OpenAI | Azure OpenAI service |
| AiHubMix | AiHubMix aggregation service |
| DMXAPI | DMXAPI proxy service |
| PackyCode | PackyCode proxy service |
| Cubence | Cubence service |
| AIGoCode | AIGoCode service |
| RightCode | RightCode service |
| AICodeMirror | AICodeMirror service |
| OpenRouter | Aggregation routing service |
Gemini presets:

| Preset Name | Description |
|---|---|
| Google Official | Log in with Google OAuth |
| PackyCode | PackyCode proxy service |
| Cubence | Cubence service |
| AIGoCode | AIGoCode service |
| AICodeMirror | AICodeMirror service |
| OpenRouter | Aggregation routing service |
| Custom | Manually configure all parameters |
OpenCode presets:

| Preset Name | Description |
|---|---|
| DeepSeek | DeepSeek model |
| Zhipu GLM | Zhipu AI GLM model |
| Zhipu GLM en | Zhipu AI (English version) |
| Bailian | Alibaba Cloud Bailian |
| Kimi k2.5 | Moonshot Kimi-k2.5 model |
| Kimi For Coding | Kimi coding-specific model |
| StepFun | StepFun model |
| ModelScope | ModelScope community |
| KAT-Coder | KAT-Coder model |
| Longcat | Longcat AI |
| MiniMax | MiniMax model |
| MiniMax en | MiniMax (English version) |
| DouBaoSeed | DouBao Seed model |
| BaiLing | BaiLing AI |
| Xiaomi MiMo | Xiaomi MiMo model |
| AiHubMix | AiHubMix aggregation service |
| DMXAPI | DMXAPI proxy service |
| OpenRouter | Aggregation routing service |
| Nvidia | Nvidia AI service |
| PackyCode | PackyCode proxy service |
| Cubence | Cubence service |
| AIGoCode | AIGoCode service |
| RightCode | RightCode service |
| AICodeMirror | AICodeMirror service |
| OpenAI Compatible | OpenAI-compatible interface |
| Oh My OpenCode | Oh My OpenCode service |
The preset list is continuously updated. Refer to the actual list shown in the app.
OpenClaw presets:

| Preset Name | Description |
|---|---|
| DeepSeek | DeepSeek model |
| Zhipu GLM | Zhipu AI GLM model |
| Zhipu GLM en | Zhipu AI (English version) |
| Qwen Coder | Qwen coding model |
| Kimi k2.5 | Moonshot Kimi-k2.5 model |
| Kimi For Coding | Kimi coding-specific model |
| StepFun | StepFun model |
| MiniMax | MiniMax model |
| MiniMax en | MiniMax (English version) |
| KAT-Coder | KAT-Coder model |
| Longcat | Longcat AI |
| DouBaoSeed | DouBao Seed model |
| BaiLing | BaiLing AI |
| Xiaomi MiMo | Xiaomi MiMo model |
| AiHubMix | AiHubMix aggregation service |
| DMXAPI | DMXAPI proxy service |
| OpenRouter | Aggregation routing service |
| ModelScope | ModelScope community |
| SiliconFlow | SiliconFlow |
| SiliconFlow en | SiliconFlow (English version) |
| Nvidia | Nvidia AI service |
| PackyCode | PackyCode proxy service |
| Cubence | Cubence service |
| AIGoCode | AIGoCode service |
| RightCode | RightCode service |
| AICodeMirror | AICodeMirror service |
| AICoding | AICoding service |
| CrazyRouter | CrazyRouter service |
| SSSAiCode | SSSAiCode service |
| AWS Bedrock | AWS Bedrock service |
| OpenAI Compatible | OpenAI-compatible interface |
When adding or editing a provider, you can automatically discover available models from the provider's endpoint — eliminating the tedious copy-and-paste of model IDs.
This feature covers all five apps — Claude / Codex / Gemini / OpenCode / OpenClaw — and works for any provider that supports the /v1/models endpoint.
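Conceptually, discovery queries the provider's /v1/models endpoint and collects the returned model IDs. A minimal sketch of that parsing step, assuming the common OpenAI-style response shape (real providers may vary, and this is not CC Switch's actual code):

```python
# Sketch: pull model IDs out of a /v1/models response.
# Assumes the common OpenAI-style shape {"data": [{"id": "..."}]}.
def extract_model_ids(payload: dict) -> list[str]:
    return [model["id"] for model in payload.get("data", [])]

# Example payload as such an endpoint might return it:
payload = {"data": [{"id": "gpt-5.2"}, {"id": "gpt-4o"}]}
print(extract_model_ids(payload))  # → ['gpt-5.2', 'gpt-4o']
```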
Common errors:

- The provider does not support the /v1/models endpoint; fall back to manual model ID entry.

After selecting the "Custom" preset, you need to manually edit the JSON configuration.
```json
{
  "env": {
    "ANTHROPIC_API_KEY": "your-api-key",
    "ANTHROPIC_BASE_URL": "https://api.example.com"
  }
}
```
| Field | Required | Description |
|---|---|---|
| ANTHROPIC_API_KEY | Yes | API key |
| ANTHROPIC_BASE_URL | No | Custom endpoint URL |
| ANTHROPIC_AUTH_TOKEN | No | Alternative authentication method to API_KEY |
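For proxies that authenticate with a token rather than an API key, ANTHROPIC_AUTH_TOKEN replaces ANTHROPIC_API_KEY in the same structure (a sketch; the URL is a placeholder):

```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "your-auth-token",
    "ANTHROPIC_BASE_URL": "https://api.example.com"
  }
}
```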
Codex uses two configuration files:
1. auth.json (~/.codex/auth.json) - Stores API key:
```json
{
  "OPENAI_API_KEY": "your-api-key"
}
```
2. config.toml (~/.codex/config.toml) - Stores model and endpoint configuration:
```toml
# Basic configuration
model_provider = "custom"
model = "gpt-5.2"
model_reasoning_effort = "high"
disable_response_storage = true

# Custom provider configuration
[model_providers.custom]
name = "custom"
base_url = "https://api.example.com/v1"
wire_api = "responses"
requires_openai_auth = true
```
auth.json field descriptions:
| Field | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | API key |
config.toml field descriptions:
| Field | Required | Description |
|---|---|---|
| model_provider | Yes | Model provider name (must match [model_providers.xxx]) |
| model | Yes | Model to use (e.g., gpt-5.2, gpt-4o) |
| model_reasoning_effort | No | Reasoning effort: low / medium / high |
| disable_response_storage | No | Whether to disable response storage |
| base_url | Yes | API endpoint URL |
| wire_api | No | API protocol type (usually responses) |
| requires_openai_auth | No | Whether to use OpenAI authentication |
```json
{
  "env": {
    "GEMINI_API_KEY": "your-api-key",
    "GOOGLE_GEMINI_BASE_URL": "https://api.example.com"
  }
}
```
| Field | Required | Description |
|---|---|---|
| GEMINI_API_KEY | Yes | API key |
| GOOGLE_GEMINI_BASE_URL | No | Custom endpoint URL |
| GEMINI_MODEL | No | Specify model |
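Putting the optional fields together, a custom Gemini configuration that also pins a model might look like this (the model name is only an illustration):

```json
{
  "env": {
    "GEMINI_API_KEY": "your-api-key",
    "GOOGLE_GEMINI_BASE_URL": "https://api.example.com",
    "GEMINI_MODEL": "gemini-2.5-pro"
  }
}
```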
Authentication type is automatically detected by CC Switch (PackyCode API proxy / Google OAuth / generic API Key), no manual configuration needed.
Universal providers can share configurations across Claude/Codex/Gemini/OpenCode/OpenClaw, suitable for proxy services that support multiple API formats.
Universal providers automatically sync to the selected apps.
When editing a universal provider, you can choose:
| Action | Description |
|---|---|
| Save | Save configuration only, without immediate sync |
| Save and Sync | Save configuration and immediately sync to all enabled apps |
If you need to manually trigger a sync, edit the provider and choose Save and Sync.
CC Switch supports two ways to import provider configurations:
One-click import via ccswitch:// protocol links:
Getting deep links:
Batch import from SQL backup files:
Select a .sql backup file to import.

Imported contents:
Note: Importing will overwrite the existing database. It is recommended to export your current configuration as a backup first. The exported file name format is cc-switch-export-{timestamp}.sql.
Starting from v3.13.0, CC Switch adds a Codex OAuth reverse proxy path that lets you reuse your ChatGPT account's Codex service inside Claude Code.
Location hint: This feature appears as a new Claude provider card type, not as a Codex-side preset. Once added, it sits alongside regular API-Key providers in the Claude provider list.
The login flow requires access to auth.openai.com and chatgpt.com.

You can start from either entry point:
No matter which entry point you use, the login flow is the same:
1. CC Switch displays a device verification code (e.g., ABCD-1234)
2. Open https://auth.openai.com/codex/device in your browser and enter the code

⏱️ Verification codes are valid for about 15 minutes. If a code expires, the UI shows "Device Code has expired" — click Retry to get a new one.
After adding and saving a Codex OAuth provider:
Under the hood: CC Switch routes requests to https://chatgpt.com/backend-api/codex, with the base URL forcibly rewritten — you do not need to manually fill in the endpoint. The API format is fixed to openai_responses.
The Codex OAuth preset's default model mapping:
| Role | Default Model |
|---|---|
| Main model | gpt-5.4 |
| Sonnet role | gpt-5.4 |
| Opus role | gpt-5.4 |
| Haiku role | gpt-5.4-mini |
You can override ANTHROPIC_MODEL and related environment variables in the provider's JSON editor to customize the mapping.
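For example, remapping the roles might look like the following in the JSON editor (a sketch; ANTHROPIC_SMALL_FAST_MODEL is assumed here to be the variable backing the Haiku-class role):

```json
{
  "env": {
    "ANTHROPIC_MODEL": "gpt-5.4",
    "ANTHROPIC_SMALL_FAST_MODEL": "gpt-5.4-mini"
  }
}
```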
The OAuth Auth Center supports managing multiple ChatGPT accounts at the same time:
| Action | Description |
|---|---|
| Add another account | Click Add Another Account to repeat the login flow |
| Set as default | Click Set as Default on an account row — new providers use it |
| Choose for a provider | In the provider form, use the Select Account dropdown |
| Remove account | Click the red × next to an account (the token is cleared) |
| Log out all accounts | The Log Out All Accounts button at the bottom clears all |
Use case: If you share a dev machine with teammates, create one provider per member's ChatGPT account and switch between them via the tray menu.
After login and enabling the provider, the bottom of the provider card automatically shows the account quota:
| Display Element | Example | Color Rules |
|---|---|---|
| Usage percentage | 45% | < 70% green, 70–89% orange, ≥ 90% red |
| Reset countdown | 7d12h until reset | ChatGPT account's sliding window or daily limit |
| Refresh button | Circular arrow | Manually re-query quota |
⚠️ Session Expired: If the token fails to refresh, the card displays a yellow "Session Expired" warning. Go to the OAuth Auth Center, remove the account, and log in again.
| Scenario | Symptom | Resolution |
|---|---|---|
| Verification code timeout | "Device Code has expired" shown | Click Retry to get a new code |
| Authorization denied | "User denied authorization" | Retry and click "Authorize" in the browser |
| Network error | Specific error details shown | Check network, confirm access to OpenAI domains |
| Not logged in before adding | "Please log in to ChatGPT first" | Complete login in OAuth Auth Center first |
| Token refresh failed | "Session Expired" in quota box | Remove the account and log in again |
| Quota query failed | "Query failed" in quota box | Click the Refresh button to retry |
The Codex OAuth reverse proxy accesses your ChatGPT account's Codex service through a reverse-engineered OAuth flow. Before enabling, please make sure you understand the following risks:
By enabling this feature, you assume all risks. CC Switch is not responsible for any account restrictions, warnings, or service suspensions resulting from its use.
📖 See the full disclaimer and background in the v3.13.0 Release Notes.
When adding a Claude provider that uses a third-party API, you may need to select the correct API Format in the Advanced Options section:
| Format | Description | When to Use |
|---|---|---|
| Anthropic Messages | Native Anthropic API format (default) | Direct Anthropic API or compatible proxies |
| OpenAI Chat Completions | OpenAI Chat API format, auto-converted by proxy | Provider only supports OpenAI Chat format |
| OpenAI Responses API | OpenAI Responses API format, auto-converted by proxy | Provider only supports OpenAI Responses format |
Note: API format conversion is handled by the proxy service. When using non-Anthropic formats, the proxy must be running with takeover enabled for correct request/response conversion. See 4.1 Proxy Service for details.
The Advanced Options section auto-expands when a non-default API format is configured.
Added in v3.13.0. By default, CC Switch treats the configured base_url as a prefix and appends fixed paths like /v1/chat/completions. For some vendors (such as third-party services with non-standard URL layouts), this path concatenation causes requests to fail.
How to enable: in the provider's Advanced Options, enable Full URL Mode and enter the complete endpoint URL as the base_url.

Example comparison:
| Mode | base_url value | Actual request target |
|---|---|---|
| Default (prefix concat) | https://api.example.com | https://api.example.com/v1/chat/completions |
| Full URL Mode | https://api.example.com/custom/path/messages | https://api.example.com/custom/path/messages |
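The two modes in the table reduce to a simple rule, sketched below (illustrative only, not CC Switch's actual implementation; the default path shown is the one this page uses as an example):

```python
# Illustrative sketch of the two URL-resolution modes described above.
def resolve_request_url(base_url: str, full_url_mode: bool,
                        default_path: str = "/v1/chat/completions") -> str:
    if full_url_mode:
        # Full URL Mode: base_url is used verbatim as the request target.
        return base_url
    # Default mode: base_url is a prefix and the fixed path is appended.
    return base_url.rstrip("/") + default_path

print(resolve_request_url("https://api.example.com", False))
# → https://api.example.com/v1/chat/completions
print(resolve_request_url("https://api.example.com/custom/path/messages", True))
# → https://api.example.com/custom/path/messages
```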
When to use:

- The vendor's endpoint uses a non-standard path (one that does not end in /v1/chat/completions)

Note: Both proxy forwarding and Stream Check respect the Full URL Mode setting, so no extra adjustments are needed after enabling. Disabling this option restores default path concatenation.
When editing Claude providers, a set of quick toggles is available above the JSON editor:
| Toggle | Effect | Config Change |
|---|---|---|
| Hide Attribution | Clears commit/PR attribution metadata | Sets attribution: {commit: "", pr: ""} |
| Enable Teammates | Enables the agent teams feature | Sets env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS = "1" |
| Enable Tool Search | Enables tool search functionality | Sets env.ENABLE_TOOL_SEARCH = "true" |
| Max Effort | Sets effort level to max | Sets effortLevel = "max" |
| Disable Auto Upgrade | Prevents Claude Code auto-updates | Sets env.DISABLE_AUTOUPDATER = "1" |
When a toggle is unchecked, its corresponding config entry is removed entirely. Changes are reflected in the JSON editor in real-time.
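With every toggle in the table enabled, the JSON editor would show entries along these lines (reconstructed from the Config Change column above):

```json
{
  "attribution": { "commit": "", "pr": "" },
  "effortLevel": "max",
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
    "ENABLE_TOOL_SEARCH": "true",
    "DISABLE_AUTOUPDATER": "1"
  }
}
```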
Additionally, the Write Common Config checkbox enables merging a global config snippet into the provider. Click Edit Common Config to customize the shared snippet.
When adding a Codex provider, an Enable 1M Context Window toggle is available:
When enabled, it writes model_context_window = 1000000 and auto-fills model_auto_compact_token_limit = 900000 in config.toml. The auto-compact limit can be customized in the text field that appears when the toggle is on.
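The resulting config.toml entries, using the values stated above:

```toml
# Written by the Enable 1M Context Window toggle
model_context_window = 1000000
model_auto_compact_token_limit = 900000
```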
Click the icon area to the left of the name to:
Enter the provider's website or console URL for quick access.
Add notes such as:
Notes are displayed on the provider card and are searchable.
After adding a provider, you can speed-test API endpoints:
Test results: