docs/en/models/custom.mdx
For models accessed through an OpenAI-compatible API, such as a proxy service, use the `custom` provider:
```json
{
  "bot_type": "custom",
  "model": "deepseek-v4-flash",
  "custom_api_key": "YOUR_API_KEY",
  "custom_api_base": "https://{your-proxy.com}/v1"
}
```
| Parameter | Description |
|---|---|
| `bot_type` | Must be set to `custom` |
| `model` | Model name; any model supported by your proxy service |
| `custom_api_key` | API key provided by your proxy service |
| `custom_api_base` | API base URL; must be OpenAI-compatible |
Local models typically don't require an API key; just set the API base:
```json
{
  "bot_type": "custom",
  "model": "qwen3.5:27b",
  "custom_api_base": "http://localhost:11434/v1"
}
```
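As an illustration (not part of the official tooling), the constraints described above can be checked with a small validator. The field names match the documented parameters; the localhost heuristic for when an API key is required is an assumption for this sketch, not an official rule:

```python
def validate_custom_config(cfg: dict) -> list[str]:
    """Return a list of problems found in a custom-provider config."""
    problems = []
    if cfg.get("bot_type") != "custom":
        problems.append('bot_type must be "custom"')
    if not cfg.get("model"):
        problems.append("model is required")
    base = cfg.get("custom_api_base", "")
    if not base.startswith(("http://", "https://")):
        problems.append("custom_api_base must be an http(s) URL")
    # Assumption: local servers (e.g. Ollama) need no key; remote ones do.
    is_local = "://localhost" in base or "://127.0.0.1" in base
    if not is_local and not cfg.get("custom_api_key"):
        problems.append("custom_api_key is required for remote endpoints")
    return problems

local_cfg = {
    "bot_type": "custom",
    "model": "qwen3.5:27b",
    "custom_api_base": "http://localhost:11434/v1",
}
print(validate_custom_config(local_cfg))  # → []
```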
Common local deployment tools and their default addresses:
| Tool | Default API Base |
|---|---|
| Ollama | http://localhost:11434/v1 |
| vLLM | http://localhost:8000/v1 |
| LocalAI | http://localhost:8080/v1 |
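If you're unsure which local server is running, the default ports from the table above can be probed with a short script. This is only a sketch: a successful TCP connect shows something is listening on that port, not which tool it is.

```python
import socket

# Default ports taken from the table above.
DEFAULT_PORTS = {"Ollama": 11434, "vLLM": 8000, "LocalAI": 8080}

def is_listening(host: str, port: int, timeout: float = 0.5) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for tool, port in DEFAULT_PORTS.items():
    status = "up" if is_listening("localhost", port) else "down"
    print(f"{tool:8} localhost:{port} {status}")
```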
With the `custom` provider, switching models changes only `model`; `bot_type` and the API address are left untouched:

```
/config model qwen3.5:27b
```
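Conceptually, the command above amounts to replacing a single field in the stored configuration, a minimal sketch (the function name is hypothetical, not part of the tool):

```python
def set_model(cfg: dict, name: str) -> dict:
    """Return a copy of cfg with only the "model" field replaced."""
    return {**cfg, "model": name}

cfg = {"bot_type": "custom", "model": "deepseek-v4-flash",
       "custom_api_base": "http://localhost:11434/v1"}
print(set_model(cfg, "qwen3.5:27b")["model"])  # → qwen3.5:27b
```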