Continue needs to know what features your models support to provide the best experience. This guide explains how model capabilities work and how to configure them.
Model capabilities tell Continue what features a model supports:
- `tool_use` - Whether the model can use tools and functions
- `image_input` - Whether the model can process images

Without proper capability configuration, you may encounter issues such as Agent mode being unavailable or tools not working.
Continue uses a two-tier system for determining model capabilities:
Continue automatically detects capabilities based on your provider and model name. For example, `gpt-4o` on the `openai` provider is detected as supporting both tool use and image input. Autodetection works well for popular models, but may not cover custom deployments or newer models.
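As a sketch of autodetection at work, a well-known model typically needs no `capabilities` block at all (the `name` below is illustrative):

```yaml
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    # No "capabilities" key needed: tool_use and image_input
    # are autodetected from the provider and model name.
```

If autodetection covers your model, adding an explicit `capabilities` list is unnecessary.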
You can add capabilities to models that Continue doesn't automatically detect in your `config.yaml`:
```yaml
models:
  - name: my-custom-gpt4
    provider: openai
    apiBase: https://my-deployment.com/v1
    model: gpt-4-custom
    capabilities:
      - tool_use
      - image_input
```
Add capabilities when autodetection misses a model's features, typically for custom deployments, fine-tuned models, or newly released models.
Add tool support for a model that Continue doesn't recognize:
```yaml
models:
  - name: custom-model
    provider: openai
    model: my-fine-tuned-gpt4
    capabilities:
      - tool_use
```
Explicitly set no capabilities (autodetection will still apply):
```yaml
models:
  - name: limited-claude
    provider: anthropic
    model: claude-4.0-sonnet
    capabilities: [] # An empty array does not disable autodetection
```
Enable both tools and image support:
```yaml
models:
  - name: multimodal-gpt
    provider: openai
    model: gpt-4-vision-preview
    capabilities:
      - tool_use
      - image_input
```
Some providers and custom deployments may require explicit capability configuration.
Example configuration:
```yaml
models:
  - name: custom-deployment
    provider: openai
    apiBase: https://custom-api.company.com/v1
    model: custom-gpt
    capabilities:
      - tool_use # If the deployment supports function calling
      - image_input # If the deployment supports vision
```
For troubleshooting capability-related issues like Agent mode being unavailable or tools not working, see the Troubleshooting guide.
Remember: Setting capabilities only adds to autodetection. Continue will still use its built-in knowledge about your model in addition to your specified capabilities.
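To sketch this additive behavior (the model and deployment names below are hypothetical): even if Continue already autodetects `tool_use` for a model, explicitly listing `image_input` simply adds it on top of the detected set:

```yaml
models:
  - name: my-vision-tuned-model # hypothetical fine-tuned deployment
    provider: openai
    model: my-vision-gpt
    capabilities:
      - image_input # added on top of any autodetected capabilities
```

There is no way to subtract an autodetected capability through this list; it only extends what Continue already knows.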
This matrix shows which models support tool use and image input capabilities. Continue auto-detects these capabilities, but you can add to them if needed.

### OpenAI

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| GPT-5.1 | Yes | No | 400k |
| GPT-5 | Yes | No | 400k |
| o3 | Yes | No | 128k |
| o3-mini | Yes | No | 128k |
| GPT-4o | Yes | Yes | 128k |
| GPT-4 Turbo | Yes | Yes | 128k |
| GPT-4 | Yes | No | 8k |
| GPT-3.5 Turbo | Yes | No | 16k |

### Anthropic

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Claude 4 Sonnet | Yes | Yes | 200k |
| Claude 3.5 Sonnet | Yes | Yes | 200k |
| Claude 3.5 Haiku | Yes | Yes | 200k |

### Cohere

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Command A | Yes | No | 256k |
| Command A Reasoning | Yes | No | 256k |
| Command A Translate | Yes | No | 8k |
| Command A Vision | No | Yes | 128k |

### Google

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Gemini 2.5 Pro | Yes | Yes | 2M |
| Gemini 2.0 Flash | Yes | Yes | 1M |

### Mistral

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Devstral Medium | Yes | No | 32k |
| Mistral | Yes | No | 32k |

### DeepSeek

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| DeepSeek V3 | Yes | No | 128k |
| DeepSeek Coder V2 | Yes | No | 128k |
| DeepSeek Chat | Yes | No | 64k |

### xAI

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Grok Code Fast 1 | Yes | Yes | 256k |
| Grok 4 Fast Reasoning | Yes | Yes | 2M |
| Grok 4 Fast Non-Reasoning | Yes | Yes | 2M |
| Grok 4 | Yes | Yes | 256k |
| Grok 3 | Yes | Yes | 131k |
| Grok 3 Mini | Yes | Yes | 131k |

### Moonshot AI

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Kimi K2 | Yes | Yes | 128k |

### Qwen

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Qwen Coder 3 480B | Yes | No | 128k |

### Local and open models

| Model | Tool Use | Image Input | Context Window |
|---|---|---|---|
| Qwen 3 Coder | Yes | No | 32k |
| Qwen 2.5 VL | No | Yes | 128k |
| Devstral Small | Yes | No | 32k |
| Llama 3.1 | Yes | No | 128k |
| Llama 3 | Yes | No | 8k |
| Mistral | Yes | No | 32k |
| Codestral | Yes | No | 32k |
| Gemma 3 4B | Yes | Yes | 128k |
Is your model missing or incorrect? You can help improve this documentation by editing this page on GitHub.