These are the most commonly used model providers that offer a wide range of capabilities:
| Provider | Description | Capabilities |
|---|---|---|
| Anthropic | Maker of Claude models, known for long context windows and strong reasoning | Chat, Edit, Apply, Embeddings |
| OpenAI | Creators of GPT models with strong coding capabilities | Chat, Edit, Apply, Embeddings |
| Azure | Microsoft's cloud platform offering OpenAI models | Chat, Edit, Apply, Embeddings |
| Amazon Bedrock | AWS service offering access to various foundation models | Chat, Edit, Apply, Embeddings |
| Ollama | Run open-source models locally with a simple interface | Chat, Edit, Apply, Embeddings, Autocomplete |
| Google Gemini | Google's multimodal AI models | Chat, Edit, Apply, Embeddings |
| DeepSeek | Specialized code models with strong performance | Chat, Edit, Apply |
| Mistral | High-performance open models with commercial offerings | Chat, Edit, Apply, Embeddings |
| xAI | Grok models from xAI | Chat, Edit, Apply |
| Vertex AI | Google Cloud's machine learning platform | Chat, Edit, Apply, Embeddings |
| Inception | Maker of Mercury, diffusion-based language models | Chat, Edit, Apply |
| HuggingFace | Platform for open source models with inference providers and endpoints | Chat, Edit, Apply, Embeddings |
Beyond the top-level providers, Continue supports many other options:
| Provider | Description |
|---|---|
| Groq | Ultra-fast inference for various open models |
| Together AI | Platform for running a variety of open models |
| DeepInfra | Hosting for various open source models |
| OpenRouter | Gateway to multiple model providers |
| Tetrate Agent Router Service | Gateway with intelligent routing across multiple model providers |
| Cohere | Models specialized for semantic search and text generation |
| NVIDIA | GPU-accelerated model hosting |
| Cloudflare | Edge-based AI inference services |
For running models on your own machine:
| Provider | Description |
|---|---|
| LM Studio | Desktop app for running models locally |
| llama.cpp | Optimized C++ implementation for running LLMs |
| LlamaStack | Stack for running Llama models locally |
| llamafile | Self-contained executable model files |
Enterprise and cloud platforms:
| Provider | Description |
|---|---|
| SambaNova | Enterprise AI platform |
| watsonx | IBM's enterprise AI platform |
| SageMaker | AWS machine learning platform |
| Nebius | Cloud-based machine learning platform |
When selecting a model provider, consider which capabilities you need (Chat, Edit, Apply, Embeddings, Autocomplete) and whether you want a hosted service or a local runtime.
You can add models to your config.yaml file like this:

```yaml
models:
  - name: Claude 4 Sonnet
    provider: anthropic # Choose a provider from the lists above
    model: claude-sonnet-4-20250514 # Specific model name
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
      - apply
```
For more detailed configuration, visit the specific provider pages linked above.
Continue lets you choose your favorite model provider, or even add several at once. With multiple providers configured, you can use different models for different tasks, or switch to another model if you're not happy with the results from your current one. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more, and you can self-host your own model provider if you'd like. Learn more about model providers.
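As one sketch of such a multi-provider setup, the config.yaml below pairs a hosted model for chat with a local Ollama model for autocomplete (the specific model names are illustrative examples, not recommendations):

```yaml
models:
  # Hosted model handling chat, edit, and apply
  - name: Claude 4 Sonnet
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
      - apply
  # Local model handling autocomplete (Ollama supports the Autocomplete role)
  - name: Local autocomplete model
    provider: ollama
    model: qwen2.5-coder:1.5b # example Ollama model tag
    roles:
      - autocomplete
```

Splitting roles this way keeps latency-sensitive autocomplete on a fast local model while reserving the hosted model for longer-form chat and edits.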