How to Configure Ollama with Continue

<Tip> **Discover Ollama models [here](https://continue.dev/ollama)** </Tip> <Info> Get started by [downloading Ollama](https://ollama.com/download) </Info>

Configuration

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: <MODEL_NAME>
    provider: ollama
    model: <MODEL_ID>
    apiBase: http://<my endpoint>:11434 # if running a remote instance of Ollama
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "<MODEL_NAME>",
      "provider": "ollama",
      "model": "<MODEL_ID>",
      "apiBase": "http://<my endpoint>:11434" // if running a remote instance of Ollama
    }
  ]
}
```
</Tab>
</Tabs> <Info> **Check out a more advanced configuration [here](https://continue.dev/ollama/qwen3-coder-30b?view=config)** </Info>
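If Continue cannot reach your Ollama instance, first confirm the server responds at the `apiBase` you configured. A minimal sketch, assuming the default port 11434 and Ollama's `/api/tags` endpoint (which lists installed models); `OLLAMA_HOST` is a placeholder for your own endpoint:

```bash
# Check that the Ollama server responds at the configured apiBase.
# OLLAMA_HOST is a placeholder -- point it at your remote endpoint if needed.
OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
if curl -fsS "$OLLAMA_HOST/api/tags" > /dev/null 2>&1; then
  echo "Ollama is reachable at $OLLAMA_HOST"
else
  echo "Ollama is not reachable at $OLLAMA_HOST"
fi
```

A successful response from `/api/tags` also shows which model IDs are available locally for the `model` field.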

How to Configure Model Capabilities in Ollama

Ollama models usually have their capabilities auto-detected correctly. However, if you're using custom model names or experiencing issues with tools/images not working, you can explicitly set capabilities:

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: <CUSTOM_MODEL_NAME>
    provider: ollama
    model: <CUSTOM_MODEL_ID>
    capabilities:
      - tool_use      # Enable if your model supports function calling
      - image_input   # Enable for vision models
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "<CUSTOM_MODEL_NAME>",
      "provider": "ollama",
      "model": "<CUSTOM_MODEL_ID>",
      "capabilities": {
        "tools": true, // Enable if your model supports function calling
        "uploadImage": true // Enable for vision models
      }
    }
  ]
}
```
</Tab>
</Tabs> <Note> Many Ollama models support tool use by default, and vision models often also support image input. </Note>
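Before setting capabilities by hand, you can check what a model itself reports. A hedged sketch using the `ollama show` command (the model name `llama3.1` is only an example; recent Ollama versions print a "Capabilities" section listing entries such as `tools` and `vision`):

```bash
# Inspect a model's metadata; look for a "Capabilities" section in the output.
# MODEL is an example name -- substitute your own model ID.
MODEL="llama3.1"
if command -v ollama > /dev/null 2>&1; then
  ollama show "$MODEL"
else
  echo "ollama CLI not found; install it from https://ollama.com/download"
fi
```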