Using Instinct with Ollama in Continue

<Warning> Instinct is a 7-billion-parameter model, so expect slow responses when running it on a laptop. To learn how to run inference with Instinct on a GPU, see our [HuggingFace model card](https://huggingface.co/continuedev/instinct). </Warning>

We recently released Instinct, a state-of-the-art open Next Edit model. Robustly fine-tuned from Qwen2.5-Coder-7B, Instinct intelligently predicts your next move to keep you in flow. To learn more about the model, check out our blog post.


1. Install Ollama

If you haven't already installed Ollama, see our guide here.

2. Download Instinct

```bash
ollama run nate/instinct
```

3. Update your config.yaml

Open your config.yaml and add Instinct to the models section:

```yaml
# ... rest of config.yaml ...

models:
  - uses: continuedev/instinct
```

Alternatively, you can add the block with a single click at https://continue.dev/continuedev/instinct.
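
If you prefer to point Continue at your local Ollama instance directly rather than using the hub block, a manual entry can look like the sketch below. This is an assumption about the manual form of the config, not part of the original guide; the `name` value is an arbitrary label.

```yaml
models:
  - name: Instinct          # display label: an assumption, any name works
    provider: ollama        # serve the model from your local Ollama instance
    model: nate/instinct    # the tag downloaded in step 2
```

Either form should work; the `uses:` block shown above simply pulls the equivalent configuration from the Continue hub.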