# VS Code

VS Code includes built-in AI chat through GitHub Copilot Chat. Ollama models can be used directly in the Copilot Chat model picker.

## Prerequisites

<Note>VS Code requires you to be signed in to use its model selector, even for custom models. This doesn't require a paid GitHub Copilot plan; GitHub Copilot Free enables model selection for custom models.</Note>
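
You'll also need Ollama installed and running locally. As a quick sanity check, assuming a default install listening on `localhost:11434`:

```shell
# Confirm the CLI is installed
ollama --version

# Confirm the local server is reachable on the default port
curl http://localhost:11434/api/version
```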

## Quick setup

```shell
ollama launch vscode
```

Recommended models will be shown after running the command. See the latest models at ollama.com.
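
If you want a particular model on disk before launching, you can pull it ahead of time. The tag below is only an example; substitute any model from ollama.com:

```shell
# Download an example model so it appears in the picker
ollama pull qwen3

# List the models available locally
ollama list
```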

Make sure Local is selected at the bottom of the Copilot Chat panel to use your Ollama models.

## Run directly with a model

```shell
ollama launch vscode --model qwen3.5:cloud
```

Cloud models are also available at ollama.com.
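
Running a cloud model requires an ollama.com account. As a rough sketch, assuming a recent Ollama release that includes the `signin` command, you can authenticate and smoke-test the model from the terminal before selecting it in VS Code:

```shell
# Authenticate the CLI with your ollama.com account
ollama signin

# Verify the cloud model responds before using it in VS Code
ollama run qwen3.5:cloud "Say hello"
```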

## Manual setup

To configure Ollama manually without `ollama launch`:

  1. Open the Copilot Chat sidebar from the top-right corner

  2. Click the settings gear icon (<Icon icon="gear" />) to open the Language Models window

  3. Click Add Models and select Ollama to load all your Ollama models into VS Code

  4. Click the Unhide button in the model picker to show your Ollama models

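VS Code discovers models through the local Ollama API, so if nothing shows up in step 3, a useful sanity check, assuming the default endpoint, is to query the server directly:

```shell
# Returns the same model list VS Code should discover;
# an empty "models" array means nothing has been pulled yet
curl http://localhost:11434/api/tags
```

If the server listens somewhere other than the default (for example via the `OLLAMA_HOST` environment variable), make sure any Ollama endpoint setting in Copilot Chat points at the same address.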