docs/faqs.mdx
If you're seeing a fetch failed error and your network requires custom certificates, you will need to configure them in your config file. In each of the objects in the "models" array, add requestOptions.caBundlePath like this:
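For example, in config.yaml (a minimal sketch; the model entry and certificate path are placeholders to adapt to your own configuration):

```yaml
models:
  - name: my-model
    provider: openai
    model: gpt-4
    requestOptions:
      # Path to the CA bundle your network requires (placeholder path)
      caBundlePath: /path/to/custom/ca-bundle.pem
```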
You may also set requestOptions.caBundlePath to an array of paths to multiple certificates.
Windows VS Code Users: Installing the win-ca extension may help Continue use the Windows certificate store, but requestOptions.caBundlePath is the most reliable fix.
If your logs include errors such as unable to verify the first certificate, self signed certificate in certificate chain, certificate verify failed, or CERT_UNTRUSTED, Continue was able to reach the endpoint but could not verify the TLS certificate chain it returned.
In most cases, the fix is to export the root or intermediate CA certificate for that endpoint and set requestOptions.caBundlePath in your model configuration. If the server also requires mutual TLS, add requestOptions.clientCertificate as well.
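A hedged sketch of combining both options (the clientCertificate field names shown here, cert/key/passphrase, are an assumption to adapt to your setup; only include what your server actually requires):

```yaml
models:
  - name: my-model
    provider: openai
    model: gpt-4
    requestOptions:
      # CA bundle used to verify the server's certificate chain
      caBundlePath: /path/to/custom/ca-bundle.pem
      # Only needed if the server enforces mutual TLS
      clientCertificate:
        cert: /path/to/client.crt
        key: /path/to/client.key
        passphrase: optional-passphrase
```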
For step-by-step diagnosis with curl and openssl, see Troubleshooting SSL certificate errors.
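As a first quick check, you can inspect the certificate chain the server actually presents (assumes openssl is installed; replace the host with your endpoint):

```bash
# Print the certificate chain returned by the endpoint
openssl s_client -connect your-endpoint.example.com:443 -showcerts </dev/null
```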
If you are using VS Code and require requests to be made through a proxy, you are likely already set up through VS Code's Proxy Server Support. To double-check that this is enabled, use cmd/ctrl + , to open settings and search for "Proxy Support". Unless it is set to "off", VS Code is responsible for making the request to the proxy.
Continue can be used in code-server, but if you are running across an error in the logs that includes "This is likely because the editor is not running in a secure context", please see their documentation on securely exposing code-server.
If you've made changes to a config (adding, modifying, or removing one) but the changes aren't appearing in the Continue extension in VS Code, try reloading the VS Code window: open the Command Palette (cmd/ctrl + shift + P) and run "Developer: Reload Window". This will reload the window and all extensions, which should make your config changes visible.
By default the Continue window is on the left side of VS Code, but it can be dragged to the right side as well, which we recommend in our tutorial. If you previously installed Continue and moved it to the right side, it may still be there. You can reveal Continue either by using cmd/ctrl + L or by clicking the button in the top right of VS Code to open the right sidebar.
If you have entered a valid API key and model, but are still getting a 404 error from OpenAI, this may be because you need to add credits to your billing account. You can do so from the billing console. If you just want to check that this is in fact the cause of the error, you can try adding $1 to your account and checking whether the error persists.
If you have entered a valid API key and model, but are still getting a 404 error from OpenRouter, this may be because models that do not support function calling will return an error to Continue when a request is sent. Example error: HTTP 404 Not Found from https://openrouter.ai/api/v1/chat/completions
If you are having persistent errors with indexing, our recommendation is to rebuild your index from scratch. Note that for large codebases this may take some time.
This can be accomplished using the following command: Continue: Rebuild codebase index.
If Agent mode is grayed out or tools aren't functioning properly, this is likely due to model capability configuration issues.
<Info>
  Continue uses system message tools as a fallback for models without native tool support, so most models should work with Agent mode automatically.
</Info>

- If Agent mode is grayed out: add capabilities: ["tool_use"] to your model config to force tool support.
- If tools aren't being called: make sure tool_use is listed in your capabilities.
- If you can't upload images: add image_input to capabilities.

If Continue's autodetection isn't working correctly, you can manually add capabilities in your config.yaml:
```yaml
models:
  - name: my-model
    provider: openai
    model: gpt-4
    capabilities:
      - tool_use
      - image_input
```
Some proxy services (like OpenRouter) or custom deployments may not preserve tool calling capabilities. Check your provider's documentation.
To see what capabilities Continue detected for your model, check whether image_input and tool_use appear in its detected capabilities. See the Model Capabilities guide for complete configuration details.
This can be fixed by selecting Actions > Choose Boot runtime for the IDE, choosing the latest version, and then restarting Android Studio. See this thread for details.
We use LanceDB as our vector database for codebase search features. On x64 Linux systems, LanceDB requires specific CPU features (FMA and AVX2) which may not be available on older processors.
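If you're unsure whether your processor supports these instructions, one quick check on Linux is to look for the avx2 and fma flags in /proc/cpuinfo (a rough sanity check, not an official requirement test):

```bash
# Prints "avx2" and "fma" if the CPU advertises both feature flags
grep -o -w -E 'avx2|fma' /proc/cpuinfo | sort -u
```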
Most Continue features will work normally, including autocomplete and chat. However, commands that rely on codebase indexing, such as @codebase, @files, and @folder, will be disabled.
For more details about this requirement, see the LanceDB issue #2195.
For a comprehensive guide on setting up and troubleshooting Ollama, see the Ollama Guide.
If you're getting "Unable to connect to local Ollama instance" errors:
- Make sure Ollama is running with ollama serve (not just ollama run model-name)
- Verify that your config.yaml has the correct setup:

```yaml
models:
  - name: llama3
    provider: ollama
    model: llama3:latest
```
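To quickly confirm the Ollama server is actually up and listening, you can query its tags endpoint (assumes the default port 11434):

```bash
# Lists the models your local Ollama server has installed
curl http://localhost:11434/api/tags
```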
When connecting to Ollama on another machine:
Configure Ollama to listen on all interfaces: set OLLAMA_HOST=0.0.0.0:11434 in the environment where Ollama runs, or (on systemd-based Linux) edit /etc/systemd/system/ollama.service and add under [Service]:

```ini
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
```

Then restart the service: sudo systemctl restart ollama

Update your Continue config:
```yaml
models:
  - name: llama3
    provider: ollama
    apiBase: http://192.168.1.136:11434 # Use your server's IP
    model: llama3:latest
```
For WSL users having connection issues:
Create or edit %UserProfile%\.wslconfig:
```ini
[wsl2]
networkingMode=mirrored
```
Then restart WSL: wsl --shutdown
In PowerShell (as Administrator):
```powershell
# Add firewall rules
New-NetFireWallRule -DisplayName 'WSL Ollama' -Direction Inbound -LocalPort 11434 -Action Allow -Protocol TCP
New-NetFireWallRule -DisplayName 'WSL Ollama' -Direction Outbound -LocalPort 11434 -Action Allow -Protocol TCP

# Get WSL IP (run 'ip addr' in WSL to find the eth0 IP)
# Then add a port proxy (replace <WSL_IP> with your actual IP)
netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=<WSL_IP>
```
When running Continue or other tools in Docker that need to connect to Ollama on the host:
Windows/Mac: Use host.docker.internal:
```yaml
models:
  - name: llama3
    provider: ollama
    apiBase: http://host.docker.internal:11434
    model: llama3:latest
```
Linux: Use the Docker bridge IP (usually 172.17.0.1):
```yaml
models:
  - name: llama3
    provider: ollama
    apiBase: http://172.17.0.1:11434
    model: llama3:latest
```
Docker run command: Add host mapping:
```bash
docker run -d --add-host=host.docker.internal:host-gateway ...
```
If you're getting parse errors with remote Ollama:
Verify the model is installed on the remote machine:

```bash
OLLAMA_HOST=192.168.1.136:11434 ollama list
```

Install missing models:

```bash
OLLAMA_HOST=192.168.1.136:11434 ollama pull llama3
```

Check the URL format: ensure you're using http://, not https://, for local network addresses.
For running Continue completely offline without internet access, see the Running Continue Without Internet guide.
Continue supports multiple methods for managing secrets locally, searched in this order:
1. Workspace .env files: place a .env file in your workspace root directory, or a .env file in <workspace-root>/.continue/.env
2. Global .env file: place a .env file in ~/.continue/.env for user-wide secrets

To use .env files, create a .env file in one of these locations:

- <workspace-root>/.env or <workspace-root>/.continue/.env
- ~/.continue/.env

Example .env file:
```bash
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
CUSTOM_API_URL=https://api.example.com
```
Reference your local secrets using the secrets namespace:
```yaml
models:
  - provider: openai
    apiKey: ${{ secrets.OPENAI_API_KEY }}
```
For centralized team secret management, use ${{ inputs.SECRET_NAME }} syntax in your config.yaml and manage them at https://continue.dev/settings/secrets:
```yaml
models:
  - provider: openai
    apiKey: ${{ inputs.OPENAI_API_KEY }}
```
A few notes on .env handling:

- Never commit .env files to version control; add them to .gitignore
- The .env file uses standard dotenv format (KEY=value, no quotes needed)
- Local .env files take precedence over Hub secrets when both exist

If your API keys aren't being recognized:

- Check that the .env file is in the correct location
- Check the key names and values in the .env file
- Check that the .env file has proper line endings (LF, not CRLF on Windows)

You can leverage model addons from Continue Mission Control in your local configurations using the uses: syntax. This allows you to reference pre-configured model blocks without duplicating configuration.
In your local config.yaml, reference model addons using the format provider/model-name:
```yaml
name: My Local Config
version: 0.0.1
schema: v1

models:
  - uses: ollama/llama3.1-8b
  - uses: anthropic/claude-3.5-sonnet
  - uses: openai/gpt-4
```
You can combine hub model addons with local models:
```yaml
name: My Local Config
version: 0.0.1
schema: v1

models:
  # Hub model addon
  - uses: anthropic/claude-3.5-sonnet

  # Local model configuration
  - name: Local Ollama
    provider: ollama
    model: codellama:latest
    apiBase: http://localhost:11434
```
You can override specific settings from the model addon:
```yaml
models:
  - uses: ollama/llama3.1-8b
    override:
      apiBase: http://192.168.1.100:11434 # Use remote Ollama server
      roles:
        - chat
        - autocomplete
```
This feature allows you to maintain consistent model configurations across teams while still allowing local customization when needed.
Continue stores its data in the ~/.continue directory (%USERPROFILE%\.continue on Windows).
If you'd like to perform a clean reset of the extension, including removing all configuration files, indices, etc., you can remove this directory, uninstall the extension, and then reinstall it.
You can also join GitHub Discussions for additional support. Alternatively, you can create a GitHub issue here, providing details of your problem, and we'll be able to help you out more quickly.