docs/guides/how-to-self-host-a-model.mdx
In many cases, either Continue will have a built-in provider for your model, or the API you use will be OpenAI-compatible, in which case you can use the "openai" provider and change the "baseUrl" to point to your server.
If neither of these is the case, you will need to wire up a new LLM object.
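For the OpenAI-compatible case, a config might look like the following sketch. The model name and URL are placeholders for your own deployment, and the exact key for the server URL (`apiBase` here) may differ between Continue versions, so check the provider reference for your release:

```yaml title="config.yaml"
models:
  - name: My Self-Hosted Model
    provider: openai
    model: llama2-7b
    apiBase: http://localhost:8000/v1
```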
Basic authentication can be done with any provider using the `apiKey` field:
```yaml title="config.yaml"
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    apiKey: <YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>
```
```json title="config.json"
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "apiKey": "<YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>"
    }
  ]
}
```
This translates to the header `"Authorization": "Bearer xxx"`.
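In other words, the `apiKey` value is prefixed with `Bearer` and sent as a standard Authorization header. The snippet below is a minimal Python sketch of that translation, purely illustrative; Continue performs this step internally:

```python
# Sketch: how an apiKey value becomes an Authorization header.
# (Illustrative only -- Continue builds this header for you.)
api_key = "<YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>"
headers = {"Authorization": f"Bearer {api_key}"}
print(headers["Authorization"])  # → Bearer <YOUR_CUSTOM_OLLAMA_SERVER_API_KEY>
```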
If you need to send custom headers for authentication, you may use the `requestOptions.headers` property, as in this example with Ollama:
```yaml title="config.yaml"
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    requestOptions:
      headers:
        X-Auth-Token: xxx
```
```json title="config.json"
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "requestOptions": { "headers": { "X-Auth-Token": "xxx" } }
    }
  ]
}
```
Similarly, if your model requires a certificate for authentication, you may use the `requestOptions.clientCertificate` property, as in the example below:
```yaml title="config.yaml"
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    requestOptions:
      clientCertificate:
        cert: C:\tempollama.pem
        key: C:\tempollama.key
        passphrase: c0nt!nu3
```
```json title="config.json"
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2-7b",
      "requestOptions": {
        "clientCertificate": {
          "cert": "C:\\tempollama.pem",
          "key": "C:\\tempollama.key",
          "passphrase": "c0nt!nu3"
        }
      }
    }
  ]
}
```
If your endpoint uses a private or corporate CA but does not require mutual TLS, configure `requestOptions.caBundlePath` instead. For common errors like `unable to verify the first certificate` or `CERT_UNTRUSTED`, see Configure Certificates and SSL certificate errors.
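A `caBundlePath` configuration might look like the following sketch; the bundle path is a placeholder for the location of your own CA certificate file:

```yaml title="config.yaml"
models:
  - name: Ollama
    provider: ollama
    model: llama2-7b
    requestOptions:
      caBundlePath: /path/to/ca-bundle.crt
```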