<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: <MODEL_NAME>
    provider: openai
    model: <MODEL_ID>
    apiKey: <YOUR_OPENAI_API_KEY>
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
"models": [
{
"title": "<MODEL_NAME>",
"provider": "openai",
"model": "<MODEL_ID>",
"apiKey": "<YOUR_OPENAI_API_KEY>"
}
]
}
```
</Tab>
If you are using an OpenAI API compatible provider, you can change the `apiBase` like this:
<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: <OPENAI_API_COMPATIBLE_PROVIDER_MODEL>
    provider: openai
    model: <MODEL_NAME>
    apiBase: http://localhost:8000/v1
    apiKey: <YOUR_CUSTOM_API_KEY>
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
"models": [
{
"title": "<OPENAI_API_COMPATIBLE_PROVIDER_MODEL>",
"provider": "openai",
"model": "<MODEL_NAME>",
"apiKey": "<YOUR_CUSTOM_API_KEY>",
"apiBase": "http://localhost:8000/v1"
}
]
}
```
</Tab>
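Under the hood, a request to an OpenAI-compatible server is a plain HTTP POST. The sketch below is illustrative only (Continue builds these requests for you); the header and payload field names follow the standard OpenAI API, and the placeholder values mirror the config above:

```python
import json

# Illustrative sketch of the JSON body sent to an OpenAI-compatible
# chat endpoint at <apiBase>/chat/completions.
payload = {
    "model": "<MODEL_NAME>",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)

# The apiKey from the config is passed as a standard Bearer token.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_CUSTOM_API_KEY>",
}
```

Any server that accepts this request shape at `/v1/chat/completions` should work with the `openai` provider.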
To force usage of the `completions` endpoint instead of `chat/completions`, you can set:
<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: <OPENAI_API_COMPATIBLE_PROVIDER_MODEL>
    provider: openai
    model: <MODEL_NAME>
    apiBase: http://localhost:8000/v1
    useLegacyCompletionsEndpoint: true
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
"models": [
{
"title": "<OPENAI_API_COMPATIBLE_PROVIDER_MODEL>",
"provider": "openai",
"model": "<MODEL_NAME>",
"apiBase": "http://localhost:8000/v1",
"useLegacyCompletionsEndpoint": true
}
]
}
```
</Tab>
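The only difference between the two modes is which path gets appended to the `apiBase`. A minimal sketch (illustrative only; `api_base` is the example value from the config above):

```python
# How apiBase maps to the two endpoints. With
# useLegacyCompletionsEndpoint: true, the older /completions
# path is used instead of the default /chat/completions.
api_base = "http://localhost:8000/v1"

chat_url = f"{api_base}/chat/completions"  # default
legacy_url = f"{api_base}/completions"     # useLegacyCompletionsEndpoint: true

print(chat_url)    # → http://localhost:8000/v1/chat/completions
print(legacy_url)  # → http://localhost:8000/v1/completions
```

Use the legacy endpoint only for servers or models that do not support the chat format.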