docs/5-CONFIGURATION/openai-compatible.md
Use any server that implements the OpenAI API format with Open Notebook. This includes LM Studio, Text Generation WebUI, vLLM, and many others.
Many AI tools implement the same API format as OpenAI:
```
POST /v1/chat/completions
POST /v1/embeddings
POST /v1/audio/speech
```
Open Notebook can connect to any server using this format.
| Server | Use Case | URL |
|---|---|---|
| LM Studio | Desktop GUI for local models | https://lmstudio.ai |
| Text Generation WebUI | Full-featured local inference | https://github.com/oobabooga/text-generation-webui |
| vLLM | High-performance serving | https://github.com/vllm-project/vllm |
| Ollama | Simple local models | (Use native Ollama provider instead) |
| LocalAI | Local AI inference | https://github.com/mudler/LocalAI |
| llama.cpp server | Lightweight inference | https://github.com/ggerganov/llama.cpp |
For LM Studio, add a credential in Settings → API Keys with:

- Base URL: `http://host.docker.internal:1234/v1` (Docker) or `http://localhost:1234/v1` (local)
- API Key: `lm-studio` (placeholder, LM Studio doesn't require one)

Legacy (deprecated) environment variables:
```bash
export OPENAI_COMPATIBLE_BASE_URL=http://localhost:1234/v1
export OPENAI_COMPATIBLE_API_KEY=not-needed
```
The recommended way to configure OpenAI-compatible providers is through the Settings UI: add a credential with provider type `openai_compatible` and a descriptive name such as "LM Studio - Llama 3".
Deprecated: use the Settings UI instead of the environment variables below.
```bash
OPENAI_COMPATIBLE_BASE_URL=http://localhost:1234/v1
OPENAI_COMPATIBLE_API_KEY=optional-api-key
OPENAI_COMPATIBLE_BASE_URL_EMBEDDING=http://localhost:1234/v1
OPENAI_COMPATIBLE_API_KEY_EMBEDDING=optional-api-key
OPENAI_COMPATIBLE_BASE_URL_TTS=http://localhost:8969/v1
OPENAI_COMPATIBLE_API_KEY_TTS=optional-api-key
OPENAI_COMPATIBLE_BASE_URL_STT=http://localhost:9000/v1
OPENAI_COMPATIBLE_API_KEY_STT=optional-api-key
```
When Open Notebook runs in Docker and your compatible server runs on the host, use the appropriate base URL when adding your credential in Settings → API Keys:
- Base URL (Docker Desktop on Mac/Windows): `http://host.docker.internal:1234/v1`
- Option 1 (Docker bridge IP, common on Linux): `http://172.17.0.1:1234/v1`
- Option 2 (host networking): start the container with `docker run --network host ...`, then use `http://localhost:1234/v1`
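On Linux you can also make `host.docker.internal` resolve from inside the container using Docker's `host-gateway` mapping (available since Docker 20.10). A minimal sketch, using the image referenced elsewhere in this guide:

```bash
# Map host.docker.internal to the Docker host, then use http://host.docker.internal:1234/v1
docker run --add-host=host.docker.internal:host-gateway lfnovo/open_notebook:v1-latest
```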
If your server also runs in Docker, put both services on the same Compose network:

```yaml
# docker-compose.yml
services:
  open-notebook:
    # ...
  lm-studio:
    # your LM Studio container
    ports:
      - "1234:1234"
```
Base URL in Settings → API Keys: http://lm-studio:1234/v1
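To verify connectivity over the Compose network (assuming curl is available in the container, as the troubleshooting commands below also do):

```bash
docker compose exec open-notebook curl http://lm-studio:1234/v1/models
```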
For Text Generation WebUI, start the server with the API enabled:

```bash
python server.py --api --listen
```
In Settings → API Keys, add an OpenAI-Compatible credential with base URL: http://localhost:5000/v1
```yaml
# Add to your docker-compose.yml (requires surrealdb service, see installation guide)
services:
  text-gen:
    image: atinoda/text-generation-webui:default
    ports:
      - "5000:5000"
      - "7860:7860"
    volumes:
      - ./models:/app/models
    command: --api --listen
  open-notebook:
    image: lfnovo/open_notebook:v1-latest
    pull_policy: always
    depends_on:
      - text-gen
```
Then in Settings → API Keys, add an OpenAI-Compatible credential with base URL: http://text-gen:5000/v1
For vLLM, launch its OpenAI-compatible API server:

```bash
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --port 8000
```
In Settings → API Keys, add an OpenAI-Compatible credential with base URL: http://localhost:8000/v1
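To confirm the server is up and see the exact model ID it registered, query the models endpoint:

```bash
curl http://localhost:8000/v1/models
```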
```yaml
# Add to your docker-compose.yml (requires surrealdb service, see installation guide)
services:
  vllm:
    image: vllm/vllm-openai:latest
    command: --model meta-llama/Llama-3.1-8B-Instruct
    ports:
      - "8000:8000"
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  open-notebook:
    image: lfnovo/open_notebook:v1-latest
    pull_policy: always
    depends_on:
      - vllm
```
Then in Settings → API Keys, add an OpenAI-Compatible credential with base URL: http://vllm:8000/v1
When adding models in Open Notebook, set the provider to `openai_compatible`. The model name must match what your server expects:
| Server | Model Name Format |
|---|---|
| LM Studio | As shown in LM Studio UI |
| vLLM | HuggingFace model path |
| Text Gen WebUI | As loaded in UI |
| llama.cpp | Model file name |
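To see the exact names your server advertises, query the standard models endpoint; for example, this prints just the model IDs (assuming `jq` is installed):

```bash
curl -s http://localhost:1234/v1/models | jq -r '.data[].id'
```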
```bash
# Test chat completions
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
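If you use the server for embeddings too, the same pattern works against the embeddings endpoint (the model name here is a placeholder):

```bash
# Test embeddings
curl http://localhost:1234/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "your-embedding-model", "input": "Hello"}'
```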
From inside the Open Notebook container, verify the server is reachable:

```bash
docker exec -it open-notebook curl http://host.docker.internal:1234/v1/models
```
Problem: Cannot connect to server
Solutions:
1. Verify server is running
2. Check port is correct
3. Test with curl directly
4. Check Docker networking (use host.docker.internal)
5. Verify firewall allows connection
Problem: Server returns "model not found"
Solutions:
1. Check model is loaded in server
2. Verify exact model name spelling
3. List available models: curl http://localhost:1234/v1/models
4. Update model name in Open Notebook
Problem: Requests take very long
Solutions:
1. Check server resources (RAM, GPU)
2. Use smaller/quantized model
3. Reduce context length
4. Enable GPU acceleration if available
Problem: 401 or authentication failed
Solutions:
1. Check if server requires API key
2. Set the API key in your credential (Settings → API Keys)
3. Some servers need any non-empty key (use a placeholder like "not-needed")
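OpenAI-style servers read the key from the `Authorization` header, so you can reproduce what Open Notebook sends with curl (placeholder key shown):

```bash
curl http://localhost:1234/v1/models \
  -H "Authorization: Bearer not-needed"
```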
Problem: Request times out
Solutions:
1. Model may be loading (first request slow)
2. Increase timeout settings
3. Check server logs for errors
4. Reduce request size
You can use different compatible servers for different purposes. When adding an OpenAI-Compatible credential in Settings → API Keys, you can configure per-service URLs:
- Language models: `http://localhost:1234/v1` (LM Studio)
- Embeddings: `http://localhost:8080/v1` (different server)
- Text-to-speech: `http://localhost:8969/v1` (Speaches)
- Speech-to-text: `http://localhost:9000/v1` (Speaches)

Alternatively, add each as a separate credential with its own base URL.
| Model Size | RAM Needed | Speed |
|---|---|---|
| 7B | 8GB | Fast |
| 13B | 16GB | Medium |
| 70B | 64GB+ | Slow |
Use quantized models (Q4, Q5) for faster inference with less RAM:
```
llama-3-8b-q4_k_m.gguf → ~4GB RAM, fast
llama-3-8b-f16.gguf    → ~16GB RAM, slower
```
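If you build llama.cpp yourself, its quantize tool can produce these files from an f16 model. A sketch, with the caveat that the binary name varies across llama.cpp versions and the file names are illustrative:

```bash
# Convert an f16 GGUF to Q4_K_M: smaller and faster, with a small quality trade-off
./llama-quantize llama-3-8b-f16.gguf llama-3-8b-q4_k_m.gguf Q4_K_M
```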
Enable GPU acceleration in your server for much faster inference. With llama.cpp, for example, layers can be offloaded to the GPU:

```bash
# Offload 35 layers to the GPU (model path is illustrative; the binary is ./server in older llama.cpp builds)
./llama-server -m your-model.gguf --n-gpu-layers 35
```

Native providers and OpenAI-compatible servers compare as follows:

| Aspect | Native Provider | OpenAI Compatible |
|---|---|---|
| Setup | API key only | Server + configuration |
| Models | Provider's models | Any compatible model |
| Cost | Pay per token | Free (local) |
| Speed | Usually fast | Depends on hardware |
| Features | Full support | Basic features |
Use OpenAI-compatible when: