docs/0-START-HERE/quick-start-local.md
Get Open Notebook running with 100% local AI using Ollama. No cloud API keys needed, completely private.
Prerequisites:

- Docker Desktop installed
- A local LLM runtime - choose one setup:
  - Same machine: everything runs on your computer. Recommended for testing/learning.
  - Different machines: run the LLM on one computer and access it from another. Needs network configuration.
Create a new folder `open-notebook-local` and add this file:

`docker-compose.yml`:
```yaml
services:
  surrealdb:
    image: surrealdb/surrealdb:v2
    command: start --user root --pass password --bind 0.0.0.0:8000 rocksdb:/mydata/mydatabase.db
    user: root
    ports:
      - "8000:8000"
    volumes:
      - ./surreal_data:/mydata

  open_notebook:
    image: lfnovo/open_notebook:v1-latest
    pull_policy: always
    ports:
      - "8502:8502" # Web UI (React frontend)
      - "5055:5055" # API (required!)
    environment:
      # Encryption key for credential storage (required)
      - OPEN_NOTEBOOK_ENCRYPTION_KEY=change-me-to-a-secret-string
      # Database (required)
      - SURREAL_URL=ws://surrealdb:8000/rpc
      - SURREAL_USER=root
      - SURREAL_PASSWORD=password
      - SURREAL_NAMESPACE=open_notebook
      - SURREAL_DATABASE=open_notebook
    volumes:
      - ./notebook_data:/app/data
    depends_on:
      - surrealdb
    restart: always

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama_models:/root/.ollama
    restart: always
    # Optional: enable GPU support if available
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
```
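Before starting anything, you can ask Compose to parse the file; `docker compose config` is a standard Compose command that prints the resolved configuration and fails loudly on YAML or schema mistakes:

```bash
# Validates and prints the resolved compose configuration
docker compose config
```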
Edit the file: replace `change-me-to-a-secret-string` with your own secret (any string works).

Then open a terminal in your `open-notebook-local` folder and start everything:

```bash
docker compose up -d
```
Wait 10-15 seconds for all services to start.
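A quick sanity check that all three services came up; `docker compose ps` and Ollama's `/api/tags` endpoint (which lists installed models) are both standard:

```bash
# All three services should show a "running" status
docker compose ps

# Ollama replies with a JSON list of installed models (empty for now)
curl http://localhost:11434/api/tags
```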
Ollama needs at least one language model. Pick one:
```bash
# Fastest & smallest (recommended for testing)
docker exec open-notebook-local-ollama-1 ollama pull mistral

# OR: Better quality but slower
docker exec open-notebook-local-ollama-1 ollama pull neural-chat

# OR: Even better quality, more VRAM needed
docker exec open-notebook-local-ollama-1 ollama pull llama2
```
This downloads the model (1-5 minutes, depending on your connection speed).
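To confirm the model actually loads and answers, you can send a one-off prompt to Ollama's `/api/generate` endpoint (swap `mistral` for whichever model you pulled):

```bash
# One-shot, non-streaming prompt; the first call is slow while the model loads
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Say hello in one sentence.", "stream": false}'
```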
Open your browser:
http://localhost:8502
You should see the Open Notebook interface.
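The API on port 5055 must be reachable too (the compose file marks it as required). A blunt connectivity check; any HTTP response at all means the service is listening:

```bash
# -i prints the response headers; a connection error means the API isn't up
curl -i http://localhost:5055
```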
Configure your models in the Open Notebook settings:

- Ollama provider URL: `http://ollama:11434`
- Language model: `ollama/mistral` (or whichever model you downloaded)
- Embedding model: `ollama/nomic-embed-text` (auto-downloads if missing)
- Web UI: `http://localhost:8502`

All checked? You have a completely private, offline research assistant!
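Tip: if you'd rather not wait for the embedding model's auto-download on first use, pull it up front:

```bash
docker exec open-notebook-local-ollama-1 ollama pull nomic-embed-text
```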
Trade-off: Slower than cloud models (depends on your CPU/GPU)
The Ollama container name might be different on your system:

```bash
docker ps  # Find the Ollama container name
docker exec <container_name> ollama pull mistral
```
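If you have many containers running, you can narrow the listing to the Ollama image; `--filter` and `--format` are standard `docker ps` flags:

```bash
# Prints only the names of containers created from the ollama/ollama image
docker ps --filter "ancestor=ollama/ollama" --format "{{.Names}}"
```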
Check internet connection and restart:

```bash
docker compose restart ollama
```

Then retry the model pull command.
Still having trouble? Restart the whole stack:

```bash
docker compose down
docker compose up -d
```
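If a clean restart doesn't fix it, the logs usually say why; the service names below match the compose file above:

```bash
# Tail the most recent log lines per service
docker compose logs --tail 50 open_notebook
docker compose logs --tail 50 surrealdb
docker compose logs --tail 50 ollama
```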
Check whether Ollama is using the GPU:

```bash
# Show loaded models and whether they run on GPU or CPU
docker exec open-notebook-local-ollama-1 ollama ps
```

To enable GPU support, uncomment the `deploy` block in `docker-compose.yml`, then recreate the container (a plain restart won't pick up compose file changes):

```bash
docker compose up -d ollama
```
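Assuming an NVIDIA GPU with the NVIDIA Container Toolkit installed on the host, you can confirm the container actually sees the card; if this errors out, the GPU isn't being passed through:

```bash
# Runs nvidia-smi inside the Ollama container; lists the GPU if pass-through works
docker exec open-notebook-local-ollama-1 nvidia-smi
```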
To manage models later:

```bash
# List installed models
docker exec open-notebook-local-ollama-1 ollama list

# Pull an additional model
docker exec open-notebook-local-ollama-1 ollama pull neural-chat
```
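Models live under `./ollama_models` and can take several gigabytes each; `ollama rm` reclaims the space when you no longer need one:

```bash
# Remove a model you no longer use
docker exec open-notebook-local-ollama-1 ollama rm llama2
```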
Now that it's running, explore the rest of the docs for next steps.
Prefer a GUI? LM Studio is easier for non-technical users:

- Base URL: `http://host.docker.internal:1234/v1`
- Model name: `lm-studio` (placeholder)

Note: LM Studio runs outside Docker, so use `host.docker.internal` to connect.
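To verify the LM Studio server is reachable before pointing Open Notebook at it, hit its OpenAI-compatible models endpoint (port 1234 is LM Studio's default; adjust if you changed it):

```bash
# From the host machine; returns the models LM Studio currently serves
curl http://localhost:1234/v1/models
```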
To add more Ollama models, run `ollama pull <model>`, then re-discover models from the credential.

| Model | Speed | Quality | VRAM | Best For |
|---|---|---|---|---|
| mistral | Fast | Good | 4GB | Testing, general use |
| neural-chat | Medium | Better | 6GB | Balanced, recommended |
| llama2 | Slow | Best | 8GB+ | Complex reasoning |
| phi | Very Fast | Fair | 2GB | Minimal hardware |
Need Help? Join our Discord community - many users run local setups!