Quick Start - Local & Private (5 minutes)

Get Open Notebook running with 100% local AI using Ollama. No cloud API keys needed, and your data never leaves your machine.

Prerequisites

  1. Docker Desktop installed

  2. Local LLM runtime - choose one:
    • Ollama (covered in this guide; runs as a Docker service alongside Open Notebook)
    • LM Studio (GUI alternative; see the section at the end of this guide)
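
To confirm Docker is ready before continuing, check the versions from a terminal:

bash
# Both should print a version; `docker compose` requires Compose v2
docker --version
docker compose version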

Step 1: Choose Your Setup (1 min)

Local Machine (Same Computer)

Everything runs on your machine. Recommended for testing/learning.

Remote Server (Raspberry Pi, NAS, Cloud VM)

Run Open Notebook on one machine and access it from another (e.g., browse to http://<server-ip>:8502 instead of localhost). Requires network configuration, with ports 8502 and 5055 reachable from your clients.


Step 2: Create Configuration (1 min)

Create a new folder named open-notebook-local and add this file (Compose uses the folder name as the container name prefix in the commands below):

docker-compose.yml:

yaml
services:
  surrealdb:
    image: surrealdb/surrealdb:v2
    command: start --user root --pass password --bind 0.0.0.0:8000 rocksdb:/mydata/mydatabase.db
    user: root
    ports:
      - "8000:8000"
    volumes:
      - ./surreal_data:/mydata

  open_notebook:
    image: lfnovo/open_notebook:v1-latest
    pull_policy: always
    ports:
      - "8502:8502"  # Web UI (React frontend)
      - "5055:5055"  # API (required!)
    environment:
      # Encryption key for credential storage (required)
      - OPEN_NOTEBOOK_ENCRYPTION_KEY=change-me-to-a-secret-string

      # Database (required)
      - SURREAL_URL=ws://surrealdb:8000/rpc
      - SURREAL_USER=root
      - SURREAL_PASSWORD=password
      - SURREAL_NAMESPACE=open_notebook
      - SURREAL_DATABASE=open_notebook
    volumes:
      - ./notebook_data:/app/data
    depends_on:
      - surrealdb
    restart: always

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama_models:/root/.ollama
    restart: always
    # Optional: enable NVIDIA GPU support if available (uncomment below)
    #deploy:
    #  resources:
    #    reservations:
    #      devices:
    #        - driver: nvidia
    #          count: 1
    #          capabilities: [gpu]

Edit the file:

  • Replace change-me-to-a-secret-string with your own secret (any string works)
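
If you want a strong random value for the key, one way to generate it (assuming openssl is installed):

bash
# Prints a 32-byte random hex string suitable for the encryption key
openssl rand -hex 32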

Step 3: Start Services (1 min)

Open terminal in your open-notebook-local folder:

bash
docker compose up -d

Wait 10-15 seconds for all services to start.
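
To verify that all three services came up, check their status and, if needed, tail the app's logs:

bash
# All three services should show a running/up state
docker compose ps

# Follow Open Notebook's startup logs (Ctrl+C to stop)
docker compose logs -f open_notebook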


Step 4: Download a Model (2-3 min)

Ollama needs at least one language model. Pick one:

bash
# Fast and relatively small (recommended for testing)
docker exec open-notebook-local-ollama-1 ollama pull mistral

# OR: Better quality but slower
docker exec open-notebook-local-ollama-1 ollama pull neural-chat

# OR: Even better quality, more VRAM needed
docker exec open-notebook-local-ollama-1 ollama pull llama2

This downloads the model, which is several GB; expect 1-5 minutes on a fast connection, longer on a slow one.
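
To confirm the download succeeded, you can ask Ollama's HTTP API which models are installed:

bash
# Lists installed models as JSON
curl http://localhost:11434/api/tags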


Step 5: Access Open Notebook (instant)

Open your browser:

http://localhost:8502

You should see the Open Notebook interface.
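
If the page doesn't load, check from the terminal that the web UI port is answering:

bash
# Should return an HTTP response header if the UI container is up
curl -I http://localhost:8502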


Step 6: Configure Ollama Provider (1 min)

  1. Go to Settings → API Keys
  2. Click Add Credential
  3. Select provider: Ollama
  4. Give it a name (e.g., "Local Ollama")
  5. Enter the base URL: http://ollama:11434
  6. Click Save
  7. Click Test Connection — should show success
  8. Click Discover Models → Register Models
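
The base URL uses the Compose service name (ollama) because the containers share a network; localhost would not resolve to Ollama from inside the open_notebook container. To sanity-check that Ollama itself is responding, hit its version endpoint from the host:

bash
# Ollama's HTTP API reports its version if the server is up
curl http://localhost:11434/api/version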

Step 7: Configure Local Model (1 min)

  1. Go to Settings → Models
  2. Set:
    • Language Model: ollama/mistral (or whichever model you downloaded)
    • Embedding Model: ollama/nomic-embed-text (auto-downloads if missing)
  3. Click Save
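
If the embedding model doesn't auto-download, you can pull it manually the same way as the language models (the container name assumes the open-notebook-local folder from Step 2):

bash
docker exec open-notebook-local-ollama-1 ollama pull nomic-embed-text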

Step 8: Create Your First Notebook (1 min)

  1. Click New Notebook
  2. Name: "My Private Research"
  3. Click Create

Step 9: Add Local Content (1 min)

  1. Click Add Source
  2. Choose Text
  3. Paste some text or a local document
  4. Click Add

Step 10: Chat With Your Content (1 min)

  1. Go to Chat
  2. Type: "What did you learn from this?"
  3. Click Send
  4. Watch as the local Ollama model responds!

Verification Checklist

  • Docker is running
  • You can access http://localhost:8502
  • Ollama credential is configured and tested
  • Models are registered
  • You created a notebook
  • Chat works with local model

All checked? You have a completely private, offline research assistant!


Advantages of Local Setup

  • No API costs - Free forever
  • No internet required - True offline capability
  • Privacy first - Your data never leaves your machine
  • No subscriptions - No monthly bills

Trade-off: Responses are slower than with cloud models (speed depends on your CPU/GPU)


Troubleshooting

"ollama: command not found"

Docker image name might be different:

bash
docker ps  # Find the Ollama container name
docker exec <container_name> ollama pull mistral

Model Download Stuck

Check internet connection and restart:

bash
docker compose restart ollama

Then retry the model pull command.

"Address already in use" Error

bash
docker compose down
docker compose up -d
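
If the error persists, something else on the host is holding one of the ports; on macOS/Linux you can find it with:

bash
# Shows the process bound to port 8502 (repeat for 5055, 8000, 11434)
lsof -i :8502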

Low Performance

Check whether Ollama is actually using the GPU:

bash
# Lists currently loaded models; the PROCESSOR column shows GPU vs. CPU
docker exec open-notebook-local-ollama-1 ollama ps

# If it reports CPU, uncomment the GPU block in docker-compose.yml

Then restart: docker compose restart ollama
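
Before enabling the GPU block, confirm the host actually sees an NVIDIA GPU (the NVIDIA Container Toolkit must also be installed for Docker to pass it through):

bash
# On the host: lists GPUs if the NVIDIA driver is working
nvidia-smi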

Adding More Models

bash
# List available models
docker exec open-notebook-local-ollama-1 ollama list

# Pull additional model
docker exec open-notebook-local-ollama-1 ollama pull neural-chat

Next Steps

Now that it's running:

  1. Add Your Own Content: PDFs, documents, articles (see 3-USER-GUIDE)
  2. Explore Features: Podcasts, transformations, search
  3. Full Documentation: See all features
  4. Scale Up: Deploy to a server with better hardware for faster responses
  5. Benchmark Models: Try different models to find the speed/quality tradeoff you prefer

Alternative: Using LM Studio Instead of Ollama

Prefer a GUI? LM Studio is easier for non-technical users:

  1. Download LM Studio: https://lmstudio.ai
  2. Open the app, download a model from the library
  3. Go to the "Local Server" tab and start the server (default port 1234)
  4. In Open Notebook, go to Settings → API Keys
  5. Click Add Credential → Select OpenAI-Compatible
  6. Enter base URL: http://host.docker.internal:1234/v1
  7. Enter API key: lm-studio (placeholder)
  8. Click Save, then Test Connection
  9. Configure in Settings → Models → Select your LM Studio model

Note: LM Studio runs outside Docker, so the containers reach it via host.docker.internal rather than localhost.
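
On Linux, host.docker.internal is not defined by default; mapping it to the host gateway in docker-compose.yml (supported since Docker 20.10) is a common workaround:

yaml
  open_notebook:
    extra_hosts:
      - "host.docker.internal:host-gateway"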


Going Further

  • Switch models: Change in Settings → Models anytime
  • Add more models:
    • Ollama: Run ollama pull <model>, then re-discover models from the credential
    • LM Studio: Download from the app library
  • Deploy to server: Same docker-compose.yml works anywhere
  • Use cloud hybrid: Keep some local models, add cloud provider credentials for complex tasks

Common Model Choices

Model       | Speed     | Quality | VRAM  | Best For
------------|-----------|---------|-------|----------------------
mistral     | Fast      | Good    | 4 GB  | Testing, general use
neural-chat | Medium    | Better  | 6 GB  | Balanced, recommended
llama2      | Slow      | Best    | 8 GB+ | Complex reasoning
phi         | Very Fast | Fair    | 2 GB  | Minimal hardware

Need Help? Join our Discord community - many users run local setups!