# Advanced Configuration

Performance tuning, debugging, and advanced features.
```bash
# Max concurrent database operations (default: 5)
# Increase: faster processing, more conflicts
# Decrease: slower processing, fewer conflicts
SURREAL_COMMANDS_MAX_TASKS=5
```

Guideline: higher concurrency means more throughput but also more database conflicts (the retry logic handles these).
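The effect of a task cap can be illustrated outside the app. The sketch below uses `xargs -P` (not the application's own scheduler) to run five jobs with at most two in flight at a time, the same way `SURREAL_COMMANDS_MAX_TASKS` caps concurrent database operations:

```shell
# Illustration only: run at most 2 jobs concurrently
printf '%s\n' job1 job2 job3 job4 job5 | xargs -P 2 -n 1 echo processing
```

With `-P 2`, the third job starts only after one of the first two finishes; raising the number trades stability for throughput, just like the setting above.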
```bash
# How to wait between retries
SURREAL_COMMANDS_RETRY_WAIT_STRATEGY=exponential_jitter

# Options:
# - exponential_jitter (recommended)
# - exponential
# - fixed
# - random
```

For high-concurrency deployments, use `exponential_jitter` to prevent a thundering herd of simultaneous retries.
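As a sketch of why jitter helps: the loop below doubles a backoff cap each attempt and draws a random delay under that cap, so concurrent clients do not retry in lockstep. The `min`/`max` values stand in for `SURREAL_COMMANDS_RETRY_WAIT_MIN`/`SURREAL_COMMANDS_RETRY_WAIT_MAX`; the library's exact formula may differ.

```shell
# Illustrative exponential backoff with jitter (not the library's exact algorithm)
min=1; max=30
for attempt in 1 2 3 4 5; do
  cap=$(( min << (attempt - 1) ))                  # cap doubles: 1, 2, 4, 8, 16 seconds
  [ "$cap" -gt "$max" ] && cap=$max                # never exceed the max wait
  rand=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ') # portable random number
  delay=$(( rand % cap + 1 ))                      # jitter: pick 1..cap seconds
  echo "attempt $attempt: cap=${cap}s delay=${delay}s"
done
```

Plain `exponential` uses the cap itself as the delay, so every client that failed at the same moment retries at the same moment; the random draw spreads them out.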
```bash
# Client timeout (default: 300 seconds)
API_CLIENT_TIMEOUT=300

# LLM timeout (default: 60 seconds)
ESPERANTO_LLM_TIMEOUT=60
```

Guideline: set `API_CLIENT_TIMEOUT` greater than `ESPERANTO_LLM_TIMEOUT` plus a buffer, so the client never gives up before the LLM call has had a chance to time out cleanly.

Example:

```bash
ESPERANTO_LLM_TIMEOUT=120
API_CLIENT_TIMEOUT=180  # 120 + 60-second buffer
```
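The rule can be checked mechanically. A minimal sketch, using the example values and the 60-second buffer from above:

```shell
# Verify the client timeout leaves headroom above the LLM timeout
ESPERANTO_LLM_TIMEOUT=120
API_CLIENT_TIMEOUT=180
BUFFER=60   # example buffer from above
if [ "$API_CLIENT_TIMEOUT" -ge $(( ESPERANTO_LLM_TIMEOUT + BUFFER )) ]; then
  echo "OK: client timeout covers LLM timeout plus buffer"
else
  echo "WARNING: increase API_CLIENT_TIMEOUT"
fi
```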
For podcast generation, control concurrent TTS requests:

```bash
# Default: 5
TTS_BATCH_SIZE=2
```

Recommendations vary by provider: lower values are slower but more stable; higher values are faster but put more load on the provider.
```bash
# Start with debug logging
RUST_LOG=debug    # For Rust components
LOGLEVEL=DEBUG    # For Python components

# Only SurrealDB operations
RUST_LOG=surrealdb=debug

# Only LangChain
LOGLEVEL=langchain:debug

# Only a specific module
RUST_LOG=open_notebook::database=debug
```
For debugging LLM workflows, enable LangSmith tracing:

```bash
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY=your-key
LANGCHAIN_PROJECT="Open Notebook"
```

Then visit https://smith.langchain.com to see traces.
Default ports:

- Frontend: 8502 (Docker deployment) or 3000 (development from source)
- API: 5055
- SurrealDB: 8000
To change the frontend port, edit `docker-compose.yml`:

```yaml
services:
  open-notebook:
    ports:
      - "8001:8502"  # Change host port from 8502 to 8001
```

Access at http://localhost:8001; the API is still auto-detected at http://localhost:5055.
To change the API port:

```yaml
services:
  open-notebook:
    ports:
      - "127.0.0.1:8502:8502"  # Frontend
      - "5056:5055"            # Change API host port from 5055 to 5056
    environment:
      - API_URL=http://localhost:5056  # Update API_URL to match
```

Access the API directly at http://localhost:5056/docs.

Note: when changing the API port, you must set `API_URL` explicitly, since auto-detection assumes port 5055.
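The note can be sketched as shell logic (illustrative only, not the app's actual detection code): without an explicit `API_URL`, the port-5055 default wins, so it must be overridden after the remap.

```shell
unset API_URL                                  # simulate no explicit setting
API_URL=${API_URL:-"http://localhost:5055"}    # auto-detection default (illustrative)
echo "$API_URL"                                # still points at 5055: stale after the remap

API_URL="http://localhost:5056"                # explicit override to the new port
echo "$API_URL"
```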
To change the SurrealDB port:

```yaml
services:
  surrealdb:
    ports:
      - "8001:8000"  # Change the host port from 8000 to 8001
```

Important: the internal Docker network uses the container name (`surrealdb`) and the container port, not `localhost`. Containers therefore keep connecting to `ws://surrealdb:8000/rpc`; only connections from the host switch to port 8001.
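To make the distinction concrete: a `ports:` mapping splits into a host half and a container half, and in-network clients keep using the container half.

```shell
mapping="8001:8000"
host_port=${mapping%%:*}        # 8001: used from your machine (localhost:8001)
container_port=${mapping##*:}   # 8000: used inside the Docker network (surrealdb:8000)
echo "host=$host_port container=$container_port"
```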
For self-signed certificates on local providers, point Esperanto at your CA bundle:

```bash
ESPERANTO_SSL_CA_BUNDLE=/path/to/ca-bundle.pem
```

To disable verification entirely:

```bash
# WARNING: Only for testing/development
# Vulnerable to MITM attacks
ESPERANTO_SSL_VERIFY=false
```
Configure multiple AI providers via Settings → API Keys; each provider gets its own credential. When using OpenAI-Compatible providers, you can configure per-service URLs in a single credential.
```bash
# Don't use defaults in production
SURREAL_USER=your_secure_username
SURREAL_PASSWORD=$(openssl rand -base64 32)  # Generate a secure password

# Protect your Open Notebook instance
OPEN_NOTEBOOK_PASSWORD=your_secure_password

# Always use HTTPS in production
API_URL=https://mynotebook.example.com
```
Restrict network access to your Open Notebook instance.
Open Notebook uses multiple services for content extraction:
For advanced web scraping, add a Firecrawl key:

```bash
FIRECRAWL_API_KEY=your-key
```

Get a key from https://firecrawl.dev/.

For alternative web extraction, add a Jina key:

```bash
JINA_API_KEY=your-key
```

Get a key from https://jina.ai/.
```bash
OPEN_NOTEBOOK_ENCRYPTION_KEY  # Required for storing credentials
```

AI provider API keys are configured via Settings → API Keys, not environment variables.
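A key can be generated the same way as the database password above. This is a sketch; confirm any length or format requirements for `OPEN_NOTEBOOK_ENCRYPTION_KEY` in the project's setup docs before relying on it.

```shell
# Generate 32 random bytes, base64-encoded (44 characters)
OPEN_NOTEBOOK_ENCRYPTION_KEY=$(openssl rand -base64 32)
echo "${#OPEN_NOTEBOOK_ENCRYPTION_KEY}"
```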
Database and background commands:

- `SURREAL_URL`
- `SURREAL_USER`
- `SURREAL_PASSWORD`
- `SURREAL_NAMESPACE`
- `SURREAL_DATABASE`
- `SURREAL_COMMANDS_MAX_TASKS`
- `SURREAL_COMMANDS_RETRY_ENABLED`
- `SURREAL_COMMANDS_RETRY_MAX_ATTEMPTS`
- `SURREAL_COMMANDS_RETRY_WAIT_STRATEGY`
- `SURREAL_COMMANDS_RETRY_WAIT_MIN`
- `SURREAL_COMMANDS_RETRY_WAIT_MAX`

API and performance:

- `API_URL`
- `INTERNAL_API_URL`
- `API_CLIENT_TIMEOUT`
- `ESPERANTO_LLM_TIMEOUT`
- `TTS_BATCH_SIZE`

Note: `ELEVENLABS_API_KEY` is deprecated. Configure ElevenLabs via Settings → API Keys.

Tracing (LangSmith):

- `LANGCHAIN_TRACING_V2`
- `LANGCHAIN_ENDPOINT`
- `LANGCHAIN_API_KEY`
- `LANGCHAIN_PROJECT`
```bash
# Test API health
curl http://localhost:5055/health

# Test with a sample request (requires a configured credential and registered models)
curl -X POST http://localhost:5055/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"Hello"}'
```
```bash
# Check environment variables are set
env | grep OPEN_NOTEBOOK_ENCRYPTION_KEY

# Check that the database URL is visible to Python
python -c "import os; print(os.getenv('SURREAL_URL'))"
```
```bash
# Reduce concurrency
SURREAL_COMMANDS_MAX_TASKS=2

# Reduce TTS batch size
TTS_BATCH_SIZE=1

# Check worker count
SURREAL_COMMANDS_MAX_TASKS
# Reduce if maxed out:
SURREAL_COMMANDS_MAX_TASKS=5

# Check timeout settings
API_CLIENT_TIMEOUT=300

# Check retry config
SURREAL_COMMANDS_RETRY_MAX_ATTEMPTS=3

# Reduce concurrency
SURREAL_COMMANDS_MAX_TASKS=3

# Use the jitter strategy
SURREAL_COMMANDS_RETRY_WAIT_STRATEGY=exponential_jitter
```
| Path | Contents |
|---|---|
| `./data` or `/app/data` | Uploads, podcasts, checkpoints |
| `./surreal_data` or `/mydata` | SurrealDB database files |
```bash
# Stop services (recommended for consistency)
docker compose down

# Create a timestamped backup
tar -czf backup-$(date +%Y%m%d-%H%M%S).tar.gz \
  notebook_data/ surreal_data/

# Restart services
docker compose up -d
```
```bash
#!/bin/bash
# backup.sh - Run daily via cron
BACKUP_DIR="/path/to/backups"
DATE=$(date +%Y%m%d-%H%M%S)

# Create backup
tar -czf "$BACKUP_DIR/open-notebook-$DATE.tar.gz" \
  /path/to/notebook_data \
  /path/to/surreal_data

# Keep only the last 7 days
find "$BACKUP_DIR" -name "open-notebook-*.tar.gz" -mtime +7 -delete

echo "Backup complete: open-notebook-$DATE.tar.gz"
```
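Before trusting a backup (or letting the script delete older ones), it is worth checking that the archive lists cleanly. A self-contained sketch using a throwaway archive; in practice, point `tar -tzf` at the real backup file:

```shell
# Build a small demo archive, then verify it lists without errors
mkdir -p demo_data && echo "hello" > demo_data/file.txt
tar -czf demo-backup.tar.gz demo_data/
if tar -tzf demo-backup.tar.gz > /dev/null 2>&1; then
  echo "archive OK"
else
  echo "archive corrupt"
fi
rm -rf demo_data demo-backup.tar.gz   # clean up the demo files
```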
Add to cron:

```bash
# Daily backup at 2 AM
0 2 * * * /path/to/backup.sh >> /var/log/open-notebook-backup.log 2>&1
```
```bash
# Stop services
docker compose down

# Remove old data (careful!)
rm -rf notebook_data/ surreal_data/

# Extract backup
tar -xzf backup-20240115-120000.tar.gz

# Restart services
docker compose up -d
```
```bash
# On the source server
docker compose down
tar -czf open-notebook-migration.tar.gz notebook_data/ surreal_data/

# Transfer to the new server
scp open-notebook-migration.tar.gz user@newserver:/path/

# On the new server
tar -xzf open-notebook-migration.tar.gz
docker compose up -d
```
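After the `scp`, a checksum comparison catches transfer corruption. The sketch below simulates it with a stand-in file; in practice, run `sha256sum` on the real archive on each server and compare the two hashes.

```shell
echo "demo payload" > migration-demo.tar.gz                     # stand-in for the real archive
src_sum=$(sha256sum migration-demo.tar.gz | awk '{print $1}')   # on the source server
dst_sum=$(sha256sum migration-demo.tar.gz | awk '{print $1}')   # on the new server, after scp
if [ "$src_sum" = "$dst_sum" ]; then
  echo "checksums match"
else
  echo "transfer corrupted: re-copy the archive"
fi
rm -f migration-demo.tar.gz   # clean up the demo file
```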
```bash
# Start services
docker compose up -d

# Stop services
docker compose down

# View logs (all services)
docker compose logs -f

# View logs (specific service)
docker compose logs -f api

# Restart a specific service
docker compose restart api

# Update to the latest version
docker compose down
docker compose pull
docker compose up -d

# Check resource usage
docker stats

# Check service health
docker compose ps

# Remove stopped containers
docker compose rm

# Remove unused images
docker image prune

# Full cleanup (careful!)
docker system prune -a
```
Most deployments need only the defaults. Tune performance only if you hit the issues above. Advanced features (tracing, custom ports, SSL options) are opt-in.