Common questions about Open Notebook usage, configuration, and best practices.
Open Notebook is an open-source, privacy-focused alternative to Google's Notebook LM. Its key benefits:

- Privacy: Your data stays local by default; only your chosen AI providers receive queries.
- Flexibility: Support for 17+ AI providers (OpenAI, Anthropic, Google, local models, etc.)
- Customization: Open source, so you can modify and extend functionality
- Control: You control your data, models, and processing
Partially: The application runs locally, but requires internet for calls to cloud AI providers and for fetching web content (URLs, YouTube videos).

Fully offline: Possible with local models (Ollama) for basic functionality.
- Documents: PDF, DOCX, TXT, Markdown
- Web content: URLs, YouTube videos
- Media: MP3, WAV, M4A (audio); MP4, AVI, MOV (video)
- Other: direct text input, CSV, code files
- Software: Free (open source)
- AI API costs: Pay-per-use to your chosen providers
Typical monthly costs: $5-50 for moderate usage.
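As a back-of-the-envelope illustration (the token volumes and per-token prices below are assumptions for a gpt-4o-class model, not Open Notebook figures), monthly cost is just tokens multiplied by provider rates:

```shell
# Hypothetical estimate: 2M input + 0.5M output tokens per month,
# at assumed rates of $2.50/M input and $10.00/M output tokens.
awk 'BEGIN {
  in_tokens  = 2.0    # millions of input tokens per month (assumption)
  out_tokens = 0.5    # millions of output tokens per month (assumption)
  in_rate    = 2.50   # USD per million input tokens (assumption)
  out_rate   = 10.00  # USD per million output tokens (assumption)
  printf "Estimated monthly cost: $%.2f\n", in_tokens * in_rate + out_tokens * out_rate
}'
```

With these assumed numbers the estimate lands at $10, inside the $5-50 range; cheaper models (e.g. gpt-4o-mini) bring it down by an order of magnitude.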
- For beginners: OpenAI (reliable, well-documented)
- For privacy: Local models (Ollama) or European providers (Mistral)
- For cost optimization: Groq, Google (free tier), or OpenRouter
- For long context: Anthropic (200K tokens) or Google Gemini (1M tokens)
Yes: Configure different providers for different tasks:
Budget-friendly:
- Language model: gpt-4o-mini (OpenAI) or deepseek-chat (DeepSeek)
- Embeddings: text-embedding-3-small (OpenAI)

High-quality:
- Language model: claude-3-5-sonnet (Anthropic) or gpt-4o (OpenAI)
- Embeddings: text-embedding-3-large (OpenAI)

Privacy-focused:
- Local models via Ollama
Model selection:
Usage optimization:
Local storage: By default, all data is stored locally in:

- `surreal_data/`
- `data/uploads/`
- `data/podcasts/`

```bash
# Create backup
tar -czf backup-$(date +%Y%m%d).tar.gz data/ surreal_data/

# Restore backup
tar -xzf backup-20240101.tar.gz
```
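Before restoring, it can help to confirm the archive is intact. A minimal sketch (the directory contents and archive name here are placeholders):

```shell
# Create placeholder data directories standing in for real Open Notebook data
mkdir -p data/uploads surreal_data
echo "example" > data/uploads/note.txt

# Create the backup archive
tar -czf backup-test.tar.gz data/ surreal_data/

# List the archive contents without extracting, to verify it is readable
tar -tzf backup-test.tar.gz
```

`tar -tzf` exits non-zero on a corrupt archive, so it doubles as a cheap integrity check in backup scripts.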
Currently: No built-in sync functionality. Workarounds:
Soft deletion: Notebooks are marked as archived, not permanently deleted.

Recovery: Archived notebooks can be restored from the database.
Recommended size: 20-100 sources per notebook for best performance.
Set `OPEN_NOTEBOOK_PASSWORD` for public deployments.

Yes: Open Notebook provides a REST API:
Interactive API documentation is available at http://localhost:5055/docs.

Yes: Designed for production use with:
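A quick way to check whether the API is up (assuming the default port 5055 from the docs URL above):

```shell
# Probe the Open Notebook API; prints the HTTP status code (200 when running),
# or a fallback message if nothing is listening on the port.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5055/docs \
  || echo "Open Notebook API is not reachable on localhost:5055"
```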
Minimum:
Recommended:
Common causes:
Solutions:
```bash
# In .env:
API_CLIENT_TIMEOUT=600      # 10 minutes for slow setups
ESPERANTO_LLM_TIMEOUT=180   # 3 minutes for model inference
```
| Setup | API_CLIENT_TIMEOUT (seconds) |
|---|---|
| Cloud APIs (OpenAI, Anthropic) | 300 (default) |
| Local Ollama with GPU | 600 |
| Local Ollama with CPU | 1200 |
| Remote LM Studio | 900 |
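For example, a CPU-only Ollama setup would take the table's suggested value in `.env`:

```bash
# .env for a CPU-only local Ollama setup (value from the table above)
API_CLIENT_TIMEOUT=1200   # 20 minutes: CPU-bound inference can be very slow
```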
Include:
Submit to: GitHub Issues