# Quick Start Guide

Get up and running in 5 minutes!
Choose one of two setup paths:

- **Docker** — best for production deployments, isolated environments, and easy scaling
- **Direct development** — best for development, customization, debugging, and faster iteration
## Docker Setup

```bash
# Clone repository
git clone <your-repository-url>
cd rag_system_old

# Ensure Docker is running
docker version
```
Even with Docker, Ollama runs locally for better performance:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama (in one terminal)
ollama serve

# Install models (in another terminal)
ollama pull qwen3:0.6b
ollama pull qwen3:8b
```
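After pulling the models, you can confirm Ollama registered them by querying its `/api/tags` endpoint (`curl http://localhost:11434/api/tags`). A minimal sketch, assuming the response shape Ollama documents for that endpoint, that checks the parsed JSON for the two models used here:

```python
import json

# The models pulled in the step above.
REQUIRED_MODELS = {"qwen3:0.6b", "qwen3:8b"}

def missing_models(tags_response: dict) -> set:
    """Given the parsed JSON from Ollama's /api/tags endpoint,
    return the set of required models that are not yet pulled."""
    installed = {m.get("name", "") for m in tags_response.get("models", [])}
    return REQUIRED_MODELS - installed

# Example with a response shaped like Ollama's /api/tags payload:
sample = json.loads('{"models": [{"name": "qwen3:0.6b"}]}')
print(missing_models(sample))  # the 8B model is still missing
```

If the returned set is non-empty, re-run the corresponding `ollama pull` commands.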
```bash
# Start all containers
./start-docker.sh

# Or manually:
docker compose --env-file docker.env up --build -d

# Check container status
docker compose ps
```
```bash
# Test endpoints
curl http://localhost:3000         # Frontend
curl http://localhost:8000/health  # Backend
curl http://localhost:8001/models  # RAG API
```

Open your browser to: http://localhost:3000
## Direct Development Setup

```bash
# Clone repository
git clone <your-repository-url>
cd rag_system_old

# Install Python dependencies
pip install -r requirements.txt

# Install Node.js dependencies
npm install
```
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama (in one terminal)
ollama serve

# Install models (in another terminal)
ollama pull qwen3:0.6b
ollama pull qwen3:8b
```
```bash
# Start all components with one command
python run_system.py
```
Or start components manually in separate terminals:

```bash
# Terminal 1: RAG API
python -m rag_system.api_server

# Terminal 2: Backend
cd backend && python server.py

# Terminal 3: Frontend
npm run dev
```
```bash
# Check system health
python system_health_check.py

# Test endpoints
curl http://localhost:3000         # Frontend
curl http://localhost:8000/health  # Backend
curl http://localhost:8001/models  # RAG API
```

Open your browser to: http://localhost:3000
## Common Commands

```bash
# Container management
./start-docker.sh         # Start all containers
./start-docker.sh stop    # Stop all containers
./start-docker.sh logs    # View logs
./start-docker.sh status  # Check status

# Manual Docker Compose
docker compose ps             # Check status
docker compose logs -f        # Follow logs
docker compose down           # Stop containers
docker compose up --build -d  # Rebuild and start
```
```bash
# System management
python run_system.py           # Start all services
python system_health_check.py  # Check system health

# Individual components
python -m rag_system.api_server  # RAG API only
cd backend && python server.py   # Backend only
npm run dev                      # Frontend only

# Stop: press Ctrl+C in the terminal running the services
```
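`run_system.py` starts every service for you. The same idea can be sketched with Python's `subprocess` module — a simplified illustration, not the actual contents of `run_system.py`; the commands mirror the three terminals listed above:

```python
import subprocess
import sys

# Commands mirroring the three components above (illustrative only).
SERVICES = [
    [sys.executable, "-m", "rag_system.api_server"],  # RAG API
    [sys.executable, "server.py"],                    # Backend (run from backend/)
    ["npm", "run", "dev"],                            # Frontend
]

def launch(commands):
    """Start each service as a child process and return the handles."""
    return [subprocess.Popen(cmd) for cmd in commands]

def shutdown(procs):
    """Terminate all services (the equivalent of Ctrl+C)."""
    for p in procs:
        p.terminate()
    for p in procs:
        p.wait()
```

A launcher like this keeps all three components tied to one terminal, so a single Ctrl+C (plus `shutdown`) stops everything.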
## Troubleshooting

**Containers not starting?**

```bash
# Check Docker daemon
docker version

# Restart Docker Desktop and try again
./start-docker.sh
```
**Port conflicts?**

```bash
# Check what's using ports
lsof -i :3000 -i :8000 -i :8001

# Stop conflicting processes
./start-docker.sh stop
```
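If `lsof` is unavailable on your system, the same port check can be done from Python with the standard library — a small sketch:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection (port occupied).
        return s.connect_ex((host, port)) == 0

for port in (3000, 8000, 8001):
    status = "in use" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```

Any port reported "in use" before you start the system is being held by another process.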
**Import errors?**

```bash
# Check Python installation
python --version  # Should be 3.8+

# Reinstall dependencies
pip install -r requirements.txt --force-reinstall
```
**Node.js errors?**

```bash
# Check Node version
node --version  # Should be 16+

# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install
```
**Ollama not responding?**

```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Restart Ollama
pkill ollama
ollama serve
```
**Out of memory?**

```bash
# Check memory usage
docker stats  # For Docker
htop          # For direct development
```

Recommended: 16 GB+ RAM for optimal performance.
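To check whether a machine meets the 16 GB recommendation, physical memory can be read via POSIX `sysconf` — a minimal sketch, assuming a Linux or macOS host:

```python
import os

def total_ram_gb() -> float:
    """Total physical memory in GiB (Linux/macOS; uses POSIX sysconf)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    page_count = os.sysconf("SC_PHYS_PAGES")
    return page_size * page_count / (1024 ** 3)

ram = total_ram_gb()
print(f"Detected {ram:.1f} GiB RAM")
if ram < 16:
    print("Below the recommended 16 GiB -- expect slower model inference.")
```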
## Verify Your Setup

Run this comprehensive check:

```bash
# Check all endpoints
curl -f http://localhost:3000 && echo "✅ Frontend OK"
curl -f http://localhost:8000/health && echo "✅ Backend OK"
curl -f http://localhost:8001/models && echo "✅ RAG API OK"
curl -f http://localhost:11434/api/tags && echo "✅ Ollama OK"

# For Docker: check containers
docker compose ps
```
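The same checks can be scripted so one failure doesn't abort the rest — a minimal Python sketch using only the standard library, with the endpoint URLs as above:

```python
import urllib.request
import urllib.error

# Service endpoints from the verification steps above.
ENDPOINTS = {
    "Frontend": "http://localhost:3000",
    "Backend":  "http://localhost:8000/health",
    "RAG API":  "http://localhost:8001/models",
    "Ollama":   "http://localhost:11434/api/tags",
}

def is_up(url: str, timeout: float = 3.0) -> bool:
    """True if the URL answers with a successful HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {'OK' if is_up(url) else 'DOWN'}")
```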
If all four checks report OK and the containers show as running, you're ready to start using LocalGPT!
## Project Structure

```
rag-system/
├── start-docker.sh          # Docker deployment script
├── run_system.py            # Direct development launcher
├── system_health_check.py   # System verification
├── requirements.txt         # Python dependencies
├── package.json             # Node.js dependencies
├── Documentation/           # Complete documentation
└── rag_system/              # Core system code
```
## Further Reading

- Documentation/architecture_overview.md
- Documentation/system_overview.md
- Documentation/deployment_guide.md
- DOCKER_TROUBLESHOOTING.md

Happy RAG-ing!
## Indexing Scripts

The repository includes several convenient scripts for document indexing.

For quick document indexing without the UI:
```bash
# Basic usage
./simple_create_index.sh "Index Name" "document.pdf"

# Multiple documents
./simple_create_index.sh "Research Papers" "paper1.pdf" "paper2.pdf" "notes.txt"

# Using wildcards
./simple_create_index.sh "Invoice Collection" ./invoices/*.pdf
```
Supported file types: PDF, TXT, DOCX, MD
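When passing wildcards, it can help to filter out unsupported files first. A small sketch based on the supported extensions listed above:

```python
from pathlib import Path

# Extensions the indexer accepts (per the supported file types above).
SUPPORTED = {".pdf", ".txt", ".docx", ".md"}

def indexable(paths):
    """Keep only files whose extension the indexer accepts."""
    return [p for p in paths if Path(p).suffix.lower() in SUPPORTED]

files = ["paper1.pdf", "notes.txt", "slides.pptx", "README.md"]
print(indexable(files))  # ['paper1.pdf', 'notes.txt', 'README.md']
```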
For processing large document collections:

```bash
# Using the Python batch indexing script
python demo_batch_indexing.py

# Or using the direct indexing script
python create_index_script.py
```
These scripts automatically: