backend/SCRIPTS_DOCUMENTATION.md
This document details the .cmd, .ps1, and .sh scripts in the backend directory: their purposes, usage patterns, interactions, and available options.
The backend supports three deployment approaches: native Windows builds, native macOS builds, and Docker-based deployment on either platform.
Prerequisites:
Build Process:
```
# 1. Navigate to backend directory
cd backend

# 2. Build whisper.cpp and setup environment
build_whisper.cmd small

# 3. Start services (interactive mode)
start_with_output.ps1

# Alternative: Use clean_start_backend.cmd
clean_start_backend.cmd
```
What happens during build:
- Compiles whisper.cpp
- Copies the custom server code from whisper-custom/server/
- Creates a Python virtual environment (venv) and installs requirements.txt
- Downloads the selected model (e.g. ggml-small.bin, ~244MB)
- whisper-server-package/ is created with all necessary files

Prerequisites:
- xcode-select --install
- /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- brew install python3
- brew install cmake llvm libomp

Build Process:
```
# 1. Navigate to backend directory
cd backend

# 2. Build whisper.cpp and setup environment
./build_whisper.sh small

# 3. Start services (interactive mode)
./clean_start_backend.sh
```
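As a quick sanity check after a native build, you can confirm the package layout was produced. This is an illustrative sketch only; the directory names (whisper-server-package/ and its models/ subfolder) are assumptions based on the build notes, and may differ in your checkout.

```shell
# Sanity-check sketch: verify the expected build output directories exist.
# Directory names are assumptions based on the build description.
missing=0
for p in whisper-server-package whisper-server-package/models; do
  if [ -d "$p" ]; then
    echo "ok: $p"
  else
    echo "missing: $p"
    missing=$((missing + 1))
  fi
done
echo "checked 2 paths, $missing missing"
```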
macOS-Specific Optimizations:
- libomp for OpenMP acceleration

Prerequisites:
Quick Start:
```
# 1. Navigate to backend directory
cd backend

# 2. Build the Docker image (CPU variant)
.\build-docker.ps1 cpu -NoCache

# 3. Interactive setup with all options
.\run-docker.ps1 start -Interactive

# 4. Or use defaults for quick start
.\run-docker.ps1 start -Detach
```
Interactive Setup Flow:
Advanced Configuration:
```
# Start with specific model and GPU
.\run-docker.ps1 start -Model large-v3 -Port 8081 -Gpu -Language de -Detach

# Monitor and manage
.\run-docker.ps1 logs -Service whisper -Follow
.\run-docker.ps1 status
.\run-docker.ps1 gpu-test
```
Prerequisites:
Quick Start:
```
# 1. Navigate to backend directory
cd backend

# 2. Build the Docker image (CPU variant)
./build-docker.sh cpu -no-cache

# 3. Interactive setup
./run-docker.sh start --interactive

# 4. Or quick start with defaults
./run-docker.sh start --detach
```
macOS-Specific Features:
Advanced Usage:
```
# Start with specific configuration
./run-docker.sh start --model large-v3 --gpu --language en --detach

# Database setup with auto-detection
./run-docker.sh setup-db --auto

# Build macOS-optimized images
./run-docker.sh build macos

# System monitoring
./run-docker.sh logs --service whisper --follow
./run-docker.sh status
./run-docker.sh models download base.en
```
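The wrapper flags map onto an ordinary docker run invocation. The sketch below assembles (without executing) a roughly equivalent command; the image name whisper-server:cpu, the volume mount, and the environment variable names are assumptions drawn from the entrypoint variables documented later, not from the wrapper's actual implementation.

```shell
# Dry-run sketch: build the plain docker command that
# "./run-docker.sh start --model large-v3 --language en --detach"
# roughly corresponds to. Image/mount/variable names are assumptions.
model="large-v3"
lang="en"
port=8178
cmd="docker run -d -p ${port}:8178"
cmd="$cmd -v $(pwd)/models:/app/models"
cmd="$cmd -e WHISPER_MODEL=models/ggml-${model}.bin"
cmd="$cmd -e WHISPER_LANGUAGE=${lang}"
cmd="$cmd whisper-server:cpu"
echo "$cmd"
```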
| Aspect | Native | Docker |
|---|---|---|
| Performance | Optimal (direct hardware access) | Slight overhead (~5-10% from containerization) |
| Setup Time | Medium (compile time ~5-10 min) | Fast (pre-built images) |
| Dependencies | Manual installation required | Isolated, no host pollution |
| GPU Support | Full native support | NVIDIA only (Windows/Linux) |
| Portability | Platform-specific builds | Universal containers |
| Development | Faster iteration cycles | Consistent environments |
| Troubleshooting | Direct system access | Container logs and debugging |
| Resource Usage | Lower memory footprint | Higher memory usage |
| Isolation | Shared host environment | Complete isolation |
Windows: Native approach for fastest iteration

```
build_whisper.cmd small
start_with_output.ps1
```

macOS: Docker approach for consistency

```
./build-docker.sh cpu -no-cache
./run-docker.sh start --interactive
```

Both Platforms: Docker with pre-built models

```
# Pre-download models
./run-docker.sh models download large-v3

# Start in production mode
./run-docker.sh start --model large-v3 --detach --language auto
```

Both Platforms: Docker with registry

```
./run-docker.sh build both --registry ghcr.io/yourorg --push
```
After successful startup, services are available at:
Whisper Server: http://localhost:8178
- GET /
- POST /inference
- ws://localhost:8178/

Meeting App: http://localhost:5167

- GET /get-meetings
- ws://localhost:5167/ws

Native Build Issues:

```
# Windows: CMake not found
# Solution: Install Visual Studio Build Tools

# macOS: Compilation errors
brew install cmake llvm libomp
export CC=/opt/homebrew/bin/clang
export CXX=/opt/homebrew/bin/clang++

# Python dependency issues
python -m pip install --upgrade pip
pip install -r requirements.txt --force-reinstall
```

Docker Issues:

```
# Port conflicts
./run-docker.sh stop
# Check with: netstat -an | findstr :8178

# GPU not detected (Windows)
# Enable WSL2 integration in Docker Desktop
# Install nvidia-container-toolkit

# Model download failures
# Check internet connection and disk space
./run-docker.sh models download base.en
```
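For port conflicts, a quick probe that needs neither netstat nor lsof can be written with bash's /dev/tcp virtual device. This is a convenience sketch, not part of the project's scripts, and it is bash-specific: under other shells the connection attempt simply fails and the port is reported as free.

```shell
# Bash sketch: probe whether something is listening on a local port.
# /dev/tcp is a bash feature; other shells will report "free".
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
if port_in_use 8178; then
  state="busy"
else
  state="free"
fi
echo "port 8178 is $state"
```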
build_whisper.cmd / build_whisper.sh
Purpose: Primary build script that compiles whisper.cpp, sets up the Python environment, and creates the whisper-server package.
Key Features:
- Integrates the custom server code in whisper-custom/server/

Usage:
```
# With specific model
./build_whisper.sh small

# Interactive mode (prompts for model)
./build_whisper.sh
```
Options:
MODEL_NAME: First argument specifies the whisper model to download (tiny, base, small, medium, large-v1, large-v2, large-v3, etc.)

clean_start_backend.cmd / clean_start_backend.sh
Purpose: Complete environment cleanup and service startup script that ensures a clean state before launching.
Key Features:
Usage:
```
# With specific model
./clean_start_backend.sh large-v3

# Interactive mode
./clean_start_backend.sh
```
Options:
MODEL_NAME: First argument for model selection

start_python_backend.cmd
Purpose: Standalone Python backend launcher for Windows.
Features:
Usage:
```
start_python_backend.cmd [PORT]
```
start_whisper_server.cmd
Purpose: Standalone whisper server launcher for Windows.
Features:
Usage:
```
start_whisper_server.cmd [MODEL_NAME]
```
download-ggml-model.cmd / download-ggml-model.sh
Purpose: Downloads pre-converted whisper models from HuggingFace.
Features:
Usage:
```
# Download specific model
./download-ggml-model.sh base.en

# View available models
./download-ggml-model.sh
```
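Under the hood, a model name maps to a pre-converted ggml file and a download URL roughly as sketched below. The Hugging Face repository path shown is the one whisper.cpp's download script conventionally uses; treat it as an illustrative assumption rather than a guarantee about this project's script.

```shell
# Sketch: derive the ggml file name and download URL for a model name.
# The repository URL is an assumption about where models are hosted.
model="base.en"
file="ggml-${model}.bin"
url="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/${file}"
echo "$file"
echo "$url"
```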
Available Models:
tiny, base, small, medium, large-v1, large-v2, large-v3 (with .en English-only variants such as tiny.en and base.en)
run-docker.ps1 / run-docker.sh
Purpose: Comprehensive Docker deployment manager with advanced user-experience features.
Key Features:
Commands:
```
# Interactive setup with all options
.\run-docker.ps1 start -Interactive

# Quick start with defaults
.\run-docker.ps1 start

# Start with specific configuration
.\run-docker.ps1 start -Model large-v3 -Port 8081 -Gpu -Language es -Detach

# Database setup
.\run-docker.ps1 setup-db -Auto

# View logs with options
.\run-docker.ps1 logs -Service whisper -Follow

# System management
.\run-docker.ps1 status
.\run-docker.ps1 clean -All
.\run-docker.ps1 gpu-test
```
Advanced Options:
build-docker.ps1 / build-docker.sh
Purpose: Multi-platform Docker image builder with intelligent platform detection.
Key Features:
Build Types:
```
# CPU-only build (universal compatibility)
.\build-docker.ps1 cpu

# GPU-enabled build (CUDA support)
.\build-docker.ps1 gpu

# macOS-optimized build (Apple Silicon)
.\build-docker.ps1 macos

# Build both CPU and GPU versions
.\build-docker.ps1 both
```
Advanced Options:
```
# Multi-platform build with registry push
.\build-docker.ps1 gpu -Registry ghcr.io/user -Push -Platforms linux/amd64,linux/arm64

# Custom build with specific CUDA version
.\build-docker.ps1 gpu -BuildArgs "CUDA_VERSION=12.1.1"

# Force a clean rebuild with a custom tag
.\build-docker.ps1 cpu -NoCache -Tag custom-build
```
setup-db.ps1 / setup-db.sh
Purpose: Database setup and migration utility for Docker deployments.
Features:
Usage Modes:
```
# Interactive setup (recommended)
.\setup-db.ps1

# Auto-detect and migrate
.\setup-db.ps1 -Auto

# Fresh installation
.\setup-db.ps1 -Fresh

# Custom database path
.\setup-db.ps1 -DbPath "C:\path\to\database.db"
```
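Conceptually, auto-detect walks a list of candidate locations and picks the first existing database. A minimal sketch, assuming the documented search locations and a hypothetical file name meeting_minutes.db (the real script's file name and ordering may differ):

```shell
# Sketch of database auto-detection: take the first existing candidate.
# Paths mirror the documented search locations; the file name is hypothetical.
found=""
for dir in "$HOME/.meetily" "$HOME/Documents/meetily" "$HOME/Desktop"; do
  candidate="$dir/meeting_minutes.db"
  if [ -f "$candidate" ]; then
    found="$candidate"
    break
  fi
done
if [ -n "$found" ]; then
  echo "found: $found"
else
  echo "no existing database; fresh install"
fi
```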
Search Locations:
- /opt/homebrew/Cellar/meetily-backend/*/
- ~/.meetily/
- ~/Documents/meetily/
- ~/Desktop/

start_with_output.ps1
Purpose: Advanced service launcher with comprehensive user interface for Windows.
Features:
Interactive Features:
docker/entrypoint.sh
Purpose: Docker container initialization and runtime management.
Key Features:
Environment Variables:
```
WHISPER_MODEL=models/ggml-base.en.bin # Model path
WHISPER_HOST=0.0.0.0                  # Server host
WHISPER_PORT=8178                     # Server port
WHISPER_THREADS=0                     # Thread count (0=auto)
WHISPER_USE_GPU=true                  # GPU acceleration
WHISPER_LANGUAGE=en                   # Language code
WHISPER_TRANSLATE=false               # Translation to English
WHISPER_DIARIZE=false                 # Speaker diarization
WHISPER_PRINT_PROGRESS=true           # Progress display
WHISPER_DEBUG=false                   # Debug logging
```
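How an entrypoint typically consumes such variables can be sketched as follows. Only the environment variable names come from the table above; the defaults-then-flags pattern and the server flag names are illustrative assumptions, not the actual entrypoint code.

```shell
# Sketch: apply defaults and translate environment variables into
# server arguments (flag names are illustrative assumptions).
unset WHISPER_HOST WHISPER_PORT WHISPER_MODEL WHISPER_TRANSLATE  # clean slate for the demo
: "${WHISPER_HOST:=0.0.0.0}"
: "${WHISPER_PORT:=8178}"
: "${WHISPER_MODEL:=models/ggml-base.en.bin}"
args="--host $WHISPER_HOST --port $WHISPER_PORT --model $WHISPER_MODEL"
if [ "${WHISPER_TRANSLATE:-false}" = "true" ]; then
  args="$args --translate"
fi
echo "would exec: whisper-server $args"
```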
Container Commands:
```
# Start server (default)
docker run whisper-server

# Run diagnostics
docker run whisper-server gpu-test
docker run whisper-server models
docker run whisper-server test

# Shell access
docker run -it whisper-server bash
```
Script Interactions:
- build_whisper.sh/.cmd → download-ggml-model.sh/.cmd for model acquisition
- clean_start_backend.sh/.cmd → download-ggml-model.sh/.cmd if models missing
- build-docker.sh/.ps1 →
- setup-db.ps1/.sh →
- run-docker.sh/.ps1 → build-docker.sh/.ps1 if images missing; setup-db.sh/.ps1 for database preparation
- docker/entrypoint.sh (inside container) →

run-docker.ps1 Preference System:
- .docker-preferences file with JSON-like format

Windows process management:
- tasklist, taskkill for process control
- netstat -ano for port monitoring

macOS/Linux process management:
- ps, kill, pkill for process control
- lsof, netstat for port monitoring

Recommendations:
- Native scripts (build_whisper.sh, clean_start_backend.sh) for fastest iteration
- Docker scripts (run-docker.sh/.ps1) for consistency and isolation

This documentation provides comprehensive coverage of all script functionality, interactions, and usage patterns for the Meeting Minutes backend system.