# Meetily Backend

FastAPI backend for meeting transcription and analysis, with a Docker-based distribution system for easy deployment.
When running in Docker containers, audio processing can drop chunks due to resource limitations.

**Symptoms:**

**Prevention:**
## Quick Start

Choose your preferred deployment method:
```
# Navigate to backend directory
cd backend

# Windows (PowerShell)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive

# macOS/Linux (Bash)
./build-docker.sh cpu
./run-docker.sh start --interactive
```
```
# Navigate to backend directory
cd backend

# Windows - Install dependencies first, then build
.\install_dependancies_for_windows.ps1   # Run as Administrator
build_whisper.cmd small
.\start_with_output.ps1

# macOS/Linux
./build_whisper.sh small
./clean_start_backend.sh
```
After startup, access:

- Whisper server: http://localhost:8178
- Meeting app: http://localhost:5167
- API documentation: http://localhost:5167/docs

## Docker Deployment

Docker provides the easiest setup with automatic dependency management, GPU detection, and cross-platform compatibility.
Windows (PowerShell):

```powershell
# Build images
.\build-docker.ps1 cpu

# Interactive setup (recommended for first-time users)
.\run-docker.ps1 start -Interactive

# Quick start with defaults
.\run-docker.ps1 start -Detach

# GPU acceleration
.\build-docker.ps1 gpu
.\run-docker.ps1 start -Model large-v3 -Gpu -Language en -Detach

# Custom ports and features
.\run-docker.ps1 start -Port 8081 -AppPort 5168 -Translate -Diarize

# Monitor services
.\run-docker.ps1 logs -Service whisper -Follow
.\run-docker.ps1 status
```
macOS/Linux (Bash):

```bash
# Build images
./build-docker.sh cpu

# Interactive setup (recommended)
./run-docker.sh start --interactive

# Quick start with defaults
./run-docker.sh start --detach

# With specific model and language
./run-docker.sh start --model base --language es --detach

# View logs and status
./run-docker.sh logs --service whisper --follow
./run-docker.sh status

# Database migration from existing installation
./run-docker.sh setup-db --auto
```
The interactive mode guides you through:
| Model | Size | Accuracy | Speed | Best For |
|---|---|---|---|---|
| tiny | ~39 MB | Basic | Fastest | Testing, low resources |
| base | ~142 MB | Good | Fast | General use (recommended) |
| small | ~244 MB | Better | Medium | Better accuracy needed |
| medium | ~769 MB | High | Slow | High accuracy requirements |
| large-v3 | ~1550 MB | Best | Slowest | Maximum accuracy |
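As a rule of thumb, pick the largest model your memory budget allows. The helper below is a hypothetical sketch (not part of the Meetily tooling) that encodes the approximate sizes from the table above:

```python
# Hypothetical helper: choose the largest Whisper model whose approximate
# download size (taken from the table above) fits a given budget in MB.
MODEL_SIZES_MB = {
    "tiny": 39,
    "base": 142,
    "small": 244,
    "medium": 769,
    "large-v3": 1550,
}

def pick_model(budget_mb: int) -> str:
    """Return the largest model that fits within budget_mb, falling back to tiny."""
    fitting = [name for name, size in MODEL_SIZES_MB.items() if size <= budget_mb]
    # dicts preserve insertion order, so the last fitting entry is the largest
    return fitting[-1] if fitting else "tiny"
```

For example, `pick_model(500)` selects `"small"`, the largest model under a 500 MB budget.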
| Aspect | Docker | Native |
|---|---|---|
| Setup | Easy (automated) | Manual (requires dependencies) |
| Performance | Good (5-10% overhead) | Optimal (direct hardware) |
| GPU Support | NVIDIA only | Full native support |
| Isolation | Complete | Shared environment |
| Portability | Universal | Platform-specific |
| Updates | Container replacement | Manual updates |
## Native Deployment

Native deployment offers optimal performance by running directly on the host system.
macOS prerequisites:

```bash
xcode-select --install
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install python3
brew install cmake llvm libomp
```

### Option 1: Pre-built Release (Recommended - Easiest)
The simplest and fastest way to get started is using the pre-built backend release:
Prerequisites:
Installation Steps:
Extract the release archive (e.g., to `C:\meetily_backend\`), then unblock the files and start:

```powershell
Get-ChildItem -Path . -Recurse | Unblock-File
.\start_with_output.ps1
```
What it includes:

- `whisper-server.exe` binary

Features:
Success check: the script will guide you through setup and start both the Whisper server (port 8178) and the Meeting app (port 5167) automatically.
### Option 2: Docker Setup (Alternative - Easier)
Docker handles all dependencies automatically:
```powershell
# Navigate to backend directory
cd backend

# Build and start (CPU version)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive
```
Prerequisites:
### Option 3: Local Build (Best Performance)
For optimal performance, build locally after installing dependencies:
Required Dependencies (Install First):
Step 1: Install Dependencies
```powershell
# Run dependency installer (as Administrator)
Set-ExecutionPolicy Bypass -Scope Process -Force
.\install_dependancies_for_windows.ps1
```
Note: this takes 15-30 minutes and installs all required tools.
Step 2: Build Whisper
```powershell
# Build whisper.cpp with model (e.g., 'small', 'base.en', 'large-v3')
build_whisper.cmd small

# Start services interactively
.\start_with_output.ps1

# Alternative: Clean start
clean_start_backend.cmd
```
Build Process:

1. Clones and builds `whisper.cpp`
2. Applies the custom server code from `whisper-custom/server/`
3. Creates a Python `venv` and installs `requirements.txt`
4. Assembles `whisper-server-package/` with all files

Dependency Installation Details:

The `install_dependancies_for_windows.ps1` script installs:
macOS/Linux:

```bash
# Navigate to backend directory
cd backend

# Build whisper.cpp with model
./build_whisper.sh small

# Start services
./clean_start_backend.sh
```
macOS Optimizations:

- `libomp` (OpenMP support)

Key endpoints once the services are running:

- Whisper server (port 8178): `GET /`, `POST /inference`, `ws://localhost:8178/`
- FastAPI backend (port 5167): `GET /get-meetings`, `ws://localhost:5167/ws`

### Manual Installation

If you prefer complete manual control over the installation process:
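After startup, a quick reachability probe can confirm both services are listening. This is a generic Python sketch, not part of the backend; the URLs come from the endpoint list above:

```python
import urllib.error
import urllib.request

def service_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP at `url`, False on connection failure."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

# e.g. service_up("http://localhost:8178/") and service_up("http://localhost:5167/docs")
```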
Windows:

```
# Python 3.9+ from Python.org (add to PATH)
# Visual Studio Build Tools (Desktop C++ workload)
# CMake from CMake.org (add to PATH)
# FFmpeg (download or: choco install ffmpeg)
# Git from Git-scm.com
# Ollama from Ollama.com
```
macOS:

```bash
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install dependencies
brew install [email protected] cmake llvm libomp ffmpeg git ollama
```
```
# Windows
python -m pip install --upgrade pip
python -m pip install -r requirements.txt

# macOS
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
```
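The prerequisites above assume Python 3.9+. A tiny fail-fast version check can save a confusing build error later; this is a hypothetical helper, not something shipped with the backend:

```python
import sys

def check_python(minimum=(3, 9)) -> bool:
    """Return True if the running interpreter meets the (major, minor) minimum."""
    return sys.version_info[:2] >= minimum
```

Call `check_python()` at the top of a setup script and abort with a clear message if it returns `False`.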
```
# Windows
./build_whisper.cmd

# macOS (make executable if needed)
chmod +x build_whisper.sh
./build_whisper.sh
```
```
# Windows
./start_with_output.ps1

# macOS
chmod +x clean_start_backend.sh
./clean_start_backend.sh
```
Once services are running:

- Whisper.cpp server: port 8178
- FastAPI backend: port 5167
## Troubleshooting

Port Conflicts:

```bash
# Stop services
./run-docker.sh stop    # or: .\run-docker.ps1 stop

# Check port usage
netstat -an | grep :8178   # Windows: netstat -an | findstr :8178
lsof -i :8178              # macOS/Linux
```
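The same check can be scripted cross-platform. A minimal sketch using only the Python standard library (an illustrative helper, not part of the Meetily scripts):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds (a listener exists)
        return s.connect_ex((host, port)) == 0
```

For example, run `port_in_use(8178)` before starting the Whisper server to catch a conflict early.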
GPU Not Detected (Windows):

```powershell
.\run-docker.ps1 gpu-test
```

Model Download Failures:
```
# Manual download
./run-docker.sh models download base.en
# or
.\run-docker.ps1 models download base.en
```
Windows Build Problems:

```powershell
# CMake not found - install Visual Studio Build Tools
# PowerShell execution blocked:
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
```
macOS Build Problems:
```bash
# Compilation errors
brew install cmake llvm libomp
export CC=/opt/homebrew/bin/clang
export CXX=/opt/homebrew/bin/clang++

# Permission denied
chmod +x build_whisper.sh
chmod +x clean_start_backend.sh

# Port conflicts
lsof -i :5167   # Find process using port
kill -9 PID     # Kill process
```
Services Won't Start:
Model Issues:
## Script Reference

### build-docker.ps1 / build-docker.sh

Build Docker images with GPU support and cross-platform compatibility.
Usage:

```
# Build Types
cpu, gpu, macos, both, test-gpu

# Options
-Registry/-r REGISTRY    # Docker registry
-Push/-p                 # Push to registry
-Tag/-t TAG              # Custom tag
-Platforms PLATFORMS     # Target platforms
-BuildArgs ARGS          # Build arguments
-NoCache/--no-cache      # Build without cache
-DryRun/--dry-run        # Show commands only
```
Examples:
```
# Basic builds
.\build-docker.ps1 cpu
./build-docker.sh gpu

# Multi-platform with registry
.\build-docker.ps1 both -Registry "ghcr.io/user" -Push
./build-docker.sh cpu --platforms "linux/amd64,linux/arm64" --push
```
### run-docker.ps1 / run-docker.sh

Complete Docker deployment manager with interactive setup.
Commands:
```
start, stop, restart, logs, status, shell, clean, build, models, gpu-test, setup-db, compose
```
Start Options:
```
-Model/-m MODEL          # Whisper model (default: base.en)
-Port/-p PORT            # Whisper port (default: 8178)
-AppPort/--app-port      # Meeting app port (default: 5167)
-Gpu/-g/--gpu            # Force GPU mode
-Cpu/-c/--cpu            # Force CPU mode
-Language/--language     # Language code (default: auto)
-Translate/--translate   # Enable translation
-Diarize/--diarize       # Enable diarization
-Detach/-d/--detach      # Run in background
-Interactive/-i          # Interactive setup
```
Examples:
```
# Interactive setup
.\run-docker.ps1 start -Interactive
./run-docker.sh start --interactive

# Advanced configuration
.\run-docker.ps1 start -Model large-v3 -Gpu -Language es -Detach
./run-docker.sh start --model base --translate --diarize --detach

# Management
.\run-docker.ps1 logs -Service whisper -Follow
./run-docker.sh logs --service app --follow --lines 100
```
### build_whisper.cmd / build_whisper.sh

Build the whisper.cpp server with custom modifications.
Usage:
```
build_whisper.cmd [MODEL_NAME]    # Windows
./build_whisper.sh [MODEL_NAME]   # macOS/Linux
```
Available Models:
```
tiny, tiny.en, base, base.en, small, small.en, medium, medium.en,
large-v1, large-v2, large-v3, large-v3-turbo,
*-q5_1 (5-bit quantized), *-q8_0 (8-bit quantized)
```
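whisper.cpp model files follow the `ggml-<MODEL_NAME>.bin` naming convention, and the build scripts fetch them by name. Assuming the files come from the usual `ggerganov/whisper.cpp` Hugging Face repository (an assumption about hosting, not stated in this README), the download URL for any model in the list can be derived mechanically:

```python
def ggml_model_url(model: str) -> str:
    """Build the conventional download URL for a ggml Whisper model file.

    Assumes the ggerganov/whisper.cpp Hugging Face hosting convention.
    """
    base = "https://huggingface.co/ggerganov/whisper.cpp/resolve/main"
    return f"{base}/ggml-{model}.bin"
```

For example, `ggml_model_url("base.en")` yields the URL ending in `ggml-base.en.bin`, which is handy for scripted or offline downloads.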
Service Configuration:
```bash
WHISPER_MODEL=base.en      # Default model
WHISPER_PORT=8178          # Whisper port
APP_PORT=5167              # App port
WHISPER_LANGUAGE=auto      # Language
WHISPER_TRANSLATE=false    # Translation
WHISPER_DIARIZE=false      # Diarization
```
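A startup script might read these variables with the same defaults. A hypothetical sketch of that pattern (the real entrypoint may differ):

```python
import os

def load_config(env=os.environ) -> dict:
    """Read the service configuration variables above, applying their defaults."""
    return {
        "model": env.get("WHISPER_MODEL", "base.en"),
        "whisper_port": int(env.get("WHISPER_PORT", "8178")),
        "app_port": int(env.get("APP_PORT", "5167")),
        "language": env.get("WHISPER_LANGUAGE", "auto"),
        "translate": env.get("WHISPER_TRANSLATE", "false").lower() == "true",
        "diarize": env.get("WHISPER_DIARIZE", "false").lower() == "true",
    }
```

Parsing booleans via a `"true"` comparison keeps behavior predictable when the variables are set by hand.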
Build Configuration:
```bash
REGISTRY=ghcr.io/user      # Docker registry
PUSH=true                  # Push to registry
PLATFORMS=linux/amd64      # Target platforms
FORCE_GPU=true             # Force GPU mode
DEBUG=true                 # Debug output
```
Supported Sources:
Auto-Discovery Paths (macOS/Linux):
```
/opt/homebrew/Cellar/meetily-backend/*/backend/meeting_minutes.db
$HOME/.meetily/meeting_minutes.db
$HOME/Documents/meetily/meeting_minutes.db
$HOME/Desktop/meeting_minutes.db
./meeting_minutes.db
$SCRIPT_DIR/data/meeting_minutes.db
```
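The discovery order above amounts to a first-match scan over candidate paths. The sketch below is an illustrative reimplementation, not the actual `setup-db` code; note it treats each candidate literally and does not expand the glob (`*`) in the Homebrew path:

```python
import os
from pathlib import Path
from typing import Optional

def find_database(candidates) -> Optional[str]:
    """Return the first existing candidate path, expanding ~ and $VARS."""
    for raw in candidates:
        path = Path(os.path.expandvars(raw)).expanduser()
        if path.is_file():
            return str(path)
    return None
```

Candidates are checked strictly in order, so an earlier match always wins over a later one.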
Port Conflict Resolution:
GPU Detection:
Model Management:
Interactive Setup:
This comprehensive guide covers all deployment options and provides clear instructions for getting the Meetily backend running in any environment.