docs/self-hosting/docker.mdx
Reactive Resume can be self-hosted using Docker in a matter of minutes, and this guide will walk you through the process. Here are some of the services you'll need to get started:
<CardGroup cols={2}>
  <Card title="PostgreSQL">Stores accounts, resumes, and application data.</Card>
  <Card title="Printer">Generates PDFs and screenshots using a headless Chromium browser.</Card>
  <Card title="Email (optional)">SMTP for verification emails, password resets, etc. If not configured, emails are logged to the server console.</Card>
  <Card title="Storage (optional)">Use S3-compatible storage, or local persistent storage via <code>/app/data</code>.</Card>
</CardGroup>

You can pull the latest app image from either registry:
- `amruthpillai/reactive-resume:latest` (Docker Hub)
- `ghcr.io/amruthpillai/reactive-resume:latest` (GitHub Container Registry)

Create a new folder (for example `reactive-resume/`) containing:
- `compose.yml`
- `.env`
- a `./data` directory (for local uploads when S3 is not configured)

Use the following `.env` as a starting point:

```bash .env
# --- Server ---
TZ="Etc/UTC"
APP_URL="http://localhost:3000"

# Optional, uses APP_URL by default.
# This can be set to a different URL (like http://host.docker.internal:3000 or http://{docker_service}:3000)
# to let the browser navigate to a non-public instance of Reactive Resume.
PRINTER_APP_URL="http://host.docker.internal:3000"

# --- Printer ---
# Keep this token in sync with the Browserless TOKEN value.
BROWSERLESS_TOKEN="change-me"
PRINTER_ENDPOINT="ws://printer:3000?token=change-me"

# --- Database (PostgreSQL) ---
DATABASE_URL="postgresql://postgres:postgres@postgres:5432/postgres"

# --- Authentication ---
# Generated using `openssl rand -hex 32`
AUTH_SECRET=""

# Better Auth dashboard API key (optional)
BETTER_AUTH_API_KEY=""

# Social Auth (Google, optional)
GOOGLE_CLIENT_ID=""
GOOGLE_CLIENT_SECRET=""

# Social Auth (GitHub, optional)
GITHUB_CLIENT_ID=""
GITHUB_CLIENT_SECRET=""

# Social Auth (LinkedIn, optional)
LINKEDIN_CLIENT_ID=""
LINKEDIN_CLIENT_SECRET=""

# Custom OAuth Provider
OAUTH_PROVIDER_NAME=""
OAUTH_CLIENT_ID=""
OAUTH_CLIENT_SECRET=""
# Use EITHER a discovery URL (preferred for OIDC-compliant providers):
OAUTH_DISCOVERY_URL=""
# OR manual URLs (all three required if not using discovery):
OAUTH_AUTHORIZATION_URL=""
OAUTH_TOKEN_URL=""
OAUTH_USER_INFO_URL=""
OAUTH_DYNAMIC_CLIENT_REDIRECT_HOSTS=""
# Custom scopes (space-separated, defaults to "openid profile email")
OAUTH_SCOPES=""

# Optional Better Auth runtime overrides for advanced deployments:
# BETTER_AUTH_URL="https://auth.example.com"
# BETTER_AUTH_SECRET=""

# --- AI (optional) ---
# Comma-separated hostnames/origins for custom AI base URLs
# Example: api.openai.com,https://gateway.ai.vercel.com
AI_ALLOWED_BASE_URLS=""

# --- Email (optional) ---
# If these keys are left empty, the app logs outgoing emails to the server console instead.
SMTP_HOST=""
SMTP_PORT="587"
SMTP_USER=""
SMTP_PASS=""
SMTP_FROM="Reactive Resume <[email protected]>"
SMTP_SECURE="false"

# --- Storage (optional) ---
# If these keys are left empty, the app stores uploads on the local filesystem (usually /app/data) instead.
# Make sure to mount this directory to a volume or the host filesystem to ensure data integrity.
S3_ACCESS_KEY_ID=""
S3_SECRET_ACCESS_KEY=""
S3_REGION="us-east-1"
S3_ENDPOINT=""
S3_BUCKET=""
# Set to "true" for path-style URLs (https://endpoint/bucket), common with MinIO, SeaweedFS, etc.
# Set to "false" for virtual-hosted-style URLs (https://bucket.endpoint), common with AWS S3, Cloudflare R2, etc.
S3_FORCE_PATH_STYLE="false"

# --- Feature Flags ---
FLAG_DEBUG_PRINTER="false"
FLAG_DISABLE_SIGNUPS="false"
FLAG_DISABLE_EMAIL_AUTH="false"
FLAG_DISABLE_IMAGE_PROCESSING="false"
```
Generate a value for `AUTH_SECRET`:

<CodeGroup>

```bash Linux/macOS
openssl rand -hex 32
```

```bash Linux/macOS (alternative)
head -c 32 /dev/urandom | hexdump -v -e '/1 "%02x"'
```

```powershell Windows
$bytes = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
-join ($bytes | ForEach-Object { $_.ToString("x2") })
```

</CodeGroup>
<CodeGroup>

```yaml compose.yml
services:
  postgres:
    image: postgres:latest
    restart: unless-stopped
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
      interval: 10s
      timeout: 5s
      retries: 10

  printer:
    image: ghcr.io/browserless/chromium:latest
    restart: unless-stopped
    ports:
      - "4000:3000"
    environment:
      - HEALTH=true
      - CONCURRENT=20
      - QUEUED=10
      - TOKEN=${BROWSERLESS_TOKEN}
    healthcheck:
      test: ["CMD-SHELL", 'curl -fsS "http://localhost:3000/pressure?token=${BROWSERLESS_TOKEN}" > /dev/null']
      interval: 10s
      timeout: 5s
      retries: 10

  reactive-resume:
    image: amruthpillai/reactive-resume:latest
    # image: ghcr.io/amruthpillai/reactive-resume:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    env_file:
      - .env
    volumes:
      # Used when S3 is not configured; keeps uploads persistent
      - ./data:/app/data
    depends_on:
      postgres:
        condition: service_healthy
      printer:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  postgres_data:
```

</CodeGroup>
<Note>
**Alternative Printer Options**: If you don't want to use browserless, you can also use a lightweight headless Chrome Docker image like `chromedp/headless-shell`:
```yaml
chrome:
image: chromedp/headless-shell:latest
restart: unless-stopped
ports:
- "9222:9222"
```
Then set `PRINTER_ENDPOINT` to `http://chrome:9222` (or `http://localhost:9222` if running outside Docker Compose). This provides the same PDF/screenshot generation functionality with a smaller image footprint.
</Note>
<Tip>
Prefer pulling from Docker Hub? Keep <code>amruthpillai/reactive-resume:latest</code>. Prefer GHCR? Swap it to <code>ghcr.io/amruthpillai/reactive-resume:latest</code>.
</Tip>
<CodeGroup>

```bash
docker compose up -d
docker compose ps
docker compose logs -f reactive-resume
```

</CodeGroup>
Reactive Resume should now be available at your `APP_URL` (for the example above: `http://localhost:3000`).
**`AUTH_SECRET`** (required): Random secret used for authentication. Generate with:
<CodeGroup>

```bash
openssl rand -hex 32
```

</CodeGroup>
**`GOOGLE_CLIENT_ID`** / **`GOOGLE_CLIENT_SECRET`** (optional): Enables Google sign-in.
**`GITHUB_CLIENT_ID`** / **`GITHUB_CLIENT_SECRET`** (optional): Enables GitHub sign-in.
**`LINKEDIN_CLIENT_ID`** / **`LINKEDIN_CLIENT_SECRET`** (optional): Enables LinkedIn sign-in.
**`BETTER_AUTH_API_KEY`** (optional): Enables Better Auth dashboard integrations.
**`BETTER_AUTH_URL`** (optional, advanced): Overrides auth base URL if it must differ from `APP_URL` (for split-host deployments).
**`BETTER_AUTH_SECRET`** (optional, advanced): Overrides `AUTH_SECRET` for Better Auth internals.
**Custom OAuth provider** (optional):
- **`OAUTH_PROVIDER_NAME`**: Display name in the UI
- **`OAUTH_CLIENT_ID`** / **`OAUTH_CLIENT_SECRET`**: Required for any custom OAuth provider
- **`OAUTH_DYNAMIC_CLIENT_REDIRECT_HOSTS`**: Comma-separated allowlist for extra dynamic OAuth redirect hosts/origins (HTTPS only, non-private hosts).
- **`OAUTH_SCOPES`**: Space-separated scopes (defaults to `openid profile email`)
Configure endpoints using **one** of these methods:
- **Option A — OIDC Discovery (preferred)**: Set `OAUTH_DISCOVERY_URL` to your provider's `.well-known/openid-configuration` URL
- **Option B — Manual URLs**: Set all three: `OAUTH_AUTHORIZATION_URL`, `OAUTH_TOKEN_URL`, and `OAUTH_USER_INFO_URL`
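For example, an OIDC-compliant provider can usually be wired up with discovery alone. This is a sketch only: the provider name, hostname, and client values below are placeholders, not real defaults:

```shell
# Hypothetical OIDC provider configured via discovery (Option A)
OAUTH_PROVIDER_NAME="Acme SSO"
OAUTH_CLIENT_ID="reactive-resume"
OAUTH_CLIENT_SECRET="super-secret-value"
OAUTH_DISCOVERY_URL="https://sso.example.com/.well-known/openid-configuration"
# OAUTH_SCOPES can stay empty to use the default "openid profile email"
```

With discovery configured, the authorization, token, and user-info endpoints are read from the provider's metadata, so the Option B variables can stay empty.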
- Email delivery is enabled only when **all** of `SMTP_HOST`, `SMTP_USER`, `SMTP_PASS`, and `SMTP_FROM` are set.
- **`SMTP_HOST`**: SMTP host (if empty, email sending is disabled).
- **`SMTP_PORT`**: Defaults to `587` in the app.
- **`SMTP_USER`** / **`SMTP_PASS`**: SMTP credentials.
- **`SMTP_FROM`**: Default from address (for example, `Reactive Resume <[email protected]>`).
- **`SMTP_SECURE`**: `"true"` or `"false"` (string). Match your provider settings.
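For local testing without a real mail provider, the SMTP settings can point at a mail-capture tool such as Mailpit. This is an assumption-laden sketch: it presumes a `mailpit` service running alongside the app on its default SMTP port 1025, and that the tool is configured to accept any credentials (since the app requires `SMTP_USER`/`SMTP_PASS` to be set):

```shell
# Hypothetical local-testing values; no TLS, dummy credentials
SMTP_HOST="mailpit"
SMTP_PORT="1025"
SMTP_USER="test"
SMTP_PASS="test"
SMTP_FROM="Reactive Resume <[email protected]>"
SMTP_SECURE="false"
```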
To update your Reactive Resume installation to the latest available version, follow these steps:

1. Back up your database and uploads (highly recommended before every update).

2. Pull the latest images for all services defined in your Docker Compose file:

   ```bash
   docker compose pull
   ```

3. Restart the containers to run the new images:

   ```bash
   docker compose up -d
   ```

4. Check the migration and startup logs after deploying:

   ```bash
   docker compose logs -f reactive-resume
   ```

5. (Optional) Remove old, unused Docker images to free up disk space:

   ```bash
   docker image prune -f
   ```

This process updates the app services and automatically runs database migrations on startup. If a migration fails, restore from your backup and fix the configuration before retrying.
Regular backups are essential to protect your data. Reactive Resume stores data in two places: the PostgreSQL database and file uploads (either local storage or S3).
Your PostgreSQL database contains all user accounts, resumes, and application data. For self-hosted deployments, you can use `pg_dump` to create periodic backups of your database and store them in a secure location. Many hosting providers also offer automated backup solutions for managed PostgreSQL instances, which handle scheduling, retention, and restoration for you.
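A minimal, cron-friendly sketch of such a backup, assuming the compose project from this guide (service `postgres`, user and database both `postgres`):

```shell
# Timestamped dump of the Reactive Resume database, gzip-compressed.
BACKUP_FILE="backup-$(date +%Y%m%d-%H%M%S).sql.gz"

# Guard so the dump only runs where this compose project is actually up.
if docker compose ps postgres >/dev/null 2>&1; then
  docker compose exec -T postgres pg_dump -U postgres -d postgres | gzip > "$BACKUP_FILE"
fi
```

Run it from the directory containing `compose.yml`, then move the resulting file off-host.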
If you're using local storage (the `./data` directory), include this directory in your regular backup routine. A simple approach is to use `rsync` or a similar tool to copy the directory to a remote server or cloud storage.
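A simple sketch of that routine; the backup destination in the comment is a placeholder:

```shell
# Archive the local uploads directory before shipping it off-host.
ARCHIVE="uploads-$(date +%Y%m%d).tar.gz"
if [ -d ./data ]; then
  tar -czf "$ARCHIVE" ./data
fi
# Copy the archive to a backup host (hypothetical destination):
# rsync -av "$ARCHIVE" backup@backup-host:/backups/reactive-resume/
```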
If you're using S3-compatible storage, consider enabling versioning on your bucket to protect against accidental deletions. Most S3 providers also support lifecycle rules for automatic cleanup of old versions and cross-region replication for disaster recovery.
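With the AWS CLI (pointed at a non-AWS provider via `--endpoint-url` if needed), enabling versioning is a one-liner. The bucket name below is a placeholder, and the command is skipped entirely when no credentials are configured:

```shell
# Hypothetical bucket; guarded so it only runs with working AWS credentials.
BUCKET="reactive-resume-uploads"
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled
fi
```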
Reactive Resume exposes a health check endpoint at `/api/health` that verifies the application and its dependencies. It checks the database, printer, and storage; if any one of them is unhealthy, the endpoint returns HTTP 503.
The Docker Compose configuration includes a health check that periodically calls the `/api/health` endpoint:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
  interval: 30s
  timeout: 10s
  retries: 3
```
When the health check fails, Docker marks the container as unhealthy. This status is visible when running `docker compose ps` or `docker ps`.
Most reverse proxies (such as Traefik, Caddy, or nginx with upstream health checks) can use Docker's health status to make routing decisions.
This is particularly useful in high-availability setups where you have multiple instances of Reactive Resume. If one instance becomes unhealthy (for example, it loses database, printer, or storage connectivity), the reverse proxy will stop routing traffic to it until it recovers.
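As a sketch, with Traefik's Docker provider the `reactive-resume` service could be exposed through labels like these (the hostname and router name are placeholders, not values from this guide):

```yaml
reactive-resume:
  # ...image, env_file, volumes, and healthcheck as in compose.yml above...
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.reactive-resume.rule=Host(`resume.example.com`)"
    - "traefik.http.services.reactive-resume.loadbalancer.server.port=3000"
```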
<Tip>
If you're using **Traefik**, it automatically respects Docker health checks when using the Docker provider. Unhealthy containers are excluded from routing without any additional configuration.
</Tip>

You can manually verify the health of your Reactive Resume instance:
```bash
# From outside the container
curl -f http://localhost:3000/api/health

# Check Docker's health status
docker compose ps
```
A healthy response returns HTTP 200. Any other response (or a connection failure) indicates a problem; inspect the JSON response body and the container logs to identify the failing dependency.