SETUP.md
English | Español | 简体中文 | 日本語
Welcome! This guide will walk you through setting up Resume Matcher on your local machine. Whether you're a developer looking to contribute or someone who wants to run the application locally, this guide has you covered.
Before you begin, make sure you have the following installed on your system:
| Tool | Minimum Version | How to Check | Installation |
|---|---|---|---|
| Python | 3.13+ | python --version | python.org |
| Node.js | 22+ | node --version | nodejs.org |
| npm | 10+ | npm --version | Comes with Node.js |
| uv | Latest | uv --version | astral.sh/uv |
| Git | Any | git --version | git-scm.com |
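To confirm everything is in place, you can run the version checks in one go (a quick sketch; expect `uv` to be missing until you install it in the next step):

```bash
# Each command prints a version; "command not found" means that tool still needs installing
python --version   # may be python3 on macOS/Linux
node --version
npm --version
uv --version
git --version
```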
Resume Matcher uses uv for fast, reliable Python dependency management. Install it with:
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or via pip
pip install uv
```
If you're familiar with development tools and want to get running quickly:
```bash
# 1. Clone the repository
git clone https://github.com/srbhr/Resume-Matcher.git
cd Resume-Matcher

# 2. Start the backend (Terminal 1)
cd apps/backend
cp .env.example .env    # Create config from template
uv sync                 # Install Python dependencies
uv run uvicorn app.main:app --reload --port 8000

# 3. Start the frontend (Terminal 2)
cd apps/frontend
npm install     # Install Node.js dependencies
npm run dev     # Start the dev server
```
Open your browser to http://localhost:3000 and you're ready to go!
Note: You'll need to configure an AI provider before using the app. See Configuring Your AI Provider below.
First, get the code on your machine:
```bash
git clone https://github.com/srbhr/Resume-Matcher.git
cd Resume-Matcher
```
The backend is a Python FastAPI application that handles AI processing, resume parsing, and data storage.
```bash
cd apps/backend
cp .env.example .env
```
Open the `.env` file with your preferred text editor:

```bash
# macOS/Linux
nano .env

# Or use any editor you prefer
code .env    # VS Code
```
The most important setting is your AI provider. Here's a minimal configuration for OpenAI:
```env
LLM_PROVIDER=openai
LLM_MODEL=gpt-5-nano-2025-08-07
LLM_API_KEY=sk-your-api-key-here

# Keep these as default for local development
HOST=0.0.0.0
PORT=8000
FRONTEND_BASE_URL=http://localhost:3000
CORS_ORIGINS=["http://localhost:3000", "http://127.0.0.1:3000"]
```
Next, install the dependencies:

```bash
uv sync
```
This creates a virtual environment and installs all required packages.
Now start the development server:

```bash
uv run uvicorn app.main:app --reload --port 8000
```
You should see output like:
```text
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process
```
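Before moving on, you can also hit the health endpoint to confirm the API responds (this assumes the backend exposes the same `/api/v1/health` route locally that the Docker section lists below):

```bash
# A JSON success response means the backend is up
curl http://localhost:8000/api/v1/health
```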
Keep this terminal running and open a new terminal for the frontend.
The frontend is a Next.js application that provides the user interface.
```bash
cd apps/frontend
```
Optionally, create a local environment file. This is only needed if your backend runs on a different port:
```bash
cp .env.sample .env.local
```
Install the dependencies and start the dev server:

```bash
npm install
npm run dev
```
You should see:
```text
▲ Next.js 16.x.x (Turbopack)
- Local: http://localhost:3000
```
Open http://localhost:3000 in your browser. You should see the Resume Matcher dashboard!
Resume Matcher supports multiple AI providers. You can configure your provider through the Settings page in the app, or by editing the backend .env file.
| Provider | Configuration | Get API Key |
|---|---|---|
| OpenAI | `LLM_PROVIDER=openai`<br>`LLM_MODEL=gpt-5-nano-2025-08-07` | platform.openai.com |
| Anthropic | `LLM_PROVIDER=anthropic`<br>`LLM_MODEL=claude-haiku-4-5-20251001` | console.anthropic.com |
| Google Gemini | `LLM_PROVIDER=gemini`<br>`LLM_MODEL=gemini/gemini-3-flash-preview` | aistudio.google.com |
| OpenRouter | `LLM_PROVIDER=openrouter`<br>`LLM_MODEL=deepseek/deepseek-chat` | openrouter.ai |
| DeepSeek | `LLM_PROVIDER=deepseek`<br>`LLM_MODEL=deepseek-chat` | platform.deepseek.com |
Example .env for Anthropic:
```env
LLM_PROVIDER=anthropic
LLM_MODEL=claude-haiku-4-5-20251001
LLM_API_KEY=sk-ant-your-key-here
```
Want to run AI models locally without API costs? Use Ollama!
Download and install Ollama from ollama.com, then pull a model:

```bash
ollama pull gemma3:4b
```
Other good options: llama3.2, mistral, codellama, neural-chat
Point your backend `.env` at Ollama:

```env
LLM_PROVIDER=ollama
LLM_MODEL=gemma3:4b
LLM_API_BASE=http://localhost:11434
# LLM_API_KEY is not needed for Ollama
```
Make sure the Ollama server is running:

```bash
ollama serve
```
Ollama typically starts automatically after installation.
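You can confirm Ollama is reachable and that your model downloaded by querying its standard REST API:

```bash
# Lists locally available models as JSON; gemma3:4b should appear if the pull succeeded
curl http://localhost:11434/api/tags
```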
Prefer containerized deployment? Resume Matcher includes Docker support.
```bash
# Start the container from a published image
docker compose up -d

# View logs
docker compose logs -f

# Stop the container
docker compose down

# Change host port only (container stays on 3000)
PORT=4000 docker compose up -d
```
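If you want the port override to stick without prefixing every command, Docker Compose also reads variables from a `.env` file in the directory you run it from (standard Compose behavior; this assumes the compose file maps `${PORT:-3000}` to the container port, which the `PORT=4000` override above implies):

```bash
# Persist the host-port override for future `docker compose` runs
echo "PORT=4000" >> .env
docker compose up -d
```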
| Variable | Default | Description |
|---|---|---|
| `PORT` | 3000 | Host port mapped to container port 3000 |
| `LOG_LEVEL` | INFO | Application-wide Python/Uvicorn log level (ERROR, WARNING, INFO, DEBUG) |
| `LOG_LLM` | WARNING | LiteLLM log level (ERROR, WARNING, INFO, DEBUG) |
| `LLM_PROVIDER` | openai | AI provider (openai, anthropic, gemini, etc.) |
| `LLM_MODEL` | — | Model to use (configured via Settings UI) |
| `LLM_API_KEY` | — | API key (recommended: configure via Settings UI) |
| `LLM_API_BASE` | — | Custom API endpoint (for Ollama or proxies) |
Note: Changes to `LOG_LEVEL` and `LOG_LLM` require a container restart to take effect.
To use Ollama running on your host machine:
```bash
LLM_API_BASE=http://host.docker.internal:11434 docker compose up -d
```
Then configure Ollama as your provider in the Settings UI.
The container supports `*_FILE` variants of its environment variables, following the Docker secrets convention. For sensitive values, you can mount a secret file and point to it:
```bash
LLM_API_KEY_FILE=/run/secrets/llm_api_key docker compose up -d
```
Supported `*_FILE` variables:
| Variable | `*_FILE` variant |
|---|---|
| `LOG_LEVEL` | `LOG_LEVEL_FILE` |
| `LOG_LLM` | `LOG_LLM_FILE` |
| `LLM_PROVIDER` | `LLM_PROVIDER_FILE` |
| `LLM_MODEL` | `LLM_MODEL_FILE` |
| `LLM_API_KEY` | `LLM_API_KEY_FILE` |
| `LLM_API_BASE` | `LLM_API_BASE_FILE` |
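For example, to supply the model name from a secret file instead of a plain variable, use the `*_FILE` form from the table (mirroring the `LLM_API_KEY_FILE` example above; the secret path is whatever you mounted):

```bash
LLM_MODEL_FILE=/run/secrets/llm_model docker compose up -d
```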
Rules:
- Set either the plain variable or its `*_FILE` variant, not both.

You can tune logs globally and for LiteLLM separately:
```bash
LOG_LEVEL=INFO LOG_LLM=DEBUG docker compose up -d
```
Security warning: `LOG_LLM=DEBUG` causes LiteLLM to log API keys in plaintext. Do not use `DEBUG` level in production or shared environments. The default `WARNING` is safe.
Note: LiteLLM also reads the `LITELLM_LOG` environment variable internally to control handler-level filtering, while `LOG_LLM` sets the logger level. Both must allow a message for it to appear. If you set `LITELLM_LOG` per the LiteLLM docs, make sure `LOG_LLM` is set to an equal or lower (more verbose) level.
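In practice, that means if you follow LiteLLM's docs and set `LITELLM_LOG`, keep `LOG_LLM` at least as verbose so messages pass both filters (and mind the security warning above before doing this outside local development):

```bash
# Both knobs at DEBUG so LiteLLM debug output actually reaches the logs
LITELLM_LOG=DEBUG LOG_LLM=DEBUG docker compose up -d
```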
The container serves the frontend on `/` and the API on `/api`, and persists your data in a named volume (`resume-data`). Once the container is running, open your browser:
| URL | Description |
|---|---|
| http://localhost:3000 | Main application (Dashboard) |
| http://localhost:3000/settings | Configure AI provider |
| http://localhost:3000/api/v1/health | Backend health check |
| http://localhost:3000/docs | Interactive API documentation |
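To check from the command line instead of a browser:

```bash
# Should return a success response while the container is healthy
curl http://localhost:3000/api/v1/health
```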
Handy commands for day-to-day backend work:

```bash
cd apps/backend

# Start development server (with auto-reload)
uv run uvicorn app.main:app --reload --port 8000

# Start production server
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000

# Install dependencies
uv sync

# Install with dev dependencies (for testing)
uv sync --group dev

# Run tests
uv run pytest

# Check if database needs reset (stored as JSON files)
ls -la data/
```
And for the frontend:

```bash
cd apps/frontend

# Start development server (with Turbopack for fast refresh)
npm run dev

# Build for production
npm run build

# Start production server
npm run start

# Run linter
npm run lint

# Format code with Prettier
npm run format

# Run on a different port
npm run dev -- -p 3001
```
Resume Matcher uses TinyDB (JSON file storage). All data lives in `apps/backend/data/`:

```bash
# View database files
ls apps/backend/data/

# Backup your data
cp -r apps/backend/data apps/backend/data-backup

# Reset everything (start fresh)
rm -rf apps/backend/data
```
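Since TinyDB stores everything as plain JSON, you can inspect the raw data with standard tools (this assumes `jq` is installed; the filenames under `data/` depend on what the app has created so far):

```bash
# Pretty-print every database file in turn
for f in apps/backend/data/*.json; do
  echo "== $f =="
  jq . "$f"
done
```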
If something goes wrong, here are fixes for the most common errors.

Error: `ModuleNotFoundError`
Make sure you're running commands through uv:
```bash
uv run uvicorn app.main:app --reload
```
Error: `LLM_API_KEY not configured`
Check that your `.env` file contains a valid API key for your chosen provider.
Error: `ECONNREFUSED` when loading pages
The backend isn't running. Start it first:
```bash
cd apps/backend && uv run uvicorn app.main:app --reload
```
Error: Build or TypeScript errors
Clear the Next.js cache:
```bash
rm -rf apps/frontend/.next
npm run dev
```
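If clearing the cache doesn't help, a full reinstall of the frontend dependencies is a common next step (general Next.js troubleshooting, not specific to this repo):

```bash
cd apps/frontend
rm -rf node_modules .next
npm install
npm run dev
```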
Error: Cannot connect to frontend for PDF generation
Your backend can't reach the frontend. Check that:
- `FRONTEND_BASE_URL` in `.env` matches your frontend URL
- `CORS_ORIGINS` includes your frontend URL

If your frontend runs on port 3001:

```env
FRONTEND_BASE_URL=http://localhost:3001
CORS_ORIGINS=["http://localhost:3001", "http://127.0.0.1:3001"]
```
Error: Connection refused to `localhost:11434`
Ollama isn't running or the model is missing:
- Check Ollama is running: `ollama list`
- Start it if needed: `ollama serve`
- Pull the model: `ollama pull gemma3:4b`

For reference, here's how the project is laid out:

```text
Resume-Matcher/
├── apps/
│   ├── backend/                # Python FastAPI backend
│   │   ├── app/
│   │   │   ├── main.py         # Application entry point
│   │   │   ├── config.py       # Environment configuration
│   │   │   ├── database.py     # TinyDB wrapper
│   │   │   ├── llm.py          # AI provider integration
│   │   │   ├── routers/        # API endpoints
│   │   │   ├── services/       # Business logic
│   │   │   ├── schemas/        # Data models
│   │   │   └── prompts/        # LLM prompt templates
│   │   ├── data/               # Database storage (auto-created)
│   │   ├── .env.example        # Environment template
│   │   └── pyproject.toml      # Python dependencies
│   │
│   └── frontend/               # Next.js React frontend
│       ├── app/                # Pages (dashboard, builder, etc.)
│       ├── components/         # Reusable React components
│       ├── lib/                # Utilities and API client
│       ├── .env.sample         # Environment template
│       └── package.json        # Node.js dependencies
│
├── docs/                       # Additional documentation
├── docker-compose.yml          # Docker configuration
├── Dockerfile                  # Container build instructions
└── README.md                   # Project overview
```
Stuck? These documents go deeper:
| Document | Description |
|---|---|
| backend-guide.md | Backend architecture and API details |
| frontend-workflow.md | User flow and component architecture |
| style-guide.md | UI design system (Swiss International Style) |
Happy resume building! If you find Resume Matcher helpful, consider starring the repo and joining our Discord.