docs/v3/advanced/self-hosted.mdx
Running multiple Prefect server instances enables high availability and distributes load across your infrastructure. This guide covers configuration and deployment patterns for scaling self-hosted Prefect.
Multi-server deployments require a shared PostgreSQL database and a shared Redis instance. A scaled Prefect deployment typically includes:
```mermaid
%%{
  init: {
    'theme': 'neutral',
    'flowchart': {
      'curve': 'linear',
      'rankSpacing': 120,
      'nodeSpacing': 80
    }
  }
}%%
flowchart TB
    %% Style definitions
    classDef userClass fill:#ede7f6db,stroke:#4527a0db,stroke-width:2px
    classDef lbClass fill:#e3f2fddb,stroke:#1565c0db,stroke-width:2px
    classDef apiClass fill:#1860f2db,stroke:#1860f2db,stroke-width:2px
    classDef bgClass fill:#7c3aeddb,stroke:#7c3aeddb,stroke-width:2px
    classDef dataClass fill:#16a34adb,stroke:#16a34adb,stroke-width:2px
    classDef workerClass fill:#f59e0bdb,stroke:#f59e0bdb,stroke-width:2px

    %% Nodes
    subgraph clients[Client Side]
        direction TB
        Users[Users / UI / API Clients]:::userClass
        Workers[Workers poll any available API server<br/>Process / K8s / Docker / Serverless]:::workerClass
    end

    LB[Load Balancer<br/>NGINX / HAProxy / ALB<br/>Port 4200]:::lbClass

    subgraph servers[Prefect Server Components]
        direction TB
        subgraph api[API Servers - Horizontal Scaling]
            direction LR
            API1[API Server 1<br/>--no-services]:::apiClass
            API2[API Server 2<br/>--no-services]:::apiClass
            API3[API Server N...<br/>--no-services]:::apiClass
        end
        subgraph bg[Background Services - Horizontal Scaling]
            direction LR
            BG1[Background Services 1<br/>prefect server services start]:::bgClass
            BG2[Background Services 2<br/>prefect server services start]:::bgClass
            BG3[Background Services N...<br/>prefect server services start]:::bgClass
        end
    end

    subgraph data[Data Layer]
        direction LR
        PG[(PostgreSQL<br/>• Flow/Task State<br/>• Configuration<br/>• History)]:::dataClass
        Redis[(Redis<br/>• Events<br/>• Automations<br/>• Real-time Updates)]:::dataClass
    end

    %% Connections
    Users --> |HTTPS| LB
    LB --> |Round Robin| api
    api --> |Read/Write| PG
    api --> |Publish| Redis
    bg --> |Read/Write| PG
    bg --> |Subscribe / Coordinate| Redis
    Workers -.-> |Poll Work| api
```
Configure PostgreSQL as your database backend:
```bash
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://user:password@host:5432/prefect"
```
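As a sanity check before starting the server, you can validate the shape of that URL. A minimal sketch using the Python standard library; the helper name and its checks are this guide's illustration, not part of Prefect:

```python
from urllib.parse import urlsplit


def check_prefect_db_url(url: str) -> None:
    """Sanity-check the shape of a Prefect PostgreSQL connection URL.

    Illustration only -- Prefect performs its own validation on startup.
    """
    parts = urlsplit(url)
    # Prefect's async server requires the asyncpg driver.
    if parts.scheme != "postgresql+asyncpg":
        raise ValueError(f"expected postgresql+asyncpg scheme, got {parts.scheme!r}")
    if not parts.hostname or not parts.path.lstrip("/"):
        raise ValueError("URL must include a host and a database name")


check_prefect_db_url("postgresql+asyncpg://user:password@host:5432/prefect")  # passes
```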
To use AWS IAM authentication for your PostgreSQL database (experimental):

1. Install the AWS integration:

```bash
pip install prefect-aws
```

2. Create an IAM policy with the `rds-db:connect` permission and attach it to your IAM user or role.

3. Enable experimental plugins and IAM authentication:

```bash
export PREFECT_EXPERIMENTS_PLUGINS_ENABLED=true
export PREFECT_INTEGRATIONS_AWS_RDS_IAM_ENABLED=true
# Optional: export PREFECT_INTEGRATIONS_AWS_RDS_IAM_REGION_NAME=us-east-1
```

4. Configure your connection URL, omitting the password (an IAM token is used instead):

```bash
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://iam_user@host:5432/prefect"
```
Configure Redis as your server's message broker, cache, and lease storage:
```bash
export PREFECT_MESSAGING_BROKER="prefect_redis.messaging"
export PREFECT_MESSAGING_CACHE="prefect_redis.messaging"
export PREFECT_SERVER_EVENTS_CAUSAL_ORDERING="prefect_redis.ordering"
export PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE="prefect_redis.lease_storage"
export PREFECT_REDIS_MESSAGING_HOST="redis-host"
export PREFECT_REDIS_MESSAGING_PORT="6379"
export PREFECT_REDIS_MESSAGING_DB="0"
```
If your Redis instance requires authentication, you can configure a username and password:

```bash
export PREFECT_REDIS_MESSAGING_USERNAME="marvin"
export PREFECT_REDIS_MESSAGING_PASSWORD="dontpanic!"
```

For Redis instances that require an encrypted connection, you can enable SSL/TLS:

```bash
export PREFECT_REDIS_MESSAGING_SSL="true"
```

Alternatively, configure the Redis connection with a single URL instead of individual fields. When `PREFECT_REDIS_MESSAGING_URL` is set, it takes precedence and the individual host, port, db, username, password, and SSL fields are ignored:

```bash
export PREFECT_REDIS_MESSAGING_URL="redis://username:password@redis-host:6379/0"
```

Use `rediss://` for TLS connections:

```bash
export PREFECT_REDIS_MESSAGING_URL="rediss://redis-host:6379/0"
```
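The precedence rule can be illustrated with a small resolver. This mirrors the behavior described above, not Prefect's internal code, and the function name is invented for this sketch:

```python
from urllib.parse import urlsplit


def resolve_redis_target(env: dict) -> tuple:
    """Resolve (host, port, db) the way the docs describe:
    PREFECT_REDIS_MESSAGING_URL, when set, wins over the
    individual field settings. Illustrative sketch only.
    """
    url = env.get("PREFECT_REDIS_MESSAGING_URL")
    if url:
        parts = urlsplit(url)
        # The path component ("/0") carries the database number.
        db = int(parts.path.lstrip("/") or 0)
        return parts.hostname, parts.port or 6379, db
    return (
        env.get("PREFECT_REDIS_MESSAGING_HOST", "localhost"),
        int(env.get("PREFECT_REDIS_MESSAGING_PORT", "6379")),
        int(env.get("PREFECT_REDIS_MESSAGING_DB", "0")),
    )
```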
Prefect uses Docket to coordinate background services like the scheduler, late run detection, and automation triggers. By default, Docket uses in-memory storage (memory://), which only works for single-server deployments.
For high-availability deployments, configure Docket to use Redis:
```bash
export PREFECT_SERVER_DOCKET_URL="redis://redis-host:6379/0"
```

If your Redis instance requires authentication:

```bash
export PREFECT_SERVER_DOCKET_URL="redis://username:password@redis-host:6379/0"
```

For Redis instances that require SSL/TLS:

```bash
export PREFECT_SERVER_DOCKET_URL="rediss://redis-host:6379/0"
```
For optimal performance, run API servers and background services separately:
API servers (multiple instances):

```bash
prefect server start --host 0.0.0.0 --port 4200 --no-services
```

Background services:

```bash
prefect server services start
```
For high availability and throughput, you can run multiple prefect server services start processes in parallel. Prefect uses Docket (backed by Redis) to coordinate work across background service processes so that periodic work (like scheduling, late run detection, and automation trigger evaluation) runs exactly once per interval even when multiple processes are running.
To run multiple background service processes:
1. Set `PREFECT_SERVER_DOCKET_URL` to point at a shared Redis instance (see the Docket URL configuration above). The default in-memory backend (`memory://`) is only safe for a single process.
2. Run one `prefect server services start` process per replica. Each replica runs the same set of enabled services; Docket ensures only one replica picks up each scheduled run.

All services run together in a single `prefect server services start` process by default. To dedicate a process to a specific subset of services (for example, to scale a noisy neighbor independently), disable the services you don't want to run on that process with their `*_ENABLED` environment variable.
List all services and their enable/disable environment variables:

```bash
prefect server services ls
```
For example, to run a process that only handles the scheduler and late run detection:
```bash
export PREFECT_SERVER_SERVICES_CANCELLATION_CLEANUP_ENABLED=false
export PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_ENABLED=false
export PREFECT_SERVER_SERVICES_FOREMAN_ENABLED=false
export PREFECT_SERVER_SERVICES_TRIGGERS_ENABLED=false
export PREFECT_SERVER_SERVICES_EVENT_PERSISTER_ENABLED=false
export PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_ENABLED=false
# ...disable any other services you don't want on this process

prefect server services start
```
Then run another process with the complementary set of services enabled. As long as every enabled service is running on at least one process (and all processes share the same Docket Redis), every enabled service continues to operate.
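The complementary split can be sketched as a small helper that builds the two environment maps and guarantees every service stays enabled on exactly one process. The helper is illustrative, not part of Prefect, and the service-name list below is an assumption drawn from the example above (run `prefect server services ls` for the authoritative list):

```python
# Build complementary *_ENABLED environment maps for two background-service
# processes, so that every service is enabled on exactly one of them.
ALL_SERVICES = {
    "SCHEDULER", "LATE_RUNS", "CANCELLATION_CLEANUP", "PAUSE_EXPIRATIONS",
    "FOREMAN", "TRIGGERS", "EVENT_PERSISTER", "TASK_RUN_RECORDER",
}


def split_env(process_a: set) -> tuple:
    """Process A runs only the services in `process_a`; process B runs the rest."""
    def env_for(enabled: set) -> dict:
        return {
            f"PREFECT_SERVER_SERVICES_{name}_ENABLED": str(name in enabled).lower()
            for name in sorted(ALL_SERVICES)
        }
    return env_for(process_a), env_for(ALL_SERVICES - process_a)


# Process A: scheduler + late run detection; process B: everything else.
env_a, env_b = split_env({"SCHEDULER", "LATE_RUNS"})
```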
<Note>
Some services maintain their own at-least-once semantics at the database or Redis level rather than relying on Docket's run-once guarantee. Running multiple processes with the same service enabled is supported, but starting with one process per service (and scaling up only when you observe a specific bottleneck) keeps operations simple.
</Note>

Disable automatic migrations in multi-server deployments:
```bash
export PREFECT_API_DATABASE_MIGRATE_ON_START="false"
```

Run migrations separately before deployment:

```bash
prefect server database upgrade -y
```
Configure health checks for your load balancer:
Point the load balancer at Prefect's health endpoint, `/api/health`, which returns `{"status": "healthy"}` when the server is up. Example NGINX configuration:
```nginx
upstream prefect_api {
    least_conn;
    server prefect-api-1:4200 max_fails=3 fail_timeout=30s;
    server prefect-api-2:4200 max_fails=3 fail_timeout=30s;
    server prefect-api-3:4200 max_fails=3 fail_timeout=30s;
}

server {
    listen 4200;

    location /api/health {
        proxy_pass http://prefect_api;
        proxy_connect_timeout 1s;
        proxy_read_timeout 1s;
    }

    location / {
        proxy_pass http://prefect_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
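A probe can interpret the health response described above with a few lines of logic. A sketch for this guide (the function name and the fail-closed rules are assumptions): treat anything other than a 200 response carrying `{"status": "healthy"}` as unhealthy:

```python
import json


def is_healthy(status_code: int, body: str) -> bool:
    """Interpret a response from Prefect's /api/health endpoint.

    Fail closed: a non-200 status, an unparseable body, or any
    status value other than "healthy" counts as unhealthy.
    """
    if status_code != 200:
        return False
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False
```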
When hosting Prefect behind a reverse proxy, ensure proper header forwarding:
```nginx
server {
    listen 80;
    server_name prefect.example.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name prefect.example.com;

    ssl_certificate /path/to/ssl/certificate.pem;
    ssl_certificate_key /path/to/ssl/certificate_key.pem;

    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Authentication headers
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;

        proxy_pass http://prefect_api;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://prefect_api;
    }
}
```
When self-hosting the UI behind a proxy:
- `PREFECT_UI_API_URL`: Connection URL from the UI to the API
- `PREFECT_UI_SERVE_BASE`: Base URL path at which to serve the UI
- `PREFECT_UI_URL`: URL for clients to access the UI

For self-signed certificates:
Add the certificate to your system bundle and set:

```bash
export SSL_CERT_FILE=/path/to/certificate.pem
```

Or disable verification (testing only):

```bash
export PREFECT_API_TLS_INSECURE_SKIP_VERIFY=True
```
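In effect, trusting a custom certificate means the client still verifies TLS connections but accepts your CA in addition. A standard-library sketch of that idea (the certificate path is a placeholder, and this is not Prefect's client code):

```python
import ssl

# Build a default context: verification stays on and hostnames are checked.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Trusting a self-signed certificate adds it to the verify locations
# rather than disabling verification (placeholder path, commented out):
# ctx.load_verify_locations("/path/to/certificate.pem")
```

Disabling verification entirely, as `PREFECT_API_TLS_INSECURE_SKIP_VERIFY` does, removes both checks, which is why it should stay confined to testing.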
Prefect respects standard proxy environment variables:
```bash
export HTTPS_PROXY=http://proxy.example.com:8080
export HTTP_PROXY=http://proxy.example.com:8080
export NO_PROXY=localhost,127.0.0.1,.internal
```
```yaml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: prefect
      POSTGRES_PASSWORD: prefect
      POSTGRES_DB: prefect
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready -h localhost -U $$POSTGRES_USER
      interval: 2s
      timeout: 5s
      retries: 15

  redis:
    image: redis:7

  migrate:
    image: prefecthq/prefect:3-latest
    depends_on:
      postgres:
        condition: service_healthy
    command: prefect server database upgrade -y
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect

  prefect-api:
    image: prefecthq/prefect:3-latest
    depends_on:
      migrate:
        condition: service_completed_successfully
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    deploy:
      replicas: 3
    command: prefect server start --host 0.0.0.0 --no-services
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
      PREFECT_API_DATABASE_MIGRATE_ON_START: "false"
      PREFECT_MESSAGING_BROKER: prefect_redis.messaging
      PREFECT_MESSAGING_CACHE: prefect_redis.messaging
      PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering
      PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage
      PREFECT_REDIS_MESSAGING_HOST: redis
      PREFECT_REDIS_MESSAGING_PORT: "6379"
      PREFECT_SERVER_DOCKET_URL: redis://redis:6379/1
    ports:
      - "4200-4202:4200"  # Maps a different host port to each replica

  prefect-background:
    image: prefecthq/prefect:3-latest
    depends_on:
      migrate:
        condition: service_completed_successfully
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    deploy:
      replicas: 2
    command: prefect server services start
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
      PREFECT_API_DATABASE_MIGRATE_ON_START: "false"
      PREFECT_MESSAGING_BROKER: prefect_redis.messaging
      PREFECT_MESSAGING_CACHE: prefect_redis.messaging
      PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering
      PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage
      PREFECT_REDIS_MESSAGING_HOST: redis
      PREFECT_REDIS_MESSAGING_PORT: "6379"
      PREFECT_SERVER_DOCKET_URL: redis://redis:6379/1

volumes:
  postgres_data:
```
When running migrations on large database instances (especially where tables like events, flow_runs, or task_runs can reach millions of rows), the default database timeout of 10 seconds may not be sufficient for creating indexes.
If you encounter a TimeoutError during migrations, increase the database timeout:
```bash
# Set the timeout to 10 minutes (adjust based on your database size)
export PREFECT_API_DATABASE_TIMEOUT=600

# Then run the migration
prefect server database upgrade -y
```
For Docker deployments:
```bash
docker run -e PREFECT_API_DATABASE_TIMEOUT=600 prefecthq/prefect:3-latest prefect server database upgrade -y
```
If a migration times out while creating indexes, you may need to manually complete it. For example, if migration 7a73514ca2d6 fails:
First, check which indexes were partially created:

```sql
SELECT indexname FROM pg_indexes WHERE tablename = 'events' AND indexname LIKE 'ix_events%';
```
Manually create the missing indexes using `CONCURRENTLY` to avoid blocking writes (note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block):

```sql
-- Drop any partial indexes from the failed migration
DROP INDEX IF EXISTS ix_events__event_related_occurred;
DROP INDEX IF EXISTS ix_events__related_resource_ids;

-- Create the new indexes
CREATE INDEX CONCURRENTLY ix_events__related_gin ON events USING gin(related);
CREATE INDEX CONCURRENTLY ix_events__event_occurred ON events (event, occurred);
CREATE INDEX CONCURRENTLY ix_events__related_resource_ids_gin ON events USING gin(related_resource_ids);
```
Mark the migration as complete:
```sql
UPDATE alembic_version SET version_num = '7a73514ca2d6';
```
Monitor your multi-server deployment: