docs/getting-started/single-binary.md
This guide will help you get Cortex running in single-binary mode using Docker Compose. In this mode, all Cortex components run in a single process, making it perfect for learning, development, and testing.
Time to complete: ~15 minutes
This setup creates the following services:
```
┌─────────────┐    remote_write     ┌─────────────┐
│ Prometheus  │ ──────────────────> │   Cortex    │
│             │                     │  (single)   │
└─────────────┘                     └─────────────┘
                                       ▲      │
                               queries │      │ stores blocks
                                       │      ▼
┌─────────────┐                        │  ┌─────────────┐
│   Grafana   │ ───────────────────────┘  │  SeaweedFS  │
│   Perses    │                           │    (S3)     │
└─────────────┘                           └─────────────┘
```
Components:

- **Cortex**: a single binary running all Cortex components in one process
- **Prometheus**: scrapes metrics and forwards them to Cortex via remote_write
- **Grafana** and **Perses**: dashboards that query Cortex
- **SeaweedFS**: S3-compatible object storage for Cortex blocks
```sh
git clone https://github.com/cortexproject/cortex.git
cd cortex/docs/getting-started
```
The getting-started directory contains all the configuration files needed for this guide.
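As a rough sketch of what `docker-compose.yaml` wires together (image tags, commands, and service names here are illustrative; the file in the repository is authoritative):

```yaml
services:
  cortex:
    image: quay.io/cortexproject/cortex:latest
    command: ["-config.file=/etc/cortex/cortex-config.yaml", "-target=all"]
    volumes:
      - ./cortex-config.yaml:/etc/cortex/cortex-config.yaml
    ports:
      - "9009:9009"
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus-config.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    command: ["server", "-s3", "-s3.port=8333"]
    ports:
      - "8333:8333"
```

The `-target=all` flag is what puts Cortex into single-binary mode: every component runs inside the one process.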
```sh
docker compose up -d
```

This command starts all services in the background. Docker Compose will:

- pull the required container images
- create a shared network and volumes
- start Cortex, Prometheus, Grafana, Perses, and SeaweedFS
What's happening? Check the logs:

```sh
# View all logs
docker compose logs -f

# View Cortex logs only
docker compose logs -f cortex
```
After ~30 seconds, all services should be healthy. Verify by checking:

```sh
docker compose ps
```
You should see all services with status "Up" or "healthy".
Open these URLs in your browser:

- Cortex: http://localhost:9009
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (credentials: admin / admin)
- SeaweedFS S3: http://localhost:8333 (credentials: any / any)
Let's verify that metrics are flowing from Prometheus → Cortex → Grafana.
In Prometheus, check that `prometheus_remote_storage_samples_total` is increasing.

Test that Cortex is receiving metrics:
```sh
curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up" | jq
```
You should see JSON output with metrics data.
Note: The `X-Scope-OrgID` header specifies which tenant's data to query. Cortex is multi-tenant by default. Prometheus automatically adds this header when writing metrics via `remote_write`.
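The corresponding `remote_write` section in `prometheus-config.yaml` looks roughly like this (a sketch; the exact file in the repository is authoritative):

```yaml
remote_write:
  - url: http://cortex:9009/api/v1/push
    headers:
      X-Scope-OrgID: cortex
```

Every sample Prometheus forwards is tagged with this tenant ID, which is why the same `X-Scope-OrgID: cortex` value works when you query the data back out.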
Log in to Grafana with the credentials admin / admin and run the query `up` to confirm data is flowing. Pre-built dashboards (for example, Cortex / Writes) are available under Dashboards.
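For reference, a Grafana provisioning file like `grafana-datasource-docker.yaml` typically looks as follows; the datasource name and tenant header value here are illustrative:

```yaml
apiVersion: 1
datasources:
  - name: Cortex
    type: prometheus
    access: proxy
    url: http://cortex:9009/prometheus
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: cortex
```

The `httpHeaderName1`/`httpHeaderValue1` pair is how Grafana attaches the tenant header to every query it sends to Cortex.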
Cortex can evaluate PromQL recording rules and alerting rules, similar to Prometheus. This is optional but demonstrates an important Cortex feature.
What are these?

- **Recording rules** precompute frequently used PromQL expressions and store the results as new time series.
- **Alerting rules** evaluate conditions and fire alerts when those conditions are met.
The repository includes example rules in `rules.yaml` and `alerts.yaml`.
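A recording rule and an alerting rule in these files look roughly like this (rule names and expressions here are illustrative; see the files in the repository for the real definitions):

```yaml
groups:
  - name: example
    rules:
      # Recording rule: precompute the number of up targets per job
      - record: job:up:count
        expr: count by (job) (up)
      # Alerting rule: fire when a scrape target has been down for 5 minutes
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: warning
```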
For Linux users:

```sh
docker run --network host \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009
```
For macOS/Windows users:

```sh
docker run --network cortex-docs-getting-started_default \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  rules sync rules.yaml alerts.yaml --id cortex --address http://cortex:9009
```
Note: The `--id cortex` flag specifies the tenant ID. Cortex is multi-tenant, so rules are namespaced by tenant.
View rules in Grafana: Alerting → Alert rules
Or check via the API:

```sh
curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/rules" | jq
```
Cortex includes a multi-tenant Alertmanager that receives alerts from the ruler.
For Linux users:

```sh
docker run --network host \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009
```
For macOS/Windows users:

```sh
docker run --network cortex-docs-getting-started_default \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  alertmanager load alertmanager-config.yaml --id cortex --address http://cortex:9009
```
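The file being loaded is a standard Alertmanager configuration. A minimal sketch (the receiver name is a placeholder; the repository's `alertmanager-config.yaml` is authoritative) looks like:

```yaml
route:
  receiver: default
receivers:
  - name: default
    # A real setup would add email_configs, slack_configs, webhook_configs, etc.
```

Because the Alertmanager is multi-tenant, each tenant loads its own configuration under its own `--id`.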
Configure Alertmanager notification policies in Grafana: Alerting → Notification policies
Now that everything is running, try these experiments to learn how Cortex works:
Cortex runs all components in one process, so stopping Cortex simulates an ingester failure.
```sh
docker compose stop cortex
```

Observe:

- Queries against Cortex (and the Grafana dashboards backed by it) fail.
- Prometheus queues the samples it cannot deliver and keeps retrying `remote_write`.
Restart Cortex:

```sh
docker compose start cortex
```
Result: Prometheus catches up by sending queued samples. Check the Cortex / Writes dashboard to see the backlog being processed.
Cortex stores recent data (last ~2 hours) in memory and older data in object storage (S3).
Query recent metrics (served from ingester memory):

```sh
curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up" | jq
```

After 2+ hours, query older metrics (served from blocks in S3):

```sh
curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up[24h]" | jq
```
Observe: Both queries work! Cortex seamlessly queries both sources.
- In Prometheus: run the query `up`
- In Grafana (Cortex datasource): run the query `up`
Are they the same? Initially yes, but after Prometheus sends data to Cortex via remote_write, the two can diverge: Prometheus only keeps data for its local retention window, while Cortex retains older data in object storage.
Cortex uses a hash ring for consistent hashing of time series to ingesters.
View the ring status: http://localhost:9009/ring
In single-binary mode, you'll see one ingester. In microservices mode, you'd see multiple ingesters.
SeaweedFS stores Cortex blocks. You can inspect them using the S3 API.

List buckets:

```sh
curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333
```

List objects in the `cortex-blocks` bucket:

```sh
curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" "http://localhost:8333/cortex-blocks?list-type=2"
```
You'll see:

- a `cortex/` directory (the tenant ID)
- block directories named by ULID (e.g. `01J8KRQ7M8...`)
- within each block: an `index`, a `chunks/` directory, and a `meta.json`

Tip: You can also use the AWS CLI with SeaweedFS:
```sh
export AWS_ACCESS_KEY_ID=any
export AWS_SECRET_ACCESS_KEY=any
aws --endpoint-url=http://localhost:8333 s3 ls s3://cortex-blocks/
```
This setup uses several configuration files. Here's what each does:
| File | Purpose |
|---|---|
| `docker-compose.yaml` | Defines all services (Cortex, Prometheus, Grafana, SeaweedFS) |
| `cortex-config.yaml` | Cortex configuration (storage, limits, components) |
| `prometheus-config.yaml` | Prometheus configuration with `remote_write` to Cortex |
| `grafana-datasource-docker.yaml` | Grafana datasource pointing to Cortex |
| `rules.yaml` | Example recording rules |
| `alerts.yaml` | Example alerting rules |
| `alertmanager-config.yaml` | Alertmanager configuration |
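To illustrate the most important part of `cortex-config.yaml`: the blocks storage section points Cortex at SeaweedFS's S3 endpoint, roughly like this (a sketch with assumed field values; the file in the repository is authoritative):

```yaml
blocks_storage:
  backend: s3
  s3:
    endpoint: seaweedfs:8333
    bucket_name: cortex-blocks
    access_key_id: any
    secret_access_key: any
    insecure: true
```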
Want to customize? Edit these files and restart the affected service:

```sh
docker compose restart cortex
```
```sh
# Check logs
docker compose logs

# Check port conflicts
lsof -i :9009  # Cortex
lsof -i :9090  # Prometheus
lsof -i :3000  # Grafana
```
```sh
curl "http://localhost:9009/prometheus/api/v1/query?query=up"
```

The `--network host` flag doesn't work on macOS/Windows. Use the Docker network name instead:

```sh
docker run --network cortex-docs-getting-started_default ...
```
Increase Docker's memory limit to 4GB or more (Docker Desktop → Settings → Resources).
When you're done, stop and remove all services:

```sh
docker compose down -v
```

The `-v` flag removes volumes (stored data). Omit it to keep data between runs.
Congratulations! You've successfully run Cortex in single-binary mode. Here's what to explore next: