operator/DOCKER-COMPOSE.md
This docker-compose configuration runs a local DataHaven network. Directory layout:

```text
operator/
├── docker-compose.yml        # Main compose configuration
├── Dockerfile                # Node image
├── scripts/
│   ├── docker-entrypoint.sh  # Unified key injection entrypoint
│   └── docker-prepare.sh     # Build preparation script
└── DOCKER-COMPOSE.md         # This file
```
Before running docker-compose, you need to build the DataHaven node binary. The easiest way is the prepare script:

```bash
# For development (faster blocks with the fast-runtime feature)
./scripts/docker-prepare.sh --fast

# For production
./scripts/docker-prepare.sh
```

Alternatively, build manually with cargo:

```bash
# For development (faster blocks with the fast-runtime feature)
cargo build --release --features fast-runtime

# For production
cargo build --release

# Copy the binary to the expected location
mkdir -p build
cp target/release/datahaven-node build/
```
The binary is output to `target/release/datahaven-node` and copied to `build/datahaven-node`.
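A small pre-flight check (a hypothetical helper, not part of the repo) can confirm the binary is in place before starting compose:

```bash
# Pre-flight: make sure the node binary exists and is executable before
# `docker-compose up`. BIN is the path this compose setup expects.
BIN="build/datahaven-node"
if [ -x "$BIN" ]; then
  echo "ok: $BIN is ready"
else
  echo "missing: $BIN (run ./scripts/docker-prepare.sh first)"
fi
```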
Once the binary is built and copied, start the network:

```bash
# Start all nodes
docker-compose up -d

# View logs for all nodes
docker-compose logs -f

# View logs for a specific node
docker-compose logs -f alice
docker-compose logs -f bob
docker-compose logs -f msp
docker-compose logs -f bsp01
docker-compose logs -f bsp02
docker-compose logs -f postgres
docker-compose logs -f indexer
docker-compose logs -f fisherman
```
All nodes automatically inject the required keys on startup using the unified `docker-entrypoint.sh` script.
Validators require 4 keys:

- `gran` - ed25519 - Finality gadget
- `babe` - sr25519 - Block authoring
- `imon` - sr25519 - Validator heartbeat
- `beef` - ecdsa - Bridge consensus

Storage providers (both MSP and BSP) require 1 key:

- `bcsv` - ecdsa - Storage provider identity

Fisherman nodes require 1 key:

- `bcsv` - ecdsa - Storage provider identity

Keys are derived from a test seed phrase using the pattern `<seed>//<NodeName>` (e.g., `//Alice`, `//Bob`, `//Charlie`, `//Dave`, `//Eve`, `//Gustavo`).
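The derivation paths above are plain string concatenations. A quick sketch (the seed below is the well-known Substrate development phrase, used here only as a stand-in; the actual default comes from the `SEED` variable in docker-compose.yml):

```bash
# Build the derivation path for each node from a seed phrase.
# This seed is the public Substrate dev phrase, NOT this project's default.
SEED="bottom drive obey lake curtain smoke basket hold race lonely fit walk"
for NODE in Alice Bob Charlie Dave Eve Gustavo; do
  echo "${SEED}//${NODE}"
done
```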
⚠️ Security Warning: The default seed phrase is for development only. Never use this in production! To use custom seeds, modify the `SEED` environment variable in docker-compose.yml.
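One way to keep a custom seed out of the committed file is a compose override (a sketch; the `alice` service name matches this setup, while `DATAHAVEN_SEED` is an arbitrary host variable name chosen for illustration):

```yaml
# docker-compose.override.yml (sketch): inject a custom seed per environment
# instead of editing docker-compose.yml directly.
services:
  alice:
    environment:
      SEED: "${DATAHAVEN_SEED:?set DATAHAVEN_SEED in your shell}"
```

Compose merges this file with docker-compose.yml automatically when both sit in the same directory.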
All nodes are accessible on the following ports:
| Node      | RPC (WS)              | HTTP                    | P2P               |
|-----------|-----------------------|-------------------------|-------------------|
| Alice     | ws://localhost:9944   | http://localhost:9615   | localhost:30333   |
| Bob       | ws://localhost:9945   | http://localhost:9616   | localhost:30334   |
| MSP       | ws://localhost:9946   | http://localhost:9617   | localhost:30335   |
| BSP01     | ws://localhost:9947   | http://localhost:9618   | localhost:30336   |
| BSP02     | ws://localhost:9948   | http://localhost:9619   | localhost:30337   |
| Indexer   | ws://localhost:9949   | http://localhost:9620   | localhost:30338   |
| Fisherman | ws://localhost:9950   | http://localhost:9621   | localhost:30339   |

PostgreSQL listens on `localhost:5432` with database `datahaven`, user `indexer`, password `indexer` (connection string: `postgresql://indexer:indexer@localhost:5432/datahaven`).

All nodes run on a shared Docker network (`datahaven-network`). All nodes use:

- `--discover-local` for automatic peer discovery via mDNS
- `--unsafe-force-node-key-generation` for automatic node key generation

The validators (Alice and Bob) produce blocks, while the storage providers (MSP and BSPs) provide storage services.
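As a quick liveness check, Substrate's standard `system_health` RPC method can be queried once a node is up (a sketch; it assumes Alice's RPC port also serves HTTP, which is the case on recent Substrate-based nodes):

```bash
# Query system_health on Alice's RPC port; fall back to a message if the
# node is not reachable (e.g. before `docker-compose up`).
PAYLOAD='{"id":1,"jsonrpc":"2.0","method":"system_health","params":[]}'
curl -s --max-time 3 -H 'Content-Type: application/json' \
  -d "$PAYLOAD" http://localhost:9944 \
  || echo "node not reachable on :9944"
```

A healthy node reports its peer count and sync status in the JSON response.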
Note: All nodes use libp2p as the network backend (`--network-backend=libp2p`).
Important: On Docker Desktop for macOS, you must use the experimental DockerVMM virtualization framework for proper networking support.
To enable DockerVMM:
Note: The default Apple Virtualization Framework will cause networking issues with peer-to-peer connections, resulting in connection failures and protocol handshake errors.
```bash
# Stop all nodes
docker-compose down

# Stop and remove volumes (clears chain data)
docker-compose down -v
```
Node flags used in this setup:

- `--chain=stagenet-local` - Use the stagenet-local chain specification (ensures all nodes share the same genesis)
- `--base-path=/data` - Base directory for chain data and keystore
- `--keystore-path=/data/keystore` - Keystore persisted in Docker volumes
- `--validator` - Enables validator mode (Alice & Bob only)
- `--pool-type=fork-aware` - Uses the fork-aware transaction pool for better fork handling
- `--unsafe-force-node-key-generation` - Automatic P2P key generation
- `--unsafe-rpc-external` - RPC exposed externally (development only!)
- `--rpc-cors=all` - Allows all CORS origins
- `--force-authoring` - Forces block authoring even with a single validator (Alice only)
- `--no-prometheus` - Prometheus metrics disabled
- `--enable-offchain-indexing=true` - Enables offchain indexing
- `--discover-local` - Enables local peer discovery via mDNS
- `--alice` / `--bob` - Use well-known development identities
- `--provider` - Enables storage provider mode (MSP & BSP)
- `--provider-type=msp|bsp` - Type of storage provider
- `--max-storage-capacity` - Maximum storage capacity in bytes (1073741824 = 1 GiB)
- `--jump-capacity` - Jump capacity in bytes (104857600 = 100 MiB)
- `--msp-charging-period` - Charging period in blocks (MSP only)

The docker-compose setup includes five node types:
- Validator (`NODE_TYPE=validator`) - Alice & Bob
- MSP (`NODE_TYPE=msp`) - Charlie (name: msp)
  - Extra flags: `--provider`, `--provider-type=msp`, `--msp-charging-period`, storage capacity settings
- BSP (`NODE_TYPE=bsp`) - Dave (bsp01) & Eve (bsp02)
  - Extra flags: `--provider`, `--provider-type=bsp`, storage capacity settings
- Indexer
  - Extra flags: `--indexer`, `--indexer-mode=full`, `--indexer-database-url`
- Fisherman (`NODE_TYPE=fisherman`) - Gustavo
  - Extra flags: `--fisherman`, `--fisherman-database-url`

Data persistence:

- Chain data and keystores are stored under `/data` (not using `--tmp`)
- Keystores live at `/data/keystore`, persisted in named volumes (alice-keystore, bob-keystore, msp-keystore, bsp01-keystore, bsp02-keystore, fisherman-keystore)
- PostgreSQL data is kept in the `postgres-data` volume and indexer data in the `indexer-data` volume
- Remove all volumes (and chain data) with `docker-compose down -v`

Permissions:

- Containers start as the `root` user to allow the entrypoint script to inject keys and set permissions
- The entrypoint (`docker-entrypoint.sh`) switches to the `datahaven` user (UID 1001) before starting the node process
- Data files are owned by `datahaven:datahaven`

All settings are configured for local development only.
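The root-then-drop pattern of the entrypoint described above can be sketched as follows (a simplified illustration that only reports the decision; it is not the actual `docker-entrypoint.sh`):

```bash
# Simplified sketch of the entrypoint's privilege handling: as root, the real
# script fixes /data ownership and drops to the datahaven user (UID 1001);
# as non-root it would start the node directly.
entrypoint_mode() {
  if [ "$(id -u)" = "0" ]; then
    echo "root: chown datahaven:datahaven /data, then exec as datahaven"
  else
    echo "non-root: exec node directly"
  fi
}
entrypoint_mode
```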
If you see "datahaven-node: No such file or directory", ensure the binary was built and copied:

```bash
cd operator
cargo build --release
cp target/release/datahaven-node build/
```

Or simply run `./scripts/docker-prepare.sh`.

Check the logs to ensure nodes are peering correctly:
```bash
# Check if Bob connected to Alice
docker-compose logs bob | grep -i "peer\|sync"

# Check if MSP connected to Alice
docker-compose logs msp | grep -i "peer\|sync"

# View Alice's peer connections
docker-compose logs alice | grep -i "peer"
```
You should see messages like:
- `Discovered new external address`
- `Syncing` or `best: #X`

If nodes are not connecting:

- Check the containers are running: `docker-compose ps alice`
- Check container-to-container connectivity: `docker exec datahaven-bob ping alice`

If ports are already in use, modify the port mappings in docker-compose.yml:
```yaml
ports:
  - "YOUR_PORT:9944" # Change YOUR_PORT to an available port
```
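Before remapping, a quick way to see whether a default port is already taken is bash's `/dev/tcp` pseudo-device (a bash-only sketch; `ss -ltn` or `lsof -i` work too):

```bash
# Check whether anything is already listening on a candidate host port.
PORT=9944
if (exec 3<>"/dev/tcp/127.0.0.1/${PORT}") 2>/dev/null; then
  echo "port ${PORT} in use"
else
  echo "port ${PORT} free"
fi
```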
If you see key injection errors in the logs:

```bash
# Check the logs for key injection
docker-compose logs alice | grep "🔑"

# Verify keystore contents
docker exec datahaven-alice ls -la /keystore
```
To regenerate keys, remove the keystore volumes and restart:

```bash
docker-compose down -v
docker-compose up -d
```
To verify that keys were injected successfully:

```bash
# Alice keys (validator - 4 keys)
docker exec datahaven-alice ls -la /keystore

# Bob keys (validator - 4 keys)
docker exec datahaven-bob ls -la /keystore

# MSP keys (storage provider - 1 key)
docker exec datahaven-msp ls -la /data/keystore

# BSP keys (storage provider - 1 key)
docker exec datahaven-bsp01 ls -la /data/keystore
docker exec datahaven-bsp02 ls -la /data/keystore

# Fisherman keys (storage provider monitor - 1 key)
docker exec datahaven-fisherman ls -la /data/keystore
```
Validators should show: `babe`, `gran`, `imon`, and `beef`.
Storage providers (MSP/BSP) and Fisherman should show: `bcsv`.
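Note that Substrate-style keystores typically name each key file with the hex-encoded 4-byte key type followed by the hex public key, so the `ls` output shows hex prefixes rather than the names above. Assuming that naming scheme applies here, the prefixes to look for can be computed like this:

```bash
# Hex-encode each 4-byte key type: keystore filenames start with this prefix.
for t in babe gran imon beef bcsv; do
  prefix=$(printf '%s' "$t" | od -An -tx1 | tr -d ' \n')
  echo "$t -> $prefix"
done
```

For example, files starting with `6772616e` hold the `gran` (GRANDPA) key.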
To verify the PostgreSQL database is working:

```bash
# Check PostgreSQL is running
docker exec datahaven-postgres pg_isready -U indexer -d datahaven

# Connect to the database
docker exec -it datahaven-postgres psql -U indexer -d datahaven

# View indexer tables (once running)
docker exec datahaven-postgres psql -U indexer -d datahaven -c "\dt"
```