bin/reth-bb/README.md
A modified reth node for benchmarking big-block execution: payloads that merge transactions from multiple consecutive blocks into a single block to simulate high-gas workloads.
Not for production use. reth-bb disables some consensus-related validations to allow artificially large blocks; it is intended solely for performance benchmarking.
reth-bb extends the standard Ethereum node with:
- Multi-segment execution — a custom `reth_newPayload` handler that accepts optional `BigBlockData` alongside the payload. When present, the block is executed in multiple segments, each with its own EVM environment (matching the original blocks that were merged).
- Relaxed consensus — the gas-limit bound-divisor check and blob-gas validation are skipped, since merged blocks exceed single-block limits.
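The per-segment idea above can be illustrated with a small sketch. All names here (`Segment`, its fields, `execute_big_block`) are hypothetical stand-ins for illustration, not reth-bb's actual types; the point is that each segment restores the EVM environment of the original block it came from.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    block_number: int  # original block this segment was taken from
    timestamp: int     # env fields restored per segment
    tx_count: int      # how many of the merged txs belong to this segment

def execute_big_block(txs: list[str], segments: list[Segment]) -> int:
    """Execute merged txs segment by segment; returns the number executed."""
    executed = 0
    cursor = 0
    for seg in segments:
        # Fresh EVM environment matching this segment's original block.
        env = {"number": seg.block_number, "timestamp": seg.timestamp}
        for _tx in txs[cursor:cursor + seg.tx_count]:
            executed += 1  # stand-in for real EVM execution under `env`
        cursor += seg.tx_count
    assert cursor == len(txs), "segments must cover every merged tx"
    return executed
```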
The full workflow has four steps: build binaries, generate big blocks, start reth-bb, and replay the payloads.
```bash
cargo build --profile profiling -p reth-bb -p reth-bench
```
Fetch consecutive blocks from an RPC and merge them until a target gas is reached. Set `--from-block` to the block number following the one the node is currently synced to (i.e. the next block the node would process):
```bash
reth-bench generate-big-block \
  --rpc-url https://rpc.hoodi.ethpandaops.io \
  --chain hoodi \
  --from-block 910020 \
  --target-gas 2G \
  --num-big-blocks 5 \
  --output-dir /tmp/payloads
```
This produces one JSON file per big block in the output directory.
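The merge itself is a greedy accumulation over consecutive blocks. A minimal sketch of the idea, assuming each fetched block is reduced to a `(number, gas_used, txs)` tuple (the real tool works on full RPC block types):

```python
def merge_blocks(blocks, target_gas):
    """Greedily merge consecutive blocks until target_gas is reached.

    `blocks` is an iterable of (block_number, gas_used, txs) tuples in
    ascending order; returns the merged txs, the block numbers consumed,
    and the accumulated gas.
    """
    merged_txs, consumed, gas = [], [], 0
    for number, gas_used, txs in blocks:
        merged_txs.extend(txs)
        consumed.append(number)
        gas += gas_used
        if gas >= target_gas:
            break
    return merged_txs, consumed, gas
```

With `--num-big-blocks 5`, this accumulation simply repeats five times, each run picking up where the previous one left off.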
Start reth-bb with the `debug` and `eth` HTTP APIs enabled and the JWT secret that reth-bench will use:
```bash
reth-bb node \
  --datadir /data/reth/hoodi \
  --chain hoodi \
  --http --http.api debug,eth \
  --authrpc.jwtsecret /tmp/jwt.hex \
  -d
```
Finally, replay the generated payloads against the Engine RPC:
```bash
reth-bench replay-payloads \
  --engine-rpc-url http://localhost:8551 \
  --jwt-secret /tmp/jwt.hex \
  --payload-dir /tmp/payloads \
  --reth-new-payload
```
The `--reth-new-payload` flag is required for big blocks: it routes payloads through the `reth_newPayload` endpoint, which carries the multi-segment execution metadata alongside the standard payload.
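The Engine RPC on port 8551 requires JWT authentication; `--jwt-secret` gives reth-bench the shared hex secret, and reth-bench derives the token internally. For reference, the derivation follows the standard Engine API scheme (an HS256-signed token with an `iat` claim); a self-contained sketch:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def engine_jwt(hex_secret: str) -> str:
    """Build an Engine API auth token from the shared hex secret."""
    secret = bytes.fromhex(hex_secret)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"iat": int(time.time())}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"
```

The resulting token goes in the `Authorization: Bearer <token>` header of each Engine RPC request; nothing in this sketch is specific to reth-bb.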