benchmark/kernels/flashinfer_allreduce_fusion/README.md
This benchmark script is adapted from the original implementation by the vLLM community. It compares the performance of FlashInfer's fused operator in SGLang (`trtllm_allreduce_fusion`: AllReduce + Residual Add + RMSNorm + optional quantization) against the conventional implementation (standard `tensor_model_parallel_all_reduce` followed by separate RMSNorm/quantization kernels). Concretely, the script times two implementation paths: 1) standard AllReduce and RMSNorm executed as separate operations; 2) FlashInfer's fused operator, which combines AllReduce, Residual Add, RMSNorm, and optional quantization in a single kernel.

The results help us tune the IPC workspace size of the `flashinfer_allreduce_residual_rmsnorm` operator in SGLang and prepare for applying the FP8/FP4 quantized fused operators.
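The following sketch illustrates the standard (unfused) path that the benchmark measures; the names and the reference RMSNorm are illustrative, not the script's exact code, and the fused path is only described in a comment because its exact FlashInfer call signature is taken from the SGLang integration.

```python
import torch
import torch.distributed as dist


def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Reference RMSNorm used by the standard (unfused) path."""
    variance = x.float().pow(2).mean(dim=-1, keepdim=True)
    return (x.float() * torch.rsqrt(variance + eps)).to(x.dtype) * weight


def standard_path(x: torch.Tensor, residual: torch.Tensor, weight: torch.Tensor):
    # Path 1: NCCL all-reduce, then residual add and RMSNorm as separate kernels.
    dist.all_reduce(x)
    hidden = x + residual
    return rms_norm(hidden, weight), hidden


# Path 2 replaces all of the above with a single FlashInfer fused kernel
# (AllReduce + Residual Add + RMSNorm + optional FP8/FP4 quantization); its exact
# call signature comes from the FlashInfer/SGLang integration and is not shown here.
```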
Script path: `benchmark/kernels/flashinfer_allreduce_fusion/benchmark_fused_collective.py`
Run the script with `torchrun` using the NCCL backend. The following examples use `world_size=2`; adjust `--nproc_per_node` and the other parameters to match your machine:
```bash
# No quantization
torchrun --nproc_per_node=2 \
    benchmark/kernels/flashinfer_allreduce_fusion/benchmark_fused_collective.py \
    --no-quant --hidden-dim 1024 --seq-lens 512 1024 2048 4096 --trials 100

# FP8 quantization only
torchrun --nproc_per_node=2 \
    benchmark/kernels/flashinfer_allreduce_fusion/benchmark_fused_collective.py \
    --quant-fp8 --hidden-dim 1024 --seq-lens 512 1024 2048 4096 --trials 100

# FP4 quantization only
torchrun --nproc_per_node=2 \
    benchmark/kernels/flashinfer_allreduce_fusion/benchmark_fused_collective.py \
    --quant-fp4 --hidden-dim 1024 --seq-lens 512 1024 2048 4096 --trials 100

# No quantization, larger hidden dimension
torchrun --nproc_per_node=2 \
    benchmark/kernels/flashinfer_allreduce_fusion/benchmark_fused_collective.py \
    --no-quant --hidden-dim 4096 --seq-lens 512 1024 2048 4096 --trials 100
```
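Under `torchrun`, each process receives environment variables such as `RANK`, `LOCAL_RANK`, and `WORLD_SIZE`, which the script uses to initialize NCCL and bind itself to its GPU. The snippet below is a generic sketch of that pattern, not the script's exact code.

```python
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
world_size = int(os.environ["WORLD_SIZE"])  # must be > 1 for the communication benchmarks

torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")

# Sanity check that the communication group works on this rank's device.
x = torch.ones(8, device="cuda")
dist.all_reduce(x)
assert torch.allclose(x, torch.full_like(x, float(world_size)))

dist.destroy_process_group()
```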
Main options:

- `--seq-lens`: sequence lengths to test (default: `128 512 1024 2048`)
- `--hidden-dim`: hidden dimension (default: `8192`)
- `--dtypes`: data types to test, `float16|bfloat16|float32` (default: `bfloat16`)
- `--no-residual`: only test the "no residual" scenario (by default both "with residual" and "without residual" are tested)
- `--no-quant`: skip quantization tests
- `--quant-fp8`: only run FP8 quantization tests
- `--quant-fp4`: only run FP4 quantization tests
- `--quant-all`: test all quantization modes (default)
- `--disable-oneshot`: disable oneshot mode (by default oneshot is enabled and twoshot is tested as well)
- `--warmup`: number of warmup iterations before graph capture and before graph replay (default: `5`)
- `--trials`: number of benchmark iterations (default: `20`; internally each trial replays the captured graph multiple times via `graph.replay()`)
- `--output-file`: save results to a Markdown file (only takes effect on rank 0)

Each configuration group prints a table with the average execution time and the speedup relative to the baseline (the faster of the two standard implementations). For example:
```
================================================================================
Results: seq_len=1024, hidden_dim=1024
dtype=torch.bfloat16, residual=yes, quant_mode=none
================================================================================
Operation                                        Time (ms)    Speedup
--------------------------------------------------------------------------------
standard_allreduce_rmsnorm                       0.024        0.98x
standard_allreduce_rmsnorm_native_compiled       0.023        baseline
flashinfer_fused_allreduce_rmsnorm_oneshot       0.011        2.19x
flashinfer_fused_allreduce_rmsnorm_twoshot       0.041        0.57x
```
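The `Speedup` column can be reproduced as shown in the illustrative sketch below: the baseline is the faster of the two standard implementations, and each row is baseline time divided by that row's time (the real script divides unrounded timings, so values recomputed from the rounded table entries may differ slightly).

```python
# Illustrative only: recomputing the Speedup column from the rounded timings above.
times_ms = {
    "standard_allreduce_rmsnorm": 0.024,
    "standard_allreduce_rmsnorm_native_compiled": 0.023,
    "flashinfer_fused_allreduce_rmsnorm_oneshot": 0.011,
    "flashinfer_fused_allreduce_rmsnorm_twoshot": 0.041,
}
baseline = min(t for name, t in times_ms.items() if name.startswith("standard"))
for name, t in times_ms.items():
    print(f"{name:<45} {t:.3f} ms   {baseline / t:.2f}x")
```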
If `--output-file` is specified, results for all configurations are also summarized as Markdown tables in that file.
Notes:

- The script uses the environment variables set by `torchrun` to initialize distributed communication and binds tensors/communication groups to the device of the current rank.
- It requires `WORLD_SIZE > 1` to benchmark the communication operators; otherwise, the script exits with an error message.
- FP8 quantization uses formats such as `e4m3`/`e4m3fnuz`.
- FP4 quantization uses `scaled_fp4_quant`, which requires corresponding platform support.
- The script calls `graph_capture()` to put communication into a capture-ready state, then uses `torch.cuda.graph` to capture the kernels, reducing measurement jitter.
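The CUDA-graph-based timing loop roughly follows the pattern below. This is a generic sketch under the assumptions stated in the comments, not the script's exact code; in particular, the real script additionally wraps capture in SGLang's `graph_capture()` helper so that the communication kernels are in a capturable state.

```python
import torch


def time_with_cuda_graph(fn, warmup: int = 5, trials: int = 20) -> float:
    """Return the average time per call of `fn` in milliseconds, measured via graph replay."""
    for _ in range(warmup):          # warmup before capture
        fn()
    torch.cuda.synchronize()

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):    # capture the kernels once
        fn()

    for _ in range(warmup):          # warmup the captured graph
        graph.replay()
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(trials):
        graph.replay()               # replay the whole captured graph per trial
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / trials
```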