# Benchmarking
This guide explains how to build and run performance benchmarks in the Nix codebase.
Nix uses the Google Benchmark framework for performance testing. Benchmarks help measure and track the performance of critical operations like derivation parsing.
## Building benchmarks

Benchmarks are disabled by default and must be explicitly enabled at build configuration time. For accurate results, use an optimized build (the `debugoptimized` build type).

First, enter the development shell, which includes the necessary dependencies:

```bash
nix develop .#native-ccacheStdenv
```
From the project root, configure the build with benchmarks enabled and optimization turned on:

```bash
cd build
meson configure -Dbenchmarks=true -Dbuildtype=debugoptimized
```
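If the `build` directory does not exist yet, it can be created in one step; this is a sketch using Meson's standard `setup` command with the same options:

```bash
meson setup build -Dbenchmarks=true -Dbuildtype=debugoptimized
```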
The `debugoptimized` build type enables compiler optimizations (`-O2`) while keeping debug symbols (`-g`), so benchmark results are representative and profiles remain readable.
Build the project, including the benchmarks:

```bash
ninja
```
This creates benchmark executables in the build directory. Currently available:

- `build/src/libstore-tests/nix-store-benchmarks` – store-related performance benchmarks

Additional benchmark executables will appear here as more benchmarks are added to the codebase.
## Running benchmarks

Run benchmark executables directly. For example, to run the store benchmarks:

```bash
./build/src/libstore-tests/nix-store-benchmarks
```

As more benchmark executables are added, run them the same way from their respective build directories.
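To see which benchmarks an executable contains without running them, Google Benchmark supports listing them:

```bash
./build/src/libstore-tests/nix-store-benchmarks --benchmark_list_tests=true
```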
Run specific benchmarks using regex patterns:

```bash
# Run only derivation parser benchmarks
./build/src/libstore-tests/nix-store-benchmarks --benchmark_filter="derivation.*"

# Run only benchmarks for hello.drv
./build/src/libstore-tests/nix-store-benchmarks --benchmark_filter=".*hello.*"
```
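Recent versions of Google Benchmark also accept a negated filter: prefixing the regex with `-` runs everything *except* the matches (worth verifying against the bundled benchmark version):

```bash
# Run everything except the hello.drv benchmarks
./build/src/libstore-tests/nix-store-benchmarks --benchmark_filter="-.*hello.*"
```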
Generate benchmark results in different formats:

```bash
# JSON output
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > results.json

# CSV output
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=csv > results.csv
```

Other useful options:

```bash
# Run each benchmark multiple times for better statistics
./build/src/libstore-tests/nix-store-benchmarks --benchmark_repetitions=10

# Set the minimum benchmark time (useful for micro-benchmarks)
./build/src/libstore-tests/nix-store-benchmarks --benchmark_min_time=2

# Display times in a custom unit
./build/src/libstore-tests/nix-store-benchmarks --benchmark_time_unit=ms
```

Comparing runs against a saved baseline is covered in the “Comparing results” section below.
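Instead of shell redirection, the framework can also write results to a file directly while keeping the console output human-readable; `--benchmark_out` and `--benchmark_out_format` are standard Google Benchmark flags:

```bash
./build/src/libstore-tests/nix-store-benchmarks \
    --benchmark_out=results.json --benchmark_out_format=json
```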
## Writing benchmarks

To add new benchmarks:

1. Create a new `.cc` file in the appropriate `*-tests` directory.

2. Include the benchmark header:

   ```cpp
   #include <benchmark/benchmark.h>
   ```

3. Write benchmark functions (see the fuller sketch after this list):

   ```cpp
   static void BM_YourBenchmark(benchmark::State & state)
   {
       // Setup code here
       for (auto _ : state) {
           // Code to benchmark
       }
   }
   BENCHMARK(BM_YourBenchmark);
   ```

4. Add the file to the corresponding `meson.build`:

   ```meson
   benchmarks_sources = files(
     'your-benchmark.cc',
     # existing benchmarks...
   )
   ```
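Beyond the skeleton above, a few Google Benchmark idioms are worth knowing: `benchmark::DoNotOptimize` keeps the compiler from discarding the measured work, and `Arg`/`state.range` parameterize a benchmark over input sizes. The following is a self-contained sketch; `encode` is a hypothetical stand-in for whatever function you actually want to measure:

```cpp
#include <benchmark/benchmark.h>

#include <cstdint>
#include <string>

// Hypothetical function under test; substitute a real one from the codebase.
static std::string encode(const std::string & s)
{
    std::string out;
    out.reserve(s.size() * 2);
    for (char c : s) {
        out += c;
        out += ':';
    }
    return out;
}

static void BM_Encode(benchmark::State & state)
{
    // Setup runs once, outside the timed loop; the input size comes from Arg().
    std::string input(state.range(0), 'x');
    for (auto _ : state) {
        auto result = encode(input);
        benchmark::DoNotOptimize(result); // prevent the call from being optimized away
    }
    // Report throughput in bytes/second alongside the raw timings.
    state.SetBytesProcessed(int64_t(state.iterations()) * state.range(0));
}

// Measure with 1 KiB, 32 KiB, and 1 MiB inputs.
BENCHMARK(BM_Encode)->Arg(1 << 10)->Arg(1 << 15)->Arg(1 << 20);
```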
## Profiling integration

For deeper performance analysis, combine the benchmarks with profiling tools:

```bash
# Using Linux perf
perf record ./build/src/libstore-tests/nix-store-benchmarks
perf report
```
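Recording call graphs and narrowing the run to a single benchmark usually makes the profile easier to read; `-g` is a standard `perf record` option, and the filter regex here is just an example:

```bash
perf record -g ./build/src/libstore-tests/nix-store-benchmarks \
    --benchmark_filter="derivation.*"
perf report
```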
Valgrind's callgrind tool provides detailed profiling information that can be visualized with kcachegrind:

```bash
# Profile with callgrind
valgrind --tool=callgrind ./build/src/libstore-tests/nix-store-benchmarks

# Visualize the results with kcachegrind
kcachegrind callgrind.out.*
```
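If kcachegrind is unavailable, `callgrind_annotate` (shipped with Valgrind) prints a plain-text summary of the same data:

```bash
callgrind_annotate callgrind.out.*
```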
This provides a call graph with per-function cost breakdowns, making it easy to see where time is spent.

## Comparing results

To track performance over time, save results as JSON and compare runs:

```bash
# Save baseline results
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > baseline.json

# After making changes, save a new set of results
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > contender.json
```

Google Benchmark has no built-in baseline flag; comparisons are done with the `compare.py` script from the `tools/` directory of the google/benchmark repository.
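A sketch of the comparison step, assuming `compare.py` has been obtained from the google/benchmark repository and its Python dependencies (notably `scipy`) are installed:

```bash
python compare.py benchmarks baseline.json contender.json
```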
## Troubleshooting

Ensure benchmarks are enabled:

```bash
meson configure build | grep benchmarks
# Should show: benchmarks true
```
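If the option shows `false`, re-enable it and rebuild (standard Meson/Ninja usage):

```bash
meson configure build -Dbenchmarks=true
ninja -C build
```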
If results are noisy or vary between runs, increase `--benchmark_repetitions` and look at the aggregate statistics (mean, median, standard deviation) rather than individual runs.
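For example, combining repetitions with aggregate-only reporting keeps the output manageable; both flags are standard Google Benchmark options:

```bash
./build/src/libstore-tests/nix-store-benchmarks \
    --benchmark_repetitions=10 --benchmark_report_aggregates_only=true
```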