docs/benchmarking.md
This document describes the benchmarking infrastructure for Prisma's query compiler and client packages. It covers how to run benchmarks, interpret results, add new benchmarks, and profile performance.
Prisma uses Benchmark.js with CodSpeed integration for reliable, continuous performance tracking. Benchmarks are automatically run on CI for every push to main and on pull requests.
- End-to-End Query Performance (`packages/client`)
- Query Compilation Performance (`packages/client`)
- Query Interpreter Performance (`packages/client-engine-runtime`)
```shell
# From repository root
pnpm install
pnpm build

# Run all benchmarks (outputs to output.txt)
pnpm bench

# Run without file output (for CodSpeed CI)
pnpm bench-stdout-only

# Filter by pattern
pnpm bench query-performance
pnpm bench compilation
pnpm bench interpreter

# Run a specific benchmark file directly
node -r esbuild-register packages/client/src/__tests__/benchmarks/query-performance/query-performance.bench.ts
node -r esbuild-register packages/client/src/__tests__/benchmarks/query-performance/compilation.bench.ts
node -r esbuild-register packages/client-engine-runtime/bench/interpreter.bench.ts

# Set the environment variable to enable CodSpeed mode
CODSPEED_BENCHMARK=true pnpm bench
```
Location: `packages/client/src/__tests__/benchmarks/query-performance/`
| File | Description |
|---|---|
| `query-performance.bench.ts` | End-to-end query benchmarks with SQLite |
| `compilation.bench.ts` | Query compiler benchmarks |
| `schema.prisma` | Benchmark schema with typical web app models |
| `seed-data.ts` | Data generation utilities |
| `prisma.config.ts` | Prisma configuration for benchmarks |
Location: `packages/client-engine-runtime/bench/`
| File | Description |
|---|---|
| `interpreter.bench.ts` | Query interpreter and data mapper benchmarks |
Location: `packages/client/src/__tests__/benchmarks/`
| File | Description |
|---|---|
| `huge-schema/` | Client generation benchmarks (~50 models) |
| `lots-of-relations/` | Client generation with many relations |
These benchmarks test realistic query patterns:
- `findUnique` by id - Primary key lookup
- `findUnique` by unique field - Unique constraint lookup
- `findFirst` with simple where - Basic filtering
- `findMany` 10/50/100 records - Various result sizes
- `findMany` with orderBy - Ordered queries
- `findMany` with filter - Filtered queries
- `findMany` with pagination - Skip/take pagination
- `findUnique` with 1:1 include - One-to-one relations
- `findUnique` with 1:N include - One-to-many relations
- `findMany` with nested includes - Multi-level includes
- `findMany` with deep nested includes - Complex relation trees
- `create` single record - Basic inserts
- `create` with nested - Insert with relations
- `update` single record - Updates
- `updateMany` - Bulk updates
- `upsert` - Insert or update
- `count` all/filtered - Count queries
- `aggregate` sum/avg - Aggregate functions
- `groupBy` with count - Grouping
- Blog post page query - Post with author, comments, tags
- Blog listing page query - Paginated post list
- User profile page query - User with posts and stats
- Order history query - Orders with items and products
- Dashboard stats query - Multiple aggregations

These benchmarks measure compiler performance:
These benchmarks measure execution overhead:
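For orientation, here are two of the end-to-end patterns above sketched as client calls. The `post` and `user` model names and the `posts` relation are assumptions based on the benchmark's blog-style schema; the structural `Client` type stands in for the generated `PrismaClient` so the sketch is self-contained:

```typescript
// Minimal structural stand-in for the generated PrismaClient.
type Client = {
  post: { findMany(args: object): Promise<object[]> }
  user: { findUnique(args: object): Promise<object | null> }
}

async function runPatterns(prisma: Client) {
  // findMany with pagination and orderBy
  const page = await prisma.post.findMany({
    skip: 20,
    take: 10,
    orderBy: { createdAt: 'desc' },
  })

  // findUnique with a 1:N include
  const profile = await prisma.user.findUnique({
    where: { id: 1 },
    include: { posts: true },
  })

  return { page, profile }
}
```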
```typescript
// packages/client/src/__tests__/benchmarks/your-benchmark/your.bench.ts
import { withCodSpeed } from '@codspeed/benchmark.js-plugin'
import Benchmark from 'benchmark'

async function runBenchmarks(): Promise<void> {
  // Setup code here...
  const suite = withCodSpeed(new Benchmark.Suite('your-benchmark-name'))

  // Async benchmark
  suite.add('benchmark name', {
    defer: true,
    fn: function (deferred: Benchmark.Deferred) {
      yourAsyncFunction()
        .then(() => deferred.resolve())
        .catch((err) => {
          console.error('Benchmark error:', err)
          process.exit(1)
        })
    },
  })

  // Sync benchmark
  suite.add('sync benchmark', {
    fn: function () {
      yourSyncFunction()
    },
  })

  // Run suite
  await new Promise<void>((resolve) => {
    suite
      .on('cycle', (event: Benchmark.Event) => {
        console.log(String(event.target))
      })
      .on('complete', () => {
        console.log('Benchmarks complete.')
        resolve()
      })
      .run({ async: true })
  })
}

runBenchmarks().catch((error) => {
  console.error('Fatal error:', error)
  process.exit(1)
})
```
```typescript
// packages/client-engine-runtime/bench/your.bench.ts
import { withCodSpeed } from '@codspeed/benchmark.js-plugin'
import Benchmark from 'benchmark'

import { QueryInterpreter } from '../src/interpreter/query-interpreter'

// Create mock adapter, define query plans, etc.
// See interpreter.bench.ts for examples
```
- `withCodSpeed` wrapper - Enables CodSpeed integration

```shell
# Generate a CPU profile
node --cpu-prof -r esbuild-register packages/client/src/__tests__/benchmarks/query-performance/query-performance.bench.ts

# The profile will be saved as CPU.*.cpuprofile
# Open in Chrome DevTools (chrome://inspect) or VS Code

# Generate a sampling heap profile
node --heap-prof -r esbuild-register packages/client/src/__tests__/benchmarks/query-performance/query-performance.bench.ts
```
```shell
# Install 0x
npm install -g 0x

# Generate flame graph
0x -o -- node -r esbuild-register packages/client/src/__tests__/benchmarks/query-performance/query-performance.bench.ts
```
```shell
# Install clinic
npm install -g clinic

# Doctor (general analysis)
clinic doctor -- node -r esbuild-register your-benchmark.bench.ts

# Flame (flame graph)
clinic flame -- node -r esbuild-register your-benchmark.bench.ts

# Bubbleprof (async analysis)
clinic bubbleprof -- node -r esbuild-register your-benchmark.bench.ts
```
Benchmarks run automatically via `.github/workflows/benchmark.yml`:
```yaml
# From .github/workflows/benchmark.yml
alert-threshold: '200%' # Alert if 2x slower
comment-on-alert: true # Comment on PR
fail-on-alert: true # Fail the check
```
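With `alert-threshold: '200%'`, the alert fires once the current timing reaches twice the baseline. A sketch of that rule (the ratio semantics here are an assumption based on the benchmark action's documented behavior, not code from this repo):

```typescript
// Alert when current/baseline reaches the threshold (200% = 2x slower).
function shouldAlert(baselineMs: number, currentMs: number, thresholdPct = 200): boolean {
  return currentMs / baselineMs >= thresholdPct / 100
}

console.log(shouldAlert(10, 21)) // true: 2.1x slower than baseline
console.log(shouldAlert(10, 15)) // false: only 1.5x slower
```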
The benchmarks use configurable seed data sizes:
```typescript
// From seed-data.ts
const SEED_CONFIGS = {
  small: {
    // Quick iteration
    users: 10,
    postsPerUser: 5,
    // ...
  },
  medium: {
    // Typical benchmark run
    users: 100,
    postsPerUser: 10,
    // ...
  },
  large: {
    // Stress testing
    users: 500,
    postsPerUser: 20,
    // ...
  },
}
```
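The `// ...` entries elide further models, but `users × postsPerUser` alone gives a feel for the dataset size each config produces:

```typescript
// Post counts implied by each seed config (users * postsPerUser).
// Other seeded models are elided behind the "..." in seed-data.ts.
const postCounts = {
  small: 10 * 5,    // 50
  medium: 100 * 10, // 1,000
  large: 500 * 20,  // 10,000
}

console.log(postCounts)
```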
Modify the config in your benchmark:
```typescript
import { seedDatabase, SEED_CONFIGS } from './seed-data'

// Use small config for debugging
seedResult = await seedDatabase(prisma, SEED_CONFIGS.small)

// Use large config for stress testing
seedResult = await seedDatabase(prisma, SEED_CONFIGS.large)
```
- Increase `timeout_ms` if needed
- Check that `CODSPEED_BENCHMARK=true` is set
- Check that the suite is wrapped with `withCodSpeed`
- Force garbage collection between setup steps with `global.gc && global.gc()` (requires `node --expose-gc`)
- Raise the heap limit with the `--max-old-space-size` flag

A typical Benchmark.js result line looks like:

```
findUnique by id x 15,234 ops/sec ±0.87% (89 runs sampled)
```
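The `ops/sec` figure is throughput, and the `±` value is the relative margin of error over the sampled runs. To reason about per-operation latency, invert the throughput; a minimal helper (illustrative, not part of the benchmark suite):

```typescript
// Convert a Benchmark.js ops/sec figure to per-operation latency in µs.
function opsToMicros(opsPerSec: number): number {
  return 1e6 / opsPerSec
}

// 15,234 ops/sec is roughly 65.6 µs per operation.
console.log(opsToMicros(15234).toFixed(1) + ' µs')
```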
For web application workloads, aim for:
| Operation | Target |
|---|---|
| Simple findUnique | >10,000 ops/sec |
| findMany (10 rows) | >5,000 ops/sec |
| findMany (100 rows) | >1,000 ops/sec |
| Complex nested query | >500 ops/sec |
| Create | >5,000 ops/sec |
| Update | >5,000 ops/sec |
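These targets can be checked mechanically in a results script. A sketch (the operation names and thresholds simply mirror the table above; the helper itself is illustrative):

```typescript
// Minimum acceptable throughput per operation, from the targets table.
const targets: Record<string, number> = {
  'Simple findUnique': 10_000,
  'findMany (10 rows)': 5_000,
  'findMany (100 rows)': 1_000,
  'Complex nested query': 500,
  'Create': 5_000,
  'Update': 5_000,
}

// Returns true when a measured ops/sec meets or exceeds its target.
function meetsTarget(operation: string, opsPerSec: number): boolean {
  const target = targets[operation]
  return target !== undefined && opsPerSec >= target
}

console.log(meetsTarget('Simple findUnique', 15_234)) // true
```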
When submitting performance-related PRs: