TESTING.md
Dgraph employs a sophisticated testing framework with extensive test coverage. The codebase
contains more than 200 test files with more than 2,000 test and benchmark functions across multiple
packages and modules.
This guide helps engineers navigate testing in the Dgraph codebase.
If you're making a change, it will help you decide which tests to write, where to place them, and how to run them.
The testing framework uses Go build tags to conditionally compile tests that are more costly to run.
We distinguish the following types of tests:
| Test type | Build tag | Example files |
|---|---|---|
| Unit | (none) | dql/dql_test.go, types/value_test.go, schema/parse_test.go |
| Integration | //go:build integration | acl/acl_test.go, worker/worker_test.go, query/query0_test.go |
| Upgrade | //go:build upgrade | acl/upgrade_test.go, worker/upgrade_test.go |
| Benchmark | (none; Benchmark* functions) | query/benchmark_test.go, dql/bench_test.go |
| Cloud | //go:build cloud | query/cloud_test.go, systest/cloud/cloud_test.go |

Integration, Upgrade and Benchmark tests require a running Dgraph cluster (Docker) and come in two
forms: tests driven by the t/ runner, and tests using the dgraphtest package, which provides
programmatic control over local Dgraph clusters. Most newer integration2 and upgrade tests rely on
dgraphtest.
Note: The `testutil` package is being phased out. For new tests, prefer `dgraphtest` (cluster management) and `dgraphapi` (client operations). The `testutil` package is maintained for backward compatibility with existing tests only.
The main module is `github.com/hypermodeinc/dgraph`.
The codebase is organized into several key packages:
| Package | Description |
|---|---|
acl | Access Control Lists and authentication |
algo | Algorithms and data structures |
audit | Audit logging functionality |
backup | Backup and restore operations |
chunker | Data chunking and parsing |
codec | Encoding/decoding utilities |
conn | Connection management |
dgraph | Main Dgraph binary and commands |
dgraphapi | Dgraph API client |
dgraphtest | Testing utilities |
dql | Dgraph Query Language parser and processor |
edgraph | Core server API endpoints (Query, Mutate, Alter) |
filestore | File storage abstraction |
graphql | GraphQL implementation |
lex | Lexical analysis |
posting | Posting list management |
query | Query processing engine |
raftwal | Raft write-ahead log |
schema | Schema management |
systest | System integration tests |
testutil | Testing utilities |
tok | Tokenization and text processing |
types | Data type definitions |
upgrade | Database upgrade utilities |
worker | Worker processes |
x | Common utilities |
Before running tests, ensure you have the following installed and configured.
TL;DR: On a fresh checkout, run `make setup` to auto-install tool dependencies, then `make install` followed by `make test`. The build system automatically handles OS detection, builds the correct binaries, and validates dependencies.
The test framework includes scripts that check for required dependencies and can optionally auto-install them:
# Auto-install all missing tool dependencies (recommended for first-time setup)
make setup
# Check dependencies without installing (reports what's missing)
make check-deps
# Same as 'make check-deps' but auto-installs anything missing
make check-deps AUTO_INSTALL=true
The check scripts validate:
Note: You do not need to install these manually. Running `make setup` or `make check-deps AUTO_INSTALL=true` from the repo root automatically checks and installs all missing dependencies. The commands below are listed only as reference for what gets installed.
go version # Verify Go is installed
docker --version
docker compose version
# Allocate sufficient memory: 4GB minimum, 8GB recommended
# Docker Desktop → Settings → Resources → Memory
# Set GOPATH (if not already set)
export GOPATH=$(go env GOPATH)
echo $GOPATH # Should output something like /Users/you/go
# Add to your shell profile (~/.zshrc, ~/.bashrc)
export GOPATH=$(go env GOPATH)
export PATH=$PATH:$GOPATH/bin
go install gotest.tools/gotestsum@latest
# Verify installation
gotestsum --version
brew install ack
# Build and install Dgraph binary to $GOPATH/bin
make install
# Verify installation
which dgraph # Should show $GOPATH/bin/dgraph
dgraph version
Note: The `t/` runner's Docker Compose files mount the dgraph binary into containers at startup. On macOS, binaries are read from `$GOPATH/linux_<arch>/dgraph`; on Linux, from `$GOPATH/bin/dgraph`. Simply run `make install` after code changes — no Docker image rebuild needed.
The build system now handles most setup automatically. On both Linux and macOS:
# Auto-install tool dependencies (gotestsum, ack, etc.)
make setup
# Build dgraph binary (automatically handles Linux binary on macOS)
make install
# Run tests (builds Docker image and runs test suite)
make test
That's it! The `make install` command:

- On Linux: installs the dgraph binary to `$GOPATH/bin/dgraph`
- On macOS: installs the native binary to `$GOPATH/bin/dgraph` AND a Linux binary to `$GOPATH/linux_<arch>/dgraph`

The Docker Compose files automatically use the correct binary path via the `LINUX_GOBIN` environment
variable.
The build system now automatically handles cross-compilation for macOS users:

- `make install` builds both native macOS and Linux binaries automatically
- The Linux binary is placed at `$GOPATH/linux_<arch>/dgraph`
- Docker Compose files use `${LINUX_GOBIN:-$GOPATH/bin}` to find the correct binary

After code changes, simply run `make install` again — it handles everything.
Background: Bulk and live loader tests (systest/bulk_live/) execute dgraph bulk and
dgraph live commands locally on your machine (not inside Docker).
Good news: Since `make install` now builds both binaries on macOS, you have:

- A native binary at `$GOPATH/bin/dgraph` (used for local commands)
- A Linux binary at `$GOPATH/linux_<arch>/dgraph` (used by Docker containers)

Use `go test` to run one easy test on the `types` package:
go test -v ./types/... -run TestConvert
# Or with make (runs all unit tests, not just one)
make test-unit
Expected output:
=== RUN TestConvertToDefault
--- PASS: TestConvertToDefault (0.00s)
...
=== RUN TestConvertToGeoJson_PolyError2
--- PASS: TestConvertToGeoJson_PolyError2 (0.00s)
PASS
ok github.com/dgraph-io/dgraph/v25/types (cached)
? github.com/dgraph-io/dgraph/v25/types/facets [no test files]
Note: Start Docker Desktop before running integration or upgrade tests
cd t && go build . && ./t --test=TestGQLSchema
# Or with make
make test TEST=TestGQLSchema
If both pass, you're ready to run all test types!
The simplest way to run tests is make test (default: integration suite + integration2). Each
test-* target is a shortcut for make test with specific arguments. The table below shows all
three ways to run each test type.
| Target | make test equivalent | Without make |
|---|---|---|
make test | (default) | cd t && ./t --suite=integration then go test -v --tags=integration2 ./... |
make test-unit | make test SUITE=unit | cd t && ./t --suite=unit |
make test-integration | make test SUITE=integration | cd t && ./t --suite=integration |
make test-core | make test SUITE=core | cd t && ./t --suite=core |
make test-systest | make test SUITE=systest | cd t && ./t --suite=systest |
make test-vector | make test SUITE=vector | cd t && ./t --suite=vector |
make test-integration-heavy | make test SUITE=systest-heavy,ldbc,load | cd t && ./t --suite=systest-heavy,ldbc,load |
make test-integration2 | make test TAGS=integration2 | go test -v --tags=integration2 ./... |
make test-upgrade | make test TAGS=upgrade | go test -v --tags=upgrade ./... |
make test-fuzz | make test FUZZ=1 | go test -v -fuzz=Fuzz -fuzztime=300s ./dql/... |
make test-benchmark | (no equivalent) | go test -bench=. -benchmem ./... |
make test-all | (no equivalent) | Runs SUITE=all + integration2 + upgrade + fuzz sequentially |
Tip: All targets accept `PKG=`, `TEST=`, and `TIMEOUT=` variables. For example: `make test-systest PKG=systest/plugin TEST=TestPasswordReturn TIMEOUT=60m`
Run make help to see all available targets, variables, and dynamically discovered SUITE/TAGS
values.
For more control, pass variables to make test:
| Variable | Purpose | Example |
|---|---|---|
SUITE | Select t/ runner suite | make test SUITE=integration |
TAGS | Go build tags - bypasses t/ runner | make test TAGS=integration2 |
PKG | Limit to specific package | make test PKG=systest/export |
TEST | Run specific test function | make test TEST=TestGQLSchema |
TIMEOUT | Per-package test timeout | make test TIMEOUT=90m |
FUZZ | Enable fuzz testing | make test FUZZ=1 |
FUZZTIME | Fuzz duration per package | make test FUZZ=1 FUZZTIME=60s |
Precedence: TAGS > FUZZ > SUITE > default (first match wins). When no variable is set,
make test runs integration suite (via t/ runner) plus integration2.
# Run integration2 tests for vector package
make test TAGS=integration2 PKG=systest/vector
# Run upgrade tests for ACL with specific test
make test TAGS=upgrade PKG=acl TEST=TestACL
# Run fuzz tests with custom duration
make test FUZZ=1 PKG=dql FUZZTIME=30s
# Run systest for backup package
make test SUITE=systest PKG=systest/backup/filesystem
# Benchmark a specific package
make test-benchmark PKG=posting
Use this section to quickly determine what test to write and where to place it.
Cover as many scenarios as possible. A good PR includes tests for:
Use a layered testing approach. Aim for broad coverage with unit tests to validate individual functions and quickly identify failures, and complement them with integration and end-to-end tests for cluster-dependent behavior and real-world scenarios. Each test type is important and they should be mutually reinforcing.
Unit tests run without a Dgraph cluster. They test pure logic in isolation.
- Located in `*_test.go` files next to the source file

Example: Changing `worker/export.go` → add a test in `worker/export_test.go`
Testing individual functions and components in isolation is usually not enough. Integration tests exercise component interactions and full system workflows, and they require a running Dgraph cluster.
Go build tags are special comments at the top of a file (for example, //go:build integration) that
instruct the Go toolchain when to compile that file. When you run tests with
go test -tags=integration, only test files without a build tag (default) or with a matching tag
are compiled and executed.
We use build tags to exclude expensive or environment-dependent tests (like integration,
integration2, and upgrade) from the default go test ./... run, while allowing you to opt in to
them when needed.
| Build Tag | Purpose |
|---|---|
integration | Standard integration tests requiring a Docker cluster |
integration2 | Integration tests using Docker Go client via dgraphtest package |
upgrade | Tests for upgrade scenarios between dgraph versions |
cloud (deprecated) | Tests running against cloud environment |
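For example, an integration test file carries the tag as its very first line. This is an illustrative sketch (the package name and test body are hypothetical, not an existing file):

```go
//go:build integration

// This file compiles only when tests run with -tags=integration,
// e.g. `go test -tags=integration ./...` or via the t/ runner.
// A plain `go test ./...` skips it entirely.
package acl

import "testing"

func TestWithCluster(t *testing.T) {
	// ...talks to the running Docker cluster...
}
```

Note the blank line after the `//go:build` comment — it is required to separate the build constraint from the package clause.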
| If you're testing... | Test type | Build tag | Where to place |
|---|---|---|---|
| Query or mutation logic | Integration | integration | Existing package or systest/ |
| Backup / Restore | Integration | integration | systest/backup/ or systest/online-restore/ |
| Export | Integration | integration | systest/export/ |
| Live loader / Bulk loader | Integration | integration | systest/bulk_live/ or systest/loader/ |
| Multi-tenancy / Namespaces | Integration | integration | systest/multi-tenancy/ |
| Vector / Embeddings | Integration | integration | systest/vector/ |
| GraphQL schema or endpoints | Integration | integration | graphql/e2e/ |
| ACL / Auth | Integration | integration | acl/ or systest/acl/ |
| Upgrade from older version | Upgrade | upgrade | Same package with //go:build upgrade |
| Fine-grained cluster control (start/stop nodes) | Integration2 | integration2 | systest/integration2/ or relevant package |
I fixed a bug in query parsing (no cluster needed to fully validate) → Unit test in
query/*_test.go, no build tag
I fixed a bug in export that affects vector data → Integration test in systest/vector/, use
dgraphtest.LocalCluster, tag: //go:build integration
I changed backup behaviour → Integration test in systest/backup/, tag:
//go:build integration
I need to test behaviour after upgrading from v23 to main → Upgrade test in relevant package,
tag: //go:build upgrade
I changed GraphQL admin endpoint → Integration test in graphql/e2e/, tag:
//go:build integration
Maximize unit test coverage. If you can fully test it without a cluster - unit tests only. If it can't be tested at all without a cluster, integration tests only. Otherwise add a mix of both unit and integration tests – unit tests for what parts can be tested in isolation and integration tests for the remainder.
Cover multiple scenarios. Don't just test the happy path—include edge cases and error conditions.
Use table-driven tests. One test function with multiple cases beats many separate functions.
No flaky tests. Avoid time.Sleep(); use polling, retries, or explicit waits with timeouts.
Follow existing patterns. Look at nearby *_test.go files and match their style.
go test [flags] [package] [test-filter]
- `-v` (verbose): Shows detailed output for each test
- `-run <pattern>`: Run only tests matching the pattern (regex)
- `./types/`: Single package
- `./types/...`: Package and all subpackages recursively

# Run all tests in types package
go test ./types/
# Run all tests in types and subpackages
go test ./types/...
# Run specific test with verbose output
go test -v ./types/... -run TestConvert
With make:
# Run all unit tests (no Docker, no build tags)
make test-unit
# Run unit tests for a specific package
make test-unit PKG=types
# Run a specific unit test
make test-unit PKG=types TEST=TestConvert
- No `//go:build` tag at the top of the file = unit test
- Tests tagged `//go:build integration` are NOT unit tests

Place `*_test.go` files next to the code being tested:
| Code in | Test in |
|---|---|
types/conversion.go | types/conversion_test.go |
dql/parser.go | dql/parser_test.go |
schema/parse.go | schema/parse_test.go |
The t/ runner orchestrates Docker-based integration tests. It spins up Dgraph clusters using
Docker Compose and runs tests tagged with integration.
- Uses the dgraph binary from `$GOPATH/bin/dgraph`
- Starts clusters from a `docker-compose.yml` (package-specific or default)
- Runs tests with `--tags=integration`

A suite is a named group of test packages that can be run together with the `--suite` flag.
| Suite | Purpose | Packages/Tests Included |
|---|---|---|
unit | True unit tests only | All packages except ldbc/load — no Docker, no --tags=integration |
integration | Default suite — all integration tests except heavy | Everything except ldbc, load, and systest-heavy (replaces old unit) |
core | Core Dgraph functionality | Query, mutation, schema, GraphQL e2e, ACL, TLS, worker |
systest | All system integration tests | Both systest-baseline + systest-heavy (backward compatible) |
systest-baseline | Lean systest for daily dev | backup/filesystem, export, multi-tenancy, audit, CDC, group-delete, plugin, ... |
systest-heavy | Resource-intensive systests | backup/minio*, backup/encryption, backup/advanced-scenarios, tracing, online-restore |
vector | Vector search functionality | Vector index, similarity search, HNSW |
ldbc | Benchmark queries | LDBC benchmark suite |
load | Heavy data loading scenarios | 21million, 1million, bulk_live, bgindex, bulkloader |
all | Everything in t/ runner | All packages |
The runner looks for docker-compose.yml:
- First next to the test package (e.g. `systest/export/docker-compose.yml`)
- Otherwise falls back to the default `dgraph/docker-compose.yml`

Tests with custom compose files run in isolated clusters.
# Build the runner first
cd t && go build .
# Run a suite
./t --suite=core
# Run specific package
./t --pkg=systest/export
# Run single test
./t --test=TestExportAndLoadJson
# Keep cluster after test (for debugging)
./t --pkg=systest/export --keep
# Cleanup all test containers
./t -r
With make:
# Run a suite
make test SUITE=core
# Run specific package
make test SUITE=integration PKG=systest/export
# Run single test
make test TEST=TestExportAndLoadJson
| Flag | Description |
|---|---|
--suite=X | Select test suite(s): all, ldbc, load, unit, integration, systest, systest-baseline, systest-heavy, vector, core |
--pkg=X | Run specific package |
--test=X | Run specific test function |
--timeout=X | Per-package timeout (e.g. 60m, 2h). Default: 30m (180m with --race) |
-j=N | Concurrency (default: 1) |
--keep | Keep cluster running after tests |
-r | Remove all test containers |
--skip-slow | Skip slow packages |
The t/ runner manages cluster lifecycle automatically.
# Build runner
cd t && go build .
# Run all tests in a package
./t --pkg=systest/export
# Run single test
./t --test=TestExportAndLoadJson
# Keep cluster running after tests (for debugging)
./t --pkg=systest/export --keep
With make:
# Run all tests in a package (make builds the runner automatically)
make test SUITE=integration PKG=systest/export
# Run single test
make test TEST=TestExportAndLoadJson
For fine-grained control, manually start a cluster and run tests against it.
# Start default cluster with a custom prefix
docker compose -f dgraph/docker-compose.yml -p mytest up -d
# Or start package-specific cluster
docker compose -f systest/export/docker-compose.yml -p mytest up -d
# Set the prefix (tells testutil which cluster to use)
export TEST_DOCKER_PREFIX=mytest
# Run all tests in package
go test -v --tags=integration ./systest/export/...
# Run single test
go test -v --tags=integration --run '^TestExportAndLoadJson$' ./systest/export/
# Run multiple specific tests
go test -v --tags=integration --run 'TestExport.*' ./systest/export/
docker compose -f dgraph/docker-compose.yml -p mytest down -v
# Start cluster manually first
docker compose -f dgraph/docker-compose.yml -p myprefix up -d
# Run tests against it (no cluster restart)
cd t && ./t --prefix=myprefix --pkg=systest/export
# Cluster stays running after tests
Using go test regex:
export TEST_DOCKER_PREFIX=mytest
# All tests matching pattern
go test -v --tags=integration --run 'TestExport' ./systest/export/
# Multiple test names
go test -v --tags=integration --run 'TestExportAndLoad|TestExportSchema' ./systest/export/
Using t/ runner:
# Run all tests in multiple packages
./t --pkg=systest/export,systest/backup/filesystem
# Run entire suite
./t --suite=systest
With make:
# Run all systest packages
make test-systest
# Run specific systest package
make test SUITE=systest PKG=systest/export
# Run specific test by name
make test TEST=TestExportAndLoadJson
| Variable | Purpose | Set by |
|---|---|---|
TEST_DOCKER_PREFIX | Docker Compose prefix for cluster | t/ runner or manual |
TEST_DATA_DIRECTORY | Path to test data files | t/ runner |
GOPATH | Required for finding dgraph binary | User |
Uses dgraphtest package for programmatic cluster control via Docker Go client.
Important:
dgraphtestanddgraphapiare the future direction for Dgraph testing. New tests should use these packages instead oftestutil. Thetestutilpackage is being retired and maintained only for backward compatibility with existing tests.
| Feature | t/ runner | integration2 |
|---|---|---|
| Cluster management | docker-compose | Docker Go client |
| Version switching | No | Yes |
| Individual node control | No | Yes (Start/Stop/Kill per node) |
| Upgrade testing | No | Yes |
| Build tag | integration | integration2 |
# Build your local binary first
make install
# Run tests
go test -v --tags=integration2 ./systest/integration2/
go test -v --tags=integration2 --run '^TestName$' ./pkg/
With make:
# Run all integration2 tests
make test-integration2
# Run integration2 tests for a specific package
make test TAGS=integration2 PKG=systest/vector
# Run a specific integration2 test
make test TAGS=integration2 PKG=systest/vector TEST=TestVectorSearch
Automatic version handling:
- Clones the Dgraph repo to `/tmp/dgraph-repo-*` on first run
- Builds the requested version with `make dgraph` (GOOS=linux)
- Caches binaries in `dgraphtest/binaries/dgraph_<version>`

Version formats:
"local" - uses $GOPATH/bin/dgraph (default)"v23.0.1" - git tag"4fc9cfd" - commit hashFirst run is slow (builds binaries), subsequent runs reuse cache.
dgraphapi provides high-level client wrappers for interacting with Dgraph in tests.
Two client types:
- A gRPC client wrapping `dgo.Dgraph`
- An HTTP client

Both clients support authentication and multi-tenancy (namespace-aware operations).
- `$GOPATH/bin/dgraph` must exist for the `"local"` version
- The `GOPATH` environment variable must be set
- Default credentials are `groot` / `password`

The `dgraphapi` package can work with any running Dgraph cluster. If no Docker prefix is detected
(no TEST_DOCKER_PREFIX env var), it falls back to localhost ports.
Default fallback ports:
- Alpha gRPC: `localhost:9080`
- Alpha HTTP: `localhost:8080`
- Zero gRPC: `localhost:5080`
- Zero HTTP: `localhost:6080`

Use case: Write quick Go scripts to interact with your local development cluster instead of using Postman for repetitive tasks.
Benefits:
This is especially useful for testing backup/restore, namespace operations, or complex mutation sequences during development.
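Such a script can be as small as the sketch below. It assumes a cluster on the default ports and the `dgo` client in your module (the import paths use dgo v230; adjust to whatever version your go.mod pins):

```go
// Quick dev script: query a locally running Dgraph Alpha from Go
// instead of clicking through Postman for repetitive requests.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v230"
	"github.com/dgraph-io/dgo/v230/protos/api"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to the default Alpha gRPC port.
	conn, err := grpc.Dial("localhost:9080",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Count all typed nodes; swap in whatever query you are iterating on.
	resp, err := dg.NewReadOnlyTxn().Query(context.Background(),
		`{ q(func: has(dgraph.type)) { count(uid) } }`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Json)) // raw JSON result
}
```

Run it with `go run` against a cluster started via `docker compose`; there is no test harness involved, which keeps the feedback loop fast.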
Example test:
func TestLocalCluster(t *testing.T) {
c := dgraphtest.NewComposeCluster()
gc, cleanup, err := c.Client()
require.NoError(t, err)
defer cleanup()
require.NoError(t, gc.SetupSchema(testSchema))
numVectors := 9
rdfs, _ := dgraphapi.GenerateRandomVectors(0, numVectors, 100, pred)
mu := &api.Mutation{SetNquads: []byte(rdfs), CommitNow: true}
_, err = gc.Mutate(mu)
require.NoError(t, err)
}
Tests that verify Dgraph behaviour when upgrading from one version to another.
//go:build upgrade
package main
func TestUpgradeFromV23(t *testing.T) {
// Start with old version
conf := dgraphtest.NewClusterConfig().WithVersion("v23.0.1")
// ... test upgrade to "local" ...
}
| Strategy | How it works | Use case |
|---|---|---|
BackupRestore | Take backup on old version, restore on new | Most common customer upgrade path |
InPlace | Stop cluster, swap binary, restart | Fast upgrade, tests binary compatibility |
ExportImport | Export from old, import to new | Migration across major versions |
Specified when calling c.Upgrade():
c.Upgrade("local", dgraphtest.BackupRestore)
c.Upgrade("local", dgraphtest.InPlace)
c.Upgrade("local", dgraphtest.ExportImport)
Controlled by DGRAPH_UPGRADE_MAIN_ONLY environment variable:
`DGRAPH_UPGRADE_MAIN_ONLY=true` (default): runs only the main version combination (fast).

`DGRAPH_UPGRADE_MAIN_ONLY=false`: runs all supported version combinations (slow, 30min+).
Run all upgrade tests:
# Build your local binary first
make install
# Run with main combos only (fast)
go test -v --tags=upgrade ./...
# Run with all version combos (slow, 30min+)
DGRAPH_UPGRADE_MAIN_ONLY=false go test -v --tags=upgrade ./...
Run specific package:
go test -v --tags=upgrade ./systest/mutations-and-queries/
go test -v --tags=upgrade ./acl/
go test -v --tags=upgrade ./worker/
Run single test:
go test -v --tags=upgrade -run '^TestUpgradeName$' ./pkg/
With make:
# Run all upgrade tests
make test-upgrade
# Run upgrade tests for a specific package
make test TAGS=upgrade PKG=acl
# Run a specific upgrade test
make test TAGS=upgrade PKG=acl TEST=TestACL
| Package | Tests |
|---|---|
systest/mutations-and-queries/ | Data preservation across upgrades |
systest/multi-tenancy/ | Namespace/ACL upgrade behaviour |
systest/plugin/ | Custom plugin compatibility |
acl/ | ACL schema and permissions |
worker/ | Worker-level upgrade logic |
query/ | Query behaviour consistency |
Dgraph follows standard Go testing patterns with specific conventions.
Function names:
- Start with `Test`: `TestParseSchema`, `TestQueryExecution`
- Use CamelCase: `TestBackupAndRestore`, not `Test_Backup_And_Restore`
- Be descriptive: `TestVectorIndexRebuilding`, not `TestVector`

File names:
- End with `_test.go`: `parser_test.go`, `backup_test.go`
- Mirror the source file: `schema.go` → `schema_test.go`

Table-driven tests are used extensively in Dgraph for testing multiple scenarios:
func TestConversion(t *testing.T) {
tests := []struct {
name string
input Val
output Val
wantErr bool
}{
{name: "string to int", input: Val{...}, output: Val{...}},
{name: "invalid type", input: Val{...}, wantErr: true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got, err := Convert(tc.input, tc.output.Tid)
if tc.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.output, got)
})
}
}
Benefits:

- Each case runs as a named subtest via `t.Run`
- New scenarios are added as table entries, not new functions

Dgraph uses the testify library:
require.* (fail immediately):
require.NoError(t, err) // Stops test if err != nil
require.Equal(t, expected, actual)
require.True(t, condition)
When to use: Setup, critical checks, integration tests
assert.* (continue on failure):
assert.NoError(t, err) // Logs error but continues
assert.Equal(t, expected, actual)
When to use: Rarely in Dgraph; prefer require for clarity
Convention: Use require by default.
Creates isolated subtests with individual names:
func TestCluster(t *testing.T) {
t.Run("start nodes", func(t *testing.T) {
// subtest 1
})
t.Run("health check", func(t *testing.T) {
// subtest 2
})
}
Benefits:
- Run individual subtests: `go test --run 'TestCluster/health'`

Always defer cleanup operations:
func TestWithCluster(t *testing.T) {
c, err := dgraphtest.NewLocalCluster(conf)
require.NoError(t, err)
defer func() { c.Cleanup(t.Failed()) }() // Cleanup even if test fails
gc, cleanup, err := c.Client()
require.NoError(t, err)
defer cleanup() // Close client connections
}
Why: Ensures resources are freed even on test failure.
Mark helper functions so failures point to actual test line:
func setupTestData(t *testing.T, gc *GrpcClient) {
t.Helper() // Failures show caller line, not this line
err := gc.SetupSchema(`name: string .`)
require.NoError(t, err)
}
func TestSomething(t *testing.T) {
setupTestData(t, gc) // If this fails, error points here
// ...
}
// BAD
time.Sleep(5 * time.Second) // Flaky!
// GOOD
require.NoError(t, c.HealthCheck(false)) // Wait for actual condition
// BAD
var sharedClient *Client // Tests interfere with each other
// GOOD
func TestX(t *testing.T) {
client := newClient() // Each test gets its own
}
// BAD - Test2 depends on Test1 running first
func TestInsertData(t *testing.T) { /* insert */ }
func TestQueryData(t *testing.T) { /* assumes data exists */ }
// GOOD - Each test is independent
func TestQuery(t *testing.T) {
setupData(t) // Set up what you need
// ... test query
}
// BAD
client.Mutate(mutation) // Ignoring error
// GOOD
_, err := client.Mutate(mutation)
require.NoError(t, err)
Use with caution:
func TestIndependent(t *testing.T) {
t.Parallel() // Can run in parallel with other tests
// Only if test doesn't share resources
}
Don't use for:
Dgraph uses testify/suite for tests needing shared setup/teardown across multiple test methods.
When to use:

- Tests that need shared setup/teardown across methods (hooks like `SetupTest`, `TearDownTest`)

Benefits:
Key pattern: Shared test logic across build tags
Dgraph uses suites to run identical test methods for both integration and upgrade tests:
Integration suite (`//go:build integration`): runs the shared test methods against a freshly started cluster.

Upgrade suite (`//go:build upgrade`): runs the same test methods around a call to the `Upgrade()` method.

Available hooks:
- `SetupSuite()` - once before all tests
- `SetupTest()` - before each test method
- `SetupSubTest()` - before each subtest
- `TearDownTest()` - after each test method
- `TearDownSuite()` - once after all tests

How to run:
# Run entire test suite (all test methods)
go test -v --tags=integration ./systest/plugin/
# Run specific test method from suite
go test -v --tags=integration --run 'TestPluginTestSuite/TestPasswordReturn' ./systest/plugin/
# Run specific subtest within a test method
go test -v --tags=integration --run 'TestPluginTestSuite/TestPasswordReturn/subtest' ./systest/plugin/
# Run same tests in upgrade mode
go test -v --tags=upgrade --run 'TestPluginTestSuite/TestPasswordReturn' ./systest/plugin/
With make:
# Run the plugin systest package via t/ runner
make test SUITE=systest PKG=systest/plugin
# Run a specific test
make test SUITE=systest PKG=systest/plugin TEST=TestPluginTestSuite/TestPasswordReturn
# Run in upgrade mode
make test TAGS=upgrade PKG=systest/plugin TEST=TestPluginTestSuite/TestPasswordReturn
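A minimal skeleton showing how the suite hooks fit together. The package name, suite name, and empty hook bodies are illustrative; real suites such as those in `systest/plugin/` start a `dgraphtest` cluster in `SetupSuite`:

```go
//go:build integration

package plugin // illustrative package name

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

type CommonTestSuite struct {
	suite.Suite
	// shared state, e.g. a dgraphtest cluster handle
}

func (s *CommonTestSuite) SetupSuite()    { /* start cluster once */ }
func (s *CommonTestSuite) SetupTest()     { /* reset state before each method */ }
func (s *CommonTestSuite) TearDownTest()  { /* per-method cleanup */ }
func (s *CommonTestSuite) TearDownSuite() { /* stop cluster */ }

// Methods starting with "Test" are discovered and run by suite.Run.
func (s *CommonTestSuite) TestPasswordReturn() {
	s.Require().True(true) // real assertions go here
}

// Standard go test entry point; runs every Test* method above.
func TestCommonTestSuite(t *testing.T) {
	suite.Run(t, new(CommonTestSuite))
}
```

The upgrade variant would embed the same test methods but tag the file `//go:build upgrade` and trigger the upgrade between setup and assertions.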
When NOT to use:

- Simple standalone tests: use a plain `func TestX(t *testing.T)` instead

Examples in Dgraph codebase:
- `acl/integration_test.go` + `acl/acl_integration_test.go` - ACL suite
- `systest/plugin/` - Integration + Upgrade suites sharing test methods
- `systest/mutations-and-queries/` - Integration + Upgrade suites

Fuzzing tests parser and validation logic with random inputs to find edge cases.
Go's native fuzzing generates random inputs to find crashes, panics, or unexpected behaviour.
- `dql/parser_fuzz_test.go` - DQL query parser fuzzing

# Run fuzz test for 5 minutes
go test -v ./dql -fuzz=Fuzz -fuzztime=5m
# Run with custom timeout
go test -v ./dql -fuzz=Fuzz -fuzztime=300s -fuzzminimizetime=120s
With make:
# Run all fuzz tests (default 300s per package)
make test-fuzz
# Fuzz a specific package with custom duration
make test FUZZ=1 PKG=dql FUZZTIME=5m
In CI, fuzzing runs via:

- `ci-dgraph-fuzz.yml` (runs on PRs)
- `go test -v ./dql -fuzz="Fuzz" -fuzztime="300s"`

The following improvements could still enhance the developer experience:
- Have the `t/` runner also handle unit and integration2 tests, providing a
consistent interface for all test types.