# KurrentDB.Testing
A comprehensive testing toolkit for KurrentDB that provides modern testing infrastructure, test data generation, advanced assertions, and observability integration.
KurrentDB.Testing is an opinionated testing framework designed to standardize and enhance the testing experience across the KurrentDB codebase.
## Required Setup

Every test project that references KurrentDB.Testing MUST create a `TestEnvironmentWireUp.cs` file to bootstrap the test environment. This is the absolute minimum that every test project needs:
```csharp
using KurrentDB.Testing.TUnit;
using TUnit.Core.Executors;

// Register the toolkit configurator and executor at assembly level
[assembly: ToolkitTestConfigurator]
[assembly: TestExecutor<ToolkitTestExecutor>]

namespace YourProject.Tests;

public class TestEnvironmentWireUp {
    [Before(Assembly)]
    public static ValueTask BeforeAssembly(AssemblyHookContext context) =>
        ToolkitTestEnvironment.Initialize(context.Assembly);

    [After(Assembly)]
    public static ValueTask AfterAssembly(AssemblyHookContext context) =>
        ToolkitTestEnvironment.Reset(context.Assembly);
}
```
Add the KurrentDB.Testing project reference to your test project:

```xml
<ItemGroup>
  <ProjectReference Include="..\KurrentDB.Testing\KurrentDB.Testing.csproj" />
</ItemGroup>
```
This step is MANDATORY. See the Required Setup section above for detailed instructions.
At minimum, create a TestEnvironmentWireUp.cs file with assembly-level hooks to initialize the test environment. Test-level hooks for logging are optional.
```csharp
public class MyFirstTests {
    [Test]
    public async ValueTask MyTest_ShouldPass() {
        // Arrange
        var expected = "Hello, World!";

        // Act
        var actual = "Hello, World!";

        // Assert
        await Assert.That(actual).IsEqualTo(expected);
    }
}
```
The `ToolkitTestEnvironment` class provides:

- Configuration from `appsettings.json` and environment variables
- A `TestUid` property that is automatically added to all log events

The `ToolkitTestExecutor` manages the test execution lifecycle.
The enhanced test context offers utilities for sharing state:

- `context.InjectItem<T>()`: Store data in the test context
- `context.ExtractItem<T>()`: Retrieve stored data

Bogus should be used as a TUnit `ClassDataSource` for test data generation:
```csharp
public class MyTests {
    [ClassDataSource<BogusFaker>(Shared = SharedType.PerTestSession)]
    public required BogusFaker Faker { get; init; }

    [Test]
    public async Task GenerateRandomPerson() {
        var name = Faker.Name.FullName();
        var email = Faker.Internet.Email();

        await Assert.That(name).IsNotNull();
        await Assert.That(email).Contains("@");
    }
}
```
This approach ensures a single `BogusFaker` instance is shared across the entire test session, as specified by `SharedType.PerTestSession`.
The toolkit provides `ShouldBeEquivalentTo` for deep object comparison:

```csharp
[Test]
public async Task DeepComparison_ShouldSucceed() {
    var expected = new Order {
        Id = 1,
        Items = new List<OrderItem> {
            new() { ProductId = 100, Quantity = 2 },
            new() { ProductId = 101, Quantity = 1 }
        }
    };

    var actual = GetOrder();

    actual.ShouldBeEquivalentTo(expected, config => config
        .Excluding(x => x.CreatedAt)
        .WithStringComparison(StringComparison.OrdinalIgnoreCase)
        .WithNumericTolerance(0.01));
}
```
Comparison behavior can be tuned through the configuration callback:

```csharp
config => config
    .Excluding(x => x.Property)                               // Exclude specific properties
    .Excluding("Path.To.Property")                            // Exclude by path string
    .Using<DateTime>((a, e) => a.Date == e.Date)              // Custom comparer
    .WithStringComparison(StringComparison.OrdinalIgnoreCase) // Case-insensitive strings
    .WithNumericTolerance(0.01)                               // Tolerance for numeric comparisons
    .IgnoringCollectionOrder()                                // Order-independent collection comparison
```
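For example, to compare timestamps by date only while ignoring element order, the options combine like this (a sketch reusing the `actual` and `expected` objects from the example above):

```csharp
actual.ShouldBeEquivalentTo(expected, config => config
    .Using<DateTime>((a, e) => a.Date == e.Date) // Compare only the date component
    .IgnoringCollectionOrder());                 // Items may appear in any order
```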
`ShouldBeSubsetOf` verifies that every element of one collection appears in another:

```csharp
[Test]
public async Task SubsetComparison() {
    var subset = new[] { 1, 2, 3 };
    var collection = new[] { 1, 2, 3, 4, 5 };

    subset.ShouldBeSubsetOf(collection);
}
```
Test correlation is built into TUnit. The testing toolkit automatically adds a TestUid property to all log events, allowing you to correlate logs with specific tests.
All logs generated during test execution are automatically tagged with the test's unique identifier for easy filtering and debugging in Seq or other log aggregation tools.
```csharp
[Test]
public void MyTest() {
    // TestUid is automatically added to all logs
    Log.Information("Processing test");
    // All logs will be tagged with TestUid for correlation
}
```
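Once the logs reach Seq, a single test's output can be isolated by filtering on that property (a sketch; replace the placeholder with the actual `TestUid` value shown on your log events):

```
TestUid = '<test-uid>'
```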
Add OTel metadata to the test context:

```csharp
context.AddOtelServiceMetadata(new OtelServiceMetadata {
    ServiceName = "MyService",
    ServiceVersion = "1.0.0"
});
```
The HomeAutomation sample demonstrates a complete implementation of the testing toolkit for a smart home domain.
```csharp
[Test]
public async Task GenerateSmartHome() {
    var faker = new Faker();

    // Generate a home with 5 rooms, 2 devices per room
    var home = faker.HomeAutomation().Home(rooms: 5, devicesPerRoom: 2);

    await Assert.That(home.Rooms.Count).IsEqualTo(5);
    await Assert.That(home.Devices.Count).IsEqualTo(10);
}
```
```csharp
[Test]
public async Task GenerateOnlyLightsAndSensors() {
    var faker = new Faker();
    var allowedTypes = new[] {
        DeviceType.SmartLight,
        DeviceType.MotionSensor
    };

    var home = faker.HomeAutomation().Home(deviceTypes: allowedTypes);

    foreach (var device in home.Devices) {
        await Assert.That(allowedTypes).Contains(device.DeviceType);
    }
}
```
```csharp
[Test]
public async Task GenerateHomeEvents() {
    var faker = new Faker();
    var home = faker.HomeAutomation().Home();

    // Generate 50 events for this home's devices
    var events = faker.HomeAutomation().Events(home, count: 50);

    await Assert.That(events.Count).IsGreaterThanOrEqualTo(50);
}
```
The HomeAutomation DataSet includes smart event correlation:
```csharp
// When motion is detected, lights in the same room may turn on
var events = faker.HomeAutomation().Events(home, count: 10);

// Events are automatically correlated:
// 1. MotionDetected in Living Room
// 2. LightStateChanged (Living Room light turns on) - correlated 2-10 seconds later
```
The testing toolkit includes a docker-compose configuration that runs:

- Seq (http://localhost:5341)
- Aspire Dashboard (http://localhost:18888)
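A minimal sketch of what such a compose file might look like (service names, image tags, port mappings, and the Seq `ACCEPT_EULA` setting are assumptions; the file shipped with the toolkit is authoritative):

```yaml
services:
  seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y   # Seq requires explicit EULA acceptance
    ports:
      - "5341:80"       # Seq UI and ingestion on http://localhost:5341

  aspire-dashboard:
    image: mcr.microsoft.com/dotnet/aspire-dashboard:latest
    ports:
      - "18888:18888"   # Dashboard UI on http://localhost:18888
```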
Start the infrastructure:

```shell
docker-compose up -d
```
When adding new testing utilities: