# MySQL CDC benchmark results
Environment: Intel Core i7-10850H @ 2.70GHz, 32 GB RAM, WSL2 (Linux 6.6.87.2), x86_64
## Snapshot read ceiling: Redpanda Connect `mysql_cdc`, no sink

Redpanda Connect `mysql_cdc` input reading a full snapshot of the `cart` table (10,000,000 rows × ~600 B) and dropping all output immediately. No Kafka and no sink: this measures the raw MySQL read ceiling. Varying `GOMAXPROCS` and `batching.count`. See `internal/impl/mysql/bench/` for configs and run instructions.
```shell
task bench:load:cart COUNT=10000000
task bench:run CORES=1 BATCH=1000
task bench:run CORES=2 BATCH=1000
# ...
```
Throughput (rows/sec):

| GOMAXPROCS | batch=1000 | batch=5000 | batch=10000 |
|---|---|---|---|
| 1 | 99,977 | 103,433 | 104,630 |
| 2 | 163,592 | 173,022 | 173,045 |
| 4 | 187,419 | 187,439 | 187,462 |
| 8 | 191,439 | 187,464 | 187,464 |

Throughput (MB/sec, at ~600 B/row):

| GOMAXPROCS | batch=1000 | batch=5000 | batch=10000 |
|---|---|---|---|
| 1 | 60 | 62 | 63 |
| 2 | 98 | 104 | 104 |
| 4 | 113 | 113 | 113 |
| 8 | 115 | 113 | 113 |
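The two tables above report the same runs in rows/sec and MB/sec respectively; at the stated ~600 B/row the conversion is direct. A minimal sketch of that conversion (the 600 B figure is the approximate row size from the setup, so a cell can differ from the table by ±1 MB/sec due to rounding):

```python
ROW_BYTES = 600  # approximate row size stated in the setup above

def rows_to_mb_per_sec(rows_per_sec: int) -> float:
    """Convert a rows/sec figure to MB/sec at ~600 B per row."""
    return rows_per_sec * ROW_BYTES / 1_000_000

# Spot-check against the tables:
print(round(rows_to_mb_per_sec(99_977)))   # 60  (GOMAXPROCS=1, batch=1000)
print(round(rows_to_mb_per_sec(191_439)))  # 115 (GOMAXPROCS=8, batch=1000)
```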
Observations:

- Throughput scales well from 1 to 2 cores (~100k → ~173k rows/sec) but plateaus around 187k rows/sec at 4 cores and above.
- Batch size helps only marginally (a few percent at 1 core, essentially nothing at 4+).
## Write: Kafka to MySQL via Confluent JDBC Sink

10,000,000 rows written from Kafka to MySQL via the Confluent JDBC Sink connector. Schema/payload JSON envelope, 16 partitions. Varying `tasks.max`. See `internal/impl/mysql/bench/mysql-write/jdbc-sink/` for configs and run instructions.
```shell
task bench:load COUNT=10000000
task bench:run TASKS=16
```
Throughput by `tasks.max`:

| tasks.max | msg/sec |
|---|---|
| 4 | 18,518 |
| 8 | 31,250 |
| 16 | 42,553 |

A separate run at `tasks.max=16`:

| tasks.max | msg/sec |
|---|---|
| 16 | 43,859 |
Observations:

- Throughput scales with `tasks.max`, but sub-linearly: 4 → 16 tasks yields roughly 2.3× (18.5k → 42.6k msg/sec).
## Snapshot read: Debezium MySQL source connector to Kafka

Debezium MySQL source connector reading the 10,000,000-row `cart` snapshot into a Kafka topic. Varying the snapshot fetch size, `max.batch.size`, and `max.queue.size`. See `internal/impl/mysql/bench/mysql-read/debezium/` for configs and run instructions.
| fetch.size | batch.size | queue.size | elapsed | msg/sec |
|---|---|---|---|---|
| 1,000 | 1,000 | 4,000 | 841s | 11,890 |
| 5,000 | 5,000 | 20,000 | 747s | 13,386 |
| 10,000 | 10,000 | 40,000 | 781s | 12,804 |
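The msg/sec column follows directly from the elapsed time: 10,000,000 snapshot rows divided by wall-clock seconds. A quick check against the table above:

```python
TOTAL_ROWS = 10_000_000  # snapshot size used throughout these benchmarks

def msg_per_sec(elapsed_s: int) -> int:
    """Whole-number throughput for the full snapshot at a given elapsed time."""
    return TOTAL_ROWS // elapsed_s

print(msg_per_sec(841))  # 11890 (fetch.size=1,000 row)
print(msg_per_sec(747))  # 13386 (fetch.size=5,000 row)
print(msg_per_sec(781))  # 12804 (fetch.size=10,000 row)
```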
Observations:

- The middle configuration (5,000 / 5,000 / 20,000) is fastest at ~13.4k msg/sec; doubling again to 10,000 / 10,000 / 40,000 is slightly slower.
## Snapshot read: Redpanda Connect `mysql_cdc` to Kafka

Redpanda Connect `mysql_cdc` input reading the 10,000,000-row `cart` snapshot into a Kafka topic (`kafka_franz` output). Varying `GOMAXPROCS` and `batching.count`. See `internal/impl/mysql/bench/mysql-read/rpcn/` for configs and run instructions.
```shell
task bench:build
task bench:load COUNT=10000000
task bench:all OUT=results.txt
```
Throughput (msg/sec):

| GOMAXPROCS | batch=1,000 | batch=5,000 | batch=10,000 |
|---|---|---|---|
| 1 | 15,085 | 27,849 | 46,137 |
| 2 | 39,253 | 38,760 | 41,322 |
| 4 | 29,412 | 45,455 | 45,455 |
| 8 | 29,412 | 45,455 | 45,872 |
| unbounded | 28,592 | 41,908 | 50,440 |
Observations:

- Larger batches help consistently; the best run (~50.4k msg/sec) is unbounded `GOMAXPROCS` with batch=10,000.
- Core scaling is noisy and mostly flat beyond 2 cores.
## Write: Kafka to MySQL via Redpanda Connect `sql_insert`

Redpanda Connect consuming from a Kafka topic (`kafka_franz` input) and writing to MySQL (`sql_insert` output). Same Kafka broker as above (`cpus: 3`), 16 partitions. Varying `GOMAXPROCS` and `batching.count`. See `internal/impl/mysql/bench/mysql-write/rpcn/` for configs and run instructions.
```shell
task bench:load COUNT=10000000
task bench:run CORES=1 BATCH=10000
task bench:run CORES=4 BATCH=10000
# ...
```
Throughput (msg/sec):

| GOMAXPROCS | batch=10000 |
|---|---|
| 4 | 64,102 |
| 8 | 60,975 |
Observations:

- ~64k msg/sec at 4 cores; 8 cores is slightly slower (~61k). Roughly 1.5× the best JDBC Sink result above.
## CDC streaming: Debezium MySQL source connector to Kafka

Debezium MySQL source connector streaming CDC change events (inserts) for 10,000,000 rows into a Kafka topic. Varying `max.batch.size` and `max.queue.size`. See `internal/impl/mysql/bench/mysql-read/debezium/` for configs and run instructions.
| batch.size | queue.size | elapsed | msg/sec |
|---|---|---|---|
| 1,000 | 4,000 | ~549s | 18,227 |
| 5,000 | 20,000 | 392s | 25,510 |
| 10,000 | 40,000 | 427s | 23,419 |
Observations:

- `max.batch.size=5,000` / `max.queue.size=20,000` is fastest (~25.5k msg/sec); 10,000 / 40,000 is slightly slower.
- Streaming is notably faster than Debezium's snapshot read above (~25.5k vs ~13.4k msg/sec at best).
## CDC streaming: Redpanda Connect `mysql_cdc` to Kafka

Redpanda Connect `mysql_cdc` input streaming CDC change events (inserts) for 10,000,000 rows into a Kafka topic (`kafka_franz` output). Varying `GOMAXPROCS` and `batching.count`. See `internal/impl/mysql/bench/mysql-read/rpcn/` for configs and run instructions.
```shell
task bench:build
task bench:load:cdc
task bench:all:cdc COUNT=10000000 OUT=cdc_results.txt
```
Throughput (msg/sec):

| GOMAXPROCS | batch=1,000 | batch=5,000 | batch=10,000 |
|---|---|---|---|
| 1 | 17,361 | 19,920 | 19,920 |
| 2 | 18,939 | 15,873 | 15,974 |
| 4 | 15,873 | 15,873 | 15,823 |
| 8 | 16,077 | 15,773 | 16,287 |
Observations:

- Throughput is roughly flat at ~16–20k msg/sec regardless of cores or batch size; the best cells are `GOMAXPROCS=1` with batch=5,000–10,000 (~19.9k), suggesting the single binlog stream, not CPU, is the bottleneck.