pip/pip-454.md
Apache Pulsar currently uses Apache ZooKeeper as its metadata store for broker coordination, topic metadata, namespace policies, and BookKeeper ledger management. While ZooKeeper has served well, there are several motivations for enabling migration to alternative metadata stores:
Operational Simplicity: Alternative metadata stores like Oxia may offer simpler operations, better observability, or reduced operational overhead compared to ZooKeeper ensembles.
Performance Characteristics: Different metadata stores have different performance profiles. Some workloads may benefit from stores optimized for high throughput or low latency.
Deployment Flexibility: Organizations may prefer metadata stores that align better with their existing infrastructure and expertise.
Zero-Downtime Migration: Operators need a safe, automated way to migrate metadata between stores without service interruption.
Currently, there is no supported path to migrate from one metadata store to another without cluster downtime. This PIP proposes a safe, simple migration framework that ensures metadata consistency by avoiding complex dual-write/dual-read patterns.
The goal is to provide a safe, automated framework for migrating Apache Pulsar's metadata from one store implementation (e.g., ZooKeeper) to another (e.g., Oxia) with zero service interruption.
The migration framework introduces a DualMetadataStore wrapper that transparently handles migration without modifying existing metadata store implementations.
Transparent Wrapping: The DualMetadataStore wraps the existing source store (e.g., ZKMetadataStore) without modifying its implementation.
Lazy Target Initialization: The target store is only initialized when migration begins, triggered by a flag in the source store.
Ephemeral-First Approach: Before copying persistent data, all brokers and bookies recreate their ephemeral nodes in the target store. This ensures the cluster is "live" in both stores during migration.
Read-Only Mode During Migration: To ensure consistency, all metadata writes are blocked during PREPARATION and COPYING phases. Components receive SessionLost events to defer non-critical operations (e.g., ledger rollovers).
Phase-Based Migration: Migration proceeds through well-defined phases (PREPARATION → COPYING → COMPLETED).
Generic Framework: The framework is agnostic to specific store implementations - it works with any source and target that implement the MetadataStore interface.
Guaranteed Consistency: By blocking writes during migration and using atomic copy, metadata is always in a consistent state. No dual-write complexity, no data divergence, no consistency issues.
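To make the wrapper idea concrete, here is a minimal, illustrative sketch (not the actual implementation) of a store that delegates reads to the single active store and rejects writes while a migration is in progress. The SimpleStore interface, the class name, and the beginMigration method are assumptions made for this example:

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class DualMetadataStoreSketch {

    enum Phase { NOT_STARTED, PREPARATION, COPYING, COMPLETED, FAILED }

    /** Minimal store abstraction used only by this sketch. */
    interface SimpleStore {
        CompletableFuture<Optional<byte[]>> get(String path);
        CompletableFuture<Void> put(String path, byte[] value);
    }

    private final SimpleStore source;      // existing store, e.g. ZooKeeper-backed
    private volatile SimpleStore target;   // lazily initialized when migration begins
    private volatile Phase phase = Phase.NOT_STARTED;

    DualMetadataStoreSketch(SimpleStore source) {
        this.source = source;
    }

    /** Called when the migration flag appears in the source store. */
    void beginMigration(SimpleStore target) {
        this.target = target;
        this.phase = Phase.PREPARATION;
    }

    /** Reads always go to the single active store. */
    CompletableFuture<Optional<byte[]>> get(String path) {
        return activeStore().get(path);
    }

    /** Writes are rejected while migration is in the PREPARATION or COPYING phase. */
    CompletableFuture<Void> put(String path, byte[] value) {
        if (phase == Phase.PREPARATION || phase == Phase.COPYING) {
            return CompletableFuture.failedFuture(
                    new IllegalStateException("Metadata is read-only during migration"));
        }
        return activeStore().put(path, value);
    }

    private SimpleStore activeStore() {
        // The source store stays authoritative until COMPLETED; afterwards the target is.
        return phase == Phase.COMPLETED ? target : source;
    }
}
```

In the actual proposal the wrapper implements the full MetadataStore interface and derives the phase from the migration flag in the source store rather than from a local setter.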
```
NOT_STARTED
     ↓
PREPARATION   ← All brokers/bookies recreate ephemeral nodes in target
              ← Metadata writes are BLOCKED (read-only mode)
     ↓
COPYING       ← Coordinator copies persistent data source → target
              ← Metadata writes still BLOCKED
     ↓
COMPLETED     ← Migration complete, all services using target store
              ← Metadata writes ENABLED on target
     ↓
After validation period:
  * Update config and restart brokers & bookies
  * Decommission source store

(If errors occur):
FAILED        ← Rollback to source store, writes ENABLED
```
Participant Registration (at startup): Each broker and bookie registers itself as a migration participant by creating a sequential ephemeral node:
`/pulsar/migration-coordinator/participants/id-NNNN` (sequential)
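For illustration, registration could look like the following sketch, assuming Pulsar's MetadataStoreExtended API with ephemeral and sequential create options (error handling omitted; not the proposed implementation):

```java
import java.util.EnumSet;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

import org.apache.pulsar.metadata.api.Stat;
import org.apache.pulsar.metadata.api.extended.CreateOption;
import org.apache.pulsar.metadata.api.extended.MetadataStoreExtended;

public class ParticipantRegistrationSketch {

    private static final String PARTICIPANTS_PREFIX = "/pulsar/migration-coordinator/participants/id-";

    /**
     * Registers this broker/bookie as a migration participant by creating a
     * sequential ephemeral node. The node disappears automatically if the
     * participant's session is lost.
     */
    static CompletableFuture<Stat> register(MetadataStoreExtended store, byte[] participantInfo) {
        return store.put(PARTICIPANTS_PREFIX, participantInfo,
                Optional.of(-1L),                 // -1 => the node must be newly created
                EnumSet.of(CreateOption.Ephemeral, CreateOption.Sequential));
    }
}
```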
Administrator triggers migration:

```shell
pulsar-admin metadata-migration start --target oxia://oxia1:6648
```
Coordinator actions: the coordinator creates the migration flag at `/pulsar/migration-coordinator/migration` with the content:

```json
{
  "phase": "PREPARATION",
  "targetUrl": "oxia://oxia1:6648"
}
```
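A sketch of how the coordinator could create this flag through the metadata store API (the inline JSON construction is for brevity only):

```java
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

import org.apache.pulsar.metadata.api.Stat;
import org.apache.pulsar.metadata.api.extended.CreateOption;
import org.apache.pulsar.metadata.api.extended.MetadataStoreExtended;

public class MigrationFlagSketch {

    private static final String MIGRATION_PATH = "/pulsar/migration-coordinator/migration";

    /** Creates the migration flag with the PREPARATION phase and the target URL. */
    static CompletableFuture<Stat> startPreparation(MetadataStoreExtended store, String targetUrl) {
        String json = "{\"phase\":\"PREPARATION\",\"targetUrl\":\"" + targetUrl + "\"}";
        // Expected version -1 => the flag must not already exist (no migration in progress).
        return store.put(MIGRATION_PATH, json.getBytes(StandardCharsets.UTF_8),
                Optional.of(-1L), EnumSet.noneOf(CreateOption.class));
    }
}
```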
Broker/Bookie actions (automatic, triggered by watching the flag at `/pulsar/migration-coordinator/migration`): each participant recreates its ephemeral nodes in the target store and enters read-only mode, then deletes its participant node to signal readiness (a listener sketch follows below).

The coordinator waits for all participant nodes to be deleted (indicating all participants are ready).
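A sketch of the participant side, assuming the flag is watched through Pulsar's notification listener API; the reaction itself is only outlined in comments:

```java
import org.apache.pulsar.metadata.api.MetadataStore;
import org.apache.pulsar.metadata.api.Notification;

public class MigrationFlagWatcherSketch {

    private static final String MIGRATION_PATH = "/pulsar/migration-coordinator/migration";

    /** Registers a listener that fires when the migration flag is created or updated. */
    static void watch(MetadataStore store, Runnable onPhaseChange) {
        store.registerListener((Notification n) -> {
            if (MIGRATION_PATH.equals(n.getPath())) {
                // Re-read the flag, parse the phase, and react, e.g.:
                //  - PREPARATION: recreate ephemeral nodes in the target store,
                //    enter read-only mode, then delete the participant node
                //  - COMPLETED: switch to the target store, re-enable writes
                //  - FAILED: fall back to the source store, re-enable writes
                onPhaseChange.run();
            }
        });
    }
}
```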
Coordinator actions: the coordinator updates the migration flag to the COPYING phase and copies all persistent data from the source store to the target store.

During this phase:
* Metadata writes remain blocked (read-only mode continues).
* Data plane operations (publish/consume) continue normally.

Estimated duration: under 30 seconds for typical metadata sizes (up to 500 MB).

Impact on operations: brokers and bookies stay online; only metadata operations are temporarily blocked.
Coordinator actions: once the copy finishes, the coordinator updates the migration flag to the COMPLETED phase.

Broker/Bookie actions (automatic, triggered by the phase update): each participant switches to the target store when it observes the COMPLETED phase.

At this point, all services are using the target store and metadata writes are enabled on the target.
Operator follow-up (after validation period):
```properties
# Before (ZooKeeper):
metadataStoreUrl=zk://zk1:2181,zk2:2181/pulsar

# After (Oxia):
metadataStoreUrl=oxia://oxia1:6648
```
If migration fails at any point:
* The coordinator sets the migration flag to the FAILED phase.
* On observing the FAILED phase, participants roll back to the source store and metadata writes are re-enabled.
* Operator actions: once the underlying issue is resolved, migration can be retried with `pulsar-admin metadata-migration start --target <url>`.

Direct Oxia Client Usage: The coordinator uses AsyncOxiaClient directly instead of going through the MetadataStore interface. This allows setting the version-id and modification count to match the source values, ensuring conditional writes (compare-and-set operations) continue to work correctly after migration.
Breadth-First Traversal: Processes paths level by level using a work queue, enabling high concurrency while preventing deep recursion.
Concurrent Operations: Uses a semaphore to limit pending operations (default: 1000), balancing throughput with memory usage.
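Below is a rough sketch of that copy loop; the TargetWriter interface is a hypothetical stand-in for the direct AsyncOxiaClient calls described above, which would also carry the source version-id and modification count:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

import org.apache.pulsar.metadata.api.MetadataStore;

public class BreadthFirstCopySketch {

    /** Hypothetical target-side writer standing in for the direct Oxia client calls. */
    interface TargetWriter {
        CompletableFuture<Void> write(String path, byte[] value, long sourceVersion);
    }

    static void copy(MetadataStore source, TargetWriter target) throws InterruptedException {
        Semaphore pending = new Semaphore(1000);       // bound in-flight operations (default: 1000)
        Queue<String> workQueue = new ArrayDeque<>();
        workQueue.add("/");

        while (!workQueue.isEmpty()) {
            String path = workQueue.poll();

            // Level-by-level traversal: enqueue children instead of recursing.
            for (String child : source.getChildren(path).join()) {
                workQueue.add(path.equals("/") ? "/" + child : path + "/" + child);
            }

            pending.acquire();
            source.get(path)
                    .thenCompose(result -> result
                            .map(r -> target.write(path, r.getValue(), r.getStat().getVersion()))
                            .orElseGet(() -> CompletableFuture.completedFuture(null)))
                    .whenComplete((v, ex) -> pending.release());
        }

        // Wait for every in-flight copy to finish before reporting completion.
        pending.acquire(1000);
    }
}
```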
Migration State (/pulsar/migration-coordinator/migration):
```json
{
  "phase": "PREPARATION",
  "targetUrl": "oxia://oxia1:6648/default"
}
```
Fields:
* phase: Current migration phase (NOT_STARTED, PREPARATION, COPYING, COMPLETED, FAILED)
* targetUrl: Target metadata store URL (e.g., oxia://oxia1:6648/default)

Participant Registration (/pulsar/migration-coordinator/participants/id-NNNN):
No additional state tracking: The simplified design removes complex state tracking and checksums. Migration state is kept minimal.
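For illustration, the entire migration state can be modeled with just two fields; the Java record below is a hypothetical representation of the JSON above, not part of the proposal:

```java
/** Mirrors the JSON stored at /pulsar/migration-coordinator/migration (illustrative only). */
public record MigrationState(MigrationPhase phase, String targetUrl) {

    public enum MigrationPhase {
        NOT_STARTED, PREPARATION, COPYING, COMPLETED, FAILED
    }
}
```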
```shell
# Start migration
pulsar-admin metadata-migration start --target <target-url>

# Check status
pulsar-admin metadata-migration status
```
The simplified design only requires two commands. Rollback happens automatically if migration fails (phase transitions to FAILED).
```
POST /admin/v2/metadata/migration/start
Body: { "targetUrl": "oxia://..." }

GET /admin/v2/metadata/migration/status
Returns: { "phase": "COPYING", "targetUrl": "oxia://..." }
```
The migration design guarantees metadata consistency by avoiding dual-write and dual-read patterns entirely:
Single Source of Truth: At any given time, there is exactly ONE active metadata store: the source store before and during migration (and on failure), and the target store once the COMPLETED phase is reached.
No Dual-Write Complexity: Unlike approaches that write to both stores simultaneously, this design eliminates dual-write error handling and data divergence issues.
No Dual-Read Complexity: Unlike approaches that read from both stores, this design never has to decide which store holds the authoritative value for a read.
Atomic Cutover: All participants switch stores simultaneously when COMPLETED phase is detected. There is no ambiguous state where some participants use one store and others use another.
Fast Migration Window: With < 30 seconds for typical metadata sizes (even up to 500 MB), the read-only window is minimal and acceptable for most production environments.
Bottom line: Metadata is always in a consistent state - either fully in the source store or fully in the target store, never split or diverged between them.
Version Preservation: All persistent data is copied with original version-id and modification count preserved. This ensures conditional writes (compare-and-set operations) continue working after migration.
Ephemeral Node Recreation: All ephemeral nodes are recreated by their owning brokers/bookies before persistent data copy begins.
Read-Only Mode: All metadata writes are blocked during PREPARATION and COPYING phases, ensuring no data inconsistencies during migration.
Important: Read-only mode only affects metadata operations. Data plane operations continue normally:
No Downtime: Brokers and bookies remain online throughout the migration. Data plane operations (publish/consume) continue without interruption. Only metadata operations are temporarily blocked during the migration phases.
Graceful Failure: If migration fails at any point, phase transitions to FAILED and cluster returns to source store automatically.
Session Events: Components receive a SessionLost event during migration to defer non-critical writes (e.g., ledger rollovers), and SessionReestablished when migration completes or fails; see the listener sketch after this list.
Participant Coordination: Migration waits for all participants to complete preparation before copying data.
Atomic Cutover: All participants switch to target store simultaneously when COMPLETED phase is detected.
Ephemeral Session Consistency: Each participant manages its own ephemeral nodes in target store with proper session management.
No Dual-Write Complexity: By blocking writes during migration, we avoid complex dual-write error handling and data divergence issues.
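As referenced in the Session Events item above, here is a sketch of how a component could consume these events, assuming Pulsar's MetadataStoreExtended session listener API (the deferral flag is illustrative):

```java
import org.apache.pulsar.metadata.api.extended.MetadataStoreExtended;
import org.apache.pulsar.metadata.api.extended.SessionEvent;

public class MigrationSessionListenerSketch {

    private volatile boolean deferNonCriticalWrites = false;

    /** Defers operations such as ledger rollovers while the migration is in progress. */
    void attach(MetadataStoreExtended store) {
        store.registerSessionListener((SessionEvent event) -> {
            if (event == SessionEvent.SessionLost) {
                deferNonCriticalWrites = true;    // raised when migration enters read-only mode
            } else if (event == SessionEvent.SessionReestablished) {
                deferNonCriticalWrites = false;   // migration completed or failed
            }
        });
    }

    boolean shouldDeferNonCriticalWrites() {
        return deferNonCriticalWrites;
    }
}
```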
The beauty of this design is that no configuration changes are needed to start migration:
* No change to the existing metadataStoreUrl config is required.
* The DualMetadataStore wrapper is automatically applied when using ZooKeeper.

After migration completes and the validation period ends, update the config files:
```properties
# Before migration
metadataStoreUrl=zk://zk1:2181,zk2:2181,zk3:2181/pulsar

# After migration (update and rolling restart)
metadataStoreUrl=oxia://oxia1:6648
```
Apache Kafka faced a similar challenge migrating from ZooKeeper to KRaft (Kafka Raft). Their approach provides useful comparison points:
Kafka's migration strategy and timeline are contrasted with this proposal in the table below.
| Aspect | Kafka (ZK → KRaft) | Pulsar (ZK → Oxia) |
|---|---|---|
| Migration Duration | Hours to days | < 30 seconds (up to 500 MB) |
| Metadata Writes | Continue during migration | Blocked during migration |
| Data Plane | Unaffected | Unaffected (publish/consume continues) |
| Approach | Continuous sync + dual-mode | Atomic copy + read-only mode |
| Consistency | Dual-write (eventual consistency) | Single source of truth (always consistent) |
| Complexity | High (dual-mode broker logic) | Low (transparent wrapper) |
| Version Preservation | Not applicable (different metadata models) | Yes (conditional writes preserved) |
| Rollback | Manual, complex | Automatic on failure |
| Monitoring | Requires lag tracking | Simple phase monitoring |
Data Plane Independence: The key insight is that Pulsar's data plane (publish/consume, ledger writes) does not require metadata writes to function. This architectural property allows pausing metadata writes for a brief period (< 30 seconds) without affecting data operations. This is what makes the migration provably safe and consistent, not the metadata size.
Write-Pause Safety: Pausing writes during the copy ensures the target receives a consistent point-in-time snapshot of the metadata, with no concurrent modifications to reconcile.
This works regardless of metadata size - whether 50K nodes or millions of topics. The migration handles large metadata volumes through high concurrency (1000 parallel operations), completing in < 30 seconds even for 500 MB.
Ephemeral Node Handling: Pulsar has significant ephemeral metadata (broker registrations, bundle ownership), making dual-write complex. Read-only mode simplifies this.
Conditional Writes: Pulsar relies heavily on compare-and-set operations. Version preservation ensures these continue working post-migration, which Kafka doesn't need to address.
Architectural Enabler: Pulsar's separation of data plane and metadata plane allows brief metadata write pauses without data plane impact, enabling a simpler, safer migration approach.
Pulsar's design incorporates lessons from Kafka's migration: