<Admonition type="caution">

Replication is currently in private alpha. Access is limited and features may change.

</Admonition>

Replication currently supports BigQuery as the only managed destination. See the Setup guide for configuration details.
We are working on adding more destinations. Availability may vary depending on the planned rollout strategy.
We are currently working on a new Supabase Warehouse product designed to address the limitations of the previous Analytics Buckets. Our goal is to build a solution we can confidently stand behind, rather than continuing to support an approach that does not meet the quality and flexibility standards we want for our users.
As a result, managed replication to Analytics Buckets through Supabase ETL is no longer available. For now, BigQuery is the only supported managed destination, and we are actively working on expanding the set of supported destinations.
The most common reasons are a misconfigured publication or a table that does not meet the replication requirements. Check your publication settings and verify that your table meets the requirements.
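To confirm which tables a publication includes, you can query Postgres directly. A minimal check, assuming your publication is named `my_publication` (an illustrative name):

```sql
-- List the tables included in a publication (publication name is illustrative).
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'my_publication';
```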
After modifying your Postgres publication, you must restart the replication pipeline for changes to take effect. See Adding or removing tables for instructions.
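For example, adding a table to a publication might look like the sketch below (publication and table names are hypothetical); remember that the pipeline still needs to be restarted afterwards:

```sql
-- Add a table to an existing publication (names are illustrative).
-- Restart the replication pipeline afterwards for the change to take effect.
ALTER PUBLICATION my_publication ADD TABLE public.orders;
```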
Pipeline failures occur during the streaming phase, when an error happens while replicating live data; the pipeline stops so that no data is lost. To recover, see Handling errors for more details.
Table errors occur during the copy phase. To recover, click View status, find the affected table, and reset the table state. This will restart the table copy from the beginning.
Check the Database → Replication page in the dashboard.
See the Replication Monitoring guide for comprehensive monitoring instructions.
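If you also want to check from the database side, you can inspect the active replication connection directly in Postgres. A minimal sketch, assuming you can run SQL against your project:

```sql
-- Show active replication connections and how far they have replayed.
SELECT application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;
```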
You can manage your pipeline using the actions menu in the destinations list. See Managing your pipeline for details on available actions.
Note: Stopping replication will cause changes to queue up in the WAL.
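You can estimate how much WAL a stopped pipeline is retaining by inspecting its replication slot. A rough check, assuming the pipeline's slot is visible in `pg_replication_slots`:

```sql
-- Approximate the WAL retained behind each replication slot while a pipeline is stopped.
SELECT
  slot_name,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```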
If a table is deleted downstream at the destination (for example, in your BigQuery dataset), the replication pipeline will automatically recreate it.
This behavior is by design to prevent the pipeline from breaking if tables are accidentally deleted. The pipeline ensures that all tables in your publication are always present at the destination.
To permanently remove a table from your destination:
You have two options:
Option 1: Pause the pipeline first
Option 2: Remove from publication first
Use `ALTER PUBLICATION ... DROP TABLE` to remove the table from the publication.

Note: Removing a table from the publication and restarting the pipeline does not delete the table downstream; it only stops replicating new changes to it.
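As a concrete sketch of that step, assuming a publication named `my_publication` and a table `public.orders` (both illustrative names):

```sql
-- Stop replicating a table by removing it from the publication (names are illustrative).
-- This does not delete the table at the destination.
ALTER PUBLICATION my_publication DROP TABLE public.orders;
```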
Yes, duplicate data can occur in certain scenarios when a pipeline is stopped.
When you stop a pipeline (for restarts or updates), the replication process tries to finish processing any transactions that are currently being sent to your destination. It waits up to a few minutes to allow these in-progress transactions to complete cleanly before stopping.
However, if a transaction in your database takes longer than this waiting period to complete, the pipeline will stop before that entire transaction has been fully processed. When the pipeline starts again, it must restart the incomplete transaction from the beginning to maintain transaction boundaries, which results in some data being sent twice to your destination.
Understanding transaction boundaries: A transaction is a group of database changes that happen together (for example, all changes within a `BEGIN ... COMMIT` block). Postgres logical replication must process entire transactions: it cannot process part of a transaction, stop, and then continue from the middle. This means that if a transaction is interrupted, the whole transaction must be replayed when the pipeline resumes.
Example scenario: Suppose you have a batch operation that updates 10,000 rows within a single transaction. If this operation takes 10 minutes to complete and you stop the pipeline after 5 minutes (when 5,000 rows have been processed), the pipeline cannot resume from row 5,001. Instead, when it restarts, it must reprocess all 10,000 rows from the beginning, resulting in the first 5,000 rows being sent to your destination twice.
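In SQL terms, that kind of long-running batch update is a single transaction like the sketch below (table and column names are hypothetical), and logical replication can only deliver it as one unit:

```sql
BEGIN;
-- All rows are updated inside one transaction, so if the pipeline stops midway,
-- the entire transaction is replayed from the beginning when replication resumes.
UPDATE public.orders
SET status = 'archived'
WHERE created_at < now() - interval '1 year';
COMMIT;
```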
Important: There are currently no plans to implement automatic deduplication. If your use case requires guaranteed exactly-once delivery, you should implement deduplication logic in your downstream systems based on primary keys or other unique identifiers.
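A minimal deduplication sketch in BigQuery SQL, assuming a replicated table `my_dataset.orders` with a primary key column `id` and a timestamp column `updated_at` (all hypothetical names):

```sql
-- Keep only the most recent copy of each row when duplicates may have been delivered.
CREATE OR REPLACE VIEW my_dataset.orders_deduped AS
SELECT * EXCEPT (rn)
FROM (
  SELECT
    o.*,
    ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
  FROM my_dataset.orders AS o
)
WHERE rn = 1;
```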
Navigate to Logs → Replication to see all pipeline logs. Logs contain diagnostic information. If you're experiencing issues, contact support with your error details.
If you need assistance: