<Admonition type="caution">

Replication is currently in private alpha. Access is limited and features may change.

</Admonition>

Replication uses Postgres logical replication to stream changes from your database. Powered by Supabase ETL, an open source tool built for Postgres logical replication, it provides a managed interface through the Dashboard to configure and monitor replication pipelines.
Replication requires two main components: a Postgres publication (defines what to replicate) and a destination (where data is sent). Follow these steps to set up your replication pipeline.

## Step 1: Create a publication
<Admonition type="tip">

If you already have a Postgres publication set up, you can skip to Step 2: Enable replication.

</Admonition>

A Postgres publication defines which tables and change types will be replicated from your database. You create publications using SQL.
The following SQL examples assume you have users and orders tables in your database.
```sql
-- Create publication for both tables
create publication pub_users_orders
for table users, orders;
```
This publication will track all changes (INSERT, UPDATE, DELETE, TRUNCATE) for both the users and orders tables.
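By default, a publication publishes all four operation types. If you only need a subset, Postgres lets you restrict this with the `publish` option. A minimal sketch, using an illustrative publication name:

```sql
-- Replicate only inserts and updates (skip deletes and truncates)
create publication pub_users_inserts_updates
for table users, orders
with (publish = 'insert, update');
```

You can change this later with `alter publication ... set (publish = ...)`.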
```sql
-- Create a publication for all tables in the public schema
create publication pub_all_public for tables in schema public;
```
This will track changes for all existing and future tables in the public schema.
```sql
-- Create a publication for all tables
create publication pub_all_tables for all tables;
```
This will track changes for all tables in your database.
You can replicate only a subset of columns from a table:
```sql
-- Replicate only specific columns from the users table
create publication pub_users_subset
for table users (id, email, created_at);
```
This will only replicate the id, email, and created_at columns from the users table.
You can filter which rows to replicate using a WHERE clause:
```sql
-- Only replicate active users
create publication pub_active_users
for table users where (status = 'active');

-- Only replicate recent orders
create publication pub_recent_orders
for table orders where (created_at > '2024-01-01');
```
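Note that when a publication publishes `UPDATE` or `DELETE` operations, any column referenced in a row filter must be part of the table's replica identity (by default, the primary key). One way to satisfy this for the `orders` example above, where `created_at` is presumably not part of the primary key, is to change the replica identity; keep in mind that `replica identity full` increases WAL volume:

```sql
-- Replicate the full old row on updates and deletes so the
-- created_at filter column is available to the row filter
alter table orders replica identity full;
```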
After creating a publication via SQL, you can view it in the Supabase Dashboard.
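If you prefer SQL, the `pg_publication` system catalog shows each publication and which operations it publishes:

```sql
-- List publications and the operations they publish
select pubname, puballtables, pubinsert, pubupdate, pubdelete, pubtruncate
from pg_publication;
```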
## Step 2: Enable replication

Before adding destinations, you need to enable replication for your project:
<Image alt="Enable Replication" src="/docs/img/database/replication/replication-enable-replication.png"
width={3560} height={2146} />
## Step 3: Add a destination

Once replication is enabled and you have a Postgres publication, you can add a destination. The destination is where your replicated data will be stored, while the pipeline is the active Postgres replication process that continuously streams changes from your database to that destination.

Follow these steps to configure your destination. Currently, BigQuery is the only managed destination available in Replication.
<Admonition type="note" title="Why Analytics Buckets are no longer listed here">

We are currently working on a new Supabase Warehouse product designed to address the limitations of the previous Analytics Buckets. Our goal is to build a solution we can confidently stand behind, rather than continuing to support an approach that does not meet the quality and flexibility we want for our users.

At the moment, BigQuery is the only supported destination, but we are actively working on expanding capabilities.

</Admonition>

BigQuery is Google's fully managed data warehouse. You can replicate your database tables to BigQuery for analytics and reporting.
Before configuring BigQuery as a destination, set up the following in Google Cloud Platform:

- **Google Cloud Platform (GCP) account**: Sign up for GCP if you don't have one
- **BigQuery dataset**: Create a BigQuery dataset in your GCP project (e.g., `supabase_replication`)
- **GCP service account key**: Create a service account with the required permissions:
  - `bigquery.datasets.get`
  - `bigquery.tables.create`
  - `bigquery.tables.get`
  - `bigquery.tables.getData`
  - `bigquery.tables.update`
  - `bigquery.tables.updateData`

Then, in the Supabase Dashboard:

1. Navigate to **Database** → **Replication**
2. Click **Add destination**

   <Image alt="BigQuery Configuration Settings" src="/docs/img/database/replication/replication-bigquery-details.png" width={3560} height={2146} />

3. Configure the general settings
4. Configure the BigQuery-specific settings:

   - **Project ID**: Your BigQuery project identifier (found in the GCP Console)
   - **Dataset ID**: The name of your BigQuery dataset (without the project ID)

     <Admonition type="note">

     In the GCP Console, the dataset is shown as `project-id.dataset-id`. Enter only the part after the dot. For example, if you see `my-project.my_dataset`, enter `my_dataset`.

     </Admonition>

   - **Service Account Key**: Your GCP service account key in JSON format (from Step 1)

5. Configure the Advanced Settings (optional)
6. Click **Create and start** to begin replication
Your replication pipeline will now start copying data from your database to BigQuery.
Once configured, replication to BigQuery streams changes from your published tables into the dataset you specified.
Due to BigQuery limitations, replicated tables use a versioned structure:

- The main table keeps the original table name (e.g., `users`)
- Versioned tables use a `_version` suffix (e.g., `users_version`)

This structure handles table truncations seamlessly while maintaining query compatibility.
Unsupported characters in table names are replaced with `_` when replicating to BigQuery.

After creating a destination, the replication pipeline will start and appear in the destinations list. You can monitor the pipeline's status and performance from the Dashboard.
<Image alt="Replication Destinations List" src="/docs/img/database/replication/replication-destinations-list.png" width={3560} height={2146}
/>
For comprehensive monitoring instructions including pipeline states, metrics, and logs, see the Replication Monitoring guide.
You can manage your pipeline from the destinations list using the actions menu.
<Image alt="Pipeline Actions" src="/docs/img/database/replication/replication-pipeline-actions.png" width={3560} height={2146}
/>
Available actions:
If you need to modify which tables are replicated after your replication pipeline is already running, follow these steps:
<Admonition type="note">

If your Postgres publication uses `FOR ALL TABLES` or `FOR TABLES IN SCHEMA`, new tables in that scope are automatically included in the publication. However, you still must restart the replication pipeline for the changes to take effect.

</Admonition>

To add tables:

1. Add the table to your publication using SQL:

   ```sql
   -- Add a single table to an existing publication
   alter publication pub_users_orders add table products;

   -- Or add multiple tables at once
   alter publication pub_users_orders add table products, categories;
   ```

2. Restart the replication pipeline using the actions menu (see Managing your pipeline) for the changes to take effect.

To remove tables:

1. Remove the table from your Postgres publication using SQL:

   ```sql
   -- Remove a single table from a publication
   alter publication pub_users_orders drop table orders;

   -- Or remove multiple tables at once
   alter publication pub_users_orders drop table orders, products;
   ```

2. Restart the replication pipeline using the actions menu (see Managing your pipeline) for the changes to take effect.

<Admonition type="caution">

Deleted tables are automatically recreated by the pipeline. To permanently delete a table, pause the pipeline first or remove it from the publication before deleting. See the FAQ for details.

</Admonition>

Once configured, changes are sent in batches to optimize performance and reduce costs. The batch size and timing can be adjusted using the advanced settings. The replication pipeline currently performs data extraction and loading only, without transformation: your data is replicated as-is to the destination.
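After altering a publication, you can confirm its current table membership with the `pg_publication_tables` view before restarting the pipeline (the publication name below is from the earlier examples):

```sql
-- Show which tables a publication currently includes
select schemaname, tablename
from pg_publication_tables
where pubname = 'pub_users_orders';
```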
If you encounter issues during setup:
For more troubleshooting help, see the Replication FAQ.
Replication has the following limitations:
Destination-specific limitations, such as BigQuery's row size limits, are documented in Step 3 above.
Replication is actively being developed. Planned improvements include:
There are no public timelines for these features, but they represent the roadmap for making replication more robust and flexible.