metadata-ingestion/docs/sources/fivetran/fivetran_post.md
Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.
The Fivetran source uses quoted identifiers for database and schema names to properly handle special characters and case-sensitive names. This follows Snowflake's quoted identifier convention, which is then transpiled to the target database dialect (Snowflake, BigQuery, or Databricks).
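For illustration, here is a sketch of how such a name is supplied in the log connection settings. The `fivetran_log_config` / `snowflake_destination_config` field names follow the Fivetran source's configuration; the values are hypothetical:

```yml
fivetran_log_config:
  destination_platform: snowflake
  snowflake_destination_config:
    # Hyphenated values are passed through as-is; the source emits them as
    # quoted identifiers, e.g. "my-database"."my-schema".table_name
    database: my-database
    log_schema: my-schema
```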
Important Notes:

- Configure database and schema names exactly as they appear in your Fivetran destination; the source quotes them when generating SQL (e.g., `use database "my-database"`).
- Table references are built from quoted identifiers (e.g., `"my-schema".table_name`).
- Quoting enables support for hyphenated names (e.g., `my-database`), names with spaces (e.g., `my database`), names with dots (e.g., `my.database`), and case-sensitive names (e.g., `MyDatabase`).

Migration Impact: because identifiers are quoted, names are preserved exactly as entered. If a previous version normalized these names differently, the URNs of affected datasets may change.
Case Sensitivity Considerations:

- Snowflake uppercases unquoted identifiers (e.g., `mydatabase` becomes `MYDATABASE`), while double-quoted identifiers preserve the exact case as entered (e.g., `"mydatabase"` stays as `mydatabase`). See Snowflake's identifier documentation for details.
- If your configuration uses names of objects created without quotes (e.g., `fivetran_logs`, `MY_SCHEMA`), the name will be automatically uppercased to match the existing Snowflake objects.

If your Fivetran destination is Snowflake, create a role with the privileges needed to read the Fivetran log tables and grant it to the user used for ingestion:

```sql
create or replace role fivetran_datahub;

// Grant access to a warehouse to run queries to view metadata
grant operate, usage on warehouse "<your-warehouse>" to role fivetran_datahub;

// Grant access to view database and schema in which your log and metadata tables exist
// Note: Database and schema names are automatically quoted, so use quoted identifiers if your names contain special characters
grant usage on DATABASE "<fivetran-log-database>" to role fivetran_datahub;
grant usage on SCHEMA "<fivetran-log-database>"."<fivetran-log-schema>" to role fivetran_datahub;

// Grant access to execute select query on schema in which your log and metadata tables exist
grant select on all tables in SCHEMA "<fivetran-log-database>"."<fivetran-log-schema>" to role fivetran_datahub;

// Grant the fivetran_datahub role to the Snowflake user
grant role fivetran_datahub to user snowflake_user;
```
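For reference, a minimal sketch of the matching `fivetran_log_config` block, assuming the role and warehouse created above (placeholders are kept from the SQL; `account_id` is a hypothetical value):

```yml
fivetran_log_config:
  destination_platform: snowflake
  snowflake_destination_config:
    account_id: my_account_locator # hypothetical Snowflake account identifier
    warehouse: "<your-warehouse>"
    username: snowflake_user
    password: "${SNOWFLAKE_PASSWORD}" # resolved from an environment variable
    role: fivetran_datahub # the role created above
    database: "<fivetran-log-database>"
    log_schema: "<fivetran-log-schema>"
```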
If your Fivetran destination is Databricks, grant the principal used for ingestion:

- `USE CATALOG` privilege on any catalogs you want to ingest
- `USE SCHEMA` privilege on any schemas you want to ingest
- `SELECT` privilege on any tables and views you want to ingest

Then update your recipe's `workspace_url` and `token` with your information from the previous steps.
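A hypothetical sketch of the corresponding recipe fields, assuming a Databricks destination block that accepts the workspace URL and token; the `databricks_destination_config` block and field names here are assumptions, so confirm them against your version's configuration reference:

```yml
fivetran_log_config:
  destination_platform: databricks # assumes databricks is an accepted value
  databricks_destination_config: # hypothetical block name
    workspace_url: https://<your-workspace>.cloud.databricks.com
    token: "${DATABRICKS_TOKEN}" # token created in the previous steps
```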
If you have multiple instances of source or destination systems referenced in your Fivetran setup, you need to configure platform instances for these systems in the Fivetran recipe to generate correct lineage edges. Refer to the document Working with Platform Instances to learn more.

When configuring a platform instance for a source system, use the connector ID as the key; for a destination system, use the destination ID as the key.
When creating the connection details in the Fivetran UI, make a note of the destination Group ID of the service account, as it will need to be used in the `destination_to_platform_instance` configuration.

For example, if the Fivetran UI shows a BigQuery destination whose Group ID is `greyish_positive`, the configuration would be something like:

```yml
destination_to_platform_instance:
  greyish_positive: # the destination Group ID from the Fivetran UI
    database: <big query project ID>
    env: PROD
```
A fuller example mapping multiple connectors and destinations:

```yml
# Map of connector source to platform instance
sources_to_platform_instance:
  postgres_connector_id1:
    platform_instance: cloud_postgres_instance
    env: PROD
  postgres_connector_id2:
    platform_instance: local_postgres_instance
    env: DEV
# Map of destination to platform instance
destination_to_platform_instance:
  snowflake_destination_id1:
    platform_instance: prod_snowflake_instance
    env: PROD
  snowflake_destination_id2:
    platform_instance: dev_snowflake_instance
    env: DEV
```
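In a complete recipe, these maps sit alongside `fivetran_log_config` under the source config. A sketch, reusing the hypothetical IDs from above:

```yml
source:
  type: fivetran
  config:
    fivetran_log_config:
      # ... destination connection details (see above) ...
    sources_to_platform_instance:
      postgres_connector_id1:
        platform_instance: cloud_postgres_instance
        env: PROD
    destination_to_platform_instance:
      snowflake_destination_id1:
        platform_instance: prod_snowflake_instance
        env: PROD
```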
Module behavior is constrained by source APIs, permissions, and metadata exposed by the platform. Refer to capability notes for unsupported or conditional features.
Works only for:

- Snowflake destination
- BigQuery destination
- Databricks destination
To prevent excessive data ingestion, the following configurable limits apply per connector:
- Sync job history (configurable via `fivetran_log_config.max_jobs_per_connector`)
- Table lineage entries (configurable via `fivetran_log_config.max_table_lineage_per_connector`)
- Column lineage entries (configurable via `fivetran_log_config.max_column_lineage_per_connector`)

When these limits are exceeded, only the most recent entries are ingested. Warnings will be logged during ingestion to notify you when truncation occurs.
These limits act as safety nets to prevent excessive data ingestion. You can increase them cautiously if you need to ingest more historical data or have connectors with many tables/columns. Example configuration:
```yml
source:
  type: fivetran
  config:
    fivetran_log_config:
      # ... other config ...
      max_jobs_per_connector: 1000 # Increase sync history limit
      max_table_lineage_per_connector: 500 # Increase table lineage limit
      max_column_lineage_per_connector: 5000 # Increase column lineage limit
```
If ingestion fails, validate credentials, permissions, connectivity, and scope filters first. Then review ingestion logs for source-specific errors and adjust configuration accordingly.