# Azure Data Factory


## Capabilities

Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.

:::note Not Microsoft Fabric

This connector is for Azure Data Factory (classic), not the Data Factory experience in Microsoft Fabric. Fabric support is planned for a future release.

:::

## Authentication Methods

The connector supports multiple authentication methods:

| Method | Best For | Configuration |
|---|---|---|
| Service Principal | Production environments | `authentication_method: service_principal` |
| Managed Identity | Azure-hosted deployments (VMs, AKS, App Service) | `authentication_method: managed_identity` |
| Azure CLI | Local development | `authentication_method: cli` (run `az login` first) |
| DefaultAzureCredential | Flexible environments | `authentication_method: default` |

For service principal setup, see Register an application with Microsoft Entra ID.
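As an illustration, a minimal recipe using service principal authentication might look like the sketch below. Only `subscription_id` and `authentication_method` are documented on this page; the credential field names (`client_id`, `client_secret`, `tenant_id`) are assumptions based on common DataHub source conventions and should be checked against the connector's config reference.

```yaml
# Sketch of a service principal recipe -- credential field names are assumptions
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    authentication_method: service_principal
    client_id: ${AZURE_CLIENT_ID} # assumed field name
    client_secret: ${AZURE_CLIENT_SECRET} # assumed field name
    tenant_id: ${AZURE_TENANT_ID} # assumed field name
```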

## Lineage Extraction

### Which Activities Produce Lineage?

The connector extracts table-level lineage from these ADF activity types:

| Activity Type | Lineage Behavior |
|---|---|
| Copy Activity | Creates lineage from input dataset(s) to output dataset |
| Data Flow | Extracts sources, sinks, and transformation script |
| Lookup Activity | Creates input lineage from the lookup dataset |
| ExecutePipeline | Creates pipeline-to-pipeline lineage to the child pipeline |

Lineage is enabled by default (`include_lineage: true`).
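If you only want pipeline and activity metadata without lineage edges, the flag can be flipped in the recipe (a sketch using only fields documented on this page):

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    include_lineage: false # default is true
```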

### How Lineage Resolution Works

For lineage to connect properly to datasets ingested from other sources (e.g., Snowflake, BigQuery), the connector needs to know which DataHub platform each ADF linked service corresponds to.

#### Step 1: Automatic Platform Mapping

The connector automatically maps ADF linked service types to DataHub platforms. For example, a Snowflake linked service maps to the `snowflake` platform.

<details>
<summary>View all supported linked service mappings</summary>

| ADF Linked Service Type | DataHub Platform |
|---|---|
| AzureBlobStorage | `abs` |
| AzureBlobFS | `abs` |
| AzureDataLakeStore | `abs` |
| AzureFileStorage | `abs` |
| AzureSqlDatabase | `mssql` |
| AzureSqlDW | `mssql` |
| AzureSynapseAnalytics | `mssql` |
| AzureSqlMI | `mssql` |
| SqlServer | `mssql` |
| AzureDatabricks | `databricks` |
| AzureDatabricksDeltaLake | `databricks` |
| AmazonS3 | `s3` |
| AmazonS3Compatible | `s3` |
| AmazonRedshift | `redshift` |
| GoogleCloudStorage | `gcs` |
| GoogleBigQuery | `bigquery` |
| Snowflake | `snowflake` |
| PostgreSql | `postgres` |
| AzurePostgreSql | `postgres` |
| MySql | `mysql` |
| AzureMySql | `mysql` |
| Oracle | `oracle` |
| OracleServiceCloud | `oracle` |
| Db2 | `db2` |
| Teradata | `teradata` |
| Vertica | `vertica` |
| Hive | `hive` |
| Spark | `spark` |
| Hdfs | `hdfs` |
| Salesforce | `salesforce` |
| SalesforceServiceCloud | `salesforce` |
| SalesforceMarketingCloud | `salesforce` |

Unsupported linked service types log a warning and skip lineage for that dataset.

</details>

#### Step 2: Platform Instance Mapping (for cross-recipe lineage)

If you're ingesting the same data sources with other DataHub connectors (e.g., Snowflake, BigQuery), you need to ensure the `platform_instance` values match. Use `platform_instance_map` to map your ADF linked service names to the platform instance used in your other recipes:

```yaml
# ADF Recipe
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    platform_instance_map:
      # Key: Your ADF linked service name (exact match required)
      # Value: The platform_instance from your other source recipe
      "snowflake-prod-connection": "prod_warehouse"
      "bigquery-analytics": "analytics_project"
```

```yaml
# Corresponding Snowflake Recipe (platform_instance must match)
source:
  type: snowflake
  config:
    platform_instance: "prod_warehouse" # Must match the value in platform_instance_map
    # ... other config
```
Without matching `platform_instance` values, lineage will create separate dataset entities instead of connecting to your existing ingested datasets.

### Data Flow Transformation Scripts

For Data Flow activities, the connector extracts the transformation script and stores it in the `dataTransformLogic` aspect, which is visible in the DataHub UI under activity details.

## Column-Level Lineage

The connector extracts column-level lineage from Copy activities, enabled by default (`include_column_lineage: true`).
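Both lineage switches can be set together. For example, to keep table-level lineage but skip column-level extraction (a sketch using only the flags documented here):

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    include_lineage: true # table-level lineage (default)
    include_column_lineage: false # skip column-level extraction
```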

### Supported Mapping Formats

| Format | Description | ADF Configuration |
|---|---|---|
| Dictionary Format | Legacy format with direct source-to-sink column mapping | `translator.columnMappings: {"src_col": "sink_col"}` |
| List Format | Current format with structured source/sink objects | `translator.mappings: [{source: {name}, sink: {name}}]` |
| Auto-mapping | Inferred 1:1 mappings when no explicit mappings exist and source schema is available | `TabularTranslator` with no `columnMappings` or `mappings` |
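For reference, the list format in the table above corresponds to a `TabularTranslator` block in the Copy activity definition along these lines (column names are illustrative; check the ADF schema mapping documentation for the exact shape):

```json
{
  "type": "TabularTranslator",
  "mappings": [
    { "source": { "name": "id" }, "sink": { "name": "customer_id" } },
    { "source": { "name": "created" }, "sink": { "name": "created_at" } }
  ]
}
```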

### Limitations

- **Copy Activity Only**: Column lineage is currently extracted only from Copy activities. Other activity types (Data Flow, Lookup, etc.) produce table-level lineage only.
- **Schema Availability**: Auto-mapping inference requires source dataset schema information (defined in the ADF dataset's `schema` or `structure` property). If schema is unavailable, only explicit mappings are extracted.

## Execution History

Pipeline runs are extracted as `DataProcessInstance` entities by default:

```yaml
source:
  type: azure-data-factory
  config:
    include_execution_history: true # default
    execution_history_days: 7 # 1-90 days
```
This provides run status, duration, timestamps, trigger info, parameters, and activity-level details.

## Advanced: Multi-Environment Setup

### When to Use `platform_instance`

Use the ADF connector's `platform_instance` config to distinguish separate ADF deployments when ingesting from multiple subscriptions or tenants:

| Scenario | Risk | Solution |
|---|---|---|
| Single subscription | None | Not needed |
| Multiple subscriptions | Low | Recommended |
| Multiple tenants | High (name collision risk) | Required |

```yaml
# Multi-tenant example
source:
  type: azure-data-factory
  config:
    subscription_id: "tenant-a-sub"
    platform_instance: "tenant-a" # Prevents URN collisions
```
:::warning

Factory names are unique within Azure, but different tenants could have identically-named factories. Use `platform_instance` to prevent entity overwrites.

:::

### URN Format

Pipeline URNs follow this format:

```
urn:li:dataFlow:(azure-data-factory,{factory_name}.{pipeline_name},{env})
```

With `platform_instance`:

```
urn:li:dataFlow:(azure-data-factory,{platform_instance}.{factory_name}.{pipeline_name},{env})
```

For Azure naming rules, see Azure Data Factory naming rules.
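For instance, with the hypothetical values `platform_instance: tenant-a`, a factory named `my-factory`, a pipeline named `daily-copy`, and the `PROD` environment, the resulting URN would be:

```
urn:li:dataFlow:(azure-data-factory,tenant-a.my-factory.daily-copy,PROD)
```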

## Limitations

The connector's behavior is constrained by the Azure APIs, the permissions granted to the ingesting identity, and the metadata ADF exposes. Refer to the capability notes above for unsupported or conditional features.

## Troubleshooting

If ingestion fails, first validate credentials, permissions, network connectivity, and any scope filters. Then review the ingestion logs for source-specific errors and adjust the configuration accordingly.