Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.
There are multiple servers compatible with the Iceberg Catalog specification. DataHub's Iceberg connector uses the pyiceberg
library to extract metadata from them. The recipe for the source consists of 2 parts:

- `catalog` - passed as-is to the pyiceberg library; configures the connection and its details (i.e. authentication). The name of the catalog specified in the recipe has no consequence; it is just a formal requirement from the library. Only one catalog will be considered for the ingestion.
- `env` or `stateful_ingestion` - standard DataHub ingestor configuration parameters, described in the Config Details chapter.

This chapter showcases several examples of setting up connections to an Iceberg catalog, varying based on the underlying implementation. Iceberg is designed to keep the catalog and the warehouse separate, which is reflected in how we configure it. This is especially visible when using the Iceberg REST Catalog, which can use many blob storages (AWS S3, Azure Blob Storage, MinIO) as a warehouse.
Note that, for advanced users, it is possible to specify a custom catalog client implementation via the py-catalog-impl
configuration option - refer to the pyiceberg documentation for details.
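Everything under the `catalog` key is handed to pyiceberg as catalog properties, and only a single entry is used. A minimal sketch of that recipe handling (the function `extract_catalog` is hypothetical, not part of DataHub's API):

```python
def extract_catalog(recipe_config: dict) -> tuple[str, dict]:
    """Take the single catalog entry from a recipe's config section (sketch)."""
    catalogs = recipe_config["catalog"]
    if len(catalogs) != 1:
        raise ValueError("exactly one catalog entry is expected")
    # The catalog's name is only a label; the properties dict is what matters.
    name, properties = next(iter(catalogs.items()))
    return name, properties

name, props = extract_catalog(
    {"catalog": {"my_catalog": {"type": "glue", "s3.region": "us-west-2"}}}
)
```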
The minimal configuration for connecting to a Glue catalog with an S3 warehouse:

```yaml
source:
  type: "iceberg"
  config:
    catalog:
      my_catalog:
        type: "glue"
        s3.region: "us-west-2"
        region_name: "us-west-2"
```
Where us-west-2 is the region from which you want to ingest. The above configuration will work assuming the pod or environment
in which you run your DataHub CLI is already authenticated to AWS and has the proper permissions granted (see below). If you need
to specify secrets directly, use the following configuration as the template:
```yaml
source:
  type: "iceberg"
  config:
    catalog:
      demo:
        type: "glue"
        s3.region: "us-west-2"
        s3.access-key-id: "${AWS_ACCESS_KEY_ID}"
        s3.secret-access-key: "${AWS_SECRET_ACCESS_KEY}"
        s3.session-token: "${AWS_SESSION_TOKEN}"
        aws_access_key_id: "${AWS_ACCESS_KEY_ID}"
        aws_secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
        aws_session_token: "${AWS_SESSION_TOKEN}"
        region_name: "us-west-2"
```
This example uses references to fill credentials (either from Secrets defined in Managed Ingestion or environmental variables). It is possible (but not recommended due to security concerns) to provide those values in plaintext, directly in the recipe.
The minimal IAM policy for the role used by the ingestor when ingesting metadata from the Glue Iceberg Catalog and an S3 warehouse is:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["glue:GetDatabases", "glue:GetTables", "glue:GetTable"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetObjectVersion"],
      "Resource": [
        "arn:aws:s3:::<bucket used by the warehouse>",
        "arn:aws:s3:::<bucket used by the warehouse>/*"
      ]
    }
  ]
}
```
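Before attaching the policy, you can sanity-check it locally. The snippet below (the bucket name `my-warehouse-bucket` is a placeholder) parses the document and confirms every action is read-only:

```python
import json

policy_text = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["glue:GetDatabases", "glue:GetTables", "glue:GetTable"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetObjectVersion"],
      "Resource": [
        "arn:aws:s3:::my-warehouse-bucket",
        "arn:aws:s3:::my-warehouse-bucket/*"
      ]
    }
  ]
}
"""

policy = json.loads(policy_text)  # raises ValueError on malformed JSON
# Read-only means every action verb starts with Get or List.
read_only = all(
    action.split(":", 1)[1].startswith(("Get", "List"))
    for statement in policy["Statement"]
    for action in statement["Action"]
)
```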
The following configuration connects to a MinIO warehouse, with authentication defined via the s3.* prefixed properties. Note the s3.endpoint setting, assuming
MinIO listens on port 9000 at minio-host. The uri parameter points at the Iceberg REST Catalog (IRC) endpoint (in this case iceberg-catalog:8181).
```yaml
source:
  type: "iceberg"
  config:
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog:8181"
        s3.access-key-id: "${AWS_ACCESS_KEY_ID}"
        s3.secret-access-key: "${AWS_SECRET_ACCESS_KEY}"
        s3.region: "eu-east-1"
        s3.endpoint: "http://minio-host:9000"
```
This example assumes IRC requires token authentication (via Authorization header). There are more options available,
see https://py.iceberg.apache.org/configuration/#rest-catalog for details. Moreover, the assumption here is that the
environment (i.e. pod) is already authenticated to perform actions against AWS S3.
```yaml
source:
  type: "iceberg"
  config:
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog-uri"
        token: "token-value"
        s3.region: "us-west-2"
```
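The configured token is sent to the REST catalog as a Bearer Authorization header on each request (this follows the Iceberg REST spec; the helper below is illustrative, not DataHub code):

```python
def auth_headers(token: str) -> dict:
    """Build the Authorization header a REST catalog client is expected to send."""
    return {"Authorization": f"Bearer {token}"}

headers = auth_headers("token-value")
```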
Unlike other parameters provided in the dictionary under the catalog key, connection is a custom DataHub feature that
injects connection-resiliency parameters into the REST connection made by the ingestor. connection
allows for 2 parameters:

- `timeout` - a whole number of seconds (or `null` to turn it off)
- `retry` - a complex object representing the parameters used to create a urllib3 Retry object. There are many possible parameters; the most important are `total` (total retries) and `backoff_factor`. See the linked docs for the details.

```yaml
source:
  type: "iceberg"
  config:
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog-uri"
        connection:
          retry:
            backoff_factor: 0.5
            total: 3
          timeout: 120
```
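With these retry settings, urllib3 spaces consecutive retries as roughly `backoff_factor * 2**(n-1)`. A pure-Python approximation of that schedule (`backoff_schedule` is an illustrative helper, not urllib3's API):

```python
def backoff_schedule(total: int, backoff_factor: float) -> list[float]:
    """Approximate urllib3 Retry backoff: delay before the n-th retry."""
    return [backoff_factor * (2 ** (n - 1)) for n in range(1, total + 1)]

delays = backoff_schedule(3, 0.5)  # matches the recipe: total=3, backoff_factor=0.5
```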
DataHub supports ingesting metadata from Google BigLake via the Iceberg REST Catalog API. BigLake provides unified governance and security for data across data lakes and data warehouses.
PyIceberg 0.9+ natively supports BigLake authentication using Google Cloud's Application Default Credentials (ADC).
1. A GCP project with the BigLake API enabled:

   ```shell
   gcloud services enable biglake.googleapis.com --project=YOUR_PROJECT_ID
   ```

2. A service account with the required permissions:

   - `biglake.catalogs.get`
   - `biglake.tables.get`
   - `biglake.tables.list`
   - `biglake.databases.get`
   - `biglake.databases.list`

3. A BigLake catalog created in your GCP project:

   ```shell
   gcloud alpha biglake catalogs create CATALOG_NAME \
     --location=REGION \
     --project=PROJECT_ID
   ```
BigLake authentication uses Application Default Credentials (ADC). DataHub provides automatic OAuth scope fixing for seamless integration.
Setup:

```shell
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
export GCP_PROJECT_ID=your-project-id
export GCS_WAREHOUSE_BUCKET=your-bucket-name
```
```yaml
source:
  type: iceberg
  config:
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: "" # End-user credentials mode
        connection:
          timeout: 120
          retry:
            total: 5
            backoff_factor: 0.3
```
Key Configuration Parameters:

- `auth.type: google` - uses Application Default Credentials
- `auth.google.scopes` - OAuth scopes required for BigLake access
- `header.x-goog-user-project` - specifies the GCP project for billing
- `header.X-Iceberg-Access-Delegation: ""` - uses end-user credentials mode

When using `auth.type: google` with explicit scopes, the connector:

1. Discovers credentials using Google Cloud's Application Default Credentials (ADC) chain: `GOOGLE_APPLICATION_CREDENTIALS` pointing to a service account JSON (most common), or `gcloud auth application-default login`.
2. Uses explicit OAuth scopes: the `auth.google.scopes` configuration ensures the correct cloud-platform scope is used for BigLake access.
3. Calls `google.auth.default()`, which requires the google-auth library to be installed (included in DataHub dependencies).
4. Handles token refresh: automatic token refresh with no manual management needed.
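The ADC discovery order can be sketched as a pure function (simplified; `adc_source` and its return labels are illustrative, not the google-auth API):

```python
def adc_source(env: dict, gcloud_login_done: bool) -> str:
    """Simplified view of Google's Application Default Credentials chain."""
    if env.get("GOOGLE_APPLICATION_CREDENTIALS"):
        return "service-account-file"     # most common for ingestion jobs
    if gcloud_login_done:
        return "gcloud-user-credentials"  # from `gcloud auth application-default login`
    return "metadata-server-or-none"      # GCE/GKE metadata server, else failure

source_used = adc_source({"GOOGLE_APPLICATION_CREDENTIALS": "/sa.json"}, False)
```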
For production environments using Managed Ingestion (via the DataHub UI), you can securely store your GCP service account credentials as DataHub secrets instead of using environment variables.
Step 1: Create a Secret in DataHub

In the DataHub UI, create a secret containing your service account JSON (e.g. `BIGLAKE_SERVICE_ACCOUNT_JSON`).

Step 2: Reference the Secret in Your Recipe

Use the `${SECRET_NAME}` syntax to reference your secret in the ingestion recipe:
```yaml
source:
  type: iceberg
  config:
    env: prod
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            credentials_json: ${BIGLAKE_SERVICE_ACCOUNT_JSON}
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: ""
sink:
  type: datahub-rest
  config:
    server: ${DATAHUB_GMS_URL}
    token: ${DATAHUB_GMS_TOKEN}
```
The secret will be automatically resolved at runtime when the ingestion executes.
Alternative: Using Structured Credentials
You can also use individual secrets for each credential component, which provides better validation:
```yaml
source:
  type: iceberg
  config:
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            credentials:
              project_id: ${GCP_PROJECT_ID}
              private_key_id: ${GCP_PRIVATE_KEY_ID}
              private_key: ${GCP_PRIVATE_KEY}
              client_email: ${GCP_CLIENT_EMAIL}
              client_id: ${GCP_CLIENT_ID}
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: ${GCP_PROJECT_ID}
        header.X-Iceberg-Access-Delegation: ""
```
Create these individual secrets in DataHub:

- `GCP_PROJECT_ID`
- `GCP_PRIVATE_KEY_ID`
- `GCP_PRIVATE_KEY` (the private key value from your service account JSON)
- `GCP_CLIENT_EMAIL`
- `GCP_CLIENT_ID`

Step 3: Deploy via DataHub UI
Important: Vended credentials require your BigLake catalog to be configured with CREDENTIAL_MODE_SERVICE_ACCOUNT. Most BigLake catalogs use CREDENTIAL_MODE_END_USER by default, which does not support vended credentials.
If you get an error stating "X-Iceberg-Access-Delegation header must not contain vended-credentials when credential mode is CREDENTIAL_MODE_END_USER", your catalog doesn't support this feature. Use the standard configuration with header.X-Iceberg-Access-Delegation: "" instead.
For catalogs that support vended credentials, set header.X-Iceberg-Access-Delegation: vended-credentials:
```yaml
source:
  type: iceberg
  config:
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: vended-credentials # Only for CREDENTIAL_MODE_SERVICE_ACCOUNT
```
When vended credentials are used, BigLake generates a short-lived service account token scoped to the specific tables being accessed.
Error: "invalid_scope: Invalid OAuth scope or ID token audience provided"

This error occurs when OAuth scopes are not properly configured. To fix:

- Ensure `auth.google.scopes` is set to `["https://www.googleapis.com/auth/cloud-platform"]` in your configuration
- Verify the google-auth library is installed: `pip install google-auth`
- Check that `GOOGLE_APPLICATION_CREDENTIALS` points to a valid service account JSON file

Error: "X-Iceberg-Access-Delegation header must not contain vended-credentials when credential mode is CREDENTIAL_MODE_END_USER"

- Use `header.X-Iceberg-Access-Delegation: ""` (empty string) in your configuration
- Alternatively, recreate the catalog with `CREDENTIAL_MODE_SERVICE_ACCOUNT` if you need vended credentials

Error: "Authentication failed"

- Verify ADC works: `gcloud auth application-default print-access-token`
- Check that the BigLake API is enabled: `gcloud services list --enabled | grep biglake`

Error: "Catalog not found"

- List the catalogs available in the region: `gcloud alpha biglake catalogs list --location=REGION --project=PROJECT`
- Verify the catalog URI format: `https://biglake.googleapis.com/v1/projects/{PROJECT}/locations/{REGION}/catalogs/{CATALOG}`

Error: "Permission denied on GCS warehouse"

Grant the service account read access to the warehouse bucket:

```shell
gsutil iam ch serviceAccount:SA_EMAIL:roles/storage.objectViewer gs://BUCKET_NAME
```

Error: "User project header required"

- Ensure `header.x-goog-user-project` is set to your GCP project ID

This example targets Postgres as the sql-type Iceberg catalog and uses Azure DLS as the warehouse.
```yaml
source:
  type: "iceberg"
  config:
    catalog:
      demo:
        type: sql
        uri: postgresql+psycopg2://user:password@<postgres host>:5432/icebergcatalog
        adlfs.tenant-id: <Azure tenant ID>
        adlfs.account-name: <Azure storage account name>
        adlfs.client-id: <Azure Client/Application ID>
        adlfs.client-secret: <Azure Client Secret>
```
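The uri above is a standard SQLAlchemy connection string of the form `dialect+driver://user:password@host:port/database`; its parts can be checked with the standard library (the host below is a made-up example):

```python
from urllib.parse import urlsplit

uri = "postgresql+psycopg2://user:password@pg.example.com:5432/icebergcatalog"
parts = urlsplit(uri)
# The scheme carries dialect+driver; the netloc carries credentials, host and port.
dialect_driver = parts.scheme      # "postgresql+psycopg2"
database = parts.path.lstrip("/")  # "icebergcatalog"
```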
This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
|---|---|---|
| `iceberg` | Data Platform | |
| Table | Dataset | An Iceberg table is registered inside a catalog using a name, where the catalog is responsible for creating, dropping and renaming tables. Catalogs manage a collection of tables that are usually grouped into namespaces. The name of a table is mapped to a Dataset name. If a Platform Instance is configured, it will be used as a prefix: `<platform_instance>.my.namespace.table`. |
| Table property | User (a.k.a CorpUser) | The value of a table property can be used as the name of a CorpUser owner. This table property name can be configured with the source option `user_ownership_property`. |
| Table property | CorpGroup | The value of a table property can be used as the name of a CorpGroup owner. This table property name can be configured with the source option `group_ownership_property`. |
| Table parent folders (excluding warehouse catalog location) | Container | Available in a future release |
| Table schema | SchemaField | Maps to the fields defined within the Iceberg table schema definition. |
DataHub also implements the Iceberg REST Catalog. See the Iceberg Catalog documentation for more details.
For advanced Iceberg behavior and tuning, note the following:

- Module behavior is constrained by source APIs, permissions, and the metadata exposed by the platform. Refer to the capability notes for unsupported or conditional features.
- If ingestion fails, validate credentials, permissions, connectivity, and scope filters first. Then review the ingestion logs for source-specific errors and adjust the configuration accordingly.

`processing_threads`

Each processing thread will open several files/sockets to download manifest files from blob storage. If you experience
exceptions when increasing the `processing_threads` configuration parameter, try increasing the limit of open
files (e.g. using ulimit on Linux).
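A quick way to reason about headroom is to compare `processing_threads` against the process's open-file limit. The per-thread estimate of 8 handles below is an assumption for illustration, and the `resource` module is Unix-only:

```python
import resource

def enough_file_handles(processing_threads: int, soft_limit: int, per_thread: int = 8) -> bool:
    """Rough check that the soft RLIMIT_NOFILE leaves headroom for manifest downloads."""
    return processing_threads * per_thread < soft_limit

# Current soft limit for open file descriptors in this process.
soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
ok = enough_file_handles(processing_threads=16, soft_limit=soft)
```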