docs/en/data_source/catalog/paimon_catalog.md
import Beta from '../../_assets/commonMarkdown/_beta.mdx'
StarRocks supports Paimon catalogs from v3.1 onwards.
A Paimon catalog is a kind of external catalog that enables you to query data from Apache Paimon without ingestion.
Also, you can directly transform and load data from Paimon by using INSERT INTO based on Paimon catalogs.
To ensure successful SQL workloads on your Paimon cluster, your StarRocks cluster must be able to access the storage system and metastore of your Paimon cluster. StarRocks supports the following storage systems and metastores:

- Distributed file system (HDFS) or object storage like AWS S3, other S3-compatible storage systems (for example, MinIO), Microsoft Azure Storage, or Google GCS
- Metastore like your file system or Hive metastore
You can only use Paimon catalogs to query data. You cannot use Paimon catalogs to drop, delete, or insert data into your Paimon cluster.
| Paimon Type | StarRocks Type |
|---|---|
| BINARY | VARBINARY |
| VARBINARY | VARBINARY |
| CHAR | CHAR(length) |
| VARCHAR | VARCHAR |
| BOOLEAN | BOOLEAN |
| DECIMAL | DECIMAL(precision, scale) |
| TINYINT | TINYINT |
| SMALLINT | SMALLINT |
| INT | INT |
| BIGINT | BIGINT |
| FLOAT | FLOAT |
| DOUBLE | DOUBLE |
| DATE | DATE |
| TIME | TIME |
| TIMESTAMP | DATETIME |
| LocalZonedTimestamp | DATETIME |
| ARRAY | `ARRAY<element_type>` |
| MAP | `MAP<key_type, value_type>` |
| ROW/STRUCT | `STRUCT<field1:type1, ...>` |
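As an illustration of the mapping above, suppose a Paimon table (all names here are hypothetical) defines columns of types INT, TIMESTAMP, and ROW. When you inspect the table through a Paimon catalog, StarRocks reports the mapped types:

```SQL
-- Hypothetical Paimon table paimon_catalog.demo_db.demo_tbl, defined in Paimon as:
--   id       INT
--   event_ts TIMESTAMP
--   props    ROW<k STRING, v BIGINT>
DESC paimon_catalog.demo_db.demo_tbl;
-- Per the mapping table: id stays INT, event_ts is reported as DATETIME,
-- and props is reported as a STRUCT with the corresponding field types.
```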
Before you create a Paimon catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Paimon cluster.
If your Paimon cluster uses AWS S3 as storage, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.
The following authentication methods are recommended: instance profile, assumed role, and IAM user. Of these three authentication methods, instance profile is the most widely used.
For more information, see Preparation for authentication in AWS IAM.
If you choose HDFS as storage, configure your StarRocks cluster as follows:
Set the username that is used to access HDFS by adding `export HADOOP_USER_NAME="<user_name>"` at the beginning of the fe/conf/hadoop_env.sh file of each FE and at the beginning of the be/conf/hadoop_env.sh file of each BE or the cn/conf/hadoop_env.sh file of each CN. After you set the username in these files, restart each FE and each BE or CN to make the parameter settings take effect. You can set only one username for each StarRocks cluster.

NOTE
If an error indicating an unknown host is returned when you send a query, you must add the mappings between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts file.
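For example, the mapping entries in /etc/hosts might look like the following (the IP addresses and host names are hypothetical):

```
# Hypothetical HDFS cluster nodes
192.168.1.101  hdfs-namenode.example.com   hdfs-namenode
192.168.1.102  hdfs-datanode1.example.com  hdfs-datanode1
192.168.1.103  hdfs-datanode2.example.com  hdfs-datanode2
```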
If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:
Run the `kinit -kt keytab_path principal` command on each FE and each BE or CN to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that the TGT obtained with this command is time-sensitive, so you need to use cron to run this command periodically.

Add `JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"` to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE or the $CN_HOME/conf/cn.conf file of each CN. In this example, /etc/krb5.conf is the save path of the krb5.conf file. You can modify the path based on your needs.

To create a Paimon catalog, use the following syntax:

```SQL
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
    "type" = "paimon",
    CatalogParams,
    StorageCredentialParams
)
```
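The periodic kinit run described in the Kerberos steps above can be scheduled with cron. The following crontab entry is a sketch (the keytab path and principal are hypothetical) that renews the TGT every 6 hours:

```
# Renew the Kerberos TGT every 6 hours; choose an interval shorter than
# the ticket lifetime configured in your KDC.
0 */6 * * * kinit -kt /etc/security/keytabs/starrocks.keytab starrocks/[email protected]
```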
`catalog_name`: The name of the Paimon catalog. The naming conventions are as follows: the name can contain letters, digits (0-9), and underscores (_) and must start with a letter; it is case-sensitive and cannot exceed 1023 characters in length.

`comment`: The description of the Paimon catalog. This parameter is optional.

`type`: The type of your data source. Set the value to `paimon`.
A set of parameters about how StarRocks accesses the metadata of your Paimon cluster.
The following table describes the parameter you need to configure in CatalogParams.
| Parameter | Required | Description |
|---|---|---|
| paimon.catalog.type | Yes | The type of metastore that you use for your Paimon cluster. Set this parameter to filesystem or hive. |
| paimon.catalog.warehouse | Yes | The warehouse storage path of your Paimon data. |
| hive.metastore.uris | No | The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port>. If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (,), for example, "thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>". |
NOTE
If you use Hive metastore, you must add the mappings between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts file before you query Paimon data. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.
If you use HDFS as storage, you do not need to configure StorageCredentialParams.
If you use AWS S3, other S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.
If you choose AWS S3 as storage for your Paimon cluster, take one of the following actions:
To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.endpoint" = "<aws_s3_endpoint>"
To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "<iam_role_arn>",
"aws.s3.endpoint" = "<aws_s3_endpoint>"
To choose the IAM user-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.endpoint" = "<aws_s3_endpoint>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| aws.s3.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false. Default value: false. |
| aws.s3.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS S3 bucket. If you use the assumed role-based authentication method to access AWS S3, you must specify this parameter. |
| aws.s3.endpoint | Yes | The endpoint that is used to connect to your AWS S3 bucket. For example, https://s3.us-west-2.amazonaws.com. |
| aws.s3.access_key | No | The access key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
| aws.s3.secret_key | No | The secret key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS S3 and how to configure an access control policy in AWS IAM Console, see Authentication parameters for accessing AWS S3.
If you choose an S3-compatible storage system, such as MinIO, as storage for your Paimon cluster, configure StorageCredentialParams as follows to ensure a successful integration:
"aws.s3.enable_ssl" = "false",
"aws.s3.enable_path_style_access" = "true",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| aws.s3.enable_ssl | Yes | Specifies whether to enable SSL connection. Valid values: `true` and `false`. Default value: `true`. |
| aws.s3.enable_path_style_access | Yes | Specifies whether to enable path-style access. Valid values: `true` and `false`. Default value: `false`. For MinIO, you must set the value to `true`. Path-style URLs use the following format: `https://s3.<region_code>.amazonaws.com/<bucket_name>/<key_name>`. For example, if you create a bucket named DOC-EXAMPLE-BUCKET1 in the US West (Oregon) Region, and you want to access the alice.jpg object in that bucket, you can use the following path-style URL: `https://s3.us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET1/alice.jpg`. |
| aws.s3.endpoint | Yes | The endpoint that is used to connect to your S3-compatible storage system instead of AWS S3. |
| aws.s3.access_key | Yes | The access key of your IAM user. |
| aws.s3.secret_key | Yes | The secret key of your IAM user. |
If you choose Blob Storage as storage for your Paimon cluster, take one of the following actions:
To choose the Shared Key authentication method, configure StorageCredentialParams as follows:
"azure.blob.storage_account" = "<storage_account_name>",
"azure.blob.shared_key" = "<storage_account_shared_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.blob.storage_account | Yes | The username of your Blob Storage account. |
| azure.blob.shared_key | Yes | The shared key of your Blob Storage account. |
To choose the SAS Token authentication method, configure StorageCredentialParams as follows:
"azure.blob.storage_account" = "<storage_account_name>",
"azure.blob.container" = "<container_name>",
"azure.blob.sas_token" = "<storage_account_SAS_token>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.blob.storage_account | Yes | The username of your Blob Storage account. |
| azure.blob.container | Yes | The name of the blob container that stores your data. |
| azure.blob.sas_token | Yes | The SAS token that is used to access your Blob Storage account. |
If you choose Data Lake Storage Gen2 as storage for your Paimon cluster, take one of the following actions:
To choose the Managed Identity authentication method, configure StorageCredentialParams as follows:
"azure.adls2.oauth2_use_managed_identity" = "true",
"azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
"azure.adls2.oauth2_client_id" = "<service_client_id>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.oauth2_use_managed_identity | Yes | Specifies whether to enable the Managed Identity authentication method. Set the value to true. |
| azure.adls2.oauth2_tenant_id | Yes | The ID of the tenant whose data you want to access. |
| azure.adls2.oauth2_client_id | Yes | The client (application) ID of the managed identity. |
To choose the Shared Key authentication method, configure StorageCredentialParams as follows:
"azure.adls2.storage_account" = "<storage_account_name>",
"azure.adls2.shared_key" = "<storage_account_shared_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.storage_account | Yes | The username of your Data Lake Storage Gen2 storage account. |
| azure.adls2.shared_key | Yes | The shared key of your Data Lake Storage Gen2 storage account. |
To choose the Service Principal authentication method, configure StorageCredentialParams as follows:
"azure.adls2.oauth2_client_id" = "<service_client_id>",
"azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
"azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.oauth2_client_id | Yes | The client (application) ID of the service principal. |
| azure.adls2.oauth2_client_secret | Yes | The value of the new client (application) secret created. |
| azure.adls2.oauth2_client_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
If you choose Data Lake Storage Gen1 as storage for your Paimon cluster, take one of the following actions:
To choose the Managed Service Identity authentication method, configure StorageCredentialParams as follows:
"azure.adls1.use_managed_service_identity" = "true"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls1.use_managed_service_identity | Yes | Specifies whether to enable the Managed Service Identity authentication method. Set the value to true. |
To choose the Service Principal authentication method, configure StorageCredentialParams as follows:
"azure.adls1.oauth2_client_id" = "<application_client_id>",
"azure.adls1.oauth2_credential" = "<application_client_credential>",
"azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls1.oauth2_client_id | Yes | The client (application) ID of the service principal. |
| azure.adls1.oauth2_credential | Yes | The value of the new client (application) secret created. |
| azure.adls1.oauth2_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
If you choose Google GCS as storage for your Paimon cluster, take one of the following actions:
To choose the VM-based authentication method, configure StorageCredentialParams as follows:
"gcp.gcs.use_compute_engine_service_account" = "true"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.use_compute_engine_service_account | FALSE | TRUE | Specifies whether to directly use the service account that is bound to your Compute Engine. |
To choose the service account-based authentication method, configure StorageCredentialParams as follows:
"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
"gcp.gcs.service_account_private_key" = "<google_service_private_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.service_account_email | "" | "[email protected]" | The email address in the JSON file generated at the creation of the service account. |
| gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the service account. |
| gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the service account. |
To choose the impersonation-based authentication method, configure StorageCredentialParams as follows:
Make a VM instance impersonate a service account:
"gcp.gcs.use_compute_engine_service_account" = "true",
"gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.use_compute_engine_service_account | FALSE | TRUE | Specifies whether to directly use the service account that is bound to your Compute Engine. |
| gcp.gcs.impersonation_service_account | "" | "hello" | The service account that you want to impersonate. |
Make a service account (referred to below as the meta service account) impersonate another service account (referred to below as the data service account):
"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_email>",
"gcp.gcs.service_account_private_key" = "<meta_google_service_account_email>",
"gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.service_account_email | "" | "[email protected]" | The email address in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.impersonation_service_account | "" | "hello" | The data service account that you want to impersonate. |
The following examples create a Paimon catalog named paimon_catalog_fs, whose metastore type (paimon.catalog.type) is set to filesystem, to query data from your Paimon cluster.
If you choose the instance profile-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.endpoint" = "<s3_endpoint>"
);
```
If you choose the assumed role-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
    "aws.s3.endpoint" = "<s3_endpoint>"
);
```
If you choose the IAM user-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<s3_paimon_warehouse_path>",
    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>",
    "aws.s3.endpoint" = "<s3_endpoint>"
);
```
Use MinIO as an example. Run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<paimon_warehouse_path>",
    "aws.s3.enable_ssl" = "true",
    "aws.s3.enable_path_style_access" = "true",
    "aws.s3.endpoint" = "<s3_endpoint>",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>"
);
```
If you choose the Shared Key authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<blob_paimon_warehouse_path>",
    "azure.blob.storage_account" = "<blob_storage_account_name>",
    "azure.blob.shared_key" = "<blob_storage_account_shared_key>"
);
```
If you choose the SAS Token authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<blob_paimon_warehouse_path>",
    "azure.blob.storage_account" = "<blob_storage_account_name>",
    "azure.blob.container" = "<blob_container_name>",
    "azure.blob.sas_token" = "<blob_storage_account_SAS_token>"
);
```
If you choose the Managed Service Identity authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<adls1_paimon_warehouse_path>",
    "azure.adls1.use_managed_service_identity" = "true"
);
```
If you choose the Service Principal authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<adls1_paimon_warehouse_path>",
    "azure.adls1.oauth2_client_id" = "<application_client_id>",
    "azure.adls1.oauth2_credential" = "<application_client_credential>",
    "azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
);
```
If you choose the Managed Identity authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
    "azure.adls2.oauth2_use_managed_identity" = "true",
    "azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
    "azure.adls2.oauth2_client_id" = "<service_client_id>"
);
```
If you choose the Shared Key authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
    "azure.adls2.storage_account" = "<storage_account_name>",
    "azure.adls2.shared_key" = "<shared_key>"
);
```
If you choose the Service Principal authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<adls2_paimon_warehouse_path>",
    "azure.adls2.oauth2_client_id" = "<service_client_id>",
    "azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
    "azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
);
```
If you choose the VM-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
    "gcp.gcs.use_compute_engine_service_account" = "true"
);
```
If you choose the service account-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
    "gcp.gcs.service_account_email" = "<google_service_account_email>",
    "gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
    "gcp.gcs.service_account_private_key" = "<google_service_private_key>"
);
```
If you choose the impersonation-based authentication method:
If you make a VM instance impersonate a service account, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
    "gcp.gcs.use_compute_engine_service_account" = "true",
    "gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
);
```
If you make a service account impersonate another service account, run a command like below:
```SQL
CREATE EXTERNAL CATALOG paimon_catalog_fs
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "<gcs_paimon_warehouse_path>",
    "gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
    "gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
    "gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
    "gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
);
```
You can use SHOW CATALOGS to query all catalogs in the current StarRocks cluster:
```SQL
SHOW CATALOGS;
```
You can also use SHOW CREATE CATALOG to query the creation statement of an external catalog. The following example queries the creation statement of a Paimon catalog named paimon_catalog_fs:
```SQL
SHOW CREATE CATALOG paimon_catalog_fs;
```
You can use DROP CATALOG to drop an external catalog.
The following example drops a Paimon catalog named paimon_catalog_fs:
```SQL
DROP CATALOG paimon_catalog_fs;
```
You can use one of the following syntaxes to view the schema of a Paimon table:
View schema
```SQL
DESC[RIBE] <catalog_name>.<database_name>.<table_name>;
```
View schema and location from the CREATE statement
```SQL
SHOW CREATE TABLE <catalog_name>.<database_name>.<table_name>;
```
Use SHOW DATABASES to view the databases in your Paimon cluster:
```SQL
SHOW DATABASES FROM <catalog_name>;
```
Use SET CATALOG to switch to the destination catalog in the current session:
```SQL
SET CATALOG <catalog_name>;
```
Then, use USE to specify the active database in the current session:
```SQL
USE <db_name>;
```
Or, you can use USE to directly specify the active database in the destination catalog:
```SQL
USE <catalog_name>.<db_name>;
```
Use SELECT to query the destination table in the specified database:
```SQL
SELECT count(*) FROM <table_name> LIMIT 10;
```
Suppose you have an OLAP table named olap_tbl in your StarRocks cluster. You can transform and load data like below:

```SQL
INSERT INTO default_catalog.olap_db.olap_tbl SELECT * FROM paimon_table;
```
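Putting the steps above together, a minimal end-to-end session might look like the following sketch (the catalog, database, and table names are hypothetical):

```SQL
-- Switch to the Paimon catalog created earlier.
SET CATALOG paimon_catalog_fs;
-- Pick a database in the Paimon cluster.
USE paimon_db;
-- Inspect and query a Paimon table.
DESC paimon_tbl;
SELECT * FROM paimon_tbl LIMIT 10;
-- Transform and load the result into a StarRocks OLAP table.
INSERT INTO default_catalog.olap_db.olap_tbl SELECT * FROM paimon_tbl;
```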