docs/en/data_source/catalog/unified_catalog.md
import Beta from '../../_assets/commonMarkdown/_beta.mdx'
A unified catalog is a type of external catalog that is provided by StarRocks from v3.2 onwards to handle tables from Apache Hive™, Apache Iceberg, Apache Hudi, Delta Lake, and Apache Kudu data sources as a unified data source without ingestion. With unified catalogs, you can:

- Directly query data stored in Hive, Iceberg, Hudi, Delta Lake, and Kudu without manually creating tables.
- Use INSERT INTO or asynchronous materialized views to process the data stored in these data sources and load the data into StarRocks.
- Perform operations on StarRocks to create or drop Hive and Iceberg databases and tables.
To ensure successful SQL workloads on your unified data source, your StarRocks cluster must be able to access the storage system and metastore of your unified data source. StarRocks supports the following storage systems and metastores:
- Distributed file system (HDFS) or object storage like AWS S3, Microsoft Azure Storage, Google GCS, or other S3-compatible storage systems (for example, MinIO)
- Metastore like Hive metastore or AWS Glue
NOTE

- If you choose AWS S3 as storage, you can use HMS or AWS Glue as metastore. If you choose any other storage system, you can only use HMS as metastore.
- One unified catalog supports integration with only a single storage system and a single metastore service. Therefore, make sure all the data sources you want to integrate as a unified data source with StarRocks use the same storage system and metastore service.
See the "Usage notes" section in Hive catalog, Iceberg catalog, Hudi catalog, Delta Lake catalog, Paimon catalog, and Kudu catalog to understand the file formats and data types supported.
Format-specific operations are supported only for specific table formats. For example, CREATE TABLE and DROP TABLE are supported only for Hive and Iceberg, and REFRESH EXTERNAL TABLE is supported only for Hive and Hudi.
When you create a table within a unified catalog by using the CREATE TABLE statement, use the ENGINE parameter to specify the table format (Hive or Iceberg).
Before you create a unified catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your unified data source.
If you use AWS S3 as storage or AWS Glue as metastore, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources. For more information, see Authenticate to AWS resources - Preparations.
If you choose HDFS as storage, configure your StarRocks cluster as follows:
Add export HADOOP_USER_NAME="<user_name>" at the beginning of the fe/conf/hadoop_env.sh file of each FE and at the beginning of the be/conf/hadoop_env.sh file of each BE or the cn/conf/hadoop_env.sh file of each CN. After you set the username in these files, restart each FE and each BE or CN to make the parameter settings take effect. You can set only one username for each StarRocks cluster.

NOTE

If an error indicating an unknown host is returned when you send a query, you must add the mapping between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts path.
If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:
- Run the kinit -kt keytab_path principal command on each FE and each BE or CN to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that accessing the KDC with this command is time-sensitive. Therefore, you need to use cron to run this command periodically.
- Add JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf" to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE or the $CN_HOME/conf/cn.conf file of each CN. In this example, /etc/krb5.conf is the save path of the krb5.conf file. You can modify the path based on your needs.

Use the following syntax to create a unified catalog:

```SQL
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
    "type" = "unified",
    MetastoreParams,
    StorageCredentialParams,
    MetadataUpdateParams,
    PaimonCatalogParams,
    KuduCatalogParams
)
```
catalog_name: The name of the unified catalog. The naming conventions are as follows:

- The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
- The name is case-sensitive and cannot exceed 1023 characters in length.

comment: The description of the unified catalog. This parameter is optional.

type: The type of your data source. Set the value to unified.

MetastoreParams: A set of parameters about how StarRocks integrates with your metastore.
If you choose Hive metastore as the metastore of your unified data source, configure MetastoreParams as follows:
"unified.metastore.type" = "hive",
"hive.metastore.uris" = "<hive_metastore_uri>"
NOTE
Before querying data, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts path. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
The following table describes the parameters you need to configure in MetastoreParams.
| Parameter | Required | Description |
|---|---|---|
| unified.metastore.type | Yes | The type of metastore that you use for your unified data source. Set the value to hive. |
| hive.metastore.uris | Yes | The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port>. If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (,), for example, "thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>". |
If you choose AWS Glue as the metastore of your data source, which is supported only when you choose AWS S3 as storage, take one of the following actions:
To choose the instance profile-based authentication method, configure MetastoreParams as follows:
"unified.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.region" = "<aws_glue_region>"
To choose the assumed role-based authentication method, configure MetastoreParams as follows:
"unified.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.iam_role_arn" = "<iam_role_arn>",
"aws.glue.region" = "<aws_glue_region>"
To choose the IAM user-based authentication method, configure MetastoreParams as follows:
"unified.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "false",
"aws.glue.access_key" = "<iam_user_access_key>",
"aws.glue.secret_key" = "<iam_user_secret_key>",
"aws.glue.region" = "<aws_s3_region>"
The following table describes the parameters you need to configure in MetastoreParams.
| Parameter | Required | Description |
|---|---|---|
| unified.metastore.type | Yes | The type of metastore that you use for your unified data source. Set the value to glue. |
| aws.glue.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false. Default value: false. |
| aws.glue.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS Glue Data Catalog. If you use the assumed role-based authentication method to access AWS Glue, you must specify this parameter. |
| aws.glue.region | Yes | The region in which your AWS Glue Data Catalog resides. Example: us-west-1. |
| aws.glue.access_key | No | The access key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
| aws.glue.secret_key | No | The secret key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS Glue and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS Glue.
StorageCredentialParams: A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.

If you use HDFS as storage, you do not need to configure StorageCredentialParams.

If you use AWS S3, another S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.
If you choose AWS S3 as storage, take one of the following actions:
To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "<aws_s3_region>"
To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "<iam_role_arn>",
"aws.s3.region" = "<aws_s3_region>"
To choose the IAM user-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "false",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>",
"aws.s3.region" = "<aws_s3_region>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| aws.s3.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false. Default value: false. |
| aws.s3.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS S3 bucket. If you use the assumed role-based authentication method to access AWS S3, you must specify this parameter. |
| aws.s3.region | Yes | The region in which your AWS S3 bucket resides. Example: us-west-1. |
| aws.s3.access_key | No | The access key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
| aws.s3.secret_key | No | The secret key of your IAM user. If you use the IAM user-based authentication method to access AWS S3, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS S3 and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS S3.
If you choose an S3-compatible storage system, such as MinIO, as storage, configure StorageCredentialParams as follows to ensure a successful integration:
"aws.s3.enable_ssl" = "false",
"aws.s3.enable_path_style_access" = "true",
"aws.s3.endpoint" = "<s3_endpoint>",
"aws.s3.access_key" = "<iam_user_access_key>",
"aws.s3.secret_key" = "<iam_user_secret_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| aws.s3.enable_ssl | Yes | Specifies whether to enable SSL connection. Valid values: true and false. Default value: true. |
| aws.s3.enable_path_style_access | Yes | Specifies whether to enable path-style access. Valid values: true and false. Default value: false. For MinIO, you must set the value to true. Path-style URLs use the following format: https://s3.<region_code>.amazonaws.com/<bucket_name>/<key_name>. For example, if you create a bucket named DOC-EXAMPLE-BUCKET1 in the US West (Oregon) Region, and you want to access the alice.jpg object in that bucket, you can use the following path-style URL: https://s3.us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET1/alice.jpg. |
| aws.s3.endpoint | Yes | The endpoint that is used to connect to your S3-compatible storage system instead of AWS S3. |
| aws.s3.access_key | Yes | The access key of your IAM user. |
| aws.s3.secret_key | Yes | The secret key of your IAM user. |
If you choose Azure Blob Storage as storage, take one of the following actions:
To choose the Shared Key authentication method, configure StorageCredentialParams as follows:
"azure.blob.storage_account" = "<storage_account_name>",
"azure.blob.shared_key" = "<storage_account_shared_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.blob.storage_account | Yes | The username of your Blob Storage account. |
| azure.blob.shared_key | Yes | The shared key of your Blob Storage account. |
To choose the SAS Token authentication method, configure StorageCredentialParams as follows:
"azure.blob.storage_account" = "<storage_account_name>",
"azure.blob.container" = "<container_name>",
"azure.blob.sas_token" = "<storage_account_SAS_token>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.blob.storage_account | Yes | The username of your Blob Storage account. |
| azure.blob.container | Yes | The name of the blob container that stores your data. |
| azure.blob.sas_token | Yes | The SAS token that is used to access your Blob Storage account. |
If you choose Azure Data Lake Storage Gen2 as storage, take one of the following actions:
To choose the Managed Identity authentication method, configure StorageCredentialParams as follows:
"azure.adls2.oauth2_use_managed_identity" = "true",
"azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
"azure.adls2.oauth2_client_id" = "<service_client_id>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.oauth2_use_managed_identity | Yes | Specifies whether to enable the Managed Identity authentication method. Set the value to true. |
| azure.adls2.oauth2_tenant_id | Yes | The ID of the tenant whose data you want to access. |
| azure.adls2.oauth2_client_id | Yes | The client (application) ID of the managed identity. |
To choose the Shared Key authentication method, configure StorageCredentialParams as follows:
"azure.adls2.storage_account" = "<storage_account_name>",
"azure.adls2.shared_key" = "<storage_account_shared_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.storage_account | Yes | The username of your Data Lake Storage Gen2 storage account. |
| azure.adls2.shared_key | Yes | The shared key of your Data Lake Storage Gen2 storage account. |
To choose the Service Principal authentication method, configure StorageCredentialParams as follows:
"azure.adls2.oauth2_client_id" = "<service_client_id>",
"azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
"azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls2.oauth2_client_id | Yes | The client (application) ID of the service principal. |
| azure.adls2.oauth2_client_secret | Yes | The value of the new client (application) secret created. |
| azure.adls2.oauth2_client_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
If you choose Azure Data Lake Storage Gen1 as storage, take one of the following actions:
To choose the Managed Service Identity authentication method, configure StorageCredentialParams as follows:
"azure.adls1.use_managed_service_identity" = "true"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls1.use_managed_service_identity | Yes | Specifies whether to enable the Managed Service Identity authentication method. Set the value to true. |
To choose the Service Principal authentication method, configure StorageCredentialParams as follows:
"azure.adls1.oauth2_client_id" = "<application_client_id>",
"azure.adls1.oauth2_credential" = "<application_client_credential>",
"azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Required | Description |
|---|---|---|
| azure.adls1.oauth2_client_id | Yes | The client (application) ID of the service principal. |
| azure.adls1.oauth2_credential | Yes | The value of the new client (application) secret created. |
| azure.adls1.oauth2_endpoint | Yes | The OAuth 2.0 token endpoint (v1) of the service principal or application. |
If you choose Google GCS as storage, take one of the following actions:
To choose the VM-based authentication method, configure StorageCredentialParams as follows:
"gcp.gcs.use_compute_engine_service_account" = "true"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.use_compute_engine_service_account | false | true | Specifies whether to directly use the service account that is bound to your Compute Engine. |
To choose the service account-based authentication method, configure StorageCredentialParams as follows:
"gcp.gcs.service_account_email" = "<google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
"gcp.gcs.service_account_private_key" = "<google_service_private_key>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.service_account_email | "" | "user@sample-project.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the service account. |
| gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the service account. |
| gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the service account. |
To choose the impersonation-based authentication method, configure StorageCredentialParams as follows:
Make a VM instance impersonate a service account:
"gcp.gcs.use_compute_engine_service_account" = "true",
"gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.use_compute_engine_service_account | false | true | Specifies whether to directly use the service account that is bound to your Compute Engine. |
| gcp.gcs.impersonation_service_account | "" | "hello" | The service account that you want to impersonate. |
Make a service account (referred to here as the meta service account) impersonate another service account (referred to here as the data service account):

```SQL
"gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
"gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
"gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
"gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
```
The following table describes the parameters you need to configure in StorageCredentialParams.
| Parameter | Default value | Value example | Description |
|---|---|---|---|
| gcp.gcs.service_account_email | "" | "user@sample-project.iam.gserviceaccount.com" | The email address in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.service_account_private_key_id | "" | "61d257bd8479547cb3e04f0b9b6b9ca07af3b7ea" | The private key ID in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.service_account_private_key | "" | "-----BEGIN PRIVATE KEY----xxxx-----END PRIVATE KEY-----\n" | The private key in the JSON file generated at the creation of the meta service account. |
| gcp.gcs.impersonation_service_account | "" | "hello" | The data service account that you want to impersonate. |
MetadataUpdateParams: A set of parameters about how StarRocks updates the cached metadata of Hive, Hudi, and Delta Lake. This parameter set is optional. For more information about the policies for updating cached metadata from Hive, Hudi, and Delta Lake, see Hive catalog, Hudi catalog, and Delta Lake catalog.

In most cases, you can ignore MetadataUpdateParams and do not need to tune its policy parameters, because the default values of these parameters already give you out-of-the-box performance.
However, if the frequency of data updates in Hive, Hudi, or Delta Lake is high, you can tune these parameters to further optimize the performance of automatic asynchronous updates.
| Parameter | Required | Description |
|---|---|---|
| enable_metastore_cache | No | Specifies whether StarRocks caches the metadata of Hive, Hudi, or Delta Lake tables. Valid values: true and false. Default value: true. The value true enables the cache, and the value false disables the cache. |
| enable_remote_file_cache | No | Specifies whether StarRocks caches the metadata of the underlying data files of Hive, Hudi, or Delta Lake tables or partitions. Valid values: true and false. Default value: true. The value true enables the cache, and the value false disables the cache. |
| metastore_cache_refresh_interval_sec | No | The time interval at which StarRocks asynchronously updates the metadata of Hive, Hudi, or Delta Lake tables or partitions cached in itself. Unit: seconds. Default value: 7200, which is 2 hours. |
| remote_file_cache_refresh_interval_sec | No | The time interval at which StarRocks asynchronously updates the metadata of the underlying data files of Hive, Hudi, or Delta Lake tables or partitions cached in itself. Unit: seconds. Default value: 60. |
| metastore_cache_ttl_sec | No | The time interval at which StarRocks automatically discards the metadata of Hive, Hudi, or Delta Lake tables or partitions cached in itself. Unit: seconds. Default value: 86400, which is 24 hours. |
| remote_file_cache_ttl_sec | No | The time interval at which StarRocks automatically discards the metadata of the underlying data files of Hive, Hudi, or Delta Lake tables or partitions cached in itself. Unit: seconds. Default value: 129600, which is 36 hours. |
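As an illustration only, the following sketch creates a Hive-metastore-backed unified catalog that refreshes cached metadata more aggressively than the defaults. The catalog name, metastore URI, and interval values are hypothetical:

```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms_tuned
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    -- Keep both caches enabled (the defaults) but refresh them more often.
    "enable_metastore_cache" = "true",
    "enable_remote_file_cache" = "true",
    "metastore_cache_refresh_interval_sec" = "600",
    "remote_file_cache_refresh_interval_sec" = "30"
);
```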
PaimonCatalogParams: A set of parameters about how StarRocks connects to Paimon. This parameter set is optional.
| Parameter | Required | Description |
|---|---|---|
| paimon.catalog.warehouse | No | The warehouse storage path of your Paimon data. |
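For example, a minimal sketch that adds a Paimon warehouse path to a Hive-metastore-backed unified catalog might look like this (the catalog name and paths are hypothetical):

```SQL
CREATE EXTERNAL CATALOG unified_catalog_paimon
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    -- Hypothetical HDFS path to the Paimon warehouse.
    "paimon.catalog.warehouse" = "hdfs://xx.xx.xx.xx:9000/paimon/warehouse"
);
```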
KuduCatalogParams: A set of parameters about how StarRocks connects to Kudu. This parameter set is optional.
| Parameter | Required | Description |
|---|---|---|
| kudu.master | No | Specifies the Kudu Master address, which defaults to localhost:7051. |
| kudu.schema-emulation.enabled | No | Specifies whether to enable schema emulation. By default, it is turned off (false), which means that all tables belong to the default schema. |
| kudu.schema-emulation.prefix | No | The prefix for schema emulation. Set this parameter only when kudu.schema-emulation.enabled is set to true. The default prefix is an empty string: "". |
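For example, a sketch of a unified catalog that also reaches a Kudu deployment might look like this. The addresses are hypothetical, and "impala::" is shown only as a commonly used emulation prefix:

```SQL
CREATE EXTERNAL CATALOG unified_catalog_kudu
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    -- Hypothetical Kudu Master address.
    "kudu.master" = "xx.xx.xx.xx:7051",
    "kudu.schema-emulation.enabled" = "true",
    "kudu.schema-emulation.prefix" = "impala::"
);
```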
The following examples create a unified catalog named unified_catalog_hms or unified_catalog_glue, depending on the type of metastore you use, to query data from your unified data source.
If you use HDFS as storage, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083"
);
```
If you use AWS S3 as storage and choose the instance profile-based authentication method:

If you use Hive metastore, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```
If you use AWS Glue with Amazon EMR, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_glue
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```
If you choose the assumed role-based authentication method:

If you use Hive metastore, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
    "aws.s3.region" = "us-west-2"
);
```
If you use AWS Glue with Amazon EMR, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_glue
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.iam_role_arn" = "arn:aws:iam::081976408565:role/test_glue_role",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::081976408565:role/test_s3_role",
    "aws.s3.region" = "us-west-2"
);
```
If you choose the IAM user-based authentication method:

If you use Hive metastore, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>",
    "aws.s3.region" = "us-west-2"
);
```
If you use AWS Glue with Amazon EMR, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_glue
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "glue",
    "aws.glue.use_instance_profile" = "false",
    "aws.glue.access_key" = "<iam_user_access_key>",
    "aws.glue.secret_key" = "<iam_user_secret_key>",
    "aws.glue.region" = "us-west-2",
    "aws.s3.use_instance_profile" = "false",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>",
    "aws.s3.region" = "us-west-2"
);
```
If you use an S3-compatible storage system such as MinIO as storage, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "aws.s3.enable_ssl" = "true",
    "aws.s3.enable_path_style_access" = "true",
    "aws.s3.endpoint" = "<s3_endpoint>",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>"
);
```
If you use Azure Blob Storage as storage and choose the Shared Key authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.blob.storage_account" = "<blob_storage_account_name>",
    "azure.blob.shared_key" = "<blob_storage_account_shared_key>"
);
```
If you choose the SAS Token authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.blob.storage_account" = "<blob_storage_account_name>",
    "azure.blob.container" = "<blob_container_name>",
    "azure.blob.sas_token" = "<blob_storage_account_SAS_token>"
);
```
If you use Azure Data Lake Storage Gen1 as storage and choose the Managed Service Identity authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.adls1.use_managed_service_identity" = "true"
);
```
If you choose the Service Principal authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.adls1.oauth2_client_id" = "<application_client_id>",
    "azure.adls1.oauth2_credential" = "<application_client_credential>",
    "azure.adls1.oauth2_endpoint" = "<OAuth_2.0_authorization_endpoint_v2>"
);
```
If you use Azure Data Lake Storage Gen2 as storage and choose the Managed Identity authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.adls2.oauth2_use_managed_identity" = "true",
    "azure.adls2.oauth2_tenant_id" = "<service_principal_tenant_id>",
    "azure.adls2.oauth2_client_id" = "<service_client_id>"
);
```
If you choose the Shared Key authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.adls2.storage_account" = "<storage_account_name>",
    "azure.adls2.shared_key" = "<shared_key>"
);
```
If you choose the Service Principal authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "azure.adls2.oauth2_client_id" = "<service_client_id>",
    "azure.adls2.oauth2_client_secret" = "<service_principal_client_secret>",
    "azure.adls2.oauth2_client_endpoint" = "<service_principal_client_endpoint>"
);
```
If you use Google GCS as storage and choose the VM-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "gcp.gcs.use_compute_engine_service_account" = "true"
);
```
If you choose the service account-based authentication method, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "gcp.gcs.service_account_email" = "<google_service_account_email>",
    "gcp.gcs.service_account_private_key_id" = "<google_service_private_key_id>",
    "gcp.gcs.service_account_private_key" = "<google_service_private_key>"
);
```
If you choose the impersonation-based authentication method:
If you make a VM instance impersonate a service account, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "gcp.gcs.use_compute_engine_service_account" = "true",
    "gcp.gcs.impersonation_service_account" = "<assumed_google_service_account_email>"
);
```
If you make a service account impersonate another service account, run a command like below:
```SQL
CREATE EXTERNAL CATALOG unified_catalog_hms
PROPERTIES
(
    "type" = "unified",
    "unified.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://xx.xx.xx.xx:9083",
    "gcp.gcs.service_account_email" = "<meta_google_service_account_email>",
    "gcp.gcs.service_account_private_key_id" = "<meta_google_service_account_private_key_id>",
    "gcp.gcs.service_account_private_key" = "<meta_google_service_account_private_key>",
    "gcp.gcs.impersonation_service_account" = "<data_google_service_account_email>"
);
```
You can use SHOW CATALOGS to query all catalogs in the current StarRocks cluster:
```SQL
SHOW CATALOGS;
```
You can also use SHOW CREATE CATALOG to query the creation statement of an external catalog. The following example queries the creation statement of a unified catalog named unified_catalog_glue:
```SQL
SHOW CREATE CATALOG unified_catalog_glue;
```
You can use one of the following methods to switch to a unified catalog and a database in it:
Use SET CATALOG to specify a unified catalog in the current session, and then use USE to specify an active database:
```SQL
-- Switch to a specified catalog in the current session:
SET CATALOG <catalog_name>
-- Specify the active database in the current session:
USE <db_name>
```
Directly use USE to switch to a unified catalog and a database in it:
```SQL
USE <catalog_name>.<db_name>
```
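For example, the following hypothetical session makes test_database in the unified catalog unified_catalog_glue the active database:

```SQL
-- Switch catalog and database in two steps:
SET CATALOG unified_catalog_glue;
USE test_database;
-- Or do the same in a single statement:
USE unified_catalog_glue.test_database;
```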
You can use DROP CATALOG to drop an external catalog.
The following example drops a unified catalog named unified_catalog_glue:
```SQL
DROP CATALOG unified_catalog_glue;
```
You can use one of the following syntaxes to view the schema of a table from a unified catalog:
View schema
```SQL
DESC[RIBE] <catalog_name>.<database_name>.<table_name>
```
View schema and location from the CREATE statement
```SQL
SHOW CREATE TABLE <catalog_name>.<database_name>.<table_name>
```
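For example, with the hypothetical catalog, database, and table names used elsewhere in this topic:

```SQL
-- View the schema of the table:
DESC unified_catalog_glue.test_database.test_table;
-- View the schema and location from the CREATE statement:
SHOW CREATE TABLE unified_catalog_glue.test_database.test_table;
```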
To query data from a unified catalog, follow these steps:
Use SHOW DATABASES to view the databases in your unified data source with which the unified catalog is associated:
```SQL
SHOW DATABASES FROM <catalog_name>
```
Use SELECT to query the destination table in the specified database:
```SQL
SELECT count(*) FROM <table_name> LIMIT 10
```
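Put together, a hypothetical query session might look like this (the catalog, database, and table names are placeholders):

```SQL
SHOW DATABASES FROM unified_catalog_glue;
SET CATALOG unified_catalog_glue;
USE test_database;
SELECT count(*) FROM test_table LIMIT 10;
```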
You can use INSERT INTO to load the data of a Hive, Iceberg, Hudi, Delta Lake, or Kudu table into a StarRocks table created within a unified catalog.
The following example loads the data of the Hive table hive_table into the StarRocks table test_table created in the database test_database that belongs to the unified catalog unified_catalog:

```SQL
INSERT INTO unified_catalog.test_database.test_table SELECT * FROM hive_table
```
Similar to the internal catalog of StarRocks, if you have the CREATE DATABASE privilege on a unified catalog, you can use the CREATE DATABASE statement to create a database in that catalog.
NOTE

- You can grant and revoke privileges by using GRANT and REVOKE.
- StarRocks supports creating only Hive and Iceberg databases in unified catalogs.
Switch to a unified catalog, and then use the following statement to create a database in that catalog:
```SQL
CREATE DATABASE <database_name>
[properties ("location" = "<prefix>://<path_to_database>/<database_name.db>")]
```
The location parameter specifies the file path in which you want to create the database, which can be in either HDFS or cloud storage.

- When you use Hive metastore as the metastore of your data source, the location parameter defaults to <warehouse_location>/<database_name.db>, which is supported by Hive metastore if you do not specify that parameter at database creation.
- When you use AWS Glue as the metastore of your data source, the location parameter does not have a default value, and therefore you must specify that parameter at database creation.

The prefix varies based on the storage system you use (see the example after the table):
| Storage system | Prefix value |
|---|---|
| HDFS | hdfs |
| Google GCS | gs |
| Azure Blob Storage | <ul><li>If your storage account allows access over HTTP, the prefix is wasb.</li><li>If your storage account allows access over HTTPS, the prefix is wasbs.</li></ul> |
| Azure Data Lake Storage Gen1 | adl |
| Azure Data Lake Storage Gen2 | <ul><li>If your storage account allows access over HTTP, the prefix is abfs.</li><li>If your storage account allows access over HTTPS, the prefix is abfss.</li></ul> |
| AWS S3 or other S3-compatible storage (for example, MinIO) | s3 |
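For example, a minimal sketch that creates a database under a hypothetical S3 path (the bucket and path names are placeholders):

```SQL
-- Assumes the current catalog is a unified catalog backed by AWS S3.
CREATE DATABASE sales_db
properties ("location" = "s3://my-example-bucket/warehouse/sales_db.db");
```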
Similar to the internal databases of StarRocks, if you have the DROP privilege on a database created within a unified catalog, you can use the DROP DATABASE statement to drop that database. You can only drop empty databases.
NOTE

- You can grant and revoke privileges by using GRANT and REVOKE.
- StarRocks supports dropping only Hive and Iceberg databases from unified catalogs.
- When you drop a database from a unified catalog, the database's file path on your HDFS cluster or cloud storage will not be dropped along with the database.
Switch to a unified catalog, and then use the following statement to drop a database in that catalog:
```SQL
DROP DATABASE <database_name>
```
Similar to the internal databases of StarRocks, if you have the CREATE TABLE privilege on a database created within a unified catalog, you can use the CREATE TABLE or CREATE TABLE AS SELECT statement to create a table in that database.
NOTE

- You can grant and revoke privileges by using GRANT and REVOKE.
- StarRocks supports creating only Hive and Iceberg tables in unified catalogs.
Switch to a unified catalog and a database in it. Then, use CREATE TABLE to create a Hive or Iceberg table in that database:
```SQL
CREATE TABLE <table_name>
(column_definition1 [, column_definition2, ...])
ENGINE = {hive|iceberg}
[partition_desc]
```
For more information, see Create a Hive table and Create an Iceberg table.
The following example creates a Hive table named hive_table. The table consists of three columns action, id, and dt, of which id and dt are partition columns.
```SQL
CREATE TABLE hive_table
(
    action varchar(65533),
    id int,
    dt date
)
ENGINE = hive
PARTITION BY (id, dt);
```
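An Iceberg-formatted equivalent only changes the ENGINE value. For example, a sketch with the same hypothetical columns:

```SQL
CREATE TABLE iceberg_table
(
    action varchar(65533),
    id int,
    dt date
)
ENGINE = iceberg
PARTITION BY (id, dt);
```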
Similar to the internal tables of StarRocks, if you have the INSERT privilege on a table created within a unified catalog, you can use the INSERT statement to sink the data of a StarRocks table to that unified catalog table (currently only Parquet-formatted unified catalog tables are supported).
NOTE

- You can grant and revoke privileges by using GRANT and REVOKE.
- StarRocks supports sinking data only to Hive and Iceberg tables in unified catalogs.
Switch to a unified catalog and a database in it. Then, use INSERT INTO to insert data into a Hive or Iceberg table in that database:
```SQL
INSERT {INTO | OVERWRITE} <table_name>
[ (column_name [, ...]) ]
{ VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }

-- If you want to sink data to specified partitions, use the following syntax:
INSERT {INTO | OVERWRITE} <table_name>
PARTITION (par_col1=<value> [, par_col2=<value>...])
{ VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
```
For more information, see Sink data to a Hive table and Sink data to an Iceberg table.
The following example inserts three data rows into a Hive table named hive_table:
```SQL
INSERT INTO hive_table
VALUES
    ("buy", 1, "2023-09-01"),
    ("sell", 2, "2023-09-02"),
    ("buy", 3, "2023-09-03");
```
Similar to the internal tables of StarRocks, if you have the DROP privilege on a table created within a unified catalog, you can use the DROP TABLE statement to drop that table.
NOTE

- You can grant and revoke privileges by using GRANT and REVOKE.
- StarRocks supports dropping only Hive and Iceberg tables from unified catalogs.
Switch to a unified catalog and a database in it. Then, use DROP TABLE to drop a Hive or Iceberg table in that database:
```SQL
DROP TABLE <table_name>
```
For more information, see Drop a Hive table and Drop an Iceberg table.
The following example drops a Hive table named hive_table:
```SQL
DROP TABLE hive_table FORCE;
```