InfluxDB Clustered requires the following prerequisite external dependencies:
Kubernetes provides the kubectl command line tool for communicating with a
Kubernetes cluster's control plane. kubectl is used to manage your InfluxDB
cluster.
Follow the instructions to install kubectl on your local machine:
[!Note] InfluxDB Clustered Kubernetes deployments require
kubectl 1.27 or higher.
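To confirm that your local kubectl meets the version requirement, check the client version:

```sh
# Print the kubectl client version (requires kubectl in your PATH).
kubectl version --client
```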
Deploy a Kubernetes cluster. The deployment process depends on your Kubernetes environment or Kubernetes cloud provider. Refer to the Kubernetes documentation or your cloud provider's documentation for information about deploying a Kubernetes cluster.
Ensure kubectl can connect to your Kubernetes cluster.
Your kubeconfig file
defines cluster connection credentials.
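For example, the following commands confirm which cluster your current context points to and that the control plane is reachable:

```sh
# Show the active kubeconfig context.
kubectl config current-context

# Display the control plane endpoint for the current context.
kubectl cluster-info
```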
Create two namespaces: influxdb and kubit. Use
kubectl create namespace to create the
namespaces:
```sh
kubectl create namespace influxdb && \
kubectl create namespace kubit
```
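To confirm that both namespaces were created:

```sh
# Returns one row per namespace if both exist.
kubectl get namespace influxdb kubit
```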
Install an ingress controller in the cluster and a mechanism to obtain a valid TLS certificate (for example, cert-manager, or provide the certificate PEM manually out of band). To use the InfluxDB-defined ingress, install Ingress NGINX.
Ensure your Kubernetes cluster can access the InfluxDB container registry, or, if running in an air-gapped environment, a local container registry to which you can copy the InfluxDB images.
As a starting point for a production workload, InfluxData recommends the following sizing for {{% product-name %}} components:
{{< tabs-wrapper >}} {{% tabs %}} AWS Google Cloud Platform Microsoft Azure On-Prem {{% /tabs %}} {{% tab-content %}}
<!--------------------------------- BEGIN AWS --------------------------------->{{% /tab-content %}} {{% tab-content %}}
<!--------------------------------- BEGIN GCP --------------------------------->{{% /tab-content %}} {{% tab-content %}}
<!-------------------------------- BEGIN Azure -------------------------------->{{% /tab-content %}} {{% tab-content %}}
<!------------------------------- BEGIN ON-PREM ------------------------------->{{% /tab-content %}} {{< /tabs-wrapper >}}
Your sizing may differ depending on your environment, cloud provider, and workload, but these are reasonable starting sizes for initial testing.
The kubecfg kubit operator (maintained by InfluxData)
simplifies the installation and management of the InfluxDB Clustered package.
It manages the application of the jsonnet templates used to install, manage, and
update an InfluxDB cluster.
[!Note]
The InfluxDB Clustered Helm chart includes the kubit operator
If you use the InfluxDB Clustered Helm chart to deploy your InfluxDB cluster, you do not need to install the kubit operator separately; the Helm chart installs it for you.
Use kubectl to install the kubecfg kubit
operator v0.0.22 or later.
```sh
kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/global?ref=v0.0.22'
```
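To verify the installation, check that the operator pod is running (this assumes the kustomize manifest's default kubit namespace):

```sh
# List the kubit operator pods; assumes the default kubit namespace.
kubectl get pods --namespace kubit
```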
Kubernetes ingress routes HTTP/S requests to services within the cluster and requires deploying an ingress controller. You can provide your own ingress or install Ingress NGINX to use the InfluxDB-defined ingress.
[!Important]
Allow gRPC/HTTP2
InfluxDB Clustered components use gRPC/HTTP2 protocols. If using an external load balancer, you may need to explicitly enable these protocols on your load balancers.
InfluxDB Clustered supports AWS S3 or S3-compatible storage (including Google Cloud Storage, Azure Blob Storage, and MinIO) for storing InfluxDB Parquet files. Refer to your object storage provider's documentation for information about setting up an object store:
{{% caption %}} * This list does not represent all S3-compatible object stores that work with InfluxDB Clustered. Other S3-compatible object stores should work as well. {{% /caption %}}
[!Important]
Object storage recommendations
We strongly recommend the following:
Enable object versioning
Enable object versioning in your object store. Refer to your object storage provider's documentation for information about enabling object versioning.
Run the object store in a separate namespace or outside of Kubernetes
Run the object store in a separate namespace from InfluxDB, or external to Kubernetes entirely. Doing so makes the InfluxDB cluster easier to manage and helps prevent accidental data loss. While deploying everything in the same namespace is possible, we do not recommend it for production environments.
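For example, if you use AWS S3, you can enable versioning on an existing bucket with the AWS CLI (S3_BUCKET_NAME is a placeholder for your bucket name):

```sh
# Enable object versioning on the bucket (replace S3_BUCKET_NAME).
aws s3api put-bucket-versioning \
  --bucket S3_BUCKET_NAME \
  --versioning-configuration Status=Enabled
```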
Ensure the identity you use to connect to your S3-compatible object store has the permissions required for InfluxDB to perform all of its object storage operations.
{{< expand-wrapper >}} {{% expand "View example AWS S3 access policy" %}}
The IAM role that you use to access AWS S3 should have the following policy:
{{% code-placeholders "S3_BUCKET_NAME" %}}
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}S3_BUCKET_NAME{{% /code-placeholder-key %}}: Name of your AWS S3 bucket

{{% /expand %}}
{{% expand "View the requirements for Google Cloud Storage" %}}
To use Google Cloud Storage (GCS) as your object store, your IAM principal should be granted the roles/storage.objectUser role.
For example, if using Google Service Accounts:
{{% code-placeholders "GCP_SERVICE_ACCOUNT|GCP_BUCKET" %}}
```sh
gcloud storage buckets add-iam-policy-binding \
  gs://GCP_BUCKET \
  --member="serviceAccount:GCP_SERVICE_ACCOUNT" \
  --role="roles/storage.objectUser"
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}GCP_SERVICE_ACCOUNT{{% /code-placeholder-key %}}: Google Service Account name
- {{% code-placeholder-key %}}GCP_BUCKET{{% /code-placeholder-key %}}: GCS bucket name

{{% /expand %}}
{{% expand "View the requirements for Azure Blob Storage" %}}
To use Azure Blob Storage as your object store, your service principal
should be granted the Storage Blob Data Contributor role.
Storage Blob Data Contributor is a built-in Azure role that encompasses common blob storage permissions.
You can assign it using the following command:
{{% code-placeholders "PRINCIPAL|AZURE_SUBSCRIPTION|AZURE_RESOURCE_GROUP|AZURE_STORAGE_ACCOUNT|AZURE_STORAGE_CONTAINER" %}}
```sh
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee PRINCIPAL \
  --scope "/subscriptions/AZURE_SUBSCRIPTION/resourceGroups/AZURE_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/AZURE_STORAGE_ACCOUNT/blobServices/default/containers/AZURE_STORAGE_CONTAINER"
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}PRINCIPAL{{% /code-placeholder-key %}}: A user, group, or service principal that the role should be assigned to
- {{% code-placeholder-key %}}AZURE_SUBSCRIPTION{{% /code-placeholder-key %}}: Your Azure subscription
- {{% code-placeholder-key %}}AZURE_RESOURCE_GROUP{{% /code-placeholder-key %}}: The resource group that your Azure Blob storage account belongs to
- {{% code-placeholder-key %}}AZURE_STORAGE_ACCOUNT{{% /code-placeholder-key %}}: Azure Blob storage account name
- {{% code-placeholder-key %}}AZURE_STORAGE_CONTAINER{{% /code-placeholder-key %}}: Container name in your Azure Blob storage account

{{% /expand %}}
{{< /expand-wrapper >}}
[!Note] To configure permissions with MinIO, use the example AWS access policy.
The InfluxDB Catalog that stores metadata related to your time series data requires a PostgreSQL or PostgreSQL-compatible database (AWS Aurora, hosted PostgreSQL, etc.). The process for installing and setting up your PostgreSQL-compatible database depends on the database and database provider you use. Refer to your database's or provider's documentation for setting up your PostgreSQL-compatible database.
[!Note] We strongly recommend running the PostgreSQL-compatible database in a separate namespace from InfluxDB or external to Kubernetes entirely. Doing so makes management of the InfluxDB cluster easier and helps to prevent accidental data loss.
While deploying everything in the same namespace is possible, we do not recommend it for production environments.
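Before moving on, you can verify that the database is reachable and accepting connections; for example, with psql (the connection string below is a hypothetical example; substitute your own host, port, credentials, and database name):

```sh
# Connect and print the server version (all values are placeholders).
psql "postgres://DB_USER:DB_PASSWORD@DB_HOST:5432/influxdb?sslmode=require" \
  -c 'SELECT version();'
```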
The InfluxDB Ingester
needs local or attached storage to store the Write-Ahead Log (WAL).
The read and write speed of the attached storage affects the Ingester's write
performance: the faster the storage device, the better your write performance.
The recommended minimum size of the local storage is 2 gibibytes (2Gi).
Installation and setup of local or attached storage depends on your underlying hardware or cloud provider. Refer to your provider's documentation for information about installing and configuring local storage.
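If your Kubernetes cluster uses dynamically provisioned volumes, you can list the available storage classes to choose one backed by fast local or attached disks:

```sh
# List storage classes and their provisioners; the default class is marked.
kubectl get storageclass
```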
{{< page-nav prev="/influxdb3/clustered/install/set-up-cluster/" prevText="Back" next="/influxdb3/clustered/install/set-up-cluster/configure-cluster" nextText="Configure your cluster" >}}