Materialize provides a set of modular Terraform modules that deploy all services required to run Materialize on AWS. The modules are intended as a simple set of examples of how to deploy Materialize: you can use them as is, or take individual modules from the example and integrate them with your existing DevOps tooling.
{{% self-managed/materialize-components-sentence %}} The example on this page deploys a complete Materialize environment on AWS using the modular Terraform setup from this repository.
{{< warning >}}
{{< self-managed/terraform-disclaimer >}}
{{< /warning >}}
This example provisions the following infrastructure:
**Networking**

| Resource | Description |
|---|---|
| VPC | 10.0.0.0/16 with DNS hostnames and support enabled |
| Subnets | 3 private subnets (10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24) and 3 public subnets (10.0.101.0/24, 10.0.102.0/24, 10.0.103.0/24) across availability zones us-east-1a, us-east-1b, us-east-1c |
| NAT Gateway | Single NAT Gateway for all private subnets |
| Internet Gateway | For public subnet connectivity |
**EKS Cluster**

| Resource | Description |
|---|---|
| EKS Cluster | Version 1.32 with CloudWatch logging (API, audit) |
| Base Node Group | 2 nodes (t4g.medium) for Karpenter and CoreDNS |
| Karpenter | Auto-scaling controller with two node classes: Generic nodepool (t4g.xlarge instances for general workloads) and Materialize nodepool (r7gd.2xlarge instances with swap enabled and dedicated taints to run materialize instance workloads) |
**Database**

| Resource | Description |
|---|---|
| RDS PostgreSQL | Version 15, db.t3.large instance |
| Storage | 50GB allocated, autoscaling up to 100GB |
| Deployment | Single-AZ (non-production configuration) |
| Backups | 7-day retention |
| Security | Dedicated security group with access from EKS cluster and nodes |
**Storage**

| Resource | Description |
|---|---|
| S3 Bucket | Dedicated bucket for Materialize persistence |
| Encryption | Disabled (for testing; enable in production) |
| Versioning | Disabled (for testing; enable in production) |
| IAM Role | IRSA role for Kubernetes service account access |
**Certificates and load balancing**

| Resource | Description |
|---|---|
| AWS Load Balancer Controller | For managing Network Load Balancers |
| cert-manager | Certificate management controller for Kubernetes that automates TLS certificate provisioning and renewal |
| Self-signed ClusterIssuer | Provides self-signed TLS certificates for Materialize instance internal communication (balancerd, console). Used by the Materialize instance for secure inter-component communication. |
**Materialize**

| Resource | Description |
|---|---|
| Operator | Materialize Kubernetes operator in the materialize namespace |
| Instance | Single Materialize instance in the materialize-environment namespace |
| Network Load Balancer | Dedicated NLB for access to Materialize |

{{< yaml-table data="self_managed/default_ports" >}}
Before you begin, you need:

- Terraform, the AWS CLI, and kubectl installed.
- An active AWS account with appropriate permissions to create the resources listed above (VPC and networking, EKS, RDS, S3, IAM roles, and load balancers).
- A Materialize license key:

{{< yaml-table data="self_managed/license_key" >}}
{{< warning >}}
{{< self-managed/terraform-disclaimer >}}
{{< /warning >}}
{{< tip >}}
The simple example used in this tutorial enables Password authentication for the Materialize instance. To use a different authentication method, update `authenticator_kind`. See Authentication for the supported authentication mechanisms.
{{< /tip >}}
Open a terminal window.
Clone the Materialize Terraform repository and go to the
`aws/examples/simple` directory:

```bash
git clone https://github.com/MaterializeInc/materialize-terraform-self-managed.git
cd materialize-terraform-self-managed/aws/examples/simple
```
Ensure your AWS CLI is configured with the appropriate profile, substituting
`<your-aws-profile>` with the profile to use:

```bash
# Set your AWS profile for the session
export AWS_PROFILE=<your-aws-profile>
```
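Before running Terraform, you can optionally confirm that the profile resolves to valid credentials (standard AWS CLI command; not part of the original walkthrough):

```shell
# Verify that the active profile can authenticate against AWS.
# Prints the account ID, user ID, and caller ARN on success.
aws sts get-caller-identity
```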
Create a `terraform.tfvars` file with the following variables:

- `name_prefix`: Prefix for all resource names (e.g., `simple-demo`)
- `aws_region`: AWS region for deployment (e.g., `us-east-1`)
- `aws_profile`: AWS CLI profile to use
- `license_key`: Materialize license key
- `tags`: Map of tags to apply to resources

```hcl
name_prefix = "simple-demo"
aws_region  = "us-east-1"
aws_profile = "your-aws-profile"
license_key = "your-materialize-license-key"
tags = {
  environment = "demo"
}

# internal_load_balancer = false # Default is true (internal load balancer). Set to false for a public load balancer.
# ingress_cidr_blocks = ["x.x.x.x/n", ...]
# k8s_apiserver_authorized_networks = ["x.x.x.x/n", ...]
```
{{% include-from-yaml data="self_managed/installation" name="installation-tfvars-variables-optional" %}}
Initialize the Terraform directory to download the required providers and modules:

```bash
terraform init
```
Apply the Terraform configuration to create the infrastructure:

```bash
terraform apply
```

If you are satisfied with the planned changes, type `yes` when prompted to
proceed.
{{< tip >}}
If you previously logged in to Amazon ECR Public, a cached auth token may cause 403 errors even when pulling public images. To remove the token, run:

```bash
docker logout public.ecr.aws
```

Then, re-apply the Terraform configuration.
{{< /tip >}}
From the output, you will need the following fields to connect using the Materialize Console and PostgreSQL-compatible clients/drivers:

- `nlb_dns_name`
- `external_login_password_mz_system`

To print a field's value, run:

```bash
terraform output -raw <field_name>
```
{{< tip >}}
Your shell may show an ending marker (such as `%`) because the
output did not end with a newline. Do not include the marker when using the value.
{{< /tip >}}
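For convenience, you can capture both output fields into shell variables for reuse in the later connection steps (a small sketch; the variable names are illustrative, not from the original walkthrough):

```shell
# Capture the connection details from the Terraform outputs.
NLB_DNS="$(terraform output -raw nlb_dns_name)"
MZ_SYSTEM_PASSWORD="$(terraform output -raw external_login_password_mz_system)"

# The Console URL, built from the NLB DNS name.
echo "Console: https://${NLB_DNS}:8080"
```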
Configure kubectl to connect to your cluster, replacing:

- `<your-eks-cluster-name>` with your cluster name, i.e., the
  `eks_cluster_name` in the Terraform output. For the
  simple example, your cluster name has the form `{name_prefix}-eks`; e.g.,
  `simple-demo-eks`.
- `<your-region>` with the region of your cluster. Your region can be
  found in your `terraform.tfvars` file; e.g., `us-east-1`.

```bash
# aws eks update-kubeconfig --name <your-eks-cluster-name> --region <your-region>
aws eks update-kubeconfig --name $(terraform output -raw eks_cluster_name) --region <your-region>
```
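Once kubectl is configured, you can optionally confirm the cluster and Materialize workloads are up. The namespaces below are taken from the resource tables above; exact pod names will differ per deployment:

```shell
# List the worker nodes registered with the EKS cluster.
kubectl get nodes

# Check the Materialize operator pods.
kubectl get pods -n materialize

# Check the Materialize instance pods.
kubectl get pods -n materialize-environment
```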
Using the nlb_dns_name and external_login_password_mz_system from the Terraform
output, you can connect to Materialize via the Materialize Console or
PostgreSQL-compatible tools/drivers using the following ports:
{{< yaml-table data="self_managed/default_ports" >}}
{{% include-from-yaml data="self_managed/installation" name="installation-access-methods" %}}
To connect to the Materialize Console, open a browser to
`https://<nlb_dns_name>:8080`, substituting your `<nlb_dns_name>`.
From the terminal, you can run:

```bash
open "https://$(terraform output -raw nlb_dns_name):8080/materialize"
```
{{< tip >}}
{{% include-from-yaml data="self_managed/installation" name="install-uses-self-signed-cluster-issuer" %}}
{{< /tip >}}
Log in as `mz_system`, using `external_login_password_mz_system` as the
password.

Create new users and log out.

In general, other than the initial login to create new users for new
deployments, avoid using `mz_system`, since `mz_system` is also used by the
Materialize Operator for upgrades and maintenance tasks.

For more information on authentication and authorization for Self-Managed Materialize, see:

Log in as one of the created users.
psql

{{% include-from-yaml data="self_managed/installation" name="installation-access-methods" %}}
To connect using psql, in the connection string, specify:

- `mz_system` as the user
- `<nlb_dns_name>` as the host
- `6875` as the port:

```bash
psql "postgres://mz_system@$(terraform output -raw nlb_dns_name):6875/materialize"
```

When prompted for the password, enter the
`external_login_password_mz_system` value.
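After connecting, a quick way to verify the session is to query the server version; you can then create a login user. This is a sketch: the role name and password are placeholders, and the exact `CREATE ROLE` options depend on your Materialize version and authentication settings, so check the SQL reference for your release:

```sql
-- Verify the connection by checking the server version.
SELECT mz_version();

-- Create a login user (hypothetical name and password; adjust to your needs).
CREATE ROLE app_user WITH LOGIN PASSWORD 'change-me';
```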
Create new users and log out.

In general, other than the initial login to create new users for new
deployments, avoid using `mz_system`, since `mz_system` is also used by the
Materialize Operator for upgrades and maintenance tasks.

For more information on authentication and authorization for Self-Managed Materialize, see:

Log in as one of the created users.
{{< tip >}}
To reduce cost in your demo environment, you can tweak subnet CIDRs
and instance types in `main.tf`.
{{< /tip >}}
You can customize each Terraform module independently.
For details on the Terraform modules, see both the top-level and AWS-specific READMEs.
For details on recommended instance sizing and configuration, see the AWS deployment guide.
See also:
{{% self-managed/cleanup-cloud %}}