deployment/terraform/modules/aws/README.md
This directory contains Terraform modules to provision the core AWS infrastructure for Onyx:
- `vpc`: Creates a VPC with public/private subnets sized for EKS
- `eks`: Provisions an Amazon EKS cluster, essential addons (EBS CSI, metrics server, cluster autoscaler), and optional IRSA for S3 access
- `postgres`: Creates an Amazon RDS for PostgreSQL instance and returns a connection URL
- `redis`: Creates an ElastiCache for Redis replication group
- `s3`: Creates an S3 bucket and locks access to a provided S3 VPC endpoint
- `opensearch`: Creates an Amazon OpenSearch domain for managed search workloads
- `onyx`: A higher-level composition that wires the above modules together for a complete, opinionated stack

Use the `onyx` module if you want a working EKS + Postgres + Redis + S3 stack with sane defaults. Use the individual modules if you need more granular control (see the sketch below).
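For the granular route, a rough sketch of wiring `vpc` and `eks` directly is shown below. It does not spell out the `vpc` module's inputs (its exact interface lives in its `variables.tf`); the `eks` inputs and `vpc` outputs it references (`cluster_name`, `cluster_version`, `vpc_id`, `subnet_ids`, `private_subnets`) are the ones documented later in this README.

```hcl
# Sketch only: compose the individual modules yourself instead of using onyx.
# The vpc module's inputs are omitted here (check its variables.tf); only its
# documented outputs (vpc_id, private_subnets) are referenced.
module "vpc" {
  source = "./modules/aws/vpc"
  # ... vpc inputs ...
}

module "eks" {
  source = "./modules/aws/eks"

  cluster_name    = "onyx"
  cluster_version = "1.33" # module default, per the eks inputs listed below
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
}
```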
The snippet below shows a minimal working example that:
- configures the `kubernetes` and `helm` providers against the created cluster
- provisions the full stack through the `onyx` module

```hcl
locals {
region = "us-west-2"
}
provider "aws" {
region = local.region
}
module "onyx" {
# If your root module is next to this modules/ directory:
# source = "./modules/aws/onyx"
# If referencing from this repo as a template, adjust the path accordingly.
source = "./modules/aws/onyx"
region = local.region
name = "onyx" # used as a prefix and workspace-aware
postgres_username = "pgusername"
postgres_password = "your-postgres-password"
# create_vpc = true # default true; set to false to use an existing VPC (see below)
}
resource "null_resource" "wait_for_cluster" {
provisioner "local-exec" {
command = "aws eks wait cluster-active --name ${module.onyx.cluster_name} --region ${local.region}"
}
}
data "aws_eks_cluster" "eks" {
name = module.onyx.cluster_name
depends_on = [null_resource.wait_for_cluster]
}
data "aws_eks_cluster_auth" "eks" {
name = module.onyx.cluster_name
depends_on = [null_resource.wait_for_cluster]
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.eks.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.eks.token
}
}
# Optional: expose handy outputs at the root module level
output "cluster_name" {
value = module.onyx.cluster_name
}
output "postgres_connection_url" {
value = module.onyx.postgres_connection_url
sensitive = true
}
output "redis_connection_url" {
value = module.onyx.redis_connection_url
sensitive = true
}
Apply with:
```bash
terraform init
terraform apply
```
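Rather than hardcoding `postgres_password` in the root module, you may prefer to pass it at apply time. The sketch below shows one common pattern (a sensitive root variable set via Terraform's standard `TF_VAR_` environment mechanism); the variable name here is illustrative, not part of the modules' interface.

```hcl
# Illustrative pattern: keep the password out of source control.
# Set it with:  export TF_VAR_postgres_password='...'
variable "postgres_password" {
  type      = string
  sensitive = true
}

# ...then in the module block, reference the variable instead of a literal:
#   postgres_password = var.postgres_password
```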
If you already have a VPC and subnets, disable VPC creation and provide IDs, CIDR, and the ID of the existing S3 gateway endpoint in that VPC:
module "onyx" {
source = "./modules/aws/onyx"
region = local.region
name = "onyx"
postgres_username = "pgusername"
postgres_password = "your-postgres-password"
create_vpc = false
vpc_id = "vpc-xxxxxxxx"
private_subnets = ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"]
public_subnets = ["subnet-dddd", "subnet-eeee", "subnet-ffff"]
vpc_cidr_block = "10.0.0.0/16"
s3_vpc_endpoint_id = "vpce-xxxxxxxxxxxxxxxxx"
}
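If you would rather not hardcode subnet and endpoint IDs, the AWS provider's data sources can look them up. The sketch below assumes your VPC and subnets carry tags like `Name` and `Tier`; adjust the filters to your own tagging scheme.

```hcl
# Sketch: discover existing network IDs instead of hardcoding them.
# The tag keys/values in the filters are placeholders for your own tagging scheme.
data "aws_vpc" "existing" {
  tags = { Name = "my-existing-vpc" }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }
  tags = { Tier = "private" }
}

data "aws_vpc_endpoint" "s3" {
  vpc_id       = data.aws_vpc.existing.id
  service_name = "com.amazonaws.${local.region}.s3"
}
```

The resulting `data.aws_subnets.private.ids` and `data.aws_vpc_endpoint.s3.id` can then feed `private_subnets` and `s3_vpc_endpoint_id` in the module block above.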
The `onyx` module composes `vpc`, `eks`, `postgres`, `redis`, and `s3`. Resource names are derived from `name` and the current Terraform workspace.

Outputs:

- `cluster_name`: EKS cluster name
- `postgres_connection_url` (sensitive): `postgres://...`
- `redis_connection_url` (sensitive): `hostname:port`

Inputs (common):
- `name` (default `onyx`), `region` (default `us-west-2`), `tags`
- `postgres_username`, `postgres_password`
- `create_vpc` (default `true`) or existing VPC details and `s3_vpc_endpoint_id`
- `waf_allowed_ip_cidrs`, `waf_common_rule_set_count_rules`, rate limits, geo restrictions, and logging retention
- `enable_opensearch`, sizing, credentials, and log retention

The `vpc` module outputs `vpc_id`, `private_subnets`, `public_subnets`, `vpc_cidr_block`, and `s3_vpc_endpoint_id`.

The `eks` module outputs `cluster_name`, `cluster_endpoint`, `cluster_certificate_authority_data`, and `s3_access_role_arn` (if created). Key inputs include:
- `cluster_name`, `cluster_version` (default `1.33`)
- `vpc_id`, `subnet_ids`
- `public_cluster_enabled` (default `true`), `private_cluster_enabled` (default `false`)
- `cluster_endpoint_public_access_cidrs` (optional)
- `eks_managed_node_groups` (defaults include a main and a vespa-dedicated group with GP3 volumes)
- `s3_bucket_names` (optional list). If set, creates an IRSA role and Kubernetes service account for S3 access

The `postgres`, `redis`, `s3`, and `opensearch` modules can also be used individually; among their inputs, the `redis` module exposes `auth_token` and instance sizing.

Once the cluster is active, deploy application workloads via Helm. You can use the chart in deployment/helm/charts/onyx.
```bash
# Set kubeconfig to your new cluster (if you're not using the TF providers for kubernetes/helm)
aws eks update-kubeconfig --name $(terraform output -raw cluster_name) --region ${AWS_REGION:-us-west-2}

kubectl create namespace onyx --dry-run=client -o yaml | kubectl apply -f -

# If using AWS S3 via IRSA created by the EKS module, consider disabling MinIO
# Replace the path below with the absolute or correct relative path to the onyx Helm chart
helm upgrade --install onyx /path/to/onyx/deployment/helm/charts/onyx \
  --namespace onyx \
  --set minio.enabled=false \
  --set serviceAccount.create=false \
  --set serviceAccount.name=onyx-s3-access
```
Notes:
- The `eks` module creates an IRSA role and a Kubernetes ServiceAccount named `onyx-s3-access` (by default in namespace `onyx`) when `s3_bucket_names` is provided. Use that service account in the Helm chart to avoid static S3 credentials.
- If you are not using S3 via IRSA, leave `minio.enabled=true` (default) and skip the IRSA setup.
- The `onyx` module automatically includes the workspace in resource names.
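To confirm the IRSA wiring before pointing the chart at it, you can inspect the ServiceAccount. The check below assumes the default name and namespace from the notes above; IRSA associates the IAM role via the `eks.amazonaws.com/role-arn` annotation.

```bash
# Assumes the default service account name/namespace from the notes above.
kubectl -n onyx get serviceaccount onyx-s3-access \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```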