This page offers a walkthrough of a common configuration for GitLab on AWS using the official Linux package. You should customize it to accommodate your needs.
> [!note]
> For organizations with 1,000 users or less, the recommended AWS installation method is to launch an EC2 single box Linux package installation and implement a snapshot strategy for backing up the data. See the 20 RPS or 1,000 user reference architecture for more information.

> [!note]
> This document is an installation guide for a proof of concept instance. It is not a reference architecture, and it does not result in a highly available configuration. It's highly recommended to use the GitLab Environment Toolkit (GET) instead.
Following this guide exactly results in a proof of concept instance that roughly equates to a scaled down version of a two availability zone implementation of the Non-HA 40 RPS or 2,000 User Reference Architecture. The 2K reference architecture is not HA because it is primarily intended to provide some scaling while keeping costs and complexity low. The 60 RPS or 3,000 User Reference Architecture is the smallest size that is GitLab HA. It has additional service roles to achieve HA, most notably it uses Gitaly Cluster (Praefect) to achieve HA for Git repository storage and specifies triple redundancy.
GitLab maintains and tests two main types of Reference Architectures. The Linux package architectures are implemented on instance compute while Cloud Native Hybrid architectures maximize the use of a Kubernetes cluster. Cloud Native Hybrid reference architecture specifications are addendum sections to the Reference Architecture size pages that start by describing the Linux package architecture. For example, the 60 RPS or 3,000 User Cloud Native Reference Architecture is in the subsection titled Cloud Native Hybrid reference architecture with Helm Charts (alternative) in the 60 RPS or 3,000 User Reference Architecture page.
The Infrastructure as Code tooling GitLab Environment Toolkit (GET) is the best place to start for building with the Linux package on AWS, especially if you are targeting an HA setup. While it does not automate everything, it does complete complex setups like Gitaly Cluster (Praefect) for you. GET is open source, so anyone can build on top of it and contribute improvements to it.
The GitLab Environment Toolkit (GET) is a set of opinionated Terraform and Ansible scripts. These scripts help with the deployment of Linux package or Cloud Native Hybrid environments on selected cloud providers and are used by GitLab developers for GitLab Dedicated (for example).
You can use the GitLab Environment Toolkit to deploy a Cloud Native Hybrid environment on AWS. However, it's not required and may not support every valid permutation. That said, the scripts are presented as-is and you can adapt them accordingly.
For the most part, we make use of the Linux package in our setup, but we also leverage native AWS services. Instead of using the Linux package-bundled PostgreSQL and Redis, we use Amazon RDS and ElastiCache.
In this guide, we go through a multi-node setup: we start by configuring our Virtual Private Cloud and subnets, later integrate services such as RDS for our database server and ElastiCache as a Redis cluster, and finally manage them in an auto scaling group with custom scaling policies.
In addition to having a basic familiarity with AWS and Amazon EC2, you need:

- An AWS account.
- A domain name for your GitLab instance.
- An SSL/TLS certificate to secure your domain. If you do not already own one, you can provision a free public certificate through AWS Certificate Manager (ACM).

> [!note]
> It can take a few hours to validate a certificate provisioned through ACM. To avoid delays later, request your certificate as soon as possible.
The following diagram outlines the recommended architecture.
This setup uses several AWS services, including EC2, VPC, RDS for PostgreSQL, ElastiCache for Redis, S3, Elastic Load Balancing, Route 53, and CloudWatch; see each service's pricing page for cost details.

As we are using Amazon S3 object storage, our EC2 instances must have read, write, and list permissions for our S3 buckets. To avoid embedding AWS keys in our GitLab configuration, we make use of an IAM role to grant our GitLab instances this access. We must create an IAM policy to attach to our IAM role:
1. Go to the IAM dashboard and select **Policies** in the left menu.
1. Select **Create policy**, select the **JSON** tab, and add a policy. We want to follow security best practices and grant least privilege, giving our role only the permissions needed to perform the required actions. Assuming our buckets are prefixed with `gl-` as shown in the diagram, add the following policy:

   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject",
                   "s3:DeleteObject",
                   "s3:PutObjectAcl"
               ],
               "Resource": "arn:aws:s3:::gl-*/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:ListBucket",
                   "s3:AbortMultipartUpload",
                   "s3:ListMultipartUploadParts",
                   "s3:ListBucketMultipartUploads"
               ],
               "Resource": "arn:aws:s3:::gl-*"
           }
       ]
   }
   ```

1. Select **Next** to review the policy. Give your policy a name (we use `gl-s3-policy`), and select **Create policy**.
Now create the role:

1. From the IAM dashboard, select **Roles** in the left menu, and select **Create role**.
1. For the trusted entity type, select **AWS service**. For the **Use case**, select **EC2** for both the dropdown list and radio buttons, and select **Next**.
1. In the policy filter, search for the `gl-s3-policy` we previously created, select it, and select **Next**.
1. Give the role a name (we use `GitLabS3Access`). If required, add some tags. Select **Create role**.

We use this role when we create a launch template later on.
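If you prefer scripting this setup, the same policy and role can be sketched with the AWS CLI. This is an illustrative sketch, not part of the console walkthrough above; `<account-id>` is a placeholder, and the trust policy simply lets EC2 assume the role:

```shell
# Create the policy from the JSON document above (saved as gl-s3-policy.json).
aws iam create-policy \
  --policy-name gl-s3-policy \
  --policy-document file://gl-s3-policy.json

# Create the role with a trust policy that lets EC2 assume it.
aws iam create-role \
  --role-name GitLabS3Access \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name GitLabS3Access \
  --policy-arn "arn:aws:iam::<account-id>:policy/gl-s3-policy"

# EC2 consumes the role through an instance profile.
aws iam create-instance-profile --instance-profile-name GitLabS3Access
aws iam add-role-to-instance-profile \
  --instance-profile-name GitLabS3Access \
  --role-name GitLabS3Access
```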
> [!note]
> GitLab supports AWS Instance Metadata Service Version 2 (IMDSv2). GitLab automatically uses IMDSv2 when available and falls back to IMDSv1 if needed. You can safely require IMDSv2 on your EC2 instances for enhanced security.
We start by creating a VPC for our GitLab cloud infrastructure, then we can create subnets to have public and private instances in at least two Availability Zones (AZs). Public subnets require a route table and an associated internet gateway.
We now create a VPC, a virtual networking environment that you control:
1. Sign in to Amazon Web Services.
1. Select **Your VPCs** from the left menu and then select **Create VPC**. At the "Name tag" enter `gitlab-vpc` and at the "IPv4 CIDR block" enter `10.0.0.0/16`. If you don't require dedicated hardware, you can leave "Tenancy" as default. Select **Create VPC** when ready.
1. Select the VPC, select **Actions**, select **Edit VPC settings**, and check **Enable DNS resolution**. Select **Save** when done.
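For reference, an equivalent VPC can be sketched with the AWS CLI; the `<vpc-id>` placeholder comes from the output of the first command:

```shell
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=gitlab-vpc}]'

# DNS support and DNS hostnames are separate attributes,
# so they take one call each.
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-hostnames '{"Value":true}'
```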
### Subnets

Now, let's create some subnets in different Availability Zones. Make sure that each subnet is associated to the VPC we just created and that CIDR blocks don't overlap. This also allows us to enable multi AZ for redundancy.
We create private and public subnets to match load balancers and RDS instances as well:
1. Select **Subnets** from the left menu.
1. Select **Create subnet**. Give it a descriptive name tag based on the IP, for example `gitlab-public-10.0.0.0`, select the VPC we created previously, select an availability zone (we use `us-west-2a`), and at the **IPv4 CIDR block** let's give it a /24 subnet `10.0.0.0/24`.
Follow the same steps to create all subnets:
| Name tag | Type | Availability Zone | CIDR block |
|---|---|---|---|
| `gitlab-public-10.0.0.0` | public | `us-west-2a` | `10.0.0.0/24` |
| `gitlab-private-10.0.1.0` | private | `us-west-2a` | `10.0.1.0/24` |
| `gitlab-public-10.0.2.0` | public | `us-west-2b` | `10.0.2.0/24` |
| `gitlab-private-10.0.3.0` | private | `us-west-2b` | `10.0.3.0/24` |
Once all the subnets are created, enable **Auto-assign IPv4** for the two public subnets:

1. Select each public subnet in turn, select **Actions**, and select **Edit subnet settings**.
1. Check the **Enable auto-assign public IPv4 address** option and select **Save**.
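The same four subnets can be created with the AWS CLI if you prefer; this sketch shows one subnet and the auto-assign setting, with placeholders for the IDs:

```shell
# Repeat for each row of the table above, adjusting the name tag,
# availability zone, and CIDR block.
aws ec2 create-subnet \
  --vpc-id <vpc-id> \
  --availability-zone us-west-2a \
  --cidr-block 10.0.0.0/24 \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=gitlab-public-10.0.0.0}]'

# Enable auto-assign public IPv4 on each of the two public subnets.
aws ec2 modify-subnet-attribute \
  --subnet-id <public-subnet-id> \
  --map-public-ip-on-launch
```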
Now, still on the same dashboard, go to Internet Gateways and create a new one:
1. Select **Internet Gateways** from the left menu.
1. Select **Create internet gateway**, give it the name `gitlab-gateway`, and select **Create**.
1. Select it from the table, and then under the **Actions** dropdown list choose "Attach to VPC".
1. Choose `gitlab-vpc` from the list and select **Attach**.
Instances deployed in our private subnets must connect to the internet for updates, but should not be reachable from the public internet. To achieve this, we make use of NAT Gateways deployed in each of our public subnets:
1. Go to the VPC dashboard and select **NAT Gateways** in the left menu.
1. Select **Create NAT gateway**. Set the availability mode to **Zonal**, and for the subnet, select `gitlab-public-10.0.0.0` from the dropdown list.
1. Assign it an Elastic IP, add tags if needed, and select **Create NAT gateway**.
1. Create a second NAT gateway, but this time place it in the second public subnet, `gitlab-public-10.0.2.0`.
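A CLI sketch of one NAT gateway, assuming you allocate a fresh Elastic IP for it; repeat with the second public subnet's ID:

```shell
# Each NAT gateway needs its own Elastic IP allocation.
aws ec2 allocate-address --domain vpc

aws ec2 create-nat-gateway \
  --subnet-id <gitlab-public-10.0.0.0-subnet-id> \
  --allocation-id <eip-allocation-id>
```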
We must create a route table for our public subnets to reach the internet via the internet gateway we created in the previous step.
On the VPC dashboard:
1. Select **Route Tables** from the left menu.
1. Select **Create route table**.
1. At the "Name tag" enter `gitlab-public` and choose `gitlab-vpc` under "VPC". Select **Create**.

We now must add our internet gateway as a new target and have it receive traffic from any destination.

1. Select the `gitlab-public` route table to show the options at the bottom.
1. Select the **Routes** tab, select **Edit routes > Add route**, and set `0.0.0.0/0` as the destination. In the target column, select the **Internet Gateway** and select the `gitlab-gateway` we created previously. Select **Save changes** when done.

Next, we must associate the public subnets to the route table:

1. Select the **Subnet associations** tab and select **Edit subnet associations**.
1. Check only the public subnets and select **Save associations**.

We also must create two private route tables so that instances in each private subnet can reach the internet via the NAT gateway in the corresponding public subnet in the same availability zone.

1. Follow the same steps as above to create two private route tables. Name them `gitlab-private-a` and `gitlab-private-b`.
1. Next, add a new route to each of the private route tables where the destination is `0.0.0.0/0` and the target is one of the NAT gateways we created earlier:
   1. Add the NAT gateway we created in `gitlab-public-10.0.0.0` as the target for the new route in the `gitlab-private-a` route table.
   1. Similarly, add the NAT gateway in `gitlab-public-10.0.2.0` as the target for the new route in `gitlab-private-b`.
1. Lastly, associate each private subnet with a private route table:
   1. Associate `gitlab-private-10.0.1.0` with `gitlab-private-a`.
   1. Associate `gitlab-private-10.0.3.0` with `gitlab-private-b`.

We create a load balancer to evenly distribute inbound traffic across our GitLab application servers. Based on the scaling policies we create later, instances are added to or removed from our load balancer as needed. Additionally, the load balancer performs health checks on our instances.
AWS offers two approaches for this architecture:
Choose the approach that best fits your deployment:
**NLB Only**:

```mermaid
graph TB
    subgraph Diagram1["NLB Only"]
        U1["Users"]
        NLB1["Network Load Balancer<br>(Port 22, 80, 443)"]
        R1A["Rails Node 1 (Port 22, 80)"]
        R1B["Rails Node 2 (Port 22, 80)"]
        U1 -->|SSH| NLB1
        U1 -->|HTTP| NLB1
        U1 -->|HTTPS| NLB1
        NLB1 -->|Port 22| R1A
        NLB1 -->|Port 22| R1B
        NLB1 -->|"Port 80, 443"| R1A
        NLB1 -->|"Port 80, 443"| R1B
    end
```
**Hybrid NLB/ALB**:

```mermaid
graph TB
    subgraph Diagram2["Hybrid NLB/ALB"]
        U2["Users"]
        NLB2["Network Load Balancer<br>(Port 22, 443)"]
        ALB["Application Load Balancer (Port 443)"]
        R2A["Rails Node 1 (Port 22, 80)"]
        R2B["Rails Node 2 (Port 22, 80)"]
        U2 -->|SSH| NLB2
        U2 -->|HTTPS| NLB2
        NLB2 -->|Port 22| R2A
        NLB2 -->|Port 22| R2B
        NLB2 -->|Port 443| ALB
        ALB -->|Port 80| R2A
        ALB -->|Port 80| R2B
    end
```
{{< tabs >}}
{{< tab title="Network Load Balancer (NLB) Only" >}}
This section describes the simpler NLB-only approach where a single Network Load Balancer handles all traffic types, routing SSH, HTTP, and HTTPS directly to Rails nodes.
We need a security group for this architecture:
1. **NLB Security Group** (`gitlab-nlb-sec-group`):
- Inbound: TCP port 22 from anywhere (or restrict to trusted IP ranges for SSH)
- Inbound: TCP port 80 from anywhere
- Inbound: TCP port 443 from anywhere
- Outbound: All traffic
To create this security group:
1. From the EC2 dashboard, select **Security Groups** from the left menu bar.
1. Select **Create security group**.
1. Give it a descriptive name and description, and select the `gitlab-vpc` from the **VPC** dropdown list.
1. Add the inbound rules as specified above.
1. When done, select **Create security group**.
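If you are scripting the setup, a roughly equivalent security group might be created like this; `<vpc-id>` and `<sg-id>` are placeholders from earlier output:

```shell
aws ec2 create-security-group \
  --group-name gitlab-nlb-sec-group \
  --description "Inbound SSH/HTTP/HTTPS for the GitLab NLB" \
  --vpc-id <vpc-id>

# Open the three listener ports to the world; tighten the SSH
# source range here if you restrict it in the console, too.
for port in 22 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id <sg-id> --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```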
Create the target groups:
1. On the EC2 dashboard, select **Target Groups** from the left menu bar.
1. Select **Create target group** for the **SSH Target Group**:
| Setting | Value |
|---------|-------|
| Target type | Instances |
| Target group name | `gitlab-nlb-ssh-target` |
| Protocol | TCP |
| Port | 22 |
| VPC | `gitlab-vpc` |
| Health check protocol | TCP |
Select **Next** twice, then **Create target group**. You will register targets later.
1. Select **Create target group** again for the **HTTP Target Group**:
| Setting | Value |
|---------|-------|
| Target type | Instances |
| Target group name | `gitlab-nlb-http-target` |
| Protocol | TCP |
| Port | 80 |
| VPC | `gitlab-vpc` |
| Health check protocol | HTTP |
| Health check path | `/-/readiness` |
> [!note]
> You must add [the VPC IP Address Range (CIDR)](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-security-groups.html) to the [IP allowlist](../../administration/monitoring/ip_allowlist.md) for the [Health check endpoints](../../administration/monitoring/health_check.md).
Select **Next**, choose **Register Later**, then **Next** twice and **Create target group**.
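The two target groups can also be sketched with the AWS CLI, mirroring the settings in the tables above:

```shell
aws elbv2 create-target-group \
  --name gitlab-nlb-ssh-target \
  --protocol TCP --port 22 \
  --vpc-id <vpc-id> --target-type instance \
  --health-check-protocol TCP

aws elbv2 create-target-group \
  --name gitlab-nlb-http-target \
  --protocol TCP --port 80 \
  --vpc-id <vpc-id> --target-type instance \
  --health-check-protocol HTTP \
  --health-check-path /-/readiness
```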
Create the network load balancer:
1. On the EC2 dashboard, look for **Load Balancers** in the left navigation bar and select **Create Load Balancer**.
1. Choose **Network Load Balancer** and select **Create**.
1. Configure the load balancer with the following settings:
| Setting | Value |
|---------|-------|
| Load Balancer name | `gitlab-nlb` |
| Scheme | Internet-facing |
| IP address type | IPv4 |
| VPC | `gitlab-vpc` |
| Mapping | Select both public subnets |
| Security group | `gitlab-nlb-sec-group` |
1. In the **Listeners and routing** section, configure:
| Protocol | Port | Target group |
|----------|------|--------------|
| TCP | 22 | `gitlab-nlb-ssh-target` |
| TCP | 80 | `gitlab-nlb-http-target` |
| TLS | 443 | `gitlab-nlb-http-target` |
For the TLS listener on port 443, under **Security Policy** settings:
- **Policy name**: Select a predefined security policy from the dropdown list. See [Predefined SSL Security Policies for Network Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies) in the AWS documentation. Check the GitLab codebase for a list of [supported SSL ciphers and protocols](https://gitlab.com/gitlab-org/gitlab/-/blob/9ee7ad433269b37251e0dd5b5e00a0f00d8126b4/lib/support/nginx/gitlab-ssl#L97-99).
- **Default SSL/TLS server certificate**: Select an SSL/TLS certificate from ACM or upload a certificate to IAM.
1. Select **Create load balancer**.
> [!note]
> Targets for the `gitlab-nlb-ssh-target` and `gitlab-nlb-http-target` target groups are automatically registered when instances launch in the [auto scaling group](#create-an-auto-scaling-group) created later in this guide.
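For reference, the load balancer and its listeners map to CLI calls like the following sketch; the ARNs are placeholders, and the security policy name is an assumption to adjust per the AWS documentation linked above:

```shell
aws elbv2 create-load-balancer \
  --name gitlab-nlb --type network --scheme internet-facing \
  --subnets <public-subnet-a-id> <public-subnet-b-id> \
  --security-groups <gitlab-nlb-sec-group-id>

# One listener per row of the table above; the TCP port 80
# listener is analogous to the port 22 one.
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TCP --port 22 \
  --default-actions Type=forward,TargetGroupArn=<gitlab-nlb-ssh-target-arn>

aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TLS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
  --default-actions Type=forward,TargetGroupArn=<gitlab-nlb-http-target-arn>
```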
{{< /tab >}}
{{< tab title="Hybrid NLB->ALB Approach" >}}
This section describes a hybrid approach where a Network Load Balancer handles SSH traffic and an Application Load Balancer handles HTTP/HTTPS traffic. The NLB routes TCP port 22 (SSH) directly to Rails nodes and TCP port 443 (HTTPS) to the ALB, and the ALB terminates SSL/TLS and routes HTTP traffic to Rails nodes on port 80. This approach enables AWS WAF integration and better separation of concerns.
We need three security groups for this architecture:
1. **NLB Security Group** (`gitlab-nlb-sec-group`):
- Inbound: TCP port 22 from anywhere (or restrict to trusted IP ranges for SSH)
- Inbound: TCP port 443 from anywhere (or restrict to trusted IP ranges for HTTPS)
- Outbound: TCP port 22 to `gitlab-rails-sec-group`
- Outbound: TCP port 443 to `gitlab-alb-sec-group`
1. **ALB Security Group** (`gitlab-alb-sec-group`):
- Inbound: TCP port 443 from `gitlab-nlb-sec-group`
- Inbound: TCP port 80 from `gitlab-rails-sec-group`
- Outbound: TCP port 80 to `gitlab-rails-sec-group`
1. **Rails Security Group** (`gitlab-rails-sec-group`):
- Inbound: TCP port 22 from `gitlab-nlb-sec-group`
- Inbound: TCP port 80 from `gitlab-alb-sec-group`
To create these security groups:
1. From the EC2 dashboard, select **Security Groups** from the left menu bar.
1. Select **Create security group** for each of the three groups listed above:
1. Give each a descriptive name and description, and select the `gitlab-vpc` from the **VPC** dropdown list.
1. Add the inbound rules as specified above. When selecting a source, choose **Security group** and select the appropriate security group from the dropdown.
1. When done, select **Create security group**.
Create the target groups:
1. On the EC2 dashboard, select **Target Groups** from the left menu bar.
1. Create the **NLB SSH Target Group** with the following settings:
| Setting | Value |
|---------|-------|
| Target type | Instances |
| Target group name | `gitlab-nlb-ssh-target` |
| Protocol | TCP |
| Port | 22 |
| VPC | `gitlab-vpc` |
| Health check protocol | TCP |
Select **Next** twice, then **Create target group**. You will register targets later.
1. Select **Create target group** again for the **NLB to ALB Target Group**:
| Setting | Value |
|---------|-------|
| Target type | Application Load Balancer |
| Target group name | `gitlab-nlb-alb-target` |
| Protocol | TCP |
| Port | 443 |
| VPC | `gitlab-vpc` |
| Health check protocol | HTTPS |
| Health check path | `/-/readiness` |
Select **Next**, choose **Register Later** for the Application Load Balancer, then **Next** and **Create target group**.
1. Select **Create target group** again for the **ALB HTTP Target Group**:
| Setting | Value |
|---------|-------|
| Target type | Instances |
| Target group name | `gitlab-alb-http-target` |
| Protocol | HTTP |
| Port | 80 |
| VPC | `gitlab-vpc` |
| Protocol version | HTTP1.1 |
| Health check protocol | HTTP |
| Health check path | `/-/readiness` |
> [!note]
> You must add [the VPC IP Address Range (CIDR)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-security-groups.html) to the [IP allowlist](../../administration/monitoring/ip_allowlist.md) for the [Health check endpoints](../../administration/monitoring/health_check.md).
Select **Next**, choose **Register Later**, then **Next** twice and **Create target group**.
Create the application load balancer:
1. On the EC2 dashboard, look for **Load Balancers** in the left navigation bar and select **Create Load Balancer**.
1. Choose **Application Load Balancer** and select **Create**.
1. Configure the load balancer with the following settings:
| Setting | Value |
|---------|-------|
| Load Balancer name | `gitlab-alb` |
| Scheme | Internet-facing |
| IP address type | IPv4 |
| VPC | `gitlab-vpc` |
| Mapping | Select both public subnets `gitlab-public-10.0.0.0` and `gitlab-public-10.0.2.0`|
| Security group | `gitlab-alb-sec-group` |
1. In the **Listeners and routing** section, configure:
| Protocol | Port | Action | Target group |
|----------|------|--------|--------------|
| HTTPS | 443 | Forward to | `gitlab-alb-http-target` |
For the HTTPS listener, select your ACM certificate and choose an appropriate security policy (see [Predefined SSL Security Policies for Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html)).
1. Select **Create load balancer**.
Create the network load balancer:
1. On the EC2 dashboard, look for **Load Balancers** in the left navigation bar and select **Create Load Balancer**.
1. Choose **Network Load Balancer** and select **Create**.
1. Configure the load balancer with the following settings:
| Setting | Value |
|---------|-------|
| Load Balancer name | `gitlab-nlb` |
| Scheme | Internet-facing |
| IP address type | IPv4 |
| VPC | `gitlab-vpc` |
| Mapping | Select both public subnets `gitlab-public-10.0.0.0` and `gitlab-public-10.0.2.0`|
| Security group | `gitlab-nlb-sec-group` |
1. In the **Listeners and routing** section, configure:
| Protocol | Port | Target group |
|----------|------|--------------|
| TCP | 22 | `gitlab-nlb-ssh-target` |
| TCP | 443 | `gitlab-nlb-alb-target` |
1. Select **Create load balancer**.
Register the ALB as a target for the NLB:
1. On the EC2 dashboard, select **Target Groups** from the left menu bar.
1. Select the `gitlab-nlb-alb-target` target group.
1. On the **Targets** tab, select **Register targets**.
1. Select the `gitlab-alb` Application Load Balancer and select **Register pending targets**.
1. Select **Save**.
> [!note]
> Targets for the `gitlab-nlb-ssh-target` and `gitlab-alb-http-target` target groups are automatically registered when instances launch in the [auto scaling group](#create-an-auto-scaling-group) created later in this guide.
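For scripted setups, registering the ALB with the NLB target group is a single CLI call; for an Application Load Balancer target type, the target ID is the ALB's ARN:

```shell
aws elbv2 register-targets \
  --target-group-arn <gitlab-nlb-alb-target-arn> \
  --targets Id=<gitlab-alb-arn>
```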
{{< /tab >}}
{{< /tabs >}}
After the load balancer is up and running, you can revisit your security groups to restrict access so that traffic flows only through the load balancer, along with any other requirements you might have.
Some attributes can only be configured after the load balancer has been created. Here are a couple of features you might configure based on your requirements:
- [Client IP preservation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation) is enabled for the target groups by default. This allows the IP of the client connected in the Load Balancer to be preserved in the GitLab application. You can enable/disable this based on your requirements.
- [Proxy Protocol](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol) is disabled for the target groups by default. This allows the Load Balancer to send additional information in the proxy protocol headers. If you want to enable this, make sure that other environment components like internal load balancers, NGINX, etc. are configured as well. For this POC we only need to enable it in the [GitLab node later](#proxy-protocol).
### Configure DNS for Load Balancer
On the Route 53 dashboard, select **Hosted zones** in the left navigation bar:
1. Select an existing hosted zone or, if you do not already have one for your domain, select **Create Hosted Zone**, enter your domain name, and select **Create**.
1. Select **Create record** and provide the following values:
1. **Name**: Use the domain name (the default value) or enter a subdomain.
1. **Type**: Select **A - IPv4 address**.
1. **Alias**: Defaults to **disabled**. Enable this option.
1. **Route traffic to**: Select **Alias to Network Load Balancer**.
1. **Region**: Select the region where the Network Load Balancer resides.
1. **Choose network load balancer**: Select the Network Load Balancer we created earlier.
1. **Routing Policy**: We use **Simple** but you can choose a different policy based on your use case.
1. **Evaluate Target Health**: We set this to **No** but you can choose to have the load balancer route traffic based on target health.
1. Select **Create**.
1. If you registered your domain through Route 53, you're done. If you used a different domain registrar, you must update the DNS records with your registrar:
1. Select **Hosted zones** and select the domain you added previously.
1. You see a list of `NS` records. From your domain registrar's administrator panel, add each of these as `NS` records to your domain's DNS records. These steps may vary between domain registrars. If you're stuck, Google **"name of your registrar" add DNS records** and you should find a help article specific to your domain registrar.
The exact steps vary depending on which registrar you use and are beyond the scope of this guide.
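If your zone is hosted in Route 53, the alias record from the steps above can also be created with the CLI. The domain name below is an example; the two load balancer values come from the `describe-load-balancers` output:

```shell
# Look up the NLB's DNS name and its canonical hosted zone ID.
aws elbv2 describe-load-balancers --names gitlab-nlb \
  --query 'LoadBalancers[0].[DNSName,CanonicalHostedZoneId]'

cat > alias-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "gitlab.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<nlb-canonical-hosted-zone-id>",
        "DNSName": "<nlb-dns-name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch file://alias-record.json
```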
## PostgreSQL with RDS
For our database server we use Amazon RDS for PostgreSQL which offers Multi AZ
for redundancy ([Aurora is **not** supported](https://gitlab.com/gitlab-partners-public/aws/aws-known-issues/-/issues/10)). First we create a security group and subnet group, then we
create the actual RDS instance.
### RDS Security Group
We need a security group for our database that allows inbound traffic from the GitLab instances we deploy later on:
1. From the EC2 dashboard, select **Security Groups** from the left menu bar.
1. Select **Create security group**.
1. Give it a name (we use `gitlab-rds-sec-group`), a description, and select the `gitlab-vpc` from the **VPC** dropdown list.
1. In the **Inbound rules** section, select **Add rule** and set the following:
1. **Type**: search for and select the **PostgreSQL** rule.
1. **Source type**: set as "Custom".
1. **Source**: select the appropriate security group based on your load balancer approach:
- **NLB only**: `gitlab-nlb-sec-group`
- **Hybrid NLB->ALB**: `gitlab-rails-sec-group`
1. When done, select **Create security group**.
### RDS Subnet Group
1. Go to the RDS dashboard and select **Subnet Groups** from the left menu.
1. Select **Create DB Subnet Group**.
1. Under **Subnet group details**, enter a name (we use `gitlab-rds-group`), a description, and choose the `gitlab-vpc` from the VPC dropdown list.
1. From the **Availability Zones** dropdown list, select the Availability Zones that include the subnets you've configured. In our case, we add `us-west-2a` and `us-west-2b`.
1. From the **Subnets** dropdown list, select the two private subnets (`10.0.1.0/24` and `10.0.3.0/24`) as we defined them in the [subnets section](#subnets).
1. Select **Create** when ready.
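The equivalent CLI call, with the two private subnet IDs as placeholders:

```shell
aws rds create-db-subnet-group \
  --db-subnet-group-name gitlab-rds-group \
  --db-subnet-group-description "Private subnets for the GitLab RDS instance" \
  --subnet-ids <private-subnet-a-id> <private-subnet-b-id>
```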
### Create the database
> [!warning]
> Avoid using burstable instances (t class instances) for the database as this could lead to performance issues due to CPU credits running out during sustained periods of high load.
Now, it's time to create the database:
1. Go to the RDS dashboard, select **Databases** from the left menu, and select **Create database**.
1. Select **Standard Create** for the database creation method.
1. Select **PostgreSQL** as the database engine and select the minimum PostgreSQL version as defined for your GitLab version in our [database requirements](../requirements.md#postgresql).
1. Because this is a production server, let's choose **Production** from the **Templates** section.
1. Under **Availability & durability**, select **Multi-AZ DB instance** to have a standby RDS instance provisioned in a different [Availability Zone](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
1. Under **Settings**, use:
- `gitlab-db-ha` for the DB instance identifier.
- `gitlab` for a master username.
- A very secure password for the master password.
Make a note of these as we need them later.
1. For the DB instance size, select **Standard classes** and select an instance size that meets your requirements from the dropdown list. We use a `db.m5.large` instance.
1. Under **Storage**, configure the following:
1. Select **Provisioned IOPS (SSD)** from the storage type dropdown list. Provisioned IOPS (SSD) storage is best suited for this use (though you can choose General Purpose (SSD) to reduce the costs). Read more about it at [Storage for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html).
1. Allocate storage and set provisioned IOPS. We use the minimum values, `100` and `1000`.
1. Enable storage autoscaling (optional) and set a maximum storage threshold.
1. Under **Connectivity**, configure the following:
1. Under the **Virtual Private Cloud (VPC)** dropdown list select the VPC we created earlier (`gitlab-vpc`).
1. Under the **DB subnet group** select the subnet group (`gitlab-rds-group`) we created earlier.
1. Set public access to **No**.
1. Under **VPC security group**, select **Choose existing** and select the `gitlab-rds-sec-group` we created previously from the dropdown list.
1. Under **Additional configuration** leave the database port as the default `5432`.
1. For **Database authentication**, select **Password authentication**.
1. Expand the **Additional configuration** section and complete the following:
1. The initial database name. We use `gitlabhq_production`.
1. Configure your preferred backup settings.
1. The only other change we make here is to disable auto minor version updates under **Maintenance**.
1. Leave all the other settings as is or tweak according to your needs.
1. If you're happy, select **Create database**.
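A rough CLI equivalent of the settings above, for reference; the engine version and password are placeholders you must supply:

```shell
aws rds create-db-instance \
  --db-instance-identifier gitlab-db-ha \
  --engine postgres \
  --engine-version <postgres-version> \
  --db-instance-class db.m5.large \
  --multi-az \
  --allocated-storage 100 \
  --storage-type io1 --iops 1000 \
  --master-username gitlab \
  --master-user-password '<very-secure-password>' \
  --db-name gitlabhq_production \
  --db-subnet-group-name gitlab-rds-group \
  --vpc-security-group-ids <gitlab-rds-sec-group-id> \
  --no-publicly-accessible
```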
Now that the database is created, let's move on to setting up Redis with ElastiCache.
## Redis with ElastiCache
ElastiCache is an in-memory hosted caching solution. Redis maintains its own
persistence and is used to store session data, temporary cache information, and background job queues for the GitLab application.
### Create a Redis Security Group
1. Go to the EC2 dashboard.
1. Select **Security Groups** from the left menu.
1. Select **Create security group** and fill in the details. Give it a name (we use `gitlab-redis-sec-group`),
add a description, and choose the VPC we created earlier (`gitlab-vpc`).
1. In the **Inbound rules** section, select **Add rule** and add a **Custom TCP** rule, set port `6379`, and set the "Custom" source based on your load balancer approach:
- **NLB only**: `gitlab-nlb-sec-group`
- **Hybrid NLB->ALB**: `gitlab-rails-sec-group`
1. When done, select **Create security group**.
### Redis Subnet Group
1. Go to the ElastiCache dashboard from your AWS console.
1. Go to **Subnet Groups** in the left menu, and create a new subnet group (we name ours `gitlab-redis-group`).
Select the VPC we created earlier (`gitlab-vpc`) and ensure the selected subnets table only contains the [private subnets](#subnets).
1. Select **Create** when ready.

### Create the Redis Cluster
1. Go back to the ElastiCache dashboard.
1. Select **Redis caches** on the left menu and select **Create Redis cache** to create a new
Redis cluster.
1. Under **Deployment option** select **Design your own cache**.
1. Under **Creation method** select **Cluster cache**.
1. Under **Cluster mode** select **Disabled** as it is [not supported](../../administration/redis/replication_and_failover_external.md#requirements). Even without cluster mode on, you still get the
chance to deploy Redis in multiple availability zones.
1. Under **Cluster info** give the cluster a name (`gitlab-redis`) and a description.
1. Under **Location**, select **AWS Cloud** and enable the **Multi-AZ** option.
1. In the Cluster settings section:
1. For the Engine version, select the Redis version as defined for your GitLab version in our [Redis requirements](../requirements.md#redis).
1. Leave the port as `6379` because this is what we previously used in our Redis security group.
1. Select the node type (at least `cache.t3.medium`, but adjust to your needs) and the number of replicas.
1. In the Connectivity settings section:
1. **Network type**: IPv4
1. **Subnet groups**: Select **Choose existing subnet group** and choose the `gitlab-redis-group` we had previously created.
1. In the Availability Zone placements section:
1. Manually select the preferred availability zones, and under "Replica 2"
choose a different zone than the other two.

1. Select **Next**.
1. In the security settings, edit the security groups and choose the
`gitlab-redis-sec-group` we had previously created. Select **Next**.
1. Leave the rest of the settings to their default values or edit to your liking.
1. When done, select **Create**.
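For reference, a comparable Redis setup (one primary plus two replicas, cluster mode disabled) can be sketched with the CLI; the engine version is a placeholder:

```shell
aws elasticache create-replication-group \
  --replication-group-id gitlab-redis \
  --replication-group-description "Redis for GitLab" \
  --engine redis \
  --engine-version <redis-version> \
  --cache-node-type cache.t3.medium \
  --num-cache-clusters 3 \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --cache-subnet-group-name gitlab-redis-group \
  --security-group-ids <gitlab-redis-sec-group-id> \
  --port 6379
```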
## Setting up Bastion Hosts
Because our GitLab instances are in private subnets, we need a way to connect
to these instances with SSH for actions that include making configuration changes
and performing upgrades. One way of doing this is by using a [bastion host](https://en.wikipedia.org/wiki/Bastion_host),
sometimes also referred to as a jump box.
> [!note]
> If you do not want to maintain bastion hosts, you can set up [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) for access to instances. This is beyond the scope of this document.
### Create Bastion Host A
1. Go to the EC2 Dashboard and select **Launch instance**.
1. In the **Name and tags** section, set the **Name** to `Bastion Host A`.
1. Select the latest **Ubuntu Server LTS (HVM)** AMI. Check the GitLab documentation for the [latest supported OS version](../package/_index.md).
1. Choose an instance type. We use a `t2.micro` as we only use the bastion host to SSH into our other instances.
1. In the **Key pair** section, select **Create new key pair**.
1. Give the key pair a name (we use `bastion-host-a`) and save the `bastion-host-a.pem` file for later use.
1. Edit the Network settings section:
1. Under **VPC**, select the `gitlab-vpc` from the dropdown list.
1. Under **Subnet**, select the public subnet we created earlier (`gitlab-public-10.0.0.0`).
1. Check that under **Auto-assign Public IP** you have **Disabled** selected. An [Elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) is assigned later to the host in the [next section](#assign-elastic-ip-to-the-bastion-host-a).
1. Under **Firewall** select **Create security group**, enter a **Security group name** (we use `bastion-sec-group`), and add a description.
1. We enable SSH access from anywhere (`0.0.0.0/0`). If you want stricter security, specify a single IP address or an IP address range in CIDR notation.
1. For storage, we leave everything as default and only add an 8 GB root volume. We do not store anything on this instance.
1. Review all your settings and, if you're happy, select **Launch Instance**.
#### Assign Elastic IP to the Bastion Host A
1. Go to the EC2 Dashboard and select **Network & Security**.
1. Select **Elastic IPs** and set the `Network border group` to `us-west-2`.
1. Select **Allocate**.
1. Select the Elastic IP address that was created.
1. Select **Actions** and choose **Associate Elastic IP address**.
1. Under the **Resource Type** select **Instance** and choose the `Bastion Host A` host under the **Instance** dropdown list.
1. Select **Associate**.
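The same allocation and association, sketched with the CLI using the `us-west-2` network border group from the console steps above:

```shell
aws ec2 allocate-address --domain vpc --network-border-group us-west-2

aws ec2 associate-address \
  --instance-id <bastion-host-a-instance-id> \
  --allocation-id <eip-allocation-id>
```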
#### Confirm that you can SSH into the instance
1. On the EC2 Dashboard, select **Instances** in the left menu.
1. Select **Bastion Host A** from your list of instances.
1. Select **Connect** and follow the connection instructions.
1. If you are able to connect successfully, let's move on to setting up our second bastion host for redundancy.
### Create Bastion Host B
1. Create an EC2 instance following the same steps used previously with the following changes:
1. For the **Subnet**, select the second public subnet we created earlier (`gitlab-public-10.0.2.0`).
1. Under the **Add Tags** section, we set `Key: Name` and `Value: Bastion Host B`
so that we can identify our two instances.
1. For the security group, select the existing `bastion-sec-group` we previously created.
### Use SSH Agent Forwarding
EC2 instances running Linux use private key files for SSH authentication. You connect to your bastion host using an SSH client and the private key file stored on your client. Because the private key file is not present on the bastion host, you are not able to connect to your instances in private subnets.
Storing private key files on your bastion host is a bad idea. To get around this, use SSH agent forwarding on your client.
For example, the command-line `ssh` client uses agent forwarding with its `-A` switch, like this:
```shell
ssh -A user@<bastion-public-IP-address>
```

See Securely Connect to Linux Instances Running in a Private Amazon VPC for a step-by-step guide on how to use SSH agent forwarding for other clients.
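A typical two-hop session might look like this, assuming Ubuntu AMIs with the default `ubuntu` user (adjust usernames and addresses to your setup):

```shell
# Load the instance key into your local agent.
ssh-add gitlab.pem

# First hop: connect to the bastion with agent forwarding enabled.
ssh -A ubuntu@<bastion-public-IP-address>

# Second hop (run on the bastion): the forwarded agent supplies the key.
ssh ubuntu@<gitlab-instance-private-IP-address>
```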
We need a preconfigured, custom GitLab AMI to use in our launch template later. As a starting point, we use the official GitLab AMI to create a GitLab instance. Then, we add our custom configuration for PostgreSQL, Redis, and Gitaly. If you prefer, instead of using the official GitLab AMI, you can also spin up an EC2 instance of your choosing and manually install GitLab.
From the EC2 dashboard:
1. Select **Launch instance**. In the **Name and tags** section, set the **Name** to `GitLab`.
1. Choose an instance type (we use a `c5.2xlarge`, which is sufficient to accommodate 100 users).
1. In the **Key pair** section, create a new key pair (we use `gitlab`) and save the `gitlab.pem` file for later use.
1. Edit the **Network settings** section:
   1. **VPC**: Select `gitlab-vpc`, the VPC we created earlier.
   1. **Subnet**: Select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
   1. **Auto-assign Public IP**: Select **Disable**.
   1. **Firewall**: Choose **Select existing security group** and select the appropriate security groups based on your load balancer approach:
      - **NLB only**: `gitlab-nlb-sec-group` and `bastion-sec-group`
      - **Hybrid NLB->ALB**: `gitlab-rails-sec-group` and `bastion-sec-group`

      The `bastion-sec-group` allows SSH access from the bastion hosts for management and configuration tasks using SSH Agent Forwarding.
Connect to your GitLab instance via Bastion Host A using SSH Agent Forwarding. Once connected, add the following custom configuration:
Because we're adding our SSL certificate at the load balancer, we do not need the GitLab built-in support for Let's Encrypt. Let's Encrypt is enabled by default when using an https domain, so we must explicitly disable it:
Open `/etc/gitlab/gitlab.rb` and disable it:

```ruby
letsencrypt['enable'] = false
```

Save the file and reconfigure for the changes to take effect:

```shell
sudo gitlab-ctl reconfigure
```
> [!note]
> If the `gitlab` user has the `rds_superuser` role, GitLab can install the required extensions automatically. In that case, the manual steps below are not needed.
From your GitLab instance, connect to the RDS instance to verify access and to install the required PostgreSQL extensions.
To find the host or endpoint, go to Amazon RDS > Databases and select the database you created earlier. Look for the endpoint under the Connectivity & security tab.
For `-h`, use only the RDS endpoint hostname; omit the trailing colon and port number:

```shell
sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production
```
Then install each required extension using `CREATE EXTENSION`:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE EXTENSION IF NOT EXISTS ...;
```

Verify the installed extensions with `\dx`.
Edit `/etc/gitlab/gitlab.rb`, find the `external_url 'http://<domain>'` option and change it to the https domain you are using.
Look for the GitLab database settings and uncomment as necessary. In our current case we specify the database adapter, encoding, host, name, username, and password:
```ruby
# Disable the built-in Postgres
postgresql['enable'] = false

# Fill in the connection details
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = "gitlabhq_production"
gitlab_rails['db_username'] = "gitlab"
gitlab_rails['db_password'] = "mypassword"
gitlab_rails['db_host'] = "<rds-endpoint>"
```
Next, we must configure the Redis section by adding the host and uncommenting the port:
```ruby
# Disable the built-in Redis
redis['enable'] = false

# Fill in the connection details
gitlab_rails['redis_host'] = "<redis-endpoint>"
gitlab_rails['redis_port'] = 6379

# Adjust based on your Redis setting
gitlab_rails['redis_ssl'] = true
```
Finally, reconfigure GitLab for the changes to take effect:
```shell
sudo gitlab-ctl reconfigure
```

You can also run a check and a service status to make sure everything has been set up correctly:

```shell
sudo gitlab-rake gitlab:check
sudo gitlab-ctl status
```
> [!warning]
> In this architecture, having a single Gitaly server creates a single point of failure. Use Gitaly Cluster (Praefect) to remove this limitation.
Gitaly is a service that provides high-level RPC access to Git repositories. It should be enabled and configured on a separate EC2 instance in one of the private subnets we configured previously.
Let's create an EC2 instance where we install Gitaly:
1. Select **Launch instance**. In the **Name and tags** section, set the **Name** to `Gitaly`.
1. Choose an instance type (we use an `m5.xlarge`).
1. In the **Key pair** section, create a new key pair (we use `gitaly`) and save the `gitaly.pem` file for later use.
1. Edit the **Network settings** section:
   1. **VPC**: Select `gitlab-vpc` from the dropdown list.
   1. **Subnet**: Select the private subnet we created earlier (`gitlab-private-10.0.1.0`).
   1. **Firewall**: Choose **Create security group**, enter a **Security group name** (we use `gitlab-gitaly-sec-group`), and add a description.
      1. Create a **Custom TCP** rule and add `8075` to the **Port Range**. For the **Source**, select the appropriate security group based on your load balancer approach:
         - **NLB only**: `gitlab-nlb-sec-group`
         - **Hybrid NLB->ALB**: `gitlab-rails-sec-group`
      1. Also add an inbound rule for SSH from the `bastion-sec-group` so that we can connect using SSH Agent Forwarding from the bastion hosts.
1. Increase the root volume size to `20 GiB` and change the **Volume Type** to **Provisioned IOPS SSD (io1)**. (The volume size is an arbitrary value. Create a volume big enough for your repository storage requirements.)
   1. For **IOPS**, set `1000` (20 GiB x 50 IOPS). You can provision up to 50 IOPS per GiB. If you select a larger volume, increase the IOPS accordingly. Workloads where many small files are written in a serialized manner, like `git`, require performant storage, hence the choice of Provisioned IOPS SSD (io1).
1. Review all your settings and, if you're happy, select **Launch Instance**.

> [!note]
> Instead of storing configuration and repository data on the root volume, you can also choose to add an additional EBS volume for repository storage. Follow the same guidance mentioned previously. See the Amazon EBS pricing page.
Now that we have our EC2 instance ready, follow the documentation to install GitLab and set up Gitaly on its own server. Perform the client setup steps from that document on the GitLab instance we created previously.
> [!warning]
> We do not recommend using EFS because it can negatively impact the performance of GitLab. For more information, see the documentation about avoiding cloud-based file systems.
If you do decide to use EFS, ensure that the `PosixUser` attribute is either omitted or correctly specified with the UID and GID of the `git` user on the system where Gitaly is installed. The UID and GID can be retrieved with the following commands:

```shell
# UID
id -u git

# GID
id -g git
```
Additionally, you should not configure multiple access points,
especially if they specify different credentials. An application other than Gitaly can manipulate permissions on
the Gitaly storage directories in a way that prevents Gitaly from operating correctly. For an example of this problem, see
omnibus-gitlab issue 8893.
As we are terminating SSL at our load balancer, follow the steps at Supporting proxied SSL to configure this in `/etc/gitlab/gitlab.rb`.

Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
The public SSH keys for users allowed to access GitLab are stored in `/var/opt/gitlab/.ssh/authorized_keys`. Typically we'd use shared storage so that all the instances are able to access this file when a user performs a Git action over SSH. Because we do not have shared storage in our setup, we update our configuration to authorize SSH users via indexed lookup in the GitLab database.

Follow the instructions at Set up fast SSH key lookup to switch from using the `authorized_keys` file to the database.
If you do not configure fast lookup, Git actions over SSH result in the following error:

```shell
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```
Ordinarily we would manually copy the contents (private and public keys) of `/etc/ssh/` on the primary application server to `/etc/ssh` on all secondary servers. This prevents false man-in-the-middle-attack alerts when accessing servers in your cluster behind a load balancer.

We automate this by creating static host keys as part of our custom AMI. Because host keys would otherwise be regenerated every time an EC2 instance boots, "hard coding" them into our custom AMI serves as a workaround.
On your GitLab instance run the following:

```shell
sudo mkdir /etc/ssh_static
sudo cp -R /etc/ssh/* /etc/ssh_static
```
In `/etc/ssh/sshd_config` update the following:

```shell
# HostKeys for protocol version 2
HostKey /etc/ssh_static/ssh_host_rsa_key
HostKey /etc/ssh_static/ssh_host_dsa_key
HostKey /etc/ssh_static/ssh_host_ecdsa_key
HostKey /etc/ssh_static/ssh_host_ed25519_key
```
Because we're not using NFS for shared storage, we use Amazon S3 buckets to store backups, artifacts, LFS objects, uploads, merge request diffs, container registry images, and more. Our documentation includes instructions on how to configure object storage for each of these data types, and other information about using object storage with GitLab.
> [!note]
> Because we are using the AWS IAM profile we created earlier, be sure to omit the AWS access key and secret access key/value pairs when configuring object storage. Instead, use `'use_iam_profile' => true` in your configuration as shown in the object storage documentation linked previously. When using IAM roles for S3 access, GitLab supports both IMDSv1 and IMDSv2 and automatically uses IMDSv2 when available.
Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
That concludes the configuration changes for our GitLab instance. Next, we create a custom AMI based on this instance to use for our launch configuration and auto scaling group.
We must add the VPC IP Address Range (CIDR) of the `gitlab-vpc` we created earlier to the IP allowlist for the health check endpoints.

Edit `/etc/gitlab/gitlab.rb`:

```ruby
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '10.0.0.0/16']
```

Reconfigure GitLab:

```shell
sudo gitlab-ctl reconfigure
```
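You can verify the allowlist took effect from the instance itself; `127.0.0.0/8` is covered by the configuration above, so a healthy node returns HTTP 200 with a JSON body:

```shell
curl "http://localhost/-/readiness"
```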
#### Proxy Protocol

If Proxy Protocol is enabled in the load balancer we created earlier, we must also enable it in the `gitlab.rb` file.

Edit `/etc/gitlab/gitlab.rb`:

```ruby
nginx['proxy_protocol'] = true
nginx['real_ip_trusted_addresses'] = [ "127.0.0.0/8", "IP_OF_THE_PROXY/32" ]
```

Reconfigure GitLab:

```shell
sudo gitlab-ctl reconfigure
```
Using the domain name you used when setting up DNS for the load balancer, you should now be able to visit GitLab in your browser.
Depending on how you installed GitLab and if you did not change the password by any other means, the default password is either:

- Your instance ID, if you used an official GitLab AMI.
- A randomly generated password, stored for 24 hours in `/etc/gitlab/initial_root_password`.

To change the default password, sign in as the `root` user with the default password and change it in the user profile.

When our auto scaling group spins up new instances, we are able to sign in with username `root` and the newly created password.
On the EC2 dashboard:

1. Select the `GitLab` instance we created earlier.
1. Select **Actions**, scroll down to **Image and templates**, and select **Create image**.
1. Give your image a name and description (we use `GitLab-Source` for both).
1. Leave everything else as default and select **Create image**.

Now we have a custom AMI that we use to create our launch template in the next step.
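Creating the image is also a one-liner with the CLI, if you prefer:

```shell
aws ec2 create-image \
  --instance-id <gitlab-instance-id> \
  --name "GitLab-Source" \
  --description "GitLab-Source"
```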
From the EC2 dashboard:
1. Select **Launch Templates** from the left menu and select **Create launch template**.
1. Enter a name for your launch template (we use `gitlab-launch-template`).
1. Under **Launch template contents**, select the **My AMIs** tab.
1. Select **Owned by me** and select the `GitLab-Source` custom AMI we created previously.
1. Select an instance type best suited for your needs (at least a `c5.2xlarge`).
1. In the **Key pair** section, select **Create new key pair**. Give the key pair a name (we use `gitlab-launch-template`) and save the `gitlab-launch-template.pem` file for later use.
1. The root volume is 8 GiB by default and should be enough given that we do not store any data there.
1. Under **Firewall**, check **Select existing security group** and select the appropriate security groups based on your load balancer approach:
   - **NLB only**: `gitlab-nlb-sec-group` and `bastion-sec-group`
   - **Hybrid NLB->ALB**: `gitlab-rails-sec-group` and `bastion-sec-group`

   The `bastion-sec-group` allows SSH access from the bastion hosts for management and configuration tasks using SSH Agent Forwarding.
1. In the **Advanced details** section, under **IAM instance profile**, select the `GitLabS3Access` role we created earlier.
1. Review all your settings and, if you're happy, select **Create launch template**.
## Create an auto scaling group

From the EC2 dashboard:

1. Select **Auto Scaling Groups** from the left menu and select **Create Auto Scaling group**.
1. Enter a **Group name** (we use `gitlab-auto-scaling-group`).
1. Under **Launch template**, select the launch template we created earlier. Select **Next**.
1. In the **Network** settings section:
   1. Select `gitlab-vpc` from the dropdown list.
   1. Select both private subnets (`gitlab-private-10.0.1.0` and `gitlab-private-10.0.3.0`).
1. In the **Load balancing** settings section, attach the target groups based on your load balancer approach:
   - **NLB only**: `gitlab-nlb-ssh-target` and `gitlab-nlb-http-target`
   - **Hybrid NLB->ALB**: `gitlab-nlb-ssh-target` and `gitlab-alb-http-target`

   The auto scaling group automatically registers all launched instances to these target groups. Set the **Health check grace period** to `300` seconds.
1. For **Group size**, set **Desired capacity** to `2`.
1. In the **Scaling** settings section, set the **Min desired capacity** to `2` and the **Max desired capacity** to `4`.
1. Finally, configure notifications and tags as you see fit, review your changes, and create the auto scaling group.
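For reference, the equivalent CLI call, assuming the launch template and target groups created earlier:

```shell
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name gitlab-auto-scaling-group \
  --launch-template LaunchTemplateName=gitlab-launch-template,Version='$Latest' \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "<private-subnet-a-id>,<private-subnet-b-id>" \
  --target-group-arns <ssh-target-group-arn> <http-target-group-arn> \
  --health-check-type ELB \
  --health-check-grace-period 300
```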
After the auto scaling group is created, we must create scale up and scale down policies in CloudWatch and assign them:

1. Create a CloudWatch alarm on the `CPUUtilization` metric for metrics from EC2 instances **By Auto Scaling Group**, for the group we created earlier.
1. Create a policy that adds `1` capacity unit when `CPUUtilization` is greater than or equal to 60% (we name it `Scale Up Policy`).
1. Create a policy that removes `1` capacity unit when `CPUUtilization` is less than or equal to 45% (we name it `Scale Down Policy`).

As the auto scaling group is created, you see your new instances spinning up in your EC2 dashboard. You also see the new instances added to your load balancer. After the instances pass the health check, they are ready to start receiving traffic from the load balancer.
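A sketch of the scale-up side using a simple scaling policy and a CloudWatch alarm; the scale-down policy and its alarm mirror this with `--scaling-adjustment=-1` and the 45% threshold:

```shell
# The command prints a PolicyARN; feed it to the alarm below.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name gitlab-auto-scaling-group \
  --policy-name scale-up-policy \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

aws cloudwatch put-metric-alarm \
  --alarm-name gitlab-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=gitlab-auto-scaling-group \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 60 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions <scale-up-policy-arn>
```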
Because our instances are created by the auto scaling group, go back to your instances and terminate the instance we previously created manually. We only needed this instance to create our custom AMI.
Apart from Amazon CloudWatch, which you can enable on various services, GitLab provides its own integrated monitoring solution based on Prometheus. For more information about how to set it up, see GitLab Prometheus.
GitLab also has various health check endpoints that you can ping and get reports.
If you want to take advantage of GitLab CI/CD, you have to set up at least one runner.
Read more on configuring an autoscaling GitLab Runner on AWS.
GitLab provides a tool to back up and restore its Git data, database, attachments, LFS objects, and so on.
Some important things to know: the backup tool does not store configuration files such as `/etc/gitlab/gitlab.rb` and `/etc/gitlab/gitlab-secrets.json`, so you must back these up separately. See the backup documentation for other caveats.
To back up GitLab:
1. SSH into your instance.
1. Take a backup:

   ```shell
   sudo gitlab-backup create
   ```
To restore GitLab, first review the restore documentation, and primarily the restore prerequisites. Then, follow the steps under the Linux package installations section.
GitLab releases a new version every month on the release date. Whenever a new version is released, you can update your GitLab instance:
1. SSH into your instance.
1. Take a backup:

   ```shell
   sudo gitlab-backup create
   ```

1. Update the repositories and install GitLab:

   ```shell
   sudo apt update
   sudo apt install gitlab-ee
   ```
After a few minutes, the new version should be up and running.
Read more on how to use GitLab releases as AMIs.
In this guide, we went mostly through scaling and some redundancy options; your mileage may vary.
Keep in mind that all solutions come with a trade-off between cost/complexity and uptime. The more uptime you want, the more complex the solution. And the more complex the solution, the more work is involved in setting up and maintaining it.
Have a read through these other resources, and feel free to open an issue to request additional material.
If your instances are failing the load balancer's health checks, verify that they are returning a status 200 from the health check endpoint we configured earlier. Any other status, including redirects like status 302, causes the health check to fail.
You may have to set a password on the root user to prevent automatic redirects on the sign-in endpoint before health checks pass.
**The change you requested was rejected (422)**: If you see this page when trying to set a password via the web interface, make sure `external_url` in `gitlab.rb` matches the domain you are making a request from, and run `sudo gitlab-ctl reconfigure` after making any changes to it.
When the GitLab deployment is scaled up to more than one node, some job logs may not be uploaded to object storage properly. Incremental logging is required for CI to use object storage.
Enable incremental logging if it has not already been enabled.