import FileSystemChoice from "../../src/components/DRY/_questdb_file_system_choice.mdx"
import MinimumHardware from "../../src/components/DRY/_questdb_production_hardware-minimums.mdx"
Google Compute Engine offers a variety of VM instances tuned for different workloads.
Avoid instance families whose names end in the letter A, such as C4A: these are ARM-architecture
instances using Google's Axion processors.
For x86_64 deployments, choose either AMD EPYC CPUs (D suffix, such as C3D) or Intel Xeon (no suffix, such as C3).
We recommend starting with C-Series instances, and reviewing other instance types if your workload demands it.
You should deploy using an x86_64 Linux distribution, such as Ubuntu.
For storage, we recommend using Hyperdisk Balanced disks,
and provisioning them at 5000 IOPS/300 MBps until you have tested your workload.
:::warning
Hyperdisk Balanced is not supported on all machine types. N2 instances do not support Hyperdisk. Use N4, C3, or C4 series instances with Hyperdisk Balanced.
:::
Hyperdisk Extreme generally requires much higher vCPU counts - for example, it cannot be used on C3 machines
smaller than 88 vCPUs.
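If you provision disks from the CLI, Hyperdisk Balanced lets you set IOPS and throughput independently of capacity. A sketch using `gcloud` (the disk name, size, and zone are placeholders to adapt to your deployment):

```shell
# Create a 300 GiB Hyperdisk Balanced volume with explicit performance provisioning
gcloud compute disks create questdb-data \
  --type=hyperdisk-balanced \
  --size=300GB \
  --provisioned-iops=5000 \
  --provisioned-throughput=300 \
  --zone=europe-west3-a
```

Attach the disk to the instance afterwards and format it with your chosen file system.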
Google Filestore is a managed NFS service that can be used as a replication transport layer in QuestDB Enterprise.
Filestore should not be used as primary storage for QuestDB. However, it
is well-suited for replication when low latency is required. The `fs::`
transport over NFS provides sub-200ms replication lag with
aggressive tuning, compared to ~1s+ with the
object store transport (GCS).
To use Filestore for replication, configure the `fs::` transport in `server.conf`:

```ini
replication.object.store=fs::root=/mnt/questdb-repl/final;atomic_write_dir=/mnt/questdb-repl/scratch;
```
Use the backup feature to manage WAL file retention on the NFS mount.
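Before starting QuestDB, the Filestore share must be mounted on every participating host, with the directories referenced by the `fs::` connection string created. A sketch, where the Filestore IP address and share name are hypothetical placeholders (find yours on the Filestore instance details page):

```shell
# Mount the Filestore NFS share (IP and share name are placeholders)
sudo mkdir -p /mnt/questdb-repl
sudo mount -t nfs -o rw,hard,noatime 10.0.0.2:/questdb_share /mnt/questdb-repl

# Create the directories used by the fs:: transport
sudo mkdir -p /mnt/questdb-repl/final /mnt/questdb-repl/scratch
```

Add a matching `/etc/fstab` entry so the mount survives reboots.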
On GKE, expose the Filestore share as a PersistentVolume with
ReadWriteMany access mode using the
Filestore CSI driver,
so both primary and replica pods can mount it simultaneously.
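On a cluster with the Filestore CSI driver enabled, this is typically done through a `ReadWriteMany` PersistentVolumeClaim. A sketch, assuming the GKE-provided `standard-rwx` storage class (class names and the minimum capacity vary by cluster and Filestore tier, so verify against your setup):

```yaml
# Hypothetical PVC for the replication share; adjust names and size to your cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: questdb-repl
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx   # Filestore CSI driver storage class
  resources:
    requests:
      storage: 1Ti
```

Mount the claim at the same path (for example `/mnt/questdb-repl`) in both primary and replica pod specs.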
:::note
Filestore Zonal and Basic SSD tiers may require a quota increase before use. Basic HDD is typically available by default.
:::
QuestDB supports Google Cloud Storage as its replication object store in the Enterprise edition. GCS is the simplest and cheapest replication transport, but has higher latency (~1s+) due to object store API overhead.
To get started, create a bucket for the database to use. Then follow the Enterprise Quick Start steps to create a connection string and configure QuestDB.
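As a rough sketch of the resulting `server.conf` entry, where the bucket name and root prefix are placeholders and the exact key/value format is an assumption to verify against the Enterprise Quick Start:

```ini
replication.object.store=gcs::bucket=my-questdb-bucket;root=replication;
```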
NetApp Volumes is a managed NFS service on GCP backed by NetApp ONTAP. Like
Filestore, it can be used as a low-latency replication transport via the
`fs::` prefix. The QuestDB configuration is identical to Filestore.
:::note
NetApp Volumes requires enabling the netapp.googleapis.com API and may
require separate quota allocation.
:::
| Instance type | Boot volume | Data volume | Operating system | File system |
| --- | --- | --- | --- | --- |
| c3-standard-4 or c3d-standard-4 (4 vCPUs, 16 GB RAM) | Hyperdisk Balanced (30 GiB), 3000 IOPS/140 MBps | Hyperdisk Balanced (100 GiB), 3000 IOPS/140 MBps | Linux Ubuntu 24.04 LTS x86_64 | ext4 |
| c3-highmem-8 or c3d-highmem-8 (8 vCPUs, 64 GB RAM) | Hyperdisk Balanced (30 GiB), 5000 IOPS/300 MBps | Hyperdisk Balanced (300 GiB), 5000 IOPS/300 MBps | Linux Ubuntu 24.04 LTS x86_64 | zfs |

:::note
You can use the highcpu and highmem variants to adjust the standard 4:1 RAM/vCPU
ratio to 2:1 or 8:1 respectively. Higher RAM can improve performance dramatically
if it means your working set data will fit entirely into memory.
:::
This guide describes how to run QuestDB on a new Google Cloud Platform (GCP) Compute Engine instance. After completing this guide, you will have an instance with QuestDB running in a container using the official QuestDB Docker image, as well as a network rule that enables communication over HTTP and PostgreSQL wire protocol.
import Screenshot from "@theme/Screenshot"
<Screenshot alt="The Create Instance wizard on Google Cloud platform" height={598} src="images/guides/google-cloud-platform/create-instance.webp" width={650} />
1. Give the instance a name - this example uses `questdb-europe-west3`
2. Choose a Region and Zone where you want to deploy the instance - this
   example uses europe-west3 (Frankfurt) and the default zone
3. Choose a machine configuration. The default choice, e2-medium, is a
   general-purpose instance with 4 GB memory and should be enough to run this
   example.
{" "} <Screenshot alt="Deploying a QuestDB instance on Google Cloud Platform Compute Engine" height={695} src="images/guides/google-cloud-platform/create-vm.webp" width={650} />
To add a running QuestDB container on instance startup, scroll down and click
the Deploy Container button. Then, provide the latest QuestDB Docker
image in the Container image textbox.
questdb/questdb:latest
Click the Select button at the bottom of the dropdown to complete the container configuration.
Your Docker configuration should look like this:
{" "} <Screenshot alt="Configuring a Docker container to launch in a new QuestDB instance on Google Cloud Platform Compute Engine" height={695} src="images/guides/google-cloud-platform/create-vm-docker.webp" width={650} />
Before creating the instance, we need to assign it a Network tag so that we can add a firewall rule that exposes QuestDB-related ports to the internet. This is required for you to access the database from outside your VPC. To create a Network tag, enter `questdb` in the Network tags field:

<Screenshot alt="Applying a Network tag to a Compute Engine VM Instance on Google Cloud Platform" height={610} src="images/guides/google-cloud-platform/add-network-tag.webp" width={650} />
You can now launch the instance by clicking Create at the bottom of the dialog.
Now that we've created our instance with a questdb network tag, we need to
create a corresponding firewall rule to associate with that tag. This rule will
expose the required ports for accessing QuestDB. With a network tag, we can
easily apply the new firewall rule to our newly created instance as well as any
other QuestDB instances that we create in the future.
1. Enter `questdb` in the Name field
2. Enter `questdb` in the Target tags textbox. This will apply the firewall
   rule to the new instance that was created above
3. Set the source IPv4 range to `0.0.0.0/0`, which allows ingress from any IP
   address. We recommend that you make this rule more restrictive, and
   naturally that you include your current IP address within the chosen range.
4. Enter `8812,9000` in the TCP ports textbox.

<Screenshot alt="Creating a firewall rule in for VPC networking on Google Cloud Platform" height={654} src="images/guides/google-cloud-platform/firewall-rules.webp" width={650} />
All VM instances on Compute Engine in this account which have the Network
tag questdb will now have this firewall rule applied.
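The same rule can also be created from the CLI; a sketch with `gcloud` (in practice, restrict `--source-ranges` to your own address range):

```shell
# Allow TCP 9000 and 8812 to any instance tagged "questdb" on the default network
gcloud compute firewall-rules create questdb \
  --network=default \
  --allow=tcp:9000,tcp:8812 \
  --target-tags=questdb \
  --source-ranges=0.0.0.0/0
```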
The ports we have opened are:

- `9000` for the REST API and Web Console
- `8812` for the PostgreSQL wire protocol

To verify that the instance is running, navigate to Compute Engine -> VM Instances. A status indicator should show the instance as running:
<Screenshot alt="A QuestDB instance running on Google Cloud Platform showing a success status indicator" height={186} src="images/guides/google-cloud-platform/instance-available.webp" width={650} />
To verify that the QuestDB deployment is operating as expected, navigate to
`http://<external_ip>:9000` in a browser. The Web Console should now be visible:
<Screenshot alt="The QuestDB Web Console running on a VM instance on Google Cloud Platform" height={405} src="images/guides/google-cloud-platform/gcp-portal.webp" width={650} />
Alternatively, a request may be sent to the REST API exposed on port 9000:

```shell
curl -G \
  --data-urlencode "query=SELECT * FROM telemetry_config" \
  <external_ip>:9000/exec
```
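The same request can be composed programmatically. A minimal sketch that builds the `/exec` URL with Python's standard library, keeping the `<external_ip>` placeholder as-is:

```python
from urllib.parse import urlencode

# Placeholder host; substitute your instance's external IP address
base_url = "http://<external_ip>:9000/exec"

# URL-encode the SQL query as the REST API expects it in the "query" parameter
params = urlencode({"query": "SELECT * FROM telemetry_config"})
url = f"{base_url}?{params}"
print(url)
```

Passing the request through `urlencode` is equivalent to curl's `--data-urlencode`, so spaces and special characters in the SQL are escaped correctly.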
If you're using Pulumi to manage your infrastructure, you can create a QuestDB instance with the following:
```python
import pulumi
import pulumi_gcp as gcp

# Create a firewall rule opening the QuestDB ports to tagged instances
firewall = gcp.compute.Firewall(
    "questdb-firewall",
    network="default",
    allows=[
        gcp.compute.FirewallAllowArgs(
            protocol="tcp",
            ports=["9000", "8812"],
        ),
    ],
    target_tags=["questdb"],
    source_ranges=["0.0.0.0/0"],
)

# Create a Compute Engine instance that installs Docker and runs QuestDB on boot
instance = gcp.compute.Instance(
    "questdb-instance",
    machine_type="e2-medium",
    zone="us-central1-a",
    boot_disk={
        "initialize_params": {
            "image": "ubuntu-os-cloud/ubuntu-2004-lts",
        },
    },
    network_interfaces=[
        gcp.compute.InstanceNetworkInterfaceArgs(
            network="default",
            access_configs=[{}],  # Ephemeral public IP
        )
    ],
    metadata_startup_script="""#!/bin/bash
sudo apt-get update
sudo apt-get install -y docker.io
sudo docker run -d -p 9000:9000 -p 8812:8812 \
    --env QDB_HTTP_USER="admin" \
    --env QDB_HTTP_PASSWORD="quest" \
    questdb/questdb
""",
    tags=["questdb"],
)

# Export the instance's name and public IP
pulumi.export("instance_name", instance.name)
pulumi.export("instance_ip", instance.network_interfaces[0].access_configs[0].nat_ip)
```