import { Tabs, Tab } from "fumadocs-ui/components/tabs";
CopilotKit Intelligence — the platform that powers threads, shared state, the inspector, and observability — can be self-hosted on your own Kubernetes cluster using the `copilot-intelligence` Helm chart. Self-hosting is a licensed deployment mode: you run the control plane and data plane inside your own network boundary, bring your own Postgres and Redis (or run the bundled Bitnami subcharts), point the chart at your own OIDC provider, and manage secrets with External Secrets Operator or direct Kubernetes Secrets.

The chart deploys three core workloads — `app-api` (the backend service, port 4201), `app-frontend` (the web UI, port 8080), and an optional `realtime-gateway` (a WebSocket service for realtime sync, port 4401) — plus supporting Services, Ingresses, HPAs, PodDisruptionBudgets, and ExternalSecret resources.
If you do not need to run Intelligence inside your own network boundary, use Copilot Cloud instead — it is the fastest path to a working Intelligence deployment and requires no cluster operations.
Before starting, make sure the following are in place. The How the Intelligence Platform Works page explains the layering in more depth.
**License and registry access:**

- A self-hosting license and pull access to the chart at `oci://ghcr.io/copilotkit/charts/copilot-intelligence`

**Cluster and tooling:**

- A Kubernetes cluster you can administer
- `kubectl` configured against the target cluster with an admin-equivalent context
- The `helm` CLI (v3.8 or later, for OCI registry support)

**Platform prerequisites (cluster-wide, installed once):**

- An ingress controller: nginx-ingress or the AWS Load Balancer Controller
- cert-manager (or a cloud-managed certificate alternative such as AWS ACM) for TLS on the public hostnames
- External Secrets Operator if you plan to sync secrets from AWS Secrets Manager, HashiCorp Vault, or GCP Secret Manager (strongly recommended for production)

**External dependencies (reachable from the cluster):**

- A Postgres database
- A Redis instance
- An OIDC identity provider

**Optional:**

- An S3-compatible object store, if you plan to enable `objectStorage` for AG-UI event persistence
Ensure `kubectl` points to the cluster that will run Intelligence.
```bash title="Terminal"
kubectl config current-context
kubectl auth can-i create namespace --all-namespaces
```
The context shown should be the target cluster, and the permission check should return `yes`. If either is wrong, fix your kubeconfig before proceeding.
These components are cluster-wide and installed once per cluster, independently of the application chart.
<Tabs items={["AWS (EKS)", "On-prem / generic"]}>
<Tab value="AWS (EKS)">
```bash title="Terminal"
# AWS Load Balancer Controller (kube-system)
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<YOUR_CLUSTER_NAME>

# cert-manager
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set installCRDs=true

# External Secrets Operator
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace
```
</Tab>
<Tab value="On-prem / generic">
```bash title="Terminal"
# NGINX Ingress Controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  -n ingress-nginx --create-namespace

# cert-manager
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set installCRDs=true
```
</Tab>
</Tabs>
After each controller is running, its pods should be `Ready` in their respective namespaces.
Intelligence needs Postgres, Redis, and an OIDC issuer. You can either point the chart at managed services you already run, or enable the bundled Bitnami subcharts for in-cluster Postgres and Redis (appropriate for evaluation and small self-hosted installs).
**Using managed services (recommended for production):**
- Create a Postgres database and user. Record the host, port (default `5432`), database name, username, and password.
- Create a Redis instance with TLS enabled. Record the host, port (default `6379`), and password.
- Configure an OIDC client in your identity provider. Record the issuer URL, client ID, and client secret.
**Using the bundled in-cluster subcharts:**
Set `postgresql.enabled: true` and `redis-subchart.enabled: true` in your values file (covered in the next step). A matching `StorageClass` must exist in the cluster. The bundled Keycloak subchart is available via `keycloak.enabled: true` if you also need a quick OIDC provider for evaluation; do not use the bundled Keycloak for production workloads.
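As a sketch, an evaluation install on the bundled subcharts sets the following values (the `storageClass` name is a placeholder; consult the chart's `values.yaml` for the full shapes):

```yaml
# Evaluation-only values: run Postgres, Redis, and Keycloak in-cluster.
postgresql:
  enabled: true
redis-subchart:
  enabled: true
# Quick OIDC provider for evaluation only — not for production:
keycloak:
  enabled: true
# Must name a StorageClass that exists in the cluster (placeholder name):
global:
  storageClass: standard
```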
The chart ships with example values files for the two most common shapes. Pull the chart, then copy the example closest to your environment.
```bash title="Terminal"
helm pull oci://ghcr.io/copilotkit/charts/copilot-intelligence --version 0.1.0 --untar
# AWS-flavored (ALB, IRSA, External Secrets from AWS Secrets Manager)
cp copilot-intelligence/values-aws-example.yaml my-values.yaml
# Or on-prem-flavored (nginx, manual Kubernetes Secrets)
cp copilot-intelligence/values-onprem-example.yaml my-values.yaml
```
Edit `my-values.yaml` to set at minimum:
- `database.host`, `database.port`, `database.name` — your Postgres connection
- `redis.host`, `redis.port`, `redis.tls` — your Redis connection
- `auth.issuer` — your OIDC provider's issuer URL
- `auth.existingSecret` — name of the Kubernetes Secret containing `auth-secret`, `auth-client-id`, `auth-client-secret`
- `ingress.ui.host` — the hostname users will load the Intelligence UI on (for example `intelligence.example.com`)
- `ingress.api.host` — an optional dedicated API hostname (defaults to `ingress.ui.host`)
- `ingress.tls` — TLS configuration for the hosts above
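Putting those keys together, a minimal `my-values.yaml` for a managed-services install might look like the sketch below. Hostnames and secret names are placeholders, and the `ingress.tls` entry assumes the standard Kubernetes Ingress TLS shape — check the chart's `values.yaml` for the exact structure it expects.

```yaml
database:
  host: intelligence-db.internal.example.com
  port: 5432
  name: intelligence
redis:
  host: intelligence-redis.internal.example.com
  port: 6379
  tls: true
auth:
  issuer: https://auth.example.com/realms/intelligence
  existingSecret: cpki-auth
ingress:
  ui:
    host: intelligence.example.com
  api:
    host: api.intelligence.example.com   # optional; defaults to the UI host
  tls:
    - secretName: intelligence-tls
      hosts:
        - intelligence.example.com
        - api.intelligence.example.com
```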
See the [Configuration reference](#configuration-reference) section for the full set of values.
You have two paths — External Secrets Operator (recommended) or direct Kubernetes Secrets.
**External Secrets Operator:**
1. Ensure your secret backend (AWS Secrets Manager, Vault, etc.) has entries for the database URL, Redis URL, and auth credentials.
2. Create a `ClusterSecretStore` (or `SecretStore`) that references that backend.
3. In `my-values.yaml`, set `externalSecrets.enabled: true`, `externalSecrets.store.kind`, and `externalSecrets.store.name` to match. The chart generates `ExternalSecret` resources that sync those entries into Kubernetes Secrets at the names `app-api` expects.
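As a sketch of steps 2 and 3 together, a Vault-backed `ClusterSecretStore` and the matching chart values might look like this (the store name, Vault server, and auth role are illustrative, and the `ClusterSecretStore` spec follows the External Secrets Operator API):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: intelligence-backend
spec:
  provider:
    vault:
      server: https://vault.internal.example.com
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: copilot-intelligence
---
# Then, in my-values.yaml:
# externalSecrets:
#   enabled: true
#   store:
#     kind: ClusterSecretStore
#     name: intelligence-backend
```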
**Direct Kubernetes Secrets:**
Set `externalSecrets.enabled: false` in your values file, then create the Secrets manually:
```bash title="Terminal"
kubectl create namespace copilot-intelligence

kubectl create secret generic cpki-db \
  --from-literal=database-url='postgresql://user:pass@host:5432/intelligence' \
  -n copilot-intelligence

kubectl create secret generic cpki-redis \
  --from-literal=redis-url='rediss://:password@host:6379' \
  -n copilot-intelligence

kubectl create secret generic cpki-auth \
  --from-literal=auth-secret='<32+ character random string>' \
  --from-literal=auth-client-id='<OIDC client id>' \
  --from-literal=auth-client-secret='<OIDC client secret>' \
  -n copilot-intelligence
```
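The `auth-secret` value just needs to be a high-entropy random string. One way to generate it, assuming `openssl` is available on your workstation:

```bash title="Terminal"
# Emits 64 hex characters — comfortably past the 32-character minimum
openssl rand -hex 32
```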
The exact Secret names referenced in your values file must match whatever you create.
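With manually created Secrets, point the chart at them by name. For the example Secrets above, that would be:

```yaml
externalSecrets:
  enabled: false
database:
  existingSecret: cpki-db
redis:
  existingSecret: cpki-redis
auth:
  existingSecret: cpki-auth
```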
```bash title="Terminal"
helm install copilot-intelligence ./copilot-intelligence \
  -f my-values.yaml \
  -n copilot-intelligence \
  --create-namespace \
  --wait \
  --timeout 10m
```
`--wait` blocks until the `Deployments` report healthy replicas; `--timeout 10m` allows enough time for image pulls and the initial database migration job.
Check that every pod is `Running` and the ingress is ready:
```bash title="Terminal"
kubectl get pods -n copilot-intelligence
kubectl get ingress -n copilot-intelligence
```
You should see `app-api`, `app-frontend`, and — if enabled — `realtime-gateway` pods running. The migrations `Job` will appear as `Completed`.
Confirm the API health check reports `ok`:
```bash title="Terminal"
curl https://<ingress.api.host>/api/health
```
The endpoint returns `200 OK` only when the database is reachable — a failed health check is almost always a database connectivity problem.
Finally, browse to `https://<ingress.ui.host>` and log in via your OIDC provider. A successful login confirms end-to-end wiring.
**Upgrade** — bump the chart version in your `helm pull` command, regenerate example values to diff against, then run:
```bash title="Terminal"
helm upgrade copilot-intelligence ./copilot-intelligence \
  -f my-values.yaml \
  -n copilot-intelligence \
  --wait
```
**Uninstall** — if you enabled the bundled subcharts, uninstalling leaves their PersistentVolumeClaims (and the volumes behind them) in place by default; delete them manually if you intend to tear down state.
```bash title="Terminal"
helm uninstall copilot-intelligence -n copilot-intelligence
```
The tables below summarize the most common values. For every option, see `values.yaml` in the pulled chart.

**Global**

| Key | Description | Default |
|---|---|---|
| `global.imageRegistry` | Registry prefix for unqualified image names | `""` |
| `global.imagePullSecrets` | Image pull secrets for private registries | `[]` |
| `global.storageClass` | `StorageClass` override for bundled subcharts | `""` |
**Database (`database.*`)**

| Key | Description | Default |
|---|---|---|
| `database.host` | Postgres host | `""` (required) |
| `database.port` | Postgres port | `5432` |
| `database.name` | Database name | `intelligence` |
| `database.existingSecret` | Pre-existing Secret with `database-url` | `""` |
**Redis (`redis.*`)**

| Key | Description | Default |
|---|---|---|
| `redis.host` | Redis host | `""` (required) |
| `redis.port` | Redis port | `6379` |
| `redis.tls` | Require TLS (ElastiCache defaults to on) | `true` |
| `redis.existingSecret` | Pre-existing Secret with `redis-url` | `""` |
**Auth (`auth.*`)**

| Key | Description | Default |
|---|---|---|
| `auth.deploymentMode` | `self-hosted` or `hosted` | `self-hosted` |
| `auth.issuer` | OIDC issuer URL | `""` (required) |
| `auth.existingSecret` | Secret with `auth-secret`, `auth-client-id`, `auth-client-secret` | `""` |
| `auth.defaultOrganizationId` | Default organization ID in self-hosted mode | `default` |
**Ingress (`ingress.*`)**

| Key | Description | Default |
|---|---|---|
| `ingress.enabled` | Create Ingress resources | `true` |
| `ingress.className` | `nginx` or `alb` | `nginx` |
| `ingress.ui.host` | UI hostname | `""` (required) |
| `ingress.api.host` | Dedicated API hostname (optional) | `""` (falls back to the UI host) |
| `ingress.realtimePlane.host` | Dedicated realtime hostname (optional) | `""` |
| `ingress.tls` | TLS configuration | `[]` |
| `ingress.websocket.enabled` | Add WebSocket-friendly annotations for the realtime plane | `false` |
| `ingress.annotations` | Additional ingress annotations | `{}` |
**Application services (`appApi` and `appFrontend`)**

Both services accept the same shape.

| Key | Description | Default (`appApi`) | Default (`appFrontend`) |
|---|---|---|---|
| `<svc>.enabled` | Enable the service | `true` | `true` |
| `<svc>.replicaCount` | Replicas | `2` | `2` |
| `<svc>.image.repository` | Image repository | `cpk-intelligence-app-api` | `cpk-intelligence-app-frontend` |
| `<svc>.image.tag` | Image tag (defaults to chart `appVersion`) | `""` | `""` |
| `<svc>.resources` | CPU/memory requests and limits | 250m / 512Mi | 100m / 128Mi |
| `<svc>.autoscaling.enabled` | Enable HPA | `true` | `false` |
| `<svc>.autoscaling.minReplicas` | HPA minimum | `2` | `2` |
| `<svc>.autoscaling.maxReplicas` | HPA maximum | `10` | `4` |
| `<svc>.serviceAccount.annotations` | Annotations on the ServiceAccount (IRSA, workload identity) | `{}` | `{}` |
**Realtime gateway (`realtimeGateway.*`)**

| Key | Description | Default |
|---|---|---|
| `realtimeGateway.enabled` | Enable the gateway | `false` |
| `realtimeGateway.host` | `PHX_HOST` override | `""` |
| `realtimeGateway.existingSecret` | Secret containing `RUNNER_AUTH_SECRET` and `SECRET_KEY_BASE` | `""` |
| `realtimeGateway.beam.clustering.enabled` | BEAM clustering across replicas | `true` |
| `realtimeGateway.beam.cookieSecret.name` | Secret containing the BEAM cookie | `cpki-beam-cookie` |

Enabling the realtime gateway requires that either `realtimeGateway.existingSecret` is set, or that `externalSecrets.secrets.realtimeGateway.enabled` or `selfHostedSecrets.enabled` is `true`; the chart fails validation otherwise.
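A minimal values sketch for enabling the gateway with a pre-created Secret (hostnames and the Secret name are illustrative):

```yaml
realtimeGateway:
  enabled: true
  host: realtime.intelligence.example.com
  existingSecret: cpki-realtime   # must hold RUNNER_AUTH_SECRET and SECRET_KEY_BASE
ingress:
  realtimePlane:
    host: realtime.intelligence.example.com
  websocket:
    enabled: true                 # WebSocket-friendly ingress annotations
```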
**External Secrets (`externalSecrets.*`)**

| Key | Description | Default |
|---|---|---|
| `externalSecrets.enabled` | Generate ExternalSecret resources | `true` |
| `externalSecrets.store.kind` | `ClusterSecretStore` or `SecretStore` | `ClusterSecretStore` |
| `externalSecrets.store.name` | SecretStore name | `""` (required when enabled) |
| `externalSecrets.refreshInterval` | How often ESO syncs | `1h` |
| `externalSecrets.secrets.*` | Per-secret mappings — see `values.yaml` | — |
**Bundled subcharts**

| Key | Description | Default |
|---|---|---|
| `postgresql.enabled` | Deploy in-cluster Postgres | `false` |
| `postgresql.auth.password` | Postgres password (set at deploy time) | `""` |
| `redis-subchart.enabled` | Deploy in-cluster Redis (aliased to avoid collision with `redis.*`) | `false` |
| `redis-subchart.auth.password` | Redis password | `""` |
| `keycloak.enabled` | Deploy bundled Keycloak for quick eval | `false` |
**Object storage (`objectStorage.*`)**

| Key | Description | Default |
|---|---|---|
| `objectStorage.enabled` | Persist AG-UI events from the realtime gateway to S3-compatible storage | `false` |
| `objectStorage.bucket` | Bucket name | `""` |
| `objectStorage.region` | Bucket region | `us-east-1` |
| `objectStorage.endpoint` | S3-compatible endpoint override | `""` |
| `objectStorage.existingSecret` | Secret with static access keys (optional if using IRSA) | `""` |
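For example, enabling AG-UI event persistence against an S3-compatible store such as MinIO (bucket, endpoint, and Secret name are placeholders):

```yaml
objectStorage:
  enabled: true
  bucket: intelligence-agui-events
  region: us-east-1
  endpoint: https://minio.internal.example.com   # omit for AWS S3
  existingSecret: cpki-object-storage            # omit when using IRSA
```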
**Migrations (`migrations.*`)**

| Key | Description | Default |
|---|---|---|
| `migrations.enabled` | Run the migrations Job as a pre-install/pre-upgrade hook | `false` |
| `migrations.image.repository` | Migrations image repository | `cpk-intelligence-db-migrations` |
| `migrations.activeDeadlineSeconds` | Job deadline | `1800` |
| `migrations.backoffLimit` | Retry count before failing | `3` |
Per-service keys `podDisruptionBudget`, `podAntiAffinity`, and `networkPolicy` are available for high-availability and traffic-isolation requirements. See `values.yaml` for their full shapes.
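As a sketch, hardening `appApi` for high availability might combine those keys like this (the nested shapes are assumptions; verify them against `values.yaml`):

```yaml
appApi:
  replicaCount: 3
  podDisruptionBudget:
    enabled: true
    minAvailable: 2
  podAntiAffinity:
    enabled: true       # spread replicas across nodes
  networkPolicy:
    enabled: true       # restrict traffic to expected in-cluster peers
```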