Self-Hosting Intelligence


What is this?

CopilotKit Intelligence — the platform that powers threads, shared state, the inspector, and observability — can be self-hosted on your own Kubernetes cluster using the copilot-intelligence Helm chart. Self-hosting is a licensed deployment mode: you run the control plane and data plane inside your own network boundary, bring your own Postgres and Redis (or run bundled Bitnami subcharts), point the chart at your own OIDC provider, and manage secrets with External Secrets Operator, direct Kubernetes Secrets, or chart-managed credentials. The chart deploys two always-on workloads — app-api (the backend service, port 4201) and app-frontend (the web UI, port 8080) — plus an optional realtime-gateway (a WebSocket service for realtime sync, port 4401), a database-migrations Job, and a thread-culler CronJob. Supporting resources include Services, an Ingress, HPAs, PodDisruptionBudgets, ConfigMaps, and (when ESO is enabled) ExternalSecret resources.

When should I use this?

  • Your organization requires CopilotKit Intelligence to run inside your own VPC or data center for compliance, data residency, or security reasons
  • You want to connect Intelligence to internal databases, identity providers, or secret stores that are not reachable from Copilot Cloud
  • You need to operate the platform under your existing Kubernetes tooling, CI/CD, and observability stack
  • You have an Enterprise Intelligence Platform license and the platform-engineering capacity to run a production Kubernetes workload

If none of these apply, use Copilot Cloud — it is the fastest path to a working Intelligence deployment and requires no cluster operations.

<Callout type="info" title="Validate locally before committing to a real cluster"> The chart installs the same way against a local Docker Desktop or k3d cluster as it does against a production cluster. Walk this guide end-to-end on your laptop first — the `values-quickstart-local.yaml` overlay shipped in the chart enables bundled in-cluster Postgres, Redis, and (optionally) Keycloak so you can validate without any external dependencies. The chart repo also ships `scripts/local-demo.sh`, which spins up a disposable k3d cluster, installs the released chart from GHCR, and brings up a bundled Keycloak in one command:
```bash title="Terminal"
./scripts/local-demo.sh --version <chart-version>
```

Use whichever local path you prefer; both follow the same install commands described below. </Callout>

Prerequisites

Before starting, make sure the following are in place. The How the Enterprise Intelligence Platform Works page explains the layering in more depth.

License and registry access:

  • A valid Enterprise Intelligence Platform license key (contact your CopilotKit account team if you do not have one)
  • Read access to the chart OCI registry at `oci://ghcr.io/copilotkit/charts/intelligence` (anonymous pulls are allowed for the released chart)
  • The latest released chart version. Check the chart releases on GHCR and substitute it for the `<chart-version>` placeholder used throughout this guide (e.g. `0.1.0-rc.16`)

Cluster and tooling:

  • Kubernetes ≥ 1.28
  • Helm ≥ 3.12
  • kubectl configured against the target cluster with an admin-equivalent context

Platform prerequisites (cluster-wide, installed once):

  • An ingress controller — either nginx-ingress or the AWS Load Balancer Controller
  • cert-manager (or a cloud-managed certificate alternative such as AWS ACM) for TLS on the public hostnames
  • External Secrets Operator if you plan to sync secrets from AWS Secrets Manager, HashiCorp Vault, or GCP Secret Manager (recommended for production, but not required — see Secrets)

External dependencies (reachable from the cluster):

  • PostgreSQL ≥ 14 — managed (Amazon RDS, Aurora, Cloud SQL) or operator-deployed in-cluster
  • Redis ≥ 7 (or a Valkey-compatible service such as Amazon ElastiCache)
  • An OIDC identity provider — Keycloak, Okta, Azure AD, Auth0, Google Workspace, or equivalent

Optional:

  • Amazon OpenSearch (only when analytics features are in use)
  • An S3-compatible object store (only when the realtime gateway is configured to persist AG-UI events)

Implementation

<Steps>

<Step>
### Prepare your Kubernetes cluster

Ensure `kubectl` points to the cluster that will run Intelligence.

```bash title="Terminal"
kubectl config current-context
kubectl auth can-i create namespace --all-namespaces
```

The context shown should be the target cluster, and the permission check should return `yes`. If either is wrong, fix your kubeconfig before proceeding.
</Step>

<Step>
### Install platform prerequisites

These components are cluster-wide and installed once per cluster, independently of the application chart.

<Tabs items={["AWS (EKS)", "On-prem / generic", "Local (Docker Desktop / k3d)"]}>
  <Tab value="AWS (EKS)">
    ```bash title="Terminal"
    # AWS Load Balancer Controller (kube-system)
    helm repo add eks https://aws.github.io/eks-charts
    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
      -n kube-system \
      --set clusterName=<YOUR_CLUSTER_NAME>

    # cert-manager
    helm repo add jetstack https://charts.jetstack.io
    helm install cert-manager jetstack/cert-manager \
      -n cert-manager --create-namespace \
      --set installCRDs=true

    # External Secrets Operator (optional — see Secrets step)
    helm repo add external-secrets https://charts.external-secrets.io
    helm install external-secrets external-secrets/external-secrets \
      -n external-secrets --create-namespace
    ```
  </Tab>
  <Tab value="On-prem / generic">
    ```bash title="Terminal"
    # NGINX Ingress Controller
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      -n ingress-nginx --create-namespace

    # cert-manager
    helm repo add jetstack https://charts.jetstack.io
    helm install cert-manager jetstack/cert-manager \
      -n cert-manager --create-namespace \
      --set installCRDs=true
    ```
  </Tab>
  <Tab value="Local (Docker Desktop / k3d)">
    ```bash title="Terminal"
    # NGINX Ingress Controller as ClusterIP — you will reach it via
    # `kubectl port-forward` later, so no LoadBalancer service is needed.
    helm upgrade --install ingress-nginx ingress-nginx \
      --repo https://kubernetes.github.io/ingress-nginx \
      --namespace ingress-nginx --create-namespace \
      --set controller.service.type=ClusterIP \
      --wait
    ```

    cert-manager and External Secrets Operator are not required for a local validation pass — TLS is terminated outside the cluster and secrets are managed by the chart (Path C below) or pre-created by hand (Path B).
  </Tab>
</Tabs>

After each controller is running, its pods should be `Ready` in their respective namespaces.
</Step>

<Step>
### Provision external dependencies

Intelligence needs Postgres, Redis, and an OIDC issuer. You can either point the chart at managed services you already run, or enable the bundled Bitnami subcharts for in-cluster Postgres and Redis (appropriate for evaluation and small self-hosted installs).

**Using managed services (recommended for production):**

- Create a Postgres database and user. Record the host, port (default `5432`), database name, username, and password.
- Create a Redis instance with TLS enabled. Record the host, port (default `6379`), and password.
- Configure an OIDC client in your identity provider. Record the issuer URL, client ID, and client secret.
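
To sanity-check the pieces you just recorded, the two connection-URL shapes expected in the Secrets step can be composed like this. Every hostname and credential below is a placeholder, not a real value:

```bash title="Terminal"
# Placeholder values — substitute what you recorded from your managed services.
DB_USER='intelligence_app'; DB_PASS='s3cret'
DB_HOST='db.internal.example.com'; DB_PORT=5432; DB_NAME='intelligence'
REDIS_PASS='r3dis'; REDIS_HOST='redis.internal.example.com'; REDIS_PORT=6379

# Postgres URL, and Redis URL using the rediss:// scheme (TLS, which managed
# Redis requires).
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
REDIS_URL="rediss://:${REDIS_PASS}@${REDIS_HOST}:${REDIS_PORT}"
echo "$DATABASE_URL"
echo "$REDIS_URL"
```

If your password contains URL-reserved characters, percent-encode it before pasting it into the URL.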

**Using the bundled in-cluster subcharts:**

Set `postgresql.enabled: true` and `redis-subchart.enabled: true` in your values file (covered in the next step). A matching `StorageClass` must exist in the cluster. The bundled Keycloak subchart is available via `keycloak.enabled: true` if you also need a quick OIDC provider for evaluation; do not use the bundled Keycloak for production workloads. See [Bundled Keycloak (eval only)](#bundled-keycloak-eval-only) for the realm and credentials it creates.

The chart already ships a tested overlay for this shape — `values-quickstart-local.yaml` — which enables bundled Postgres + Redis, sets `migrations.enabled: true`, sizes resources for a laptop, and creates disposable secrets so the install runs end-to-end with no manual prep. Layer your own overlay on top of it (see the next step) to plug in your IdP and ingress.
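
As a sketch, a bundled-everything eval overlay toggles just the keys named above (the quickstart overlay already sets an equivalent shape for you):

```yaml title="my-values.yaml"
# Eval-only shape: in-cluster Postgres, Redis, and Keycloak, plus the
# schema migrations Job required on first install.
postgresql:
  enabled: true
redis-subchart:
  enabled: true
keycloak:
  enabled: true   # eval-only OIDC — do not use in production
migrations:
  enabled: true
```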
</Step>

<Step>
### Create a values file

The released chart ships several example values files for the common deployment shapes. Pick the one closest to your environment and copy it into a working overlay you can edit. Pull and untar the chart so you have local copies to diff against:

```bash title="Terminal"
helm pull oci://ghcr.io/copilotkit/charts/intelligence --version <chart-version> --untar

# AWS-flavored (ALB, IRSA, External Secrets from AWS Secrets Manager)
cp intelligence/values-aws-example.yaml my-values.yaml

# Or on-prem-flavored (nginx, manual Kubernetes Secrets)
cp intelligence/values-onprem-example.yaml my-values.yaml

# Or self-hosted eval (bundled Keycloak + in-cluster Postgres/Redis)
cp intelligence/values-self-hosted-eval.yaml.example my-values.yaml
```

The chart untars into a directory named `intelligence/` (the published chart name on GHCR; the chart's `nameOverride` keeps release-prefixed resources named `cpki-*`).

Edit `my-values.yaml` to set at minimum:

- `database.host`, `database.port`, `database.name` — your Postgres connection (`name` defaults to `intelligence`)
- `redis.host`, `redis.port`, `redis.tls` — your Redis connection (TLS is on by default; managed Redis requires it)
- `auth.issuer` — your OIDC provider's issuer URL
- `auth.existingSecret` — name of the Kubernetes Secret containing `auth-secret`, `auth-client-id`, `auth-client-secret` (or use one of the alternate paths in [Secrets](#create-secrets))
- `ingress.ui.host` — the hostname users will load the Intelligence UI on (for example `intelligence.example.com`)
- `ingress.api.host` — optional dedicated API hostname. When omitted, the `ui.host` rule routes `/api` and `/auth` paths to `app-api`, so a single hostname is fine for most installs.
- `ingress.tls` — TLS configuration for the hosts above
- `migrations.enabled: true` — **required for first install**; defaults to `false`. Without it the database schema is never applied and `app-api` will crashloop. (The eval overlay `values-quickstart-local.yaml` sets this for you when you layer on top of it.)
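
Pulled together, a minimal overlay using the keys above might look like this. Hostnames and secret names are illustrative, and the `ingress.tls` entry assumes the standard Kubernetes Ingress TLS list shape; confirm the exact structure against the chart's `values.yaml`:

```yaml title="my-values.yaml"
database:
  host: db.internal.example.com   # illustrative
  port: 5432
  name: intelligence
redis:
  host: redis.internal.example.com  # illustrative
  port: 6379
  tls: true
auth:
  issuer: "https://idp.example.com/realms/prod"  # illustrative
  existingSecret: cpki-auth
ingress:
  ui:
    host: intelligence.example.com
  tls:
    - hosts:
        - intelligence.example.com
      secretName: intelligence-tls  # assumed standard Ingress TLS shape
migrations:
  enabled: true   # required on first install
```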

<Callout type="warn" title="OIDC issuer URL — trailing slash matters">
  Some providers (Auth0 in particular) only accept the issuer URL with a trailing slash (e.g. `https://your-tenant.auth0.com/`). A missing or extra slash produces an opaque "issuer mismatch" failure at login time. Match the value exactly to what your provider's discovery endpoint advertises.
</Callout>

See the [Configuration reference](#configuration-reference) section for the full set of values.
</Step>

<Step id="create-secrets">
### Create secrets

The chart supports three paths for secrets management. Pick exactly one.

**Path A — External Secrets Operator (recommended for production):**

1. Ensure your secret backend (AWS Secrets Manager, Vault, etc.) has entries for the database URL, Redis URL, and auth credentials.
2. Create a `ClusterSecretStore` (or `SecretStore`) that references that backend.
3. In `my-values.yaml`, set `externalSecrets.enabled: true`, `externalSecrets.store.kind`, and `externalSecrets.store.name` to match. The chart then generates `ExternalSecret` resources that sync those entries into Kubernetes Secrets at the names `app-api` expects.
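
For step 2, a `ClusterSecretStore` against AWS Secrets Manager looks roughly like this. The store name and the IRSA-annotated service account are illustrative; consult the External Secrets Operator documentation for your backend's exact provider block:

```yaml title="cluster-secret-store.yaml"
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager        # match externalSecrets.store.name
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets      # illustrative IRSA-annotated SA
            namespace: external-secrets
```

With that applied, set `externalSecrets.enabled: true`, `externalSecrets.store.kind: ClusterSecretStore`, and `externalSecrets.store.name: aws-secrets-manager` in your overlay.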

**Path B — Direct Kubernetes Secrets (you manage the rotations):**

Leave `externalSecrets.enabled: false` (the default) and create the Secrets manually before installing:

```bash title="Terminal"
kubectl create namespace copilot-intelligence

kubectl create secret generic cpki-db \
  --from-literal=database-url='postgresql://user:pass@host:5432/intelligence' \
  -n copilot-intelligence

kubectl create secret generic cpki-redis \
  --from-literal=redis-url='rediss://:password@host:6379' \
  -n copilot-intelligence

kubectl create secret generic cpki-auth \
  --from-literal=auth-secret="$(openssl rand -hex 32)" \
  --from-literal=auth-client-id='<OIDC client id>' \
  --from-literal=auth-client-secret='<OIDC client secret>' \
  -n copilot-intelligence
```

Reference these names in your values file via `database.existingSecret`, `redis.existingSecret`, and `auth.existingSecret`. The Secret keys are lowercase-hyphenated (`auth-secret`, `database-url`, `runner-auth-secret`); the workloads consume them as the corresponding uppercase env vars (`AUTH_SECRET`, `DATABASE_URL`, `RUNNER_AUTH_SECRET`).

**Path C — Chart-managed self-hosted secrets (simplest BYOC):**

Useful when you do not run a secret manager and prefer Helm to create the Kubernetes Secrets directly from values you provide at install time. Set `selfHostedSecrets.enabled: true` and supply the credentials inline:

```yaml title="my-values.yaml"
selfHostedSecrets:
  enabled: true
  db:
    url: "postgresql://user:pass@host:5432/intelligence"
  redis:
    url: "rediss://:password@host:6379"
  auth:
    # Auto-generated when left empty.
    secret: ""
    clientId: "<OIDC client id>"
    clientSecret: "<OIDC client secret>"
  realtimeGateway:
    # Auto-generated when left empty.
    runnerAuthSecret: ""
    secretKeyBase: ""
  beam:
    # Auto-generated when left empty.
    releaseCookie: ""
```

The chart auto-generates `auth.secret`, the realtime-gateway runner/key-base, and the BEAM cookie when those fields are empty, so you only need to provide what you actually have.
</Step>

<Step>
### (Optional) Enable schema migrations

The chart can run database schema migrations as a pre-install `Job`. This is **disabled by default** (`migrations.enabled: false`). If you want the chart to apply migrations for you on install, set the following in `my-values.yaml`:

```yaml title="my-values.yaml"
migrations:
  enabled: true
```

With this enabled, `helm install` blocks the rollout until the migrations Job reports `Completed`. Leave it disabled if you manage schema migrations out-of-band (for example, via your existing CI/CD or DBA pipeline).
</Step>

<Step>
### Install the chart

The release can be installed directly from the GHCR OCI registry — no local untar is required for the install itself. Use `helm upgrade --install` so the same command works for first-time installs and upgrades.

```bash title="Terminal"
helm upgrade --install copilot-intelligence \
  oci://ghcr.io/copilotkit/charts/intelligence \
  --version <chart-version> \
  -f my-values.yaml \
  -n copilot-intelligence \
  --create-namespace \
  --wait \
  --timeout 10m
```

Layering multiple values files is supported and is the recommended pattern for evaluation: combine the chart's bundled `values-quickstart-local.yaml` (in-cluster Postgres/Redis, eval-sized resources, `migrations.enabled: true`, disposable secrets) with your own overlay (IdP, ingress, anything cluster-specific). Pull the chart first so you have a local copy of `values-quickstart-local.yaml` to reference:

```bash title="Terminal"
helm upgrade --install copilot-intelligence \
  oci://ghcr.io/copilotkit/charts/intelligence \
  --version <chart-version> \
  -f intelligence/values-quickstart-local.yaml \
  -f my-values.yaml \
  -n copilot-intelligence --create-namespace \
  --wait --timeout 10m
```

`--wait` blocks until the `Deployments` report healthy replicas; `--timeout 10m` allows enough time for image pulls and (if you enabled it in the previous step) the initial database migration job. Right-most `-f` files win on conflicts, so put your overlay last.

<Callout type="info" title="When the migrations Job runs">
The migrations Job runs as a **pre-install/pre-upgrade** hook (weight `-5`) when secrets are pre-created (Path A or Path B above), so the schema is ready before app pods start. It runs as a **post-install/post-upgrade** hook (weight `5`) when secrets are managed by Helm (Path C, or when using `postgresql.enabled: true`), because the Secret resources don't exist until Helm has created them.
</Callout>
</Step>

<Step>
### Verify the install

Check that every pod is `Running` and the ingress is ready:

```bash title="Terminal"
kubectl get pods -n copilot-intelligence
kubectl get ingress -n copilot-intelligence
```

You should see `app-api`, `app-frontend`, and — if enabled — `realtime-gateway` pods running. If you opted into migrations (`migrations.enabled: true`, see the values step above), the migrations `Job` will also appear as `Completed`; if you left migrations disabled, no Job is created.

Confirm the API health check reports `ok`:

```bash title="Terminal"
curl https://<ingress.api.host>/api/health
```

The endpoint returns `200 OK` only when the database is reachable — a failed health check is almost always a database connectivity problem.

Service-specific health endpoints, useful when port-forwarding to an individual pod:

| Service | Path |
|---|---|
| `app-api` | `/api/health` |
| `app-frontend` | `/healthz` |
| `realtime-gateway` | `/health` |

Finally, browse to `https://<ingress.ui.host>` and log in via your OIDC provider. A successful login confirms end-to-end wiring.

<Callout type="info" title="Local validation — port-forward the ingress controller">
On a local cluster (Docker Desktop, k3d) without a public DNS name, port-forward the **ingress controller** rather than the frontend service so the UI host rule still routes `/api` and `/auth` to `app-api`. Set `ingress.ui.host: "localhost"` in your overlay, then leave this terminal open for as long as you're using the app:

```bash title="Terminal"
kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80
```

Browse to `http://localhost:8080`. Port-forwarding the `app-frontend` service directly bypasses the ingress and breaks `/api` and `/auth` routing.
</Callout>
</Step>

<Step>
### Upgrade and uninstall

**Upgrade** — bump the version in your install command and re-run it. Because the install command already uses `helm upgrade --install`, the same invocation works for both fresh installs and upgrades:

```bash title="Terminal"
helm upgrade --install copilot-intelligence \
  oci://ghcr.io/copilotkit/charts/intelligence \
  --version <new-chart-version> \
  -f my-values.yaml \
  -n copilot-intelligence \
  --wait
```

Before upgrading, regenerate the example values for the target version (`helm pull ... --version <new-chart-version> --untar`) and diff against your overlay to catch new keys.

**Uninstall** — if you enabled the bundled subcharts, their PersistentVolumeClaims (and the volumes behind them) are left in place by default; delete them manually if you intend to tear down state.

```bash title="Terminal"
helm uninstall copilot-intelligence -n copilot-intelligence
```
</Step>

</Steps>

Bundled Keycloak (eval only)

When `keycloak.enabled: true`, the chart deploys the Bitnami Keycloak subchart with a pre-seeded realm and demo user. This is for evaluation and demos — not production. The realm import creates:

  • Realm: `cpk-dev`
  • OIDC client: `cpk-self-hosted` with secret `cpk-self-hosted-secret` (override via `auth.keycloakClient.clientId` / `auth.keycloakClient.clientSecret`)
  • Demo user: `engineer` / `engineer` (override via `auth.keycloakDemoUser`)
  • Redirect URIs / web origins: default `["*"]` for eval flexibility (override via `auth.keycloakClient.redirectUris` / `webOrigins`)

The chart auto-wires `auth.issuer` to the in-cluster Keycloak service, so leaving `auth.issuer` empty when `keycloak.enabled: true` is intentional.
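
If you want to tighten the wide-open eval defaults, the overrides named above can be set in your overlay; the hostname below is illustrative:

```yaml title="my-values.yaml"
auth:
  keycloakClient:
    # Restrict the default ["*"] redirect URIs and web origins.
    redirectUris:
      - "https://intelligence.example.com/*"   # illustrative
    webOrigins:
      - "https://intelligence.example.com"     # illustrative
```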

For production self-hosted deployments, leave `keycloak.enabled: false` and point `auth.issuer` at your own IdP.

Configuration reference

The tables below summarize the most common values. For every option, see `values.yaml` in the pulled chart.

Global

| Key | Description | Default |
|---|---|---|
| `global.imageRegistry` | Registry prefix for unqualified image names | `""` |
| `global.intelligenceImageRegistry` | Registry prefix specifically for the five Intelligence service images | `""` |
| `global.imagePullSecrets` | Image pull secrets for private registries | `[]` |
| `global.storageClass` | StorageClass override for bundled subcharts | `""` |

Database

| Key | Description | Default |
|---|---|---|
| `database.host` | Postgres host | `""` (required) |
| `database.port` | Postgres port | `5432` |
| `database.name` | Database name | `intelligence` |
| `database.existingSecret` | Pre-existing Secret with `database-url` | `""` |
| `database.secretKeys.url` | Key inside the Secret holding the connection string | `database-url` |

Redis

| Key | Description | Default |
|---|---|---|
| `redis.host` | Redis host | `""` (required) |
| `redis.port` | Redis port | `6379` |
| `redis.tls` | Require TLS (ElastiCache defaults to on) | `true` |
| `redis.existingSecret` | Pre-existing Secret with `redis-url` | `""` |
| `redis.secretKeys.url` | Key inside the Secret holding the connection URL | `redis-url` |

OpenSearch (optional)

| Key | Description | Default |
|---|---|---|
| `openSearch.host` | OpenSearch domain endpoint | `""` |
| `openSearch.port` | Port | `443` |
| `openSearch.tls` | Require TLS | `true` |
| `openSearch.existingSecret` | Pre-existing Secret with `opensearch-url` | `""` |

Authentication

| Key | Description | Default |
|---|---|---|
| `auth.deploymentMode` | `self-hosted` (single org) or `hosted` (multi-org) | `self-hosted` |
| `auth.issuer` | OIDC issuer URL (auto-set when `keycloak.enabled: true`) | `""` |
| `auth.existingSecret` | Secret with `auth-secret`, `auth-client-id`, `auth-client-secret` | `""` |
| `auth.defaultOrganizationId` | Default organization ID in self-hosted mode | `default` |
| `auth.providerId` | Stable identifier for the OIDC provider | `enterprise-sso` |
| `auth.providerName` | Display name shown in the UI | `Enterprise SSO` |
| `auth.trustHost` | Trust the `X-Forwarded-Host` header (set behind a reverse proxy) | `"true"` |

Ingress

| Key | Description | Default |
|---|---|---|
| `ingress.enabled` | Create Ingress resources | `true` |
| `ingress.className` | `nginx` or `alb` | `nginx` |
| `ingress.ui.host` | UI hostname; the rule for this host routes `/api` and `/auth` to `app-api` and `/` to `app-frontend` | `""` (required) |
| `ingress.api.host` | Optional dedicated API hostname. When set, this hostname routes `/` to `app-api`. When empty, no separate API rule is created — the UI host already serves the API. | `""` |
| `ingress.realtimePlane.host` | Optional dedicated realtime hostname (only used when `realtimeGateway.enabled: true`) | `""` |
| `ingress.tls` | TLS configuration | `[]` |
| `ingress.websocket.enabled` | Add WebSocket-friendly annotations (auto-enabled when realtime-gateway is enabled with nginx) | `false` |
| `ingress.annotations` | Additional ingress annotations | `{}` |

Services (appApi, appFrontend, realtimeGateway)

| Key | Description | Default (`appApi`) | Default (`appFrontend`) | Default (`realtimeGateway`) |
|---|---|---|---|---|
| `<svc>.enabled` | Enable the service | `true` | `true` | `false` |
| `<svc>.replicaCount` | Replicas | `2` | `2` | `2` |
| `<svc>.image.repository` | Image repository (published chart fully-qualifies these to `ghcr.io/copilotkit/intelligence/<svc>`) | `intelligence/app-api` | `intelligence/app-frontend` | `intelligence/realtime-gateway` |
| `<svc>.image.tag` | Image tag (defaults to chart `appVersion`) | `""` | `""` | `""` |
| `<svc>.resources` | CPU/memory requests | `250m / 512Mi` | `100m / 128Mi` | `500m / 512Mi` |
| `<svc>.autoscaling.enabled` | Enable HPA | `true` | `false` | `true` |
| `<svc>.autoscaling.minReplicas` | HPA minimum | `2` | `2` | `2` |
| `<svc>.autoscaling.maxReplicas` | HPA maximum | `10` | `4` | `10` |
| `<svc>.serviceAccount.annotations` | Annotations on the ServiceAccount (IRSA, workload identity) | `{}` | `{}` | `{}` |
| `<svc>.podAnnotations` | Pod template annotations (e.g. for Stakater Reloader on ESO secret rotation) | `{}` | n/a | `{}` |
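
For example, on EKS the per-service ServiceAccount annotation is where an IRSA role attaches. The role ARN below is illustrative:

```yaml title="my-values.yaml"
appApi:
  serviceAccount:
    annotations:
      # Illustrative IRSA role granting app-api its AWS permissions.
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/intelligence-app-api
```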

Realtime gateway (additional keys)

| Key | Description | Default |
|---|---|---|
| `realtimeGateway.enabled` | Enable the gateway | `false` |
| `realtimeGateway.host` | `PHX_HOST` override | `""` |
| `realtimeGateway.existingSecret` | Secret containing keys `runner-auth-secret` and `secret-key-base` (mapped to env vars `RUNNER_AUTH_SECRET` / `SECRET_KEY_BASE`) | `""` |
| `realtimeGateway.beam.clustering.enabled` | BEAM clustering across replicas | `true` |
| `realtimeGateway.beam.cookieSecret.name` | Secret containing the BEAM cookie | `cpki-beam-cookie` |

Enabling the realtime gateway requires that either `realtimeGateway.existingSecret` is set, or that `externalSecrets.secrets.realtimeGateway.enabled` or `selfHostedSecrets.enabled` is `true` — the chart fails validation otherwise.
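
A minimal way to satisfy that validation with a pre-created Secret (Path B style; the secret name is illustrative):

```yaml title="my-values.yaml"
realtimeGateway:
  enabled: true
  # Illustrative Secret name; it must contain the keys
  # runner-auth-secret and secret-key-base.
  existingSecret: cpki-realtime
```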

External Secrets Operator integration

| Key | Description | Default |
|---|---|---|
| `externalSecrets.enabled` | Generate `ExternalSecret` resources | `false` |
| `externalSecrets.store.kind` | `ClusterSecretStore` or `SecretStore` | `ClusterSecretStore` |
| `externalSecrets.store.name` | SecretStore name | `""` (required when enabled) |
| `externalSecrets.refreshInterval` | How often ESO syncs | `1h` |
| `externalSecrets.secrets.*` | Per-secret mappings (see `values.yaml`) | |

Self-hosted (chart-managed) secrets

| Key | Description | Default |
|---|---|---|
| `selfHostedSecrets.enabled` | Create Kubernetes Secrets from inline values; auto-generates blank fields | `false` |
| `selfHostedSecrets.db.url` | Postgres connection URL | `""` (required when enabled) |
| `selfHostedSecrets.redis.url` | Redis connection URL | `""` (required when enabled) |
| `selfHostedSecrets.auth.clientId` / `clientSecret` | OIDC client credentials | `""` (required when enabled) |
| `selfHostedSecrets.auth.secret` | Internal auth signing secret | auto-generated when empty |
| `selfHostedSecrets.realtimeGateway.runnerAuthSecret` / `secretKeyBase` | Realtime gateway secrets | auto-generated when empty |
| `selfHostedSecrets.beam.releaseCookie` | BEAM clustering cookie | auto-generated when empty |

Bundled subcharts (evaluation only)

| Key | Description | Default |
|---|---|---|
| `postgresql.enabled` | Deploy in-cluster Postgres | `false` |
| `postgresql.auth.password` | Postgres password (set at deploy time) | `""` |
| `redis-subchart.enabled` | Deploy in-cluster Redis (aliased to avoid collision with `redis.*`) | `false` |
| `redis-subchart.auth.password` | Redis password | `""` |
| `keycloak.enabled` | Deploy bundled Keycloak for quick eval | `false` |

Object storage (realtime gateway event persistence)

| Key | Description | Default |
|---|---|---|
| `objectStorage.enabled` | Persist AG-UI events from the realtime gateway to S3-compatible storage | `false` |
| `objectStorage.bucket` | Bucket name | `""` |
| `objectStorage.region` | Bucket region | `us-east-1` |
| `objectStorage.endpoint` | S3-compatible endpoint override (e.g. for MinIO) | `""` |
| `objectStorage.forcePathStyle` | Force path-style addressing (required for MinIO) | `false` |
| `objectStorage.existingSecret` | Secret with static access keys (optional if using IRSA) | `""` |
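
For instance, pointing the gateway at an in-cluster MinIO might look roughly like this; the bucket, endpoint, and secret name are illustrative:

```yaml title="my-values.yaml"
objectStorage:
  enabled: true
  bucket: intelligence-agui-events                        # illustrative
  region: us-east-1
  endpoint: "http://minio.minio.svc.cluster.local:9000"   # illustrative
  forcePathStyle: true            # required for MinIO
  existingSecret: cpki-object-storage                     # illustrative
```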

Database migrations

| Key | Description | Default |
|---|---|---|
| `migrations.enabled` | Run the migrations Job (required for first install) | `false` |
| `migrations.image.repository` | Migrations image repository | `intelligence/db-migrations` |
| `migrations.activeDeadlineSeconds` | Job deadline | `1800` |
| `migrations.backoffLimit` | Retry count before failing | `3` |

The migrations Job runs as a pre-install/pre-upgrade Helm hook (weight `-5`) when secrets are pre-created (External Secrets path or manual `existingSecret`) and as a post-install/post-upgrade hook (weight `5`) when secrets are managed by Helm itself (`selfHostedSecrets.enabled` or `postgresql.enabled`).

Thread culler (CronJob)

| Key | Description | Default |
|---|---|---|
| `threadCuller.enabled` | Run a CronJob that soft-deletes stale threads in unlicensed deployments | `false` |
| `threadCuller.schedule` | Cron expression | `0 * * * *` |
| `threadCuller.staleHours` | Threads older than this many hours (since last update) are culled | `"3"` |
| `threadCuller.batchSize` | Maximum threads to cull per run | `"1000"` |
| `threadCuller.licenseSecret.existingSecret` | Secret containing `COPILOTKIT_LICENSE_TOKEN`. When set, the CronJob skips culling (licensed install); when empty, it culls. | `""` |

Shared config (CORS, logging)

| Key | Description | Default |
|---|---|---|
| `config.logLevel` | Log level for all services (`trace`/`debug`/`info`/`warn`/`error`/`fatal`) | `info` |
| `config.nodeEnv` | Node environment; affects cookie security and runtime defaults | `production` |
| `config.appFrontendOrigin` | Browser origin allowed to perform authenticated bootstrap writes | `""` |
| `config.publicAppOrigin` | Public UI origin used by server-side callbacks when distinct from `appFrontendOrigin` | `""` |
| `config.allowedOrigins` | Additional CORS allowlist (comma-separated). Entries are exact origins (`https://app.example.com`) or Phoenix-style `//host` patterns | `""` |
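
As an illustration of the entry formats the table describes (all hostnames are placeholders):

```yaml title="my-values.yaml"
config:
  appFrontendOrigin: "https://intelligence.example.com"
  # One exact origin plus one Phoenix-style //host pattern, comma-separated:
  allowedOrigins: "https://admin.example.com,//intelligence.internal.example.com"
```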

Pod-level controls

Per-service keys `podDisruptionBudget`, `podAntiAffinity`, and `networkPolicy` are available for high-availability and traffic-isolation requirements. See `values.yaml` for full shapes.

Next steps

  • Understand how it works: How the Enterprise Intelligence Platform Works — architecture, multi-tenancy model, platform layering, and the decision between hosted and self-hosted
  • Premium features overview: CopilotKit Premium — all premium capabilities that require an Intelligence license
  • Use threads in your app: Threads — the persistent-conversation surface powered by the Enterprise Intelligence Platform you just deployed