design/defunct/one-pager-host-aware-stack-manager.md
Crossplane controllers manage external resources declaratively via Kubernetes Custom Resources. They do not need workload scheduling capabilities and only need access to a Kubernetes API Server to act on custom resources. Hence, it should not be mandatory to have a full Kubernetes Cluster (with scheduling capabilities) as long as the controllers are running somewhere and are configured against a Kubernetes API server instance.
Stack Manager is responsible for the deployment of Crossplane controllers packaged as stacks. Currently, it assumes that
the Kubernetes API where controller pods are deployed is the same one where the custom resources exist, which makes it
mandatory to have a dedicated Kubernetes Cluster per Crossplane installation.
Stack Manager should support configuring different Kubernetes API Servers for scheduling stack controllers and watching custom resources. This will enable deploying multiple isolated Crossplane instances watching dedicated Kubernetes API instances (i.e. multiple API instances running in different namespaces) on a single Host Kubernetes cluster.
Stack Manager Changes:
Stack Manager should watch stack related CRs on Tenant Instance API Server but should schedule stack
installation jobs and controller deployments on Host Cluster.
The suggested way to access the Kubernetes API from a pod is using the in-cluster config.
However, for this proposal, we need to authenticate against two different Kubernetes APIs inside the stack manager pod;
we can only use the in-cluster config for one of them and will take a kubeconfig as a parameter for the other.
- Option 1: Use the in-cluster config for the Host Cluster. The Host Cluster already auto-mounts service account tokens
which we can use.
- Option 2: Use the in-cluster config for the Tenant Instance. This would be consistent with the other controllers
(Crossplane and stack controllers).
To leverage existing mechanisms to authenticate to the Host Cluster, we will use Option 1. We will define RBAC rules for
the stack manager as part of the crossplane-controllers helm chart (see packaging changes)
that will be deployed into the Host Cluster. This way, we can explicitly declare the permissions of the stack manager on
the Host Cluster.
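As an illustration, here is a minimal sketch (not the actual stack manager code) of how the two clients could be built with client-go under this proposal; the `--tenant-kubeconfig` flag name is an assumption made for the example:

```go
// Sketch: building clients for both API servers inside the stack manager pod.
// The --tenant-kubeconfig flag name is illustrative, not the real flag.
package main

import (
	"flag"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	tenantKubeconfig := flag.String("tenant-kubeconfig", "", "path to kubeconfig for Tenant Instance API Server")
	flag.Parse()

	// Option 1: the Host Cluster is reached via the auto-mounted service
	// account token (in-cluster config).
	hostCfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	hostClient := kubernetes.NewForConfigOrDie(hostCfg)

	// The Tenant Instance is reached via an explicitly provided kubeconfig.
	tenantCfg, err := clientcmd.BuildConfigFromFlags("", *tenantKubeconfig)
	if err != nil {
		panic(err)
	}
	tenantClient := kubernetes.NewForConfigOrDie(tenantCfg)

	_, _ = hostClient, tenantClient // used by the stack manager controllers
}
```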
In case of a stack installation, Stack Manager should:

- schedule the stack install job on Host Cluster,
- create the Stack, CRDs and RBAC resources on Tenant Instance, and
- schedule the stack controller deployment (together with the required service account token secret) on Host Cluster.

While creating resources (token secrets or controller deployments) on Host Cluster, Stack Manager will map objects
(jobs, deployments, secrets ...) in multiple namespaces of Tenant Instance into a single namespace on Host Cluster
by prepending `<namespace>.` to the object name. For example:
| Tenant Instance (Namespace / Name) | Host Cluster (Namespace / Name) |
|---|---|
| crossplane-system / provider-gcp | tenant-n-controllers / crossplane-system.provider-gcp |
| dev / stack-wordpress | tenant-n-controllers / dev.stack-wordpress |
This mapping imposes the following length limit on resource names:

`len(namespace) + len(resource-name) + 1 <= 253`
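A minimal sketch of the mapping and length check described above; the function and constant names are illustrative only:

```go
// Sketch of the Tenant -> Host name mapping; names are illustrative.
package mapping

import "fmt"

// kubernetesNameLimit is the maximum length of most Kubernetes object names.
const kubernetesNameLimit = 253

// hostName maps an object in a Tenant Instance namespace into the single
// Host Cluster namespace by prepending "<namespace>." to its name.
func hostName(tenantNamespace, name string) (string, error) {
	mapped := fmt.Sprintf("%s.%s", tenantNamespace, name)
	// len(namespace) + len(resource-name) + 1 must not exceed 253.
	if len(mapped) > kubernetesNameLimit {
		return "", fmt.Errorf("mapped name %q exceeds %d characters", mapped, kubernetesNameLimit)
	}
	return mapped, nil
}

// Example: hostName("crossplane-system", "provider-gcp")
// returns "crossplane-system.provider-gcp".
```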
Right now, this will affect the names of the stack install jobs and stack controller deployments (and their token
secrets) created on Host Cluster. However, considering the following, this should not cause a new limitation for
Crossplane: stack names are already used as label values (e.g. `core.crossplane.io/parent-name: provider-gcp`) and
Kubernetes labels are limited to 63 chars.

Stack Manager can no longer rely on owner references for the deletion of stack install jobs and controller deployments,
since the stack related custom resources and the jobs/deployments live in different Kubernetes API Servers. Label based
cleanup will be implemented as a solution.
One extra artifact that needs to be deleted per Stack uninstallation is the token secret created on Host Cluster. This will be achieved by setting the owner of that secret as the Stack Controller Deployment, which lives in the same Cluster/Namespace.
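As a rough sketch of the label based cleanup, the stack manager could delete Host Cluster artifacts by label selector; the label keys below extend the `core.crossplane.io/parent-name` label mentioned above, and the `parent-namespace` key is an assumption made for the example:

```go
// Sketch of label based cleanup on Host Cluster. Label keys/values are
// illustrative; the real labels are whatever the stack manager stamps onto
// the jobs and deployments it creates.
package cleanup

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupStackArtifacts deletes the install jobs and controller deployments
// created on Host Cluster for a given Stack on Tenant Instance.
func cleanupStackArtifacts(ctx context.Context, host kubernetes.Interface, hostNamespace, tenantNamespace, stackName string) error {
	selector := metav1.ListOptions{
		// parent-namespace is a hypothetical label key used for illustration.
		LabelSelector: fmt.Sprintf("core.crossplane.io/parent-namespace=%s,core.crossplane.io/parent-name=%s", tenantNamespace, stackName),
	}
	if err := host.AppsV1().Deployments(hostNamespace).DeleteCollection(ctx, metav1.DeleteOptions{}, selector); err != nil {
		return err
	}
	return host.BatchV1().Jobs(hostNamespace).DeleteCollection(ctx, metav1.DeleteOptions{}, selector)
}
```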
Packaging Changes:
We need to deploy crossplane types and roles (CRDs and RBAC resources) separately from the Crossplane and Stack Manager
pods. This could be realized by introducing some new helm parameters to the existing Crossplane helm chart; however,
helm 3 does not support templating in the crds directory, which blocks conditional installation.
We will introduce the following two helm (helm 3) charts:
`crossplane-types`: this is the piece that will be deployed into Tenant Instance. This chart includes the CRDs and
the RBAC resources related to those CRDs.

```
.
├── Chart.yaml
├── crds
│ ├── cache.crossplane.io_redisclusters.yaml
│ ├── compute.crossplane.io_kubernetesclusters.yaml
│ ├── compute.crossplane.io_machineinstances.yaml
│ ├── database.crossplane.io_mysqlinstances.yaml
│ ├── database.crossplane.io_postgresqlinstances.yaml
│ ├── kubernetes.crossplane.io_providers.yaml
│ ├── stacks.crossplane.io_clusterstackinstalls.yaml
│ ├── stacks.crossplane.io_stackinstalls.yaml
│ ├── stacks.crossplane.io_stacks.yaml
│ ├── storage.crossplane.io_buckets.yaml
│ ├── workload.crossplane.io_kubernetesapplicationresources.yaml
│ └── workload.crossplane.io_kubernetesapplications.yaml
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── crossplane-admin-clusterrole.yaml
│ ├── crossplane-clusterrole.yaml
│ ├── crossplane-clusterrolebinding.yaml
│ ├── crossplane-serviceaccount.yaml
│ ├── environment-personas-clusterroles.yaml
│ ├── namespace-personas-clusterroles.yaml
│ ├── stack-manager-clusterrolebinding.yaml
│ └── stack-manager-serviceaccount.yaml
├── values.yaml
└── values.yaml.tmpl
```
`crossplane-controllers`: this is the piece that will be deployed into Host Cluster. This chart includes the crossplane
and stack-manager deployments and also the RBAC resources necessary for the stack manager on Host Cluster.

```
.
├── Chart.yaml
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── crossplane-deployment.yaml
│ ├── stack-manager-deployment.yaml
│ ├── stack-manager-host-role.yaml
│ ├── stack-manager-host-rolebinding.yaml
│ └── stack-manager-host-serviceaccount.yaml
├── values.yaml
└── values.yaml.tmpl
```
Once we drop helm 2 support, we can create an umbrella chart for Crossplane which depends on these two charts and
use the same definitions for regular installations as well. One problem that we would need to solve in this case is
defining requirements.yaml to point to the correct helm repo/channel. This is left out of the scope of this document.
A pod created inside a Kubernetes Cluster can contact the API Server as follows:

- It locates the API Server via the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables.
- It authenticates with the token of its auto-mounted service account.

See accessing the API from a pod for more details.
To configure a Pod against Tenant Instance API Server:

- Set the KUBERNETES_SERVICE_HOST environment variable in the pod definition to the name of the service backed by Tenant Kubernetes API Server.
- Set the KUBERNETES_SERVICE_PORT environment variable to the port of Tenant Instance API Server, e.g. 6443.
- Set spec.automountServiceAccountToken to false.
- Leave spec.serviceAccountName and spec.deprecatedServiceAccount unset.
- Mount the Tenant Instance service account token secret at /var/run/secrets/kubernetes.io/serviceaccount.