designs/helm-chart-autogenerate-plugin.md
| Authors | Creation Date | Status | Extra |
|---|---|---|---|
| @dashanji,@camilamacedo86,@LCaparelli | Sep, 2023 | Implemented | - |
This proposal aims to introduce an optional mechanism that allows users to generate a Helm Chart from their Kubebuilder-scaffolded project, enabling them to effectively package and distribute their solutions.

To achieve this goal, we propose a new native Kubebuilder plugin (i.e., `helm-chart/v1-alpha`) which will provide the necessary scaffolds. The plugin will function similarly to the existing Grafana plugin, generating or regenerating Helm Chart files via the `init` and `edit` sub-commands (i.e., `kubebuilder init|edit --plugins helm-chart/v1-alpha`).

An alternative solution could be to implement an alpha command, similar to the helper provided to upgrade projects, which would generate the Helm Chart under the `dist` directory, similar to what is done by helmify.
To enable the Helm Chart generation when a project is initialized:

```shell
kubebuilder init --plugins=go/v4,helm/v1-alpha
```
To enable the Helm Chart generation after the project has been scaffolded:

```shell
kubebuilder edit --plugins=helm/v1-alpha
```
Note that the Helm Chart should be scaffolded under the `dist/` directory in both scenarios:

```shell
example-project/
└── dist/
    └── chart/
```
To sync the Helm Chart with the latest changes and add the generated manifests:

```shell
kubebuilder edit --plugins=helm/v1-alpha
```
The above command will be responsible for ensuring that the Helm Chart is properly updated with the latest changes in the project, including the files generated by controller-gen when users run `make manifests`.
According to Helm Best Practices for Custom Resource Definitions, there are two main methods for handling CRDs:

- Placing the CRDs under the `crds/` directory. Helm installs these CRDs during the initial install but does not manage upgrades or deletions.
- Placing the CRDs under the `templates/` folder. This facilitates upgrades but uninstalls the CRDs when the operator is uninstalled. However, it allows users to more easily manage the CRDs and install them on upgrades. It is a common approach adopted by maintainers but is not considered a good practice by Helm itself.

## Raised Considerations and Concerns

One suggestion raised was to scaffold separate charts for the CRDs and for the service/workload. It would also make sense to include validating/mutating webhooks, but that would require scaffolding separate main modules and image builds for webhooks and controllers, which does not appear to be compatible with the Kubebuilder Golang scaffold.

## Proposed Solution
Follow the same approach adopted by Cert-Manager.
Add the CRDs under the `templates` directory and have a spec in the `values.yaml` which defines whether the CRDs should be applied:

```shell
helm install|upgrade \
  myrelease \
  --namespace my-namespace \
  --set crds.enabled=true
```
Also, add another spec to the `values.yaml` so that the CRDs are not deleted when the Helm release is uninstalled:

```yaml
{{- if .Values.crds.keep }}
annotations:
  helm.sh/resource-policy: keep
{{- end }}
```
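To sanity-check how this `crds.keep` conditional behaves, the sketch below renders an equivalent template with Go's `text/template` engine, on which Helm's templating is built. The resource name and the `render` helper are illustrative only, not part of the proposed scaffold.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The template mirrors the keep-annotation snippet: the annotation block
// is emitted only when .Values.crds.keep is true.
const crdMeta = `metadata:
  name: examples.example.com
{{- if .Values.crds.keep }}
  annotations:
    helm.sh/resource-policy: keep
{{- end }}
`

// render executes the template against a minimal stand-in for Helm's
// .Values tree, with crds.keep set as requested.
func render(keep bool) string {
	tmpl := template.Must(template.New("crd").Parse(crdMeta))
	data := map[string]any{
		"Values": map[string]any{
			"crds": map[string]any{"keep": keep},
		},
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render(true))
	fmt.Print(render(false))
}
```

With `keep: true`, the rendered metadata carries the `helm.sh/resource-policy: keep` annotation, so `helm uninstall` leaves the CRDs in place; with `keep: false`, the annotation block is omitted entirely.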
Additionally, we might want to scaffold separate charts for the APIs and support both. An example of this approach provided as feedback was karpenter-provider-aws.
We should make the usage of both supported approaches clear and clarify their limitations. The proposed solution would result in the following layout:
```
example-project/
└── dist/
    └── chart/
        ├── example-project-crd/
        │   ├── Chart.yaml
        │   ├── templates/
        │   │   ├── _helpers.tpl
        │   │   └── crds/
        │   │       └── <CRDs YAML files generated under config/crds/>
        │   └── values.yaml
        └── example-project/
            ├── Chart.yaml
            ├── templates/
            │   ├── _helpers.tpl
            │   ├── crds/
            │   │   └── <CRDs YAML files generated under config/crds/>
            │   ├── ...
```
Helm charts allow maintainers to define dependencies via the Chart.yaml file.
However, in the initial version of this plugin at least, we do not need to consider management of dependencies.
Adding dependencies such as Cert-Manager and Prometheus directly in the Chart.yaml
could introduce issues since these components are intended to be installed only once per cluster.
Attempting to manage multiple installations could lead to conflicts and cause unintended behaviors,
especially in shared cluster environments.
To avoid these issues, the plugin for now will not scaffold this file and will not try to manage it. Instead, users will be responsible for managing these dependencies outside of the generated Helm chart, ensuring they are correctly installed and only installed once in the cluster.
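For illustration only, the kind of `Chart.yaml` dependencies stanza that the plugin deliberately does *not* scaffold would look like the following (the chart version and pinned dependency version are placeholders):

```yaml
# Chart.yaml -- NOT scaffolded by the plugin; shown only to illustrate
# the dependency mechanism this proposal avoids.
apiVersion: v2
name: example-project
version: 0.1.0
dependencies:
  - name: cert-manager
    version: "v1.x.x"
    repository: "https://charts.jetstack.io"
    condition: certmanager.enabled
```

If such a stanza were scaffolded, every release of the chart would attempt to manage its own cert-manager installation, which is exactly the once-per-cluster conflict described above.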
Currently, projects scaffolded with Kubebuilder can be distributed via YAML. Users can run `make build-installer IMG=<some-registry>/<project-name>:tag`, which will generate `dist/install.yaml`. Therefore, consumers can install the solution by applying this YAML file, such as:

```shell
kubectl apply -f https://raw.githubusercontent.com/<org>/<project-name>/<tag or branch>/dist/install.yaml
```

However, many adopted solutions require the Helm Chart format, such as FluxCD. Therefore, maintainers are looking to also provide their solutions via Helm Chart, but they currently lack an officially supported distribution mechanism for Helm Charts.

Consequently, this proposal aims to introduce a method that allows Kubebuilder users to easily distribute their projects through Helm Charts, a strategy that many well-known projects have adopted.
NOTE: For further context see the discussion topic
**Location and Versioning**: The new plugin should follow Kubebuilder standards and be implemented under `pkg/plugins/optional`. It should be introduced as an alpha version (`v1alpha`), similar to the Grafana plugin.
**Tracking in the PROJECT file**: Usage of the plugin should be tracked in the PROJECT file, along with the input provided via flags and options if required. Example entry in the PROJECT file:

```yaml
...
plugins:
  helm.go.kubebuilder.io/v1-alpha:
    options: ## (If ANY)
      <flag/key>: <value>
```

Ensure that user-provided input is properly tracked, similar to how it is done in other plugins (see the code in `plugin.go` and, for reference, the code that tracks the data for the deploy-image plugin).
NOTE: We might not need options/flags in the first implementation. However, we should still track the plugin as we do for the Grafana plugin.
The following structure is proposed for the source code of this plugin:

```
.
├── helm-chart
│   └── v1alpha1
│       ├── init.go
│       ├── edit.go
│       ├── plugin.go
│       └── scaffolds
│           ├── init.go
│           ├── edit.go
│           └── internal
│               └── templates
```
For each sub-command, we will need to check the resources that are scaffolded via the kustomize plugin and ensure that the corresponding sub-command of the HelmChart plugin produces the respective scaffolds as well.
Users will need to call the `edit` sub-command, passing the plugin, to ensure that the Helm chart is properly synced. Therefore, the `PostScaffold` of this command could perform steps such as:

- Run `make manifests` to generate the latest CRDs and other manifests with controller-gen.
- `cp config/crd/bases/*.yaml chart/example-project-crd/templates/crds/`
- `cp config/rbac/*.yaml chart/example-project/templates/rbac/`
- `cp config/webhook/*.yaml chart/example-project/templates/webhook/`
- `cp config/default/manager.yaml chart/example-project/templates/manager/manager.yaml`
- In `manager.yaml`, replace values such as `name: system` with `{{ .Release.Name }}`.

This ensures the Helm chart is always up-to-date with the latest manifests generated by Kubebuilder, maintaining consistency with the configured namespace and other customizable fields.

We will need to use util helpers such as `ReplaceInFile` or `EnsureExistAndReplace` to achieve this goal.
By default, the `values.yaml` file should not be overwritten. However, users should have the option to overwrite it using a flag (`--force=true`). This can be implemented in the specific template as done for other plugins:

```go
if f.Force {
	f.IfExistsAction = machinery.OverwriteFile
} else {
	f.IfExistsAction = machinery.Error
}
```
NOTE: We will evaluate these cases when we implement `webhook.go` and `api.go` for the HelmChart plugin. However, we might use the force flag to replicate the same behavior implemented in the sub-commands of the kustomize plugin. For instance, if the flag is used when creating an API, it forces the overwriting of the generated samples. Similarly, if the `api` sub-command of the HelmChart plugin is called with `--force`, we should replace all samples with the latest versions instead of only adding the new one.
Ensure templates install resources based on conditions defined in the `values.yaml`. Example for CRDs:

```yaml
# To install CRDs
{{- if .Values.crd.enable }}
...
{{- end }}
```
Users should be able to customize configurations via the `values.yaml`, such as defining ServiceAccount names and whether they should be created or not. Furthermore, we should include comments to help end-users understand the source of configurations. Example:

```yaml
{{- if .Values.rbac.enable }}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: project-v4
    app.kubernetes.io/managed-by: kustomize
  name: {{ .Values.rbac.serviceAccountName }}
  namespace: {{ .Release.Namespace }}
{{- end }}
```
The following example illustrates the expected `values.yaml` produced by this plugin:

```yaml
# Install CRDs under the template
crd:
  enable: false
  keep: true
# Webhook configuration sourced from the `config/webhook`
webhook:
  enabled: true
  conversion:
    enabled: true
## RBAC configuration under the `config/rbac` directory
rbac:
  create: true
  serviceAccountName: "controller-manager"
# Cert-manager configuration
certmanager:
  enabled: false
  issuerName: "letsencrypt-prod"
  commonName: "example.com"
  dnsName: "example.com"
# Network policy configuration sourced from the `config/network_policy`
networkPolicy:
  enabled: false
# Prometheus configuration
prometheus:
  enabled: false
# Manager configuration sourced from the `config/manager`
manager:
  replicas: 1
  image:
    repository: "controller"
    tag: "latest"
  resources:
    limits:
      cpu: 100m
      memory: 128Mi
    requests:
      cpu: 100m
      memory: 64Mi
# Metrics configuration sourced from the `config/metrics`
metrics:
  enabled: true
# Leader election configuration sourced from the `config/leader_election`
leaderElection:
  enabled: true
  role: "leader-election-role"
  rolebinding: "leader-election-rolebinding"
# Controller Manager configuration sourced from the `config/manager`
controllerManager:
  manager:
    args:
      - --metrics-bind-address=:8443
      - --leader-elect
      - --health-probe-bind-address=:8081
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    image:
      repository: controller
      tag: latest
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 64Mi
  replicas: 1
  serviceAccount:
    annotations: {}
# Kubernetes cluster domain configuration
kubernetesClusterDomain: cluster.local
# Metrics service configuration sourced from the `config/metrics`
metricsService:
  ports:
    - name: https
      port: 8443
      protocol: TCP
      targetPort: 8443
  type: ClusterIP
# Webhook service configuration sourced from the `config/webhook`
webhookService:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 9443
  type: ClusterIP
```
The HelmChart plugin should not scaffold optional features as enabled when those are scaffolded as disabled by the default implementation of kustomize/v2, and consequently of the go/v4 plugin used by default. For example, the dependency on Cert-Manager is disabled by default in `config/default/kustomization.yaml`:

```yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
```

Therefore, by default the `values.yaml` should be scaffolded with:

```yaml
# Cert-manager configuration
certmanager:
  enabled: false
```
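A minimal sketch of how the plugin could detect this default, assuming it simply inspects whether the `- ../certmanager` entry is commented out in `config/default/kustomization.yaml` (the function and sample content are illustrative, not the proposed implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// certManagerEnabled reports whether the cert-manager overlay is active in
// a config/default/kustomization.yaml body: the "- ../certmanager" resource
// entry counts only when it is not commented out. Hypothetical sketch of
// the check the plugin could perform before scaffolding values.yaml defaults.
func certManagerEnabled(kustomization string) bool {
	for _, line := range strings.Split(kustomization, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "#") {
			continue // commented out => disabled
		}
		if trimmed == "- ../certmanager" {
			return true
		}
	}
	return false
}

func main() {
	scaffolded := "resources:\n- ../crd\n- ../rbac\n#- ../certmanager\n"
	// Default scaffold keeps the overlay commented out, so it is disabled.
	fmt.Println(certManagerEnabled(scaffolded))
	// Once the user uncomments the entry, the plugin should flip the default.
	fmt.Println(certManagerEnabled(strings.ReplaceAll(scaffolded, "#- ../certmanager", "- ../certmanager")))
}
```

The same pattern would apply to the other commented-out overlays (Prometheus, network policies): the `values.yaml` default mirrors whatever state the kustomize scaffold currently has.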
The following example illustrates the expected layout produced by this plugin:

```
example-project/
└── dist/
    └── chart/
        ├── example-project-crd/
        │   ├── Chart.yaml
        │   ├── templates/
        │   │   ├── _helpers.tpl
        │   │   └── crds/
        │   │       └── <CRDs YAML files generated under config/crds/>
        │   └── values.yaml
        └── example-project/
            ├── Chart.yaml
            ├── templates/
            │   ├── _helpers.tpl
            │   ├── crds/
            │   │   └── <CRDs YAML files generated under config/crds/>
            │   ├── certmanager/
            │   │   └── certificate.yaml
            │   ├── manager/
            │   │   └── manager.yaml
            │   ├── network-policy/
            │   │   ├── allow-metrics-traffic.yaml
            │   │   └── allow-webhook-traffic.yaml  # Added by the plugin sub-command webhook.go
            │   ├── prometheus/
            │   │   └── monitor.yaml
            │   ├── rbac/
            │   │   ├── kind_editor_role.yaml
            │   │   ├── kind_viewer_role.yaml
            │   │   ├── leader_election_role.yaml
            │   │   ├── leader_election_role_binding.yaml
            │   │   ├── metrics_auth_role.yaml
            │   │   ├── metrics_auth_role_binding.yaml
            │   │   ├── metrics_reader_role.yaml
            │   │   ├── role.yaml
            │   │   ├── role_binding.yaml
            │   │   └── service_account.yaml
            │   ├── samples/
            │   │   └── kind_version_admiral.yaml
            │   └── webhook/
            │       ├── manifests.yaml
            │       └── service.yaml
            └── values.yaml
```
A `README.md` is scaffolded for the projects (see its implementation here). Therefore, if the project is scaffolded with the HelmChart plugin, we should update the Distribution section of the README to add info and steps on how to keep the Helm Chart synced.
To ensure that the new plugin works well, we will need to validate it in the same way as the existing plugins. The new plugin should also be properly documented, as the others are.
### Difficulty in Maintaining the Solution
Maintaining the solution may prove challenging in the long term, particularly if it does not gain community adoption and, consequently, collaboration. To mitigate this risk, the proposal aims to introduce an optional alpha plugin or to implement it through an alpha command. This approach provides us with greater flexibility to make adjustments or, if necessary, to deprecate the feature without definitively compromising support.
To demonstrate that this is possible, we can refer to the open-source tool helmify.
### Inability to Handle Complex Kubebuilder Scenarios
The proposed plugin may struggle to appropriately handle complex scenarios commonly encountered in Kubebuilder projects, such as intricate webhook configurations. Kubebuilder’s scaffolded projects can have sophisticated webhook setups, and translating these accurately into Helm Charts may prove challenging. This could result in Helm Charts that are not fully reflective of the original project’s functionality or configurations.
### Incomplete Generation of Valid and Deployable Helm Charts
The proposed solution may not be capable of generating a fully valid and deployable Helm Chart for all use cases supported by Kubebuilder. Given the diversity and complexity of potential configurations within Kubebuilder projects, there is a risk that the generated Helm Charts may require significant manual intervention to be functional. This drawback undermines the goal of simplifying distribution via Helm Charts and could lead to frustration for users who expect a seamless and automated process.
### Via a New Command (Alternative Option)
By running the following command, the plugin would generate a Helm chart from the specified kustomize directory and output it to the directory specified by the `--output` flag:

```shell
kubebuilder alpha generate-helm-chart --from=<path> --output=<path>
```
The main drawback of this option is that it does not adhere to the Kubebuilder ecosystem.
Additionally, we would not take advantage of Kubebuilder library features, such as avoiding
overwriting the values.yaml. It might also be harder to support and maintain since we would
not have the templates as we usually do.
Lastly, another con is that it would not allow us to scaffold projects with the plugin enabled and, in the future, provide further configurations and customizations for this plugin. These configurations would be tracked in the PROJECT file, allowing integration with other projects and extensions, and the re-scaffolding of the Helm Chart while preserving the inputs provided by the user via plugin flags, as is done, for example, for the Deploy Image plugin.