designs/discontinue_usage_of_kube_rbac_proxy.md
| Authors | Creation Date | Status | Extra |
|---|---|---|---|
| @camilamacedo86 | 07/04/2024 | Implementable | - |
This proposal highlights the need to reassess the usage of kube-rbac-proxy in the default scaffold due to the evolving k8s infra and community feedback. Key considerations include the transition to a shared infrastructure requiring all images to be published on registry.k8s.io, the deprecation of Google Cloud Platform's Container Registry, and the fact that kube-rbac-proxy is yet to be part of the Kubernetes ecosystem umbrella.
The dependency on Google infrastructure that may be discontinued at any time, which is out of our control, paired with the challenges of maintaining, building, and promoting kube-rbac-proxy images, calls for a change.
This document proposes replacing kube-rbac-proxy with Network Policies, followed up by potential enhancements to protect the metrics endpoint by combining cert-manager with a new feature introduced in controller-runtime, see here.
For the future (once kube-rbac-proxy becomes part of the k8s umbrella), it is proposed to use the Plugins API provided by Kubebuilder to create an external plugin that properly integrates the solution with Kubebuilder and provides a helper allowing users to opt in as they please.
Although Network Policies are part of the core Kubernetes API, their enforcement relies on the CNI plugin installed in the Kubernetes cluster. Support and implementation details vary among CNIs, but the most commonly used ones, such as Calico, Cilium, WeaveNet, and Canal, support NetworkPolicies.
There was also concern in the past because AWS did not support it. However, this has changed, as detailed in their announcement: Amazon VPC CNI now supports Kubernetes Network Policies.
Moreover, under this proposal, users can still disable/enable this option as they see fit.
**So a NetworkPolicy does not provide authn/authz and encryption?** Yes, that's correct. NetworkPolicy acts as a basic firewall for pods within a Kubernetes cluster, controlling traffic flow at the IP address or port level. However, it does not directly handle authentication (authn), authorization (authz), or encryption the way the kube-rbac-proxy solution does.
However, if we combine cert-manager with the new feature provided by controller-runtime, we can achieve the same or a superior level of protection without relying on any extra third-party dependency.
**Could we not build and promote the kube-rbac-proxy images ourselves on registry.k8s.io?** We tried to do that; see the recipe implemented here. However, it does not work because kube-rbac-proxy is not under the kubernetes umbrella. Moreover, we experimented with the GitHub repository as an alternative approach (see the PR), but it seems that we are not allowed to use it. Nevertheless, neither approach sorts out all the motivations and requirements. Ideally, Kubebuilder should not be responsible for maintaining and promoting third-party artefacts.
Yes, but it will also need to change. Controller-runtime maintainers are looking for solutions to build those binaries inside their own project, since that seems to be part of its domain. This change is likely to be transparent to community users.
Yes, after some changes are addressed. After asking skilled auth maintainers for reviews and receiving their feedback, it appears that this configuration needs to be aligned with best practices. See the issue raised to track this need.
No, we cannot. One of the goals of Kubebuilder is to make things easier for new users, so we cannot make a third-party solution such as cert-manager mandatory by default just to quick-start.
However, we can make cert-manager mandatory for specific features, such as using kube-rbac-proxy or, as is the case today, using webhooks, which are a more advanced and optional choice.
Starting with release 3.15.0, Kubebuilder will no longer scaffold
new projects with kube-rbac-proxy.
Existing users are encouraged to switch to images hosted by the project
on quay.io OR
to adapt their projects to utilize Network Policies, following the updated scaffold guidelines.
For project updates, users can manually review scaffold changes or utilize the provided upgrade assistance helper.
Communications and guidelines would be provided along with the release.
The migration from gcr.io to registry.k8s.io and
the Container Registry deprecation imply that all images provided so far by Kubebuilder
here will become unavailable by April 22, 2025. More info and slack ETA thread.

However, by incorporating NetworkPolicies, cert-manager, and/or the features introduced in the controller-runtime pull request #2407, we mainly address the security concerns that kube-rbac-proxy handles.
The immediate action outlined in this proposal is the replacement of kube-rbac-proxy with Kubernetes API NetworkPolicies.
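To make this first phase concrete, below is a minimal sketch of the kind of NetworkPolicy the scaffold could provide. It assumes the manager pod carries the control-plane: controller-manager label already used in the scaffold and that metrics are served on port 8443; the allow-metrics-traffic name and the metrics: enabled namespace label are illustrative choices, not final decisions.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics-traffic
  namespace: system
spec:
  # Select the controller manager pods that expose the metrics endpoint.
  podSelector:
    matchLabels:
      control-plane: controller-manager
  policyTypes:
    - Ingress
  ingress:
    # Allow scraping only from namespaces explicitly labelled for it,
    # and only on the metrics port.
    - from:
        - namespaceSelector:
            matchLabels:
              metrics: enabled
      ports:
        - port: 8443
          protocol: TCP
```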
Looking beyond the initial phase, this proposal envisions integrating cert-manager for TLS certificate management and exploring synergies with new features in Controller Runtime, as demonstrated in PR #2407.
These enhancements would introduce encrypted communication for metrics endpoints and potentially incorporate authentication mechanisms, significantly elevating the security model employed by projects scaffolded by Kubebuilder.
That would mean that, in a follow-up to the current open PR addressing phase 1 above (Transition to NetworkPolicies),
we aim to introduce a configurable Kustomize patch that will enable patching the ServiceMonitor in config/prometheus/monitor.yaml and certificates, similar to our
existing setup for webhooks. This enhancement will allow more flexible deployment configurations and improve the security
features of the service monitoring components.
Currently, in the config/default/, we have implemented patches for cert-manager along with webhooks, as seen in
config/default/kustomization.yaml (example).
These patches handle annotations for the cert-manager CA injection across various configurations, like
ValidatingWebhookConfiguration, MutatingWebhookConfiguration, and CRDs.
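For context, the existing webhook patch looks roughly like the sketch below, simplified from the scaffold's webhookcainjection_patch.yaml:

```yaml
# Simplified sketch: cert-manager's CA injector fills in the caBundle
# of the webhook configurations based on this annotation.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
```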
For the proposed enhancements, we need to integrate similar configurations for the ServiceMonitor.
This involves the creation of a patch file named metrics_https_patch.yaml, which will include
configurations necessary for enabling HTTPS for the ServiceMonitor.
Here's an example of how this configuration might look:
```yaml
# [METRICS WITH HTTPS] To enable the ServiceMonitor using HTTPS, uncomment the following line.
# Note that for this to work, you also need to ensure that cert-manager is enabled in your project.
#- path: metrics_https_patch.yaml
```
This patch should apply similar changes as the current webhook patches, targeting necessary updates in the manifest to support HTTPS communication secured by cert-manager certificates.
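As a sketch of the direction (the exact fields are illustrative and would be settled during implementation), metrics_https_patch.yaml could look like this:

```yaml
# metrics_https_patch.yaml (illustrative sketch): switch the ServiceMonitor
# endpoint to HTTPS and enforce TLS verification.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: controller-manager-metrics-monitor
  namespace: system
spec:
  endpoints:
    - path: /metrics
      port: https
      scheme: https
      tlsConfig:
        insecureSkipVerify: false
```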
Here is an example of how the ServiceMonitor configured to work with cert-manager might look:
```yaml
# Prometheus Monitor Service (Metrics) with cert-manager
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    control-plane: controller-manager
    app.kubernetes.io/name: project-v4
    app.kubernetes.io/managed-by: kustomize
  name: controller-manager-metrics-monitor
  namespace: system
  annotations:
    cert-manager.io/inject-ca-from: $(NAMESPACE)/controller-manager-certificate
spec:
  endpoints:
    - path: /metrics
      port: https
      scheme: https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        # TLS verification should not be skipped in production
        insecureSkipVerify: false
        caFile: /etc/prometheus/secrets/ca.crt    # CA certificate injected by cert-manager
        certFile: /etc/prometheus/secrets/tls.crt # TLS certificate injected by cert-manager
        keyFile: /etc/prometheus/secrets/tls.key  # TLS private key injected by cert-manager
  selector:
    matchLabels:
      control-plane: controller-manager
```
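For completeness, the certificate referenced by the inject-ca-from annotation above would be managed by cert-manager. Below is a minimal sketch; the DNS names, Issuer, and Secret name mirror the existing webhook cert setup and are assumptions to be confirmed during implementation:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: controller-manager-certificate  # matches the inject-ca-from annotation above
  namespace: system
spec:
  dnsNames:
    # Assumes the metrics Service name used by the default scaffold.
    - controller-manager-metrics-service.system.svc
    - controller-manager-metrics-service.system.svc.cluster.local
  issuerRef:
    kind: Issuer
    name: selfsigned-issuer  # assumes the self-signed Issuer already scaffolded for cert-manager
  secretName: metrics-server-cert
```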
After the issue is
addressed, we plan to use this feature to protect the endpoint. That would mean ensuring
that we handle both authentication (authn) and authorization (authz).
Examples of its implementation can be found here.
Once kube-rbac-proxy is included in the Kubernetes umbrella, Kubebuilder maintainers can support its integration through a plugin. We can follow up on the ongoing process and the changes required for the project to be accepted by looking at the project issue.
This would enable a seamless way to incorporate kube-rbac-proxy into Kubebuilder scaffolds, allowing users to run:
```shell
kubebuilder init|edit --plugins="kube-rbac-proxy/v1"
```
The plugin could use the plugin/util library
to comment out (we could add a method analogous to the existing UncommentCode)
the patches in config/default/kustomization.yaml, disable the default network policy used within,
and replace the code below in main.go so that kube-rbac-proxy is used instead of the controller-runtime
feature.
```go
// Requires "sigs.k8s.io/controller-runtime/pkg/metrics/filters"
// and metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server".
ctrlOptions := ctrl.Options{
	Metrics: metricsserver.Options{
		SecureServing:  true,
		FilterProvider: filters.WithAuthenticationAndAuthorization,
	},
}
```
Each phase of implementation associated with this proposal must include corresponding updates to the documentation. This is essential to ensure end users understand how to enable, configure, and utilize the options effectively. Documentation updates should be completed as part of the pull request to introduce code changes.
The transition to the new shared infrastructure for Kubernetes SIG projects has rendered us unable to automatically build and promote images as before. The process only works for projects under the umbrella. However, the k8s-infra maintainers could manually transfer these images to the new registry.k8s.io as a "contingency approach". See: https://explore.ggcr.dev/?repo=gcr.io%2Fk8s-staging-kubebuilder%2Fkube-rbac-proxy
To continue using kube-rbac-proxy, users must update their projects to reference images
from the new registry. This requires a project update and a new release,
ensuring that the image references in config/default/manager_auth_proxy_patch.yaml point
to the new location.
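For illustration, the relevant change in manager_auth_proxy_patch.yaml would look roughly like the excerpt below; the quay.io image path and tag are examples and should be verified against the images actually published by the kube-rbac-proxy project:

```yaml
# config/default/manager_auth_proxy_patch.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: kube-rbac-proxy
          # was: gcr.io/kubebuilder/kube-rbac-proxy:<tag>
          image: quay.io/brancz/kube-rbac-proxy:v0.16.0  # example path and tag, verify upstream
```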
Therefore, the best approach for those still interested in using kube-rbac-proxy seems to be directing them to the images hosted at quay.io, which are maintained by the project itself, while we keep those images in registry.k8s.io as a "contingency approach".
Ensuring that these images will continue to be promoted under any infrastructure available to Kubebuilder is not reliable or achievable for Kubebuilder maintainers. It is definitely out of our control.
Kubebuilder hasn't received any official notice regarding a shutdown of its project there so far, but there's a proactive move to transition away from Google Cloud Platform services due to factors beyond our control. Open communication with our community is key as we explore alternatives. It's important to note that the Container Registry deprecation means users will no longer be able to consume those images from their current location from early 2025, which emphasizes the need to shift away from the dependent images as soon as possible and to communicate this extensively through mailing lists and other channels to ensure community awareness and readiness.
**Replace the current images gcr.io/kubebuilder/kube-rbac-proxy with registry.k8s.io/kubebuilder/kube-rbac-proxy**

The k8s-infra maintainers assist in ensuring these images will not be lost by:
An available option would be to communicate to users to:

- update the image references from gcr.io/k8s-staging-kubebuilder/kube-rbac-proxy to registry.k8s.io/kubebuilder/kube-rbac-proxy

Cons:
This alternative keeps kube-rbac-proxy out of the default scaffolds, offering it as an optional plugin for users who choose to integrate it. Clear communication will be crucial to inform users about the implications of using kube-rbac-proxy.
Cons:
Mainly, all the cons listed for the alternative option above (Replace the current images gcr.io/kubebuilder/kube-rbac-proxy
with registry.k8s.io/kubebuilder/kube-rbac-proxy), with the exception that we would make clear that Kubebuilder
is unable to manage those images. Moving the current implementation to the alpha plugin
might make the process of moving it from the Kubebuilder repository to kube-rbac-proxy
easier, allowing them to work with the external plugin.
However, that is double the effort for users and Kubebuilder maintainers, who would need to deal with breaking changes while the ultimate goal is achieved. Therefore, it would make more sense to encourage the use of the external-plugins API and add this option in their repo once, rather than creating these intermediate steps.