Redis Enterprise K8s release 6.2.8-15 is a maintenance release for the [Redis Enterprise Software release 6.2.8]({{< relref "/operate/rs/release-notes/rs-6-2-8-october-2021.md" >}}) and includes bug fixes.
The key new features, bug fixes, and known limitations are described below.
This release includes the following container images:
* `redislabs/redis:6.2.8-64` or `redislabs/redis:6.2.8-64.rhel7-openshift`
* `redislabs/operator:6.2.8-15`
* `redislabs/k8s-controller:6.2.8-15` or `redislabs/services-manager:6.2.8-15` (on the Red Hat registry)

Upgrading with the bundle using `kubectl apply -f` fails with the following error (RED-69570):

`The CustomResourceDefinition "redisenterpriseclusters.app.redislabs.com" is invalid: spec.preserveUnknownFields: Invalid value: true: must be false in order to use defaults in the schema.`
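One way to check whether the installed CRD still carries the problematic setting is to query it directly (a sketch; requires access to a live cluster, and the CRD name is taken from the error message above):

```shell
# Print the preserveUnknownFields setting of the installed CRD;
# the apply fails while this prints "true".
kubectl get crd redisenterpriseclusters.app.redislabs.com \
  -o jsonpath='{.spec.preserveUnknownFields}'
```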
Removed unneeded certificates from the Redis Enterprise Software container (RED-69661, RED-60086)
On clusters with more than 9 REC nodes, a Kubernetes upgrade can render the Redis cluster unresponsive in some cases. A fix is available in the 6.4.2-5 release. Upgrade your operator version to 6.4.2-5 or later before upgrading your Kubernetes cluster. (RED-93025)
A cluster name longer than 20 characters results in a rejected route configuration because the host part of the domain name exceeds 63 characters. The workaround is to limit the cluster name to 20 characters or fewer.
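The 63-character limit comes from the DNS label length restriction (RFC 1035). A minimal sketch of the check (the `is_valid_dns_label` helper is illustrative, not part of the operator):

```python
def is_valid_dns_label(label: str) -> bool:
    """Return True if the string fits in a single DNS label (RFC 1035)."""
    # Each label in a hostname is limited to 63 octets.
    return 0 < len(label) <= 63

# A 20-character cluster name leaves headroom for the generated host
# prefix; a longer name can push the full label past 63 characters.
print(is_valid_dns_label("a" * 63))  # True
print(is_valid_dns_label("a" * 64))  # False
```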
A cluster CR specification error is not reported if two or more invalid CR resources are updated in sequence.
When a cluster is in an unreachable state, its reported state remains `running` instead of being reported as an error.
The STS readiness probe does not mark a node as "not ready" when `rladmin status` reports a node failure.
The redis-enterprise-operator role is missing permission on replica sets.
OpenShift 3.11 does not support DockerHub private registries. This is a known OpenShift issue.
DNS conflicts are possible between the cluster mdns_server and the K8s DNS. This only impacts DNS resolution from within cluster nodes for Kubernetes DNS names.
Kubernetes-based 5.4.10 deployments seem to negatively impact existing 5.4.6 deployments that share a Kubernetes cluster.
In Kubernetes, the node CPU usage we report on is the usage of the Kubernetes worker node hosting the REC pod.
In OLM-deployed operators, the deployment of the cluster will fail if the name is not "rec". When the operator is deployed via the OLM, the security context constraints (scc) are bound to a specific service account name (i.e., "rec"). The workaround is to name the cluster "rec".
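A minimal REC manifest that satisfies this constraint might look like the following (a sketch; only the fields relevant to the naming requirement are shown, and the node count is a placeholder):

```yaml
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  # Must be "rec" when deploying via the OLM, because the SCC is
  # bound to the "rec" service account name.
  name: rec
spec:
  nodes: 3
```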
When REC clusters are deployed on Kubernetes clusters with unsynchronized clocks, the REC cluster does not start correctly. The fix is to use NTP to synchronize the underlying K8s nodes.
When a REC cluster is deployed in a project (namespace) that also contains REDB resources, the REDB resources must be deleted before the REC can be deleted. Until they are deleted, project deletion will hang. The fix is to delete the REDB resources first, then the REC, and finally the project.
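The deletion order above can be sketched as follows (assumes a namespace named `my-project`; the name is a placeholder, and the commands require access to a live cluster):

```shell
# 1. Delete all REDB resources in the namespace first.
kubectl delete redb --all -n my-project

# 2. Then delete the REC.
kubectl delete rec --all -n my-project

# 3. Only now will namespace deletion complete without hanging.
kubectl delete namespace my-project
```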
In K8s 1.15 or older, the PVC labels come from the match selectors and not the PVC templates. As such, these versions cannot support PVC labels. If this feature is required, the only fix is to upgrade the K8s cluster to a newer version.
There is no workaround at this time.
There is no workaround at this time except to ignore the errors.
The workaround for this issue is to make sure you use integer values for the PVC size.
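For example, express the volume size in the REC spec as an integer quantity (a sketch; `persistentSpec` is the REC field for persistent storage, and the storage class name is a placeholder):

```yaml
spec:
  persistentSpec:
    enabled: true
    storageClassName: standard   # placeholder storage class
    volumeSize: 20Gi             # use an integer quantity, not e.g. 19.5Gi
```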
The workaround is to use the newer (current) revision of the quick start document available online.
See [Supported Kubernetes distributions]({{< relref "/operate/kubernetes/reference/supported_k8s_distributions.md" >}}) for the full list of supported distributions.