<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->

Cluster Autoscaler

Introduction

Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to be scheduled in the cluster due to insufficient resources (see the example after this list).
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
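
For instance, a pod whose resource requests exceed the free capacity of every existing node will stay Pending and trigger a scale-up. A minimal, purely illustrative manifest (the names and resource values are hypothetical):

```yaml
# Illustrative only: this pod's CPU request is assumed to be larger than the
# spare capacity of any current node, so it stays Pending until Cluster
# Autoscaler provisions a new node that can fit it.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hungry                     # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        cpu: "4"                       # assumed to exceed free CPU on every node
        memory: 2Gi
```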

FAQ/Documentation

An FAQ is available HERE.

You should also take a look at the notes and "gotchas" for your specific cloud provider:

Releases

We recommend using Cluster Autoscaler with the Kubernetes control plane (previously referred to as master) version for which it was meant. The below combinations have been tested on GCP. We don't do cross version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters, however, there is always a chance that it won't work as expected.

Starting from Kubernetes 1.12, the versioning scheme was changed to match Kubernetes minor releases exactly.

| Kubernetes Version | CA Version | Chart Version |
|--------------------|------------|---------------|
| 1.34.x | 1.34.x | 9.51.0+ |
| 1.33.x | 1.33.x | 9.47.0+ |
| 1.32.x | 1.32.x | 9.45.0+ |
| 1.31.x | 1.31.x | 9.38.0+ |
| 1.30.x | 1.30.x | 9.37.0+ |
| 1.29.X | 1.29.X | 9.35.0+ |
| 1.28.X | 1.28.X | 9.34.0+ |
| 1.27.X | 1.27.X | 9.29.0+ |
| 1.26.X | 1.26.X | 9.28.0+ |
| 1.25.X | 1.25.X | |
| 1.24.X | 1.24.X | 9.25.0+ |
| 1.23.X | 1.23.X | 9.14.0+ |
| 1.22.X | 1.22.X | |
| 1.21.X | 1.21.X | 9.10.0+ |
| 1.20.X | 1.20.X | 9.5.0+ |
| 1.19.X | 1.19.X | |
| 1.18.X | 1.18.X | 9.0.0+ |
| 1.17.X | 1.17.X | |
| 1.16.X | 1.16.X | |
| 1.15.X | 1.15.X | |
| 1.14.X | 1.14.X | |
| 1.13.X | 1.13.X | |
| 1.12.X | 1.12.X | |
| 1.11.X | 1.3.X | |
| 1.10.X | 1.2.X | |
| 1.9.X | 1.1.X | |
| 1.8.X | 1.0.X | |
| 1.7.X | 0.6.X | |
| 1.6.X | 0.5.X, 0.6.X<sup>*</sup> | |
| 1.5.X | 0.4.X | |
| 1.4.X | 0.3.X | |

<sup>*</sup>Cluster Autoscaler 0.5.X is the official version shipped with k8s 1.6. We've done some basic tests using k8s 1.6 / CA 0.6 and we're not aware of any problems with this setup. However, Cluster Autoscaler internally simulates Kubernetes' scheduler and using different versions of scheduler code can lead to subtle issues.

Schedule

Cluster Autoscaler synchronizes its releases with the Kubernetes release schedule.

For Cluster Autoscaler releases of new minor versions, expect a release date of up to one month after the corresponding Kubernetes release. This is due to the fact that upstream integrations of Kubernetes into Cluster Autoscaler can't be finalized until the Kubernetes release is official, plus the time required to test and validate those integrations.

Cluster Autoscaler will also release patch versions in accordance with Kubernetes patch releases to ensure rapid integration of upstream Kubernetes fixes. The overhead to integrate and validate Kubernetes patch releases is less costly, and thus the Cluster Autoscaler release date should follow the corresponding Kubernetes release by no more than 1-2 weeks.

Bug fixes and Cloud Provider features to Cluster Autoscaler itself will be continually backported into the supported release branches (n - 3, where n is the latest release). Backporting into older release branches can be requested as an exception by filing an issue and bringing the request to the official SIG Autoscaling Community.

Finally, additional Cluster Autoscaler patch releases may happen outside of the above schedule in case of critical bugs or vulnerabilities.

In summary, users should not be guided by a strict patch version equivalency between Kubernetes and Cluster Autoscaler (for example, there is no strict requirement to use Cluster Autoscaler v1.34.1 w/ a Kubernetes v1.34.1 cluster). Rather, we recommend that users always use the latest Cluster Autoscaler release that corresponds to the minor version of Kubernetes that their cluster is running.

For example, if the latest (hypothetical) Cluster Autoscaler releases are v1.100.1, v1.99.5, v1.98.10, and v1.97.16, any of the below scenarios follows the recommended guidance:

| Kubernetes Version | CA Version |
|--------------------|------------|
| 1.100.0 | 1.100.1 |
| 1.99.4 | 1.99.5 |
| 1.98.4 | 1.98.10 |
| 1.97.16 | 1.97.16 |

Notable changes

For CA 1.1.2 and later, please check release notes.

CA version 1.1.1:

  • Fixes around metrics in the multiple kube apiserver configuration.
  • Fixes for unready nodes issues when quota is overrun.

CA version 1.1.0:

CA version 1.0.3:

  • Adds support for the safe-to-evict annotation on pods (see the sketch after this list). Pods with this annotation can be evicted even if they don't meet other requirements for it.
  • Fixes an issue when too many nodes with GPUs could be added during scale-up (https://github.com/kubernetes/kubernetes/issues/54959).
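
The annotation referred to above is cluster-autoscaler.kubernetes.io/safe-to-evict; a minimal sketch of how it is set on a pod (the pod name and image are hypothetical):

```yaml
# Sketch: marking a pod as safe for Cluster Autoscaler to evict during scale-down.
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                   # hypothetical pod name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
  - name: worker
    image: registry.k8s.io/pause:3.9   # placeholder image
```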

CA Version 1.0.2:

CA Version 1.0.1:

CA Version 1.0:

With this release we graduated Cluster Autoscaler to GA.

  • Support for 1000 nodes running 30 pods each. See: Scalability testing report
  • Support for 10 min graceful termination.
  • Improved eventing and monitoring.
  • Node allocatable support.
  • Removed Azure support. See: PR removing support with reasoning behind this decision
  • cluster-autoscaler.kubernetes.io/scale-down-disabled annotation for marking nodes that should not be scaled down (see the sketch after this list).
  • The scale-down-delay-after-delete and scale-down-delay-after-failure flags replaced scale-down-trial-interval.
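
A minimal sketch of how the scale-down-disabled annotation looks on a node (the node name is hypothetical; in practice the annotation is usually added to an existing node rather than declared in a manifest):

```yaml
# Sketch: a node carrying the annotation that tells Cluster Autoscaler
# never to remove it during scale-down.
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1                  # hypothetical node name
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```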

CA Version 0.6:

CA Version 0.5.4:

  • Fixes problems with node drain when pods are ignoring SIGTERM.

CA Version 0.5.3:

CA Version 0.5.2:

CA Version 0.5.1:

CA Version 0.5:

  • CA continues to operate even if some nodes are unready and is able to scale them down.
  • CA exports its status to kube-system/cluster-autoscaler-status config map.
  • CA respects PodDisruptionBudgets (see the example after this list).
  • Azure support.
  • Alpha support for dynamic config changes.
  • Multiple expanders to decide which node group to scale up.
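
As an illustration of the PodDisruptionBudget point above, a minimal PDB for a hypothetical app; Cluster Autoscaler will not drain a node if evicting the pods covered by such a budget would violate it:

```yaml
# Sketch: scale-down will not evict pods selected by this PDB if that would
# leave fewer than minAvailable replicas running.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                        # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web                         # hypothetical label
```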

CA Version 0.4:

  • Bulk empty node deletions.
  • Better scale-up estimator based on binpacking.
  • Improved logging.

CA Version 0.3:

  • AWS support.
  • Performance improvements around scale down.

Deployment

Cluster Autoscaler is designed to run on a Kubernetes control plane (previously referred to as master) node. This is the default deployment strategy on GCP. It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into the kube-system namespace (Cluster Autoscaler doesn't scale down nodes with non-mirrored kube-system pods running on them) and set a priorityClassName: system-cluster-critical property on the pod spec (to prevent the pod from being evicted).
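
As a sketch of that guidance only (the image tag and labels are illustrative, and required flags, RBAC, and volumes are omitted), a trimmed-down Deployment running Cluster Autoscaler in kube-system with the critical priority class:

```yaml
# Sketch: shows only the namespace and priorityClassName recommendations above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      priorityClassName: system-cluster-critical   # keeps the CA pod from being evicted
      containers:
      - name: cluster-autoscaler
        image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.34.0   # illustrative tag
```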

Supported cloud providers: