
Cluster Autoscaler for Kamatera

cluster-autoscaler/cloudprovider/kamatera/README.md


The cluster autoscaler for Kamatera scales nodes in a Kamatera cluster.

Kamatera Kubernetes

Kamatera supports Kubernetes clusters using our Rancher app or by creating a self-managed cluster directly on Kamatera compute servers; the autoscaler supports both methods.

Cluster Autoscaler Node Groups

An autoscaler node group is composed of multiple Kamatera servers with the same server configuration. All servers belonging to a node group are identified by the Kamatera server tags k8sca-CLUSTER_NAME and k8scang-NODEGROUP_NAME; for example, servers in cluster mycluster and node group ng1 carry the tags k8sca-mycluster and k8scang-ng1. The cluster and node groups must be specified in the autoscaler cloud configuration file.

Deployment

Copy examples/deployment.yaml and modify the configuration as needed; see below for the required configuration values and format. When the configuration is ready, deploy it to your cluster, e.g. using kubectl apply -f deployment.yaml.

Configuration

The cluster autoscaler only considers the cluster and node groups defined in the configuration file.

You can see an example of the cloud config file at examples/deployment.yaml.

Important Note: The cluster and node group names must be 15 characters or less.

It is an INI file with the following fields:

| Key | Value | Mandatory | Default |
|---|---|---|---|
| global/kamatera-api-client-id | Kamatera API Client ID | yes | none |
| global/kamatera-api-secret | Kamatera API Secret | yes | none |
| global/cluster-name | max 15 characters: English letters, numbers, dash, underscore, space, dot; a distinct string used to set the cluster server tag | yes | none |
| global/filter-name-prefix | the autoscaler will only handle server names that start with this prefix | no | none |
| global/default-min-size | default minimum size of a node group (must be > 0) | no | 1 |
| global/default-max-size | default maximum size of a node group | no | 254 |
| global/default-<SERVER_CONFIG_KEY> | replace <SERVER_CONFIG_KEY> with the relevant configuration key | see below | see below |
| nodegroup "name" | max 15 characters: English letters, numbers, dash, underscore, space, dot; a distinct string within the cluster used to set the node group server tag | yes | none |
| nodegroup "name"/min-size | minimum size for a specific node group | no | global/default-min-size |
| nodegroup "name"/max-size | maximum size for a specific node group | no | global/default-max-size |
| nodegroup "name"/<SERVER_CONFIG_KEY> | replace <SERVER_CONFIG_KEY> with the relevant configuration key | no | global/default-<SERVER_CONFIG_KEY> |
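As a minimal sketch, a cloud config combining these fields might look like the following (the API credentials, cluster name, and group sizes are placeholder values):

```ini
; minimal cloud config sketch - all values are placeholders
[global]
kamatera-api-client-id = "KAMATERA_API_CLIENT_ID"
kamatera-api-secret = "KAMATERA_API_SECRET"
cluster-name = "mycluster"
default-min-size = 1
default-max-size = 10

[nodegroup "ng1"]
; uses the global defaults

[nodegroup "ng2"]
; override the default sizes for this group only
min-size = 2
max-size = 5
```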

Server configuration keys

Following are the supported server configuration keys:

| Key | Value | Mandatory | Default |
|---|---|---|---|
| name-prefix | Prefix for all created server names | no | none |
| password | Server root password | no | none |
| ssh-key | Public SSH key to add to the server authorized keys | no | none |
| datacenter | Datacenter ID | yes | none |
| image | Image ID or name | yes | none |
| cpu | CPU type and size identifier | yes | none |
| ram | RAM size in MB | yes | none |
| disk | Disk specifications - see below for details | yes | none |
| dailybackup | boolean - set to true to enable daily backups | no | false |
| managed | boolean - set to true to enable managed services | no | false |
| network | Network specifications - see below for details | yes | none |
| billingcycle | "hourly" or "monthly" | no | "hourly" |
| monthlypackage | For monthly billing only - the monthly network package to use | no | none |
| script-base64 | base64-encoded server initialization script; must be provided to connect the server to the cluster - see below for details | no | none |
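To illustrate how these keys combine with the global/default- prefix described above, here is a sketch (the datacenter, image, and CPU identifiers are placeholder values; check your Kamatera account for the real ones):

```ini
; server configuration sketch - identifiers are placeholders
[global]
default-datacenter = "EU"
default-image = "ubuntu_server_22.04_64-bit"
default-cpu = "2B"
default-ram = "4096"
default-disk = "size=100"
default-network = "name=wan,ip=auto"

[nodegroup "ng1"]
; override CPU and RAM for this node group only
cpu = "4B"
ram = "8192"
```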

Disk specifications

Server disks are specified using an array of strings in the same format as the cloudcli --disk argument, as documented for cloudcli server create. For multiple disks, include the configuration key multiple times. Example:

[global]
; default for all node groups: single 100gb disk
default-disk = "size=100"

[nodegroup "ng1"]
; this node group will use the default

[nodegroup "ng2"]
; override the default and use 2 disks
disk = "size=100"
disk = "size=200"

Network specifications

Networks are specified using an array of strings in the same format as the cloudcli --network argument, as documented for cloudcli server create. For multiple networks, include the configuration key multiple times. Example:

[global]
; default for all node groups: single public network with auto-assigned ip
default-network = "name=wan,ip=auto"

[nodegroup "ng1"]
; this node group will use the default

[nodegroup "ng2"]
; override the default and attach 2 networks - 1 public and 1 private
network = "name=wan,ip=auto"
network = "name=lan-12345-abcde,ip=auto"

Server Initialization Script

This script is required so that the server will connect to the relevant cluster. The specific script depends on how you create and manage the cluster.

See below for some common configurations, but the exact script may need to be modified depending on your requirements and server image.

The script needs to be provided as a base64 encoded string. You can encode your script using the following command: cat script.sh | base64 -w0.
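For example, a small shell sketch that encodes a hypothetical init script and prints the corresponding configuration line:

```shell
# Write a placeholder init script (your real script will differ)
cat > /tmp/server-init.sh <<'EOF'
#!/bin/bash
echo "joining cluster..."
EOF

# Encode it without line wrapping, as required for script-base64
ENCODED=$(base64 -w0 < /tmp/server-init.sh)

# Emit a config line suitable for the [global] section
echo "default-script-base64 = \"$ENCODED\""
```

Note that -w0 is specific to GNU coreutils; on macOS, pipe the output through tr -d '\n' instead.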

Kamatera Rancher Server Initialization Script

When using Kamatera Rancher, you need to get the command that joins a server to the cluster. It is available from the following URL: https://rancher.domain/v3/clusterregistrationtokens. The relevant command is available under data[].nodeCommand; if you have a single cluster, it will be the first element. If you have multiple clusters, locate the relevant cluster in the array using clusterId. The command will look like this:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run  rancher/rancher-agent:v2.6.4 --server https://rancher.domain --token aaa --ca-checksum bbb
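As a sketch of extracting the command from the API response (the JSON below is an abridged, hypothetical example of the response shape; in practice a JSON tool such as jq is more robust than grep):

```shell
# Abridged, hypothetical response from /v3/clusterregistrationtokens
cat > /tmp/tokens.json <<'EOF'
{"data":[{"clusterId":"c-abcde","nodeCommand":"sudo docker run -d --privileged rancher/rancher-agent:v2.6.4 --server https://rancher.domain --token aaa"}]}
EOF

# Extract the nodeCommand value (with jq: jq -r '.data[].nodeCommand' /tmp/tokens.json)
grep -o '"nodeCommand":"[^"]*"' /tmp/tokens.json | cut -d'"' -f4
```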

You can substitute this command into the example script at examples/server-init-rancher.sh.txt.

Kubeadm Initialization Script

The example script at examples/server-init-kubeadm.sh.txt can be used as a base for writing your own script to join the server to your cluster.

Development

Make sure you are inside the cluster-autoscaler path of the autoscaler repository.

Run tests:

go test -v k8s.io/autoscaler/cluster-autoscaler/cloudprovider/kamatera

Set up a Kamatera cluster; you can use this guide.

Get the cluster kubeconfig, save it to a local file, and set its path in the KUBECONFIG environment variable. Make sure you are connected to the cluster using kubectl get nodes. Create a cloud config file according to the documentation above and set its path in the CLOUD_CONFIG_FILE environment variable.

Build the binary and run it:

make build &&\
./cluster-autoscaler-amd64 --cloud-config $CLOUD_CONFIG_FILE --cloud-provider kamatera --kubeconfig $KUBECONFIG -v2

Build the docker image:

make container

Tag and push it to a Docker registry:

docker tag staging-k8s.gcr.io/cluster-autoscaler-amd64:dev ghcr.io/github_username_lowercase/cluster-autoscaler-amd64
docker push ghcr.io/github_username_lowercase/cluster-autoscaler-amd64

Make sure the relevant cluster has access to this registry/image.

Follow the documentation for deploying the image and using the autoscaler.