topics/kubernetes/CKA.md
Set up a Kubernetes cluster. Use one of the following (e.g. Minikube, Kind, or kubeadm)
Set aliases
alias k=kubectl
alias kd='kubectl delete'
alias kds='kubectl describe'
alias ke='kubectl edit'
alias kr='kubectl run'
alias kg='kubectl get'
kubectl get pods
Note: create an alias (alias k=kubectl) and get used to k get po
</b></details>
k run nginx-test --image=nginx
</b></details>
k delete po nginx-test
</b></details>
k get po -n kube-system
Let's say you didn't know which namespace it's in. You could then run k get po -A | grep etc to find the Pod and see which namespace it resides in.
</b></details>
k get po -A
The long version would be kubectl get pods --all-namespaces.
</b></details>
cat > pod.yaml <<EOL
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: alpine
    name: alpine
  - image: nginx-unprivileged
    name: nginx-unprivileged
EOL
k create -f pod.yaml
If you ask yourself "how would I remember writing all of that?", no worries: you can simply run kubectl run some-pod --image=redis -o yaml --dry-run=client > pod.yaml and edit the result. If you ask yourself "how am I supposed to remember this long command", time to change attitude ;)
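For reference, the generated pod.yaml looks roughly like this (exact fields may vary slightly across kubectl versions); you then edit it, e.g. to add the second container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: some-pod
  name: some-pod
spec:
  containers:
  - image: redis
    name: some-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```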
</b></details>
k run some-pod -o yaml --image nginx-unprivileged --dry-run=client > pod.yaml
</b></details>
Use the --dry-run=client flag: it will not actually create the resource, but it will validate it, so you can find syntax issues this way.
k create -f YAML_FILE --dry-run=client
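There are two dry-run modes; a quick sketch of both (server-side validation needs a reachable API server):

```sh
kubectl create -f pod.yaml --dry-run=client   # client-side: catches YAML/field syntax issues locally
kubectl create -f pod.yaml --dry-run=server   # server-side: also runs API-server validation and admission, without persisting anything
```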
</b></details>
k describe po <POD_NAME> | grep -i image
</b></details>
k get po POD_NAME and check the number under the "READY" column.
You can also run k describe po POD_NAME
</b></details>
k run remo --image=redis:latest -l year=2017
</b></details>
k get po --show-labels
</b></details>
k delete po nm
</b></details>
k get po -l env=prod
To count them: k get po -l env=prod --no-headers | wc -l
</b></details>
First, change to the directory tracked by kubelet for static Pods: cd /etc/kubernetes/manifests (you can verify the path by looking at staticPodPath in the kubelet configuration file)
Now create the definition/manifest in that directory:
k run some-pod --image=python --restart=Never --dry-run=client -o yaml --command -- sleep 2017 > static-pod.yaml
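Once the file is in that directory, kubelet creates the Pod on its own; a quick way to verify (the node-name suffix shown is just an example):

```sh
k get po -A | grep some-pod   # the mirror Pod appears with the node name appended, e.g. some-pod-controlplane
```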
</b></details>
<details> <summary>Describe how you would delete a static Pod</summary> <b>Locate the static Pods directory (look at staticPodPath in the kubelet configuration file).
Go to that directory and remove the manifest/definition of the static Pod (rm <STATIC_POD_PATH>/<POD_DEFINITION_FILE>)
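A short sketch of that flow, assuming a kubeadm-style node where the kubelet config lives at /var/lib/kubelet/config.yaml:

```sh
grep staticPodPath /var/lib/kubelet/config.yaml     # typically /etc/kubernetes/manifests
rm /etc/kubernetes/manifests/<POD_DEFINITION_FILE>  # kubelet removes the mirror Pod shortly after
```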
</b></details>
The container failed to run (for one of several possible reasons) and Kubernetes retries running the Pod after an increasing delay (the BackOff time).
Some reasons for it to fail: the application inside the container exits with an error, a wrong command or arguments, or missing configuration/dependencies.
Some ways to debug:
- kubectl describe pod POD_NAME — check State (which should be Waiting, with reason CrashLoopBackOff) and Last State, which should tell what happened before (as in why it failed)
- kubectl logs mypod -c CONTAINER_NAME
- kubectl get events
</b></details>
<details> <summary>What does the error <code>ImagePullBackOff</code> mean?</summary> <b>Most likely you didn't write the name of the image you are trying to pull and run correctly, or it doesn't exist in the registry.
You can confirm with kubectl describe po POD_NAME
</b></details>
k get po POD_NAME -o wide
</b></details>
Because there is no such image as sheris. At least for now :)
To fix it, run kubectl edit ohno and modify the line - image: sheris to - image: redis (or any other image you prefer).
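An alternative fix without opening an editor (a sketch; it assumes the container inside the Pod is also named ohno):

```sh
kubectl set image pod/ohno ohno=redis
kubectl get po ohno   # should move from ImagePullBackOff/ErrImagePull to Running
```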
</b></details>
One possible reason is that the scheduler, which is supposed to schedule Pods onto nodes, is not running. To verify it, you can run kubectl get po -A | grep scheduler or check directly in the kube-system namespace.
</b></details>
k logs POD_NAME
</b></details>
It won't work because there are two containers inside the Pod, so you need to specify one of them: kubectl logs POD_NAME -c CONTAINER_NAME
</b></details>
k get ns
</b></details>
k create ns alle
</b></details>
k get ns --no-headers | wc -l
</b></details>
k get po -n dev
</b></details>
If the namespace doesn't exist already: k create ns dev
k run kratos --image=redis -n dev
</b></details>
k get po -A | grep atreus
</b></details>
kubectl get nodes
Note: create an alias (alias k=kubectl) and get used to k get no
</b></details>
k get nodes -o json > some_nodes.json
</b></details>
k get no minikube --show-labels
</b></details>
k get svc
</b></details>
kubectl expose pod web --port=1991 --name=sevi
</b></details>
app-service </b></details>
<details> <summary>How to check the TargetPort of a service?</summary> <b>k describe svc <SERVICE_NAME>
</b></details>
k describe svc <SERVICE_NAME>
</b></details>
app-service.dev.svc.cluster.local </b></details>
<details> <summary>Assume you have a deployment running and you need to create a Service for exposing the pods. This is what is required/known:</summary> <b>kubectl expose deployment jabulik --name=jabulik-service --target-port=8080 --type=NodePort --port=8080 --dry-run=client -o yaml > svc.yaml
vi svc.yaml (make sure selector is set to jabulik-app)
k apply -f svc.yaml
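For reference, the edited svc.yaml should end up looking roughly like this (a sketch; it assumes the Pods carry the label app: jabulik-app):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jabulik-service
spec:
  type: NodePort
  selector:
    app: jabulik-app
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
```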
</b></details>
k get rs
</b></details>
There will still be 3 Pods running, because the goal of the ReplicaSet is to ensure there are always 3: if you delete one or more Pods, it runs replacement Pods to get back to the desired count (see the demo below). </b></details>
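You can see this self-healing behavior of a ReplicaSet directly (Pod names below are illustrative):

```sh
k get po                 # e.g. web-5d4xk, web-7bqzr, web-9fmtp
k delete po web-5d4xk
k get po                 # still 3 Pods: the ReplicaSet immediately started a replacement
```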
<details> <summary>How to check which container image was used as part of the replica set called "repli"?</summary> <b>k describe rs repli | grep -i image
</b></details>
k describe rs repli | grep -i "Pods Status"
</b></details>
k delete rs rori
</b></details>
k edit rs rori
</b></details>
k scale rs rori --replicas=5
</b></details>
k scale rs rori --replicas=1
</b></details>
apiVersion: apps/v1
kind: ReplicaCet
metadata:
  name: redis
  labels:
    app: redis
    tier: cache
spec:
  selector:
    matchLabels:
      tier: cache
  template:
    metadata:
      labels:
        tier: cache
    spec:
      containers:
      - name: redis
        image: redis
kind should be ReplicaSet and not ReplicaCet :)
</b></details>
<details> <summary>Fix the following ReplicaSet definition</summary> <b>
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: redis
  labels:
    app: redis
    tier: cache
spec:
  selector:
    matchLabels:
      tier: cache
  template:
    metadata:
      labels:
        tier: cachy
    spec:
      containers:
      - name: redis
        image: redis
The selector doesn't match the label (cache vs cachy). To solve it, fix cachy so it's cache instead.
</b></details>
k get deploy
</b></details>
<details> <summary>How to check which image a certain Deployment is using?</summary> <b>k describe deploy <DEPLOYMENT_NAME> | grep -i image
</b></details>
<details> <summary>Create a file definition/manifest of a deployment called "dep", with 3 replicas that uses the image 'redis'</summary> <b>k create deploy dep -o yaml --image=redis --dry-run=client --replicas 3 > deployment.yaml
</b></details>
<details> <summary>Remove the deployment `depdep`</summary> <b>k delete deploy depdep
</b></details>
<details> <summary>Create a deployment called "pluck" using the image "redis" and make sure it runs 5 replicas</summary> <b>kubectl create deployment pluck --image=redis --replicas=5
</b></details>
<details> <summary>Create a deployment with the following properties:</summary> <b>kubectl create deployment blufer --image=python --replicas=3 -o yaml --dry-run=client > deployment.yaml
Add the following section under the Pod template's spec (vi deployment.yaml):
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: blufer
            operator: Exists
kubectl apply -f deployment.yaml
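For the affinity above to be satisfiable, at least one node has to carry the key blufer; a quick sketch (the node name is an example):

```sh
kubectl label node node01 blufer=yes   # any value works, the rule only checks that the key exists
kubectl get po -o wide                 # the blufer Pods should now be scheduled onto node01
```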
</b></details>
apiVersion: apps/v1
kind: Deploy
metadata:
  creationTimestamp: null
  labels:
    app: dep
  name: dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dep
    spec:
      containers:
      - image: redis
        name: redis
        resources: {}
status: {}
Change kind: Deploy to kind: Deployment
</b></details>
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: dep
  name: dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: depdep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dep
    spec:
      containers:
      - image: redis
        name: redis
        resources: {}
status: {}
The selector doesn't match the label (dep vs depdep). To solve it, fix depdep so it's dep instead. </b></details>
k run some-pod --image=redis -o yaml --dry-run=client > pod.yaml
vi pod.yaml and add:
spec:
  nodeName: node1
k apply -f pod.yaml
Note: if you don't have a node called node1 in your cluster, the Pod will be stuck in "Pending" state. </b></details>
vi pod.yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: In
          values:
          - asia
          - emea
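For context, here is where that block sits in a complete Pod manifest (a minimal sketch; the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: region
            operator: In
            values:
            - asia
            - emea
  containers:
  - name: some-pod
    image: redis
```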
</b></details>
<details> <summary>Using node affinity, set a Pod to never schedule on a node where the key is "region" and value is "neverland"</summary> <b>vi pod.yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: NotIn
          values:
          - neverland
</b></details>
k get po -l app=web
</b></details>
k get all -l env=staging
</b></details>
k get deploy -l env=prod,type=web
</b></details>
kubectl label nodes some-node hw=max
</b></details>
<details> <summary>Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`</summary> <b>kubectl run some-pod --image=redis --dry-run=client -o yaml > pod.yaml
vi pod.yaml
spec:
  nodeSelector:
    hw: max
kubectl apply -f pod.yaml
</b></details>
<details> <summary>Explain why node selectors might be limited</summary> <b>Assume you would like to run your Pod on all the nodes where hw is set to either max or min, instead of just max. This is not possible with nodeSelector, which only supports simple key=value equality; this is where you might want to consider node affinity (see the sketch below).
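For example, the "hw is either max or min" requirement above can be expressed with node affinity (a sketch to place under the Pod's spec):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: hw
          operator: In
          values:
          - max
          - min
```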
</b></details>
k describe no master | grep -i taints
</b></details>
k taint node minikube app=web:NoSchedule
k describe no minikube | grep -i taints
</b></details>
The Pod will remain in "Pending" status because the only node in the cluster has the taint app=web and the Pod has no matching toleration. </b></details>
<details> <summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code> but the Pod is in pending state. How to fix it?</summary> <b>kubectl edit po some-pod and add the following under spec:
tolerations:
- effect: NoSchedule
  key: app
  operator: Equal
  value: web
Save and exit. The Pod should be in Running state now. </b></details>
<details> <summary>Remove an existing taint from one of the nodes in your cluster</summary> <b>k taint node minikube app=web:NoSchedule-
</b></details>
kubectl describe po <POD_NAME> | grep -i limits
</b></details>
kubectl run yay --image=python --dry-run=client -o yaml > pod.yaml
vi pod.yaml
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
kubectl apply -f pod.yaml
</b></details>
kubectl run yay2 --image=python --dry-run=client -o yaml > pod.yaml
vi pod.yaml
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay2
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
kubectl apply -f pod.yaml
</b></details>
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</b></details>
kubectl top nodes
kubectl top pods
</b></details>
Yes, it is possible. You can run another pod with a command similar to:
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --leader-elect=true
    - --scheduler-name=some-custom-scheduler
...
</b></details>
<details> <summary>Assuming you have multiple schedulers, how to know which scheduler was used for a given Pod?</summary> <b>By running kubectl get events you can see which scheduler was used.
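A quick sketch of what to look for (names are examples):

```sh
kubectl get events -o wide | grep Scheduled
# the SOURCE column shows which scheduler reported the "Scheduled" event,
# e.g. some-custom-scheduler for Pods placed by the custom scheduler
```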
</b></details>
Add the following to the spec of the Pod:
spec:
  schedulerName: some-custom-scheduler
</b></details>