site/content/en/docs/contrib/tests.en.md
makes sure the --download-only parameter in minikube start caches the appropriate images and tarballs.
makes sure --download-only caches the docker driver images as well.
tests functionality of --binary-mirror flag
makes sure minikube works without internet, once the user has cached the necessary images. This test has to run after TestDownloadOnly.
tests addons that require no special environment in parallel
tests the ingress addon by deploying a default nginx pod
tests the registry-creds addon by trying to load its configs
tests the registry addon
tests the metrics server addon by making sure "kubectl top pods" returns a sensible result
tests the OLM addon
tests the csi hostpath driver by creating a persistent volume, snapshotting it and restoring it.
validates that newly created namespaces contain the gcp-auth secret.
tests the GCP Auth addon with either phony or real credentials and makes sure the files are mounted into pods correctly
tests the inspektor-gadget addon by ensuring the pod has come up and addon disables
tests the cloud-spanner addon by ensuring the deployment and pod come up and addon disables
tests the Volcano addon and makes sure Volcano is installed into the cluster.
tests the functionality of the storage-provisioner-rancher addon
tests enabling an addon on a non-existing cluster
tests disabling an addon on a non-existing cluster
tests the nvidia-device-plugin addon by ensuring the pod comes up and the addon disables
tests the amd-gpu-device-plugin addon by ensuring the pod comes up and the addon disables
makes sure minikube certs respect the --apiserver-ips and --apiserver-names parameters
makes sure minikube can start after its profile certs have expired. It does this by configuring minikube certs to expire after 3 minutes, then waiting 3 minutes, then starting again. It also makes sure minikube prints a cert expiration warning to the user.
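The timing logic this cert-expiration test relies on can be sketched with plain epoch arithmetic. This is an illustrative stub only; the real test drives `minikube start` with a short certificate lifetime and actually waits, while the values below are made up.

```shell
# Sketch: the test configures certs to expire after 3 minutes, waits out the
# expiry, then starts again. The comparison below mirrors that expiry check
# with simulated timestamps (no minikube involved).
issued=$(date +%s)
expiry=$((issued + 180))   # 3-minute lifetime, as in the test description
now=$((issued + 200))      # simulated clock after waiting out the expiry
if [ "$now" -gt "$expiry" ]; then
  echo "expired"
fi
```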
makes sure the --docker-env and --docker-opt parameters are respected
tests the --force-systemd flag, as one would expect.
makes sure the --force-systemd flag worked with the docker container runtime
makes sure the --force-systemd flag worked with the containerd container runtime
makes sure the --force-systemd flag worked with the cri-o container runtime
makes sure the MINIKUBE_FORCE_SYSTEMD environment variable works just as well as the --force-systemd flag
makes sure that minikube docker-env command works when the runtime is containerd
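The docker-env mechanism referenced above works by printing `export` statements that `eval` applies to the current shell. A minimal sketch with a stub standing in for minikube (a real session would run `eval "$(minikube -p <profile> docker-env)"`; the variable values here are illustrative):

```shell
# Stub that emits the kind of export script `minikube docker-env` prints.
docker_env_stub() {
  cat <<'EOF'
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
EOF
}
# eval applies the exports to the current shell, so subsequent `docker`
# invocations would talk to the daemon inside minikube.
eval "$(docker_env_stub)"
echo "DOCKER_HOST=$DOCKER_HOST"
```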
makes sure our docker-machine-driver-hyperkit binary can be installed properly
makes sure our docker-machine-driver-hyperkit binary can be installed properly
asserts that there are no unexpected errors displayed in minikube command outputs.
are functionality tests which can safely share a profile in parallel
are functionality tests run using NewestKubernetesVersion
checks if the minikube cluster is created with the correct Kubernetes node labels
Steps:
- Run `kubectl get nodes` to get the node labels
- Make sure the node labels match the expected minikube labels: `minikube.k8s.io/*`

runs tests on all the minikube image commands, e.g. `minikube image load`, `minikube image list`, etc.
Steps:
- Make sure image building works by `minikube image build`
- Make sure image loading works by `minikube image load --daemon`
- Try to load an already-loaded image and make sure `minikube image load --daemon` works
- Load an updated tag with `minikube image load --daemon`
- Load again with `minikube image load --daemon`
- Make sure image removal works by `minikube image rm`
- Make sure image loading works by `minikube image load`
- Load again with `minikube image load`

Skips:

- none driver, as image loading is not supported

checks the functionality of minikube after evaluating docker-env
Steps:
- Run `eval $(minikube docker-env)` to configure the current environment to use minikube's Docker daemon
- Run `minikube status` to get the minikube status
- Make sure minikube components have status `Running`
- Make sure `docker-env` has status `in-use`
- Run `eval $(minikube -p profile docker-env)` and check that we are pointing to the Docker daemon inside minikube
- Make sure `docker images` hits minikube's Docker daemon by checking that `gcr.io/k8s-minikube/storage-provisioner` is in the output of `docker images`

Skips:

- none driver, since `docker-env` is not supported

checks the functionality of minikube after evaluating podman-env
Steps:
- Run `eval $(minikube podman-env)` to configure the current environment to use minikube's Podman daemon, and `minikube status` to get the minikube status
- Make sure minikube components have status `Running`
- Make sure `podman-env` has status `in-use`
- Run `eval $(minikube docker-env)` again and `docker images` to list the images using minikube's Docker daemon
- Make sure `docker images` hits minikube's Podman daemon by checking that `gcr.io/k8s-minikube/storage-provisioner` is in the output of `docker images`

Skips:

- none driver, since `podman-env` is not supported

makes sure minikube start respects the HTTP_PROXY environment variable
Steps:
- Start minikube with the environment variable `HTTP_PROXY` set to the local HTTP proxy

makes sure minikube start respects the HTTPS_PROXY environment variable and works with custom certs. A proxy is started by calling the mitmdump binary in the background, then installing the certs generated by the binary. mitmproxy/dump creates the proxy at localhost on port 8080. Only runs on GitHub Actions for amd64 Linux; otherwise validateStartWithProxy runs instead.
makes sure the audit log contains the correct logging after minikube start
validates that after minikube already started, a minikube start should not change the configs.
Steps:
- The test validateStartWithProxy should have started minikube; make sure the configured node port is 8441
- Run `minikube start` again as a soft start

asserts that kubectl is properly configured (race-condition prone!)
Steps:
- Run `kubectl config current-context`

asserts that `kubectl get pod -A` returns non-zero content
Steps:
- Run `kubectl get po -A` to get all pods in the current minikube profile
- Make sure the output contains the `kube-system` components

validates that the minikube kubectl command returns content
Steps:
- Run `minikube kubectl -- get pods` to get the pods in the current minikube profile

validates that calling the minikube binary linked as "kubectl" acts as a kubectl wrapper. This tests the feature where minikube behaves like kubectl when invoked via a binary named "kubectl".
Steps:
- Run `kubectl get pods` by calling minikube's kubectl binary file directly

verifies that minikube with --extra-config works as expected
Steps:
- Start minikube with the `--extra-config` command line option
- Make sure the specified `--extra-config` is correctly returned

asserts that all Kubernetes components are healthy. NOTE: It expects all components to be Ready, so it makes sense to run it shortly after only those tests that include the '--wait=all' start flag
Steps:
- Run `kubectl get po -l tier=control-plane -n kube-system -o=json` to get all the Kubernetes components
- Make sure all the components have status `Running`

makes sure minikube status outputs correctly
Steps:
- Run `minikube status` with the custom format `host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}`
- Make sure the host, kubelet, apiserver and kubeconfig statuses are shown in the output
- Run `minikube status` again with JSON output
- Make sure the host, kubelet, apiserver and kubeconfig statuses are set in the JSON output

asserts that the dashboard command works
Steps:
- Run `minikube dashboard --url` to start the minikube dashboard and return its URL

asserts that the dry-run mode quickly exits with the right code
Steps:
- Run `minikube start --dry-run --memory 250MB` and make sure it exits with the code `ExInsufficientMemory`
- Run `minikube start --dry-run`

asserts that the language used can be changed with environment variables
Steps:
- Set `LC_ALL=fr` to enable minikube translation to French
- Run `minikube start --dry-run --memory 250MB`

tests the functionality of the cache command (cache add, delete, list)
Steps:
- Run `minikube cache add` and make sure we can add a remote image to the cache
- Run `minikube cache add` and make sure we can build and add a local image to the cache
- Run `minikube cache delete` and make sure we can delete an image from the cache
- Run `minikube cache list` and make sure we can list the images in the cache
- Run `minikube ssh sudo crictl images` and make sure we can list the images in the cache with crictl
- Run `minikube cache reload` to make sure the image is brought back correctly

asserts basic "config" command functionality
Steps:
- Run `minikube config set/get/unset` to make sure configuration is modified correctly

asserts basic "logs" command functionality
Steps:
- Run `minikube logs` and make sure the logs contain some keywords like apiserver, Audit and Last Start

asserts "logs --file" command functionality
Steps:
- Run `minikube logs --file logs.txt` to save the logs to a local file

asserts "profile" command functionality
Steps:
- Run `minikube profile lis` and make sure the command does not fail for the non-existent profile `lis`
- Run `minikube profile list --output json` to make sure the previous command did not create a new profile
- Run `minikube profile list` and make sure the profiles are correctly listed
- Run `minikube profile list -o JSON` and make sure the profiles are correctly listed as JSON output

asserts basic "service" command functionality
- Create a new `kicbase/echo-server` deployment
- Run `minikube service list` to make sure the newly created service is correctly listed in the output
- Run `minikube service list -o JSON` and make sure the services are correctly listed as JSON output
- Run `minikube service` with `--https --url` to make sure the HTTPS endpoint URL of the service is printed
- Run `minikube service` with `--url --format={{.IP}}` to make sure the IP address of the service is printed
- Run `minikube service` with a regular `--url` to make sure the HTTP endpoint URL of the service is printed
Steps:
- Create a new `kicbase/echo-server` deployment
- Run `minikube service` with a regular `--url` to make sure the HTTP endpoint URL of the service is printed

asserts basic "addon" command functionality
Steps:
- Run `minikube addons list` to list the addons in a tabular format
- Make sure `dashboard`, `ingress` and `ingress-dns` are listed as available addons
- Run `minikube addons list -o JSON` to list the addons in JSON format

asserts basic "ssh" command functionality
Steps:
- Run `minikube ssh echo hello` to make sure we can SSH into the minikube container and run a command
- Run `minikube ssh cat /etc/hostname` as well to make sure the command is run inside minikube

asserts basic "cp" command functionality
Steps:
- Run `minikube cp ...` to copy a file to the minikube node
- Run `minikube ssh sudo cat ...` to print out the copied file within minikube

Skips:

- none driver, since cp is not supported

validates a minimalist MySQL deployment
Steps:
- Run `kubectl replace --force -f testdata/mysql/yaml`
- Wait for the `mysql` pod to be running
- Run `mysql -e show databases;` inside the MySQL pod to verify MySQL is up and running (`mysqld` first comes up without users configured; scan for names in case of a reschedule)

checks the existence of the test file
Steps:
- Check that the test file synced by `setupFileSync` exists inside minikube

Skips:

- none driver, since SSH is not supported

checks to make sure a custom cert has been copied into the minikube guest and installed correctly
asserts that for a given runtime the other runtimes are disabled; for example, with the containerd runtime, docker and crio must not be running
Steps:
- Run `minikube ssh sudo systemctl is-active ...` and make sure the other container runtimes are not running

asserts basic "update-context" command functionality
Steps:
- Run `minikube update-context`

asserts that the minikube version command works fine for both --short and --components
Steps:
- Run `minikube version --short` and make sure the returned version is a valid semver
- Run `minikube version --components` and make sure the component versions are returned

asserts that the minikube license command downloads and untars the licenses
Note: This test will fail on release PRs as the licenses file for the new version won't be uploaded at that point
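The semver check from the version test above boils down to validating the shape of a version string. A shell sketch with an illustrative value (a real run would capture it with `v=$(minikube version --short)`):

```shell
# Validate that a version string looks like semver: digits and dots only,
# at least three dot-separated components, no leading/trailing dot.
v="v1.33.1"   # illustrative value, not a real command's output
case "${v#v}" in
  *[!0-9.]*|.*|*.) result="invalid" ;;
  *.*.*)           result="valid" ;;
  *)               result="invalid" ;;
esac
echo "$v is $result"
```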
makes sure minikube will not start a tunnel for an unavailable service that has no running pods
verifies the minikube mount command works properly on the platforms that support it.
makes sure PVCs work properly. Verifies that at least one StorageClass exists, applies a PVC manifest (pvc.yaml), and verifies that the PVC named myclaim reaches phase Bound. Creates a test pod (sp-pod) that mounts the claim (via createPVTestPod), writes a file foo to the mounted volume at /tmp/mount/foo, then deletes the pod, recreates it, and verifies the file foo still exists by listing /tmp/mount, proving data persists across pod restarts.
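A minimal PVC manifest of the shape this test applies might look like the following. The claim name `myclaim` comes from the description above; the storage size and access mode are assumptions for illustration, and the real manifest lives in the test's testdata.

```shell
# Write an illustrative pvc.yaml (fields other than the claim name assumed).
cat > /tmp/pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
# A real test would now apply it: kubectl apply -f /tmp/pvc.yaml
grep "name: myclaim" /tmp/pvc.yaml
```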
makes sure the minikube tunnel command works as expected
starts minikube tunnel
ensures only 1 tunnel can run simultaneously
starts an nginx pod and nginx service, and waits for nginx to have a LoadBalancer ingress IP
validates if the test service can be accessed with LoadBalancer IP from host
validates if the DNS forwarding works by dig command DNS lookup NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
validates if the DNS forwarding works by dscacheutil command DNS lookup NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
validates if the test service can be accessed with DNS forwarding from host NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
stops minikube tunnel
tests the functionality of the gVisor addon
tests all ha (multi-control plane) cluster functionality
ensures ha (multi-control plane) cluster can start.
deploys an app to ha (multi-control plane) cluster and ensures all nodes can serve traffic.
uses app previously deployed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
uses the minikube node add command to add a worker node to an existing ha (multi-control plane) cluster.
check if all node labels were configured correctly.
Steps:
- Run `kubectl get nodes` to get the node labels
- Make sure the node labels match the expected minikube labels: `minikube.k8s.io/*`

ensures minikube profile list outputs correctly with ha (multi-control plane) clusters.
ensures minikube cp works with ha (multi-control plane) clusters.
tests ha (multi-control plane) cluster by stopping a secondary control-plane node using minikube node stop command.
ensures minikube profile list outputs correctly with ha (multi-control plane) clusters.
tests the minikube node start command on existing stopped secondary node.
restarts minikube cluster and checks if the reported node list is unchanged.
tests the minikube node delete command on secondary control-plane. note: currently, 'minikube status' subcommand relies on primary control-plane node and storage-provisioner only runs on a primary control-plane node.
runs minikube stop on a ha (multi-control plane) cluster.
verifies a soft restart on a ha (multi-control plane) cluster works.
uses the minikube node add command to add a secondary control-plane node to an existing ha (multi-control plane) cluster.
makes sure the 'minikube image build' command works fine
starts a cluster for the image builds
is a normal test case for minikube image build with the -t parameter
is a normal test case for minikube image build with the -t and -f parameters
is a test case building with --build-opt
is a test case building with --build-env
is a test case building with .dockerignore
verifies files and packages installed inside minikube ISO/Base image
makes sure json output works properly for the start, pause, unpause, and stop commands
makes sure each step has a distinct step number
verifies that for a successful minikube start, 'current step' should be increasing
makes sure json output can print errors properly
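The step-number checks above can be sketched by extracting `currentstep` values from JSON events and comparing the list against its sorted, deduplicated self. The event lines below are illustrative stand-ins for real `minikube start -o json` output.

```shell
# Illustrative JSON events carrying step numbers.
events='{"data":{"currentstep":"1"}}
{"data":{"currentstep":"2"}}
{"data":{"currentstep":"3"}}'
# Pull out the currentstep values, one per line.
steps=$(printf '%s\n' "$events" | sed -n 's/.*"currentstep":"\([0-9]*\)".*/\1/p')
# Distinct and increasing <=> the list equals its numerically sorted, unique self.
sorted=$(printf '%s\n' "$steps" | sort -n -u)
[ "$steps" = "$sorted" ] && echo "steps are distinct and increasing"
```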
verifies the docker driver works with a custom network
verifies the docker driver works when run with an existing network
verifies the docker/podman driver works with a custom subnet
starts minikube with the static IP flag
will return true if the integration test is running against a passed --base-image flag
tests using the mount command on start
starts a cluster with mount enabled
checks if the cluster has a folder mounted
stops a cluster
restarts a cluster
tests all multi node cluster functionality
makes sure a 2 node cluster can start
uses the minikube node add command to add a node to an existing cluster
makes sure minikube profile list outputs correctly with multinode clusters
make sure minikube cp works with multinode clusters.
check if all node labels were configured correctly
Steps:
- Run `kubectl get nodes` to get the node labels
- Make sure the node labels match the expected minikube labels: `minikube.k8s.io/*`

tests the minikube node stop command
tests the minikube node start command on an existing stopped node
restarts minikube cluster and checks if the reported node list is unchanged
runs minikube stop on a multinode cluster
verifies a soft restart on a multinode cluster works
tests the minikube node delete command
tests that the node name verification works as expected
deploys an app to a multinode cluster and makes sure all nodes can serve traffic
uses app previously deployed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
tests all supported CNI options Options tested: kubenet, bridge, flannel, kindnet, calico, cilium Flags tested: enable-default-cni (legacy), false (CNI off), auto-detection
checks that minikube returns an error if the container runtime is "containerd" or "crio" and --cni=false
makes sure hairpinning (https://en.wikipedia.org/wiki/Hairpinning) is correctly configured for the given CNI. It tries to access the deployment/netcat pod using the external IP address obtained from 'netcat' service DNS resolution; the access should fail if hairpinMode is off.
tests starting minikube without Kubernetes, for use cases where user only needs to use the container runtime (docker, containerd, crio) inside minikube
expects an error when starting a minikube cluster without Kubernetes but with a Kubernetes version specified.
starts a minikube cluster with Kubernetes started/configured.
starts a minikube cluster while stopping Kubernetes.
starts a minikube cluster without kubernetes started/configured
validates that there is no kubernetes running inside minikube
validates that minikube is stopped after a --no-kubernetes start
validates that profile list works with --no-kubernetes
validates that minikube start with no args works.
tests to make sure the CHANGE_MINIKUBE_NONE_USER environment variable is respected and changes the minikube file permissions from root to the correct user.
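The permission check described above comes down to comparing a file's owner with the invoking user. A sketch using a temp file as a stand-in for minikube's config files (which, on a none-driver `sudo` start, would otherwise end up owned by root):

```shell
# Create a scratch file and verify it is owned by the current user, the kind
# of ownership assertion this test applies to minikube's files.
f=$(mktemp)
owner=$(ls -l "$f" | awk '{print $3}')
if [ "$owner" = "$(id -un)" ]; then
  echo "owned by current user"
fi
rm -f "$f"
```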
tests minikube pause functionality
just starts a new minikube cluster
validates that starting a running cluster does not invoke reconfiguration
runs minikube pause
runs minikube unpause
deletes the unpaused cluster
makes sure nothing is left over after deleting a profile, such as containers or volumes
makes sure paused clusters show up in minikube status correctly
verifies that disabling the initial preload, pulling a specific image, and restarting the cluster preserves the image across restarts. Also tests that --preload-source works for both GitHub and GCS.
tests the schedule stop functionality on Windows
tests the schedule stop functionality on Unix
makes sure skaffold run can be run with minikube
tests starting, stopping and restarting minikube clusters with various Kubernetes versions and configurations. The oldest supported, newest supported and default Kubernetes versions are always tested.
runs the initial minikube start
deploys an app to the minikube cluster
makes sure addons can be enabled while cluster is active.
tests minikube stop
makes sure addons can be enabled on a stopped cluster
verifies that starting a stopped cluster works
verifies that a user's app will not vanish after a minikube stop
validates that an addon which was enabled while minikube was stopped will be enabled and working after restart.
verifies that a restarted cluster contains all the necessary images
verifies that minikube pause works
makes sure minikube status displays the correct info if there is insufficient disk space on the machine
upgrades a running legacy cluster to minikube at HEAD
starts a legacy minikube, stops it, and then upgrades to minikube at HEAD
upgrades Kubernetes from oldest to newest
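Picking the oldest and newest endpoints for an upgrade like this can be sketched with a version sort. The version list below is illustrative, not the actual set the tests use.

```shell
# Select the oldest and newest of a set of Kubernetes versions using
# GNU sort's version ordering (-V).
versions='v1.20.0
v1.31.0
v1.26.3'
oldest=$(printf '%s\n' "$versions" | sort -V | head -n 1)
newest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)
echo "upgrade path: $oldest -> $newest"
```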
tests a Docker upgrade where the underlying container is missing