# NVIDIA GPU passthrough and Kata
This page gives an overview of the different modes in which GPUs can be passed
to a Kata Containers container, lists host system requirements, explains how
Kata Containers guest components can be built to support the NVIDIA GPU
scenario, and gives practical usage examples using `ctr`.
Please see the guide Enabling NVIDIA GPU workloads using GPU passthrough with Kata Containers for documentation of an end-to-end reference implementation of a Kata Containers stack for GPU passthrough using QEMU, the Go-based Kata runtime, and an NVIDIA-specific root filesystem. This reference implementation is built and validated in Kata's CI, and it can be used to test GPU workloads with Kata components and Kubernetes out of the box.
An NVIDIA GPU device can be passed to a Kata Containers container using GPU
passthrough (NVIDIA GPU passthrough mode) as well as GPU mediated passthrough
(NVIDIA vGPU mode).
In NVIDIA GPU passthrough mode, an entire physical GPU is directly assigned to one VM, bypassing the NVIDIA Virtual GPU Manager. In this mode of operation, the GPU is accessed exclusively by the NVIDIA driver running in the VM to which it is assigned. The GPU is not shared among VMs.
NVIDIA Virtual GPU (vGPU) enables multiple virtual machines (VMs) to have
simultaneous, direct access to a single physical GPU, using the same NVIDIA
graphics drivers that are deployed on non-virtualized operating systems. By
doing this, NVIDIA vGPU provides VMs with unparalleled graphics performance,
compute performance, and application compatibility, together with the
cost-effectiveness and scalability brought about by sharing a GPU among multiple
workloads. A vGPU can be either time-sliced or Multi-Instance GPU (MIG)-backed
with MIG-slices.
| Technology | Description | Behavior | Detail |
|---|---|---|---|
| NVIDIA GPU passthrough mode | GPU passthrough | Physical GPU assigned to a single VM | Direct GPU assignment to VM without limitation |
| NVIDIA vGPU time-sliced | GPU time-sliced | Physical GPU time-sliced for multiple VMs | Mediated passthrough |
| NVIDIA vGPU MIG-backed | GPU with MIG-slices | Physical GPU MIG-sliced for multiple VMs | Mediated passthrough |
NVIDIA GPUs recommended for virtualization:
Some hardware requires a larger PCI BARs window, for example, NVIDIA Tesla P100 or K40m:
```sh
$ lspci -s d0:00.0 -vv | grep Region
	Region 0: Memory at e7000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at 222800000000 (64-bit, prefetchable) [size=32G] # Above 4G
	Region 3: Memory at 223810000000 (64-bit, prefetchable) [size=32M]
```
For large BARs devices, MMIO mapping above 4G address space should be enabled
in the PCI configuration of the BIOS.
Some hardware vendors use a different name in BIOS, such as:
If you are using a GPU based on the Ampere architecture or later, SR-IOV
additionally needs to be enabled for the vGPU use case.
The following configurations need to be enabled on your host kernel:
- `CONFIG_VFIO`
- `CONFIG_VFIO_IOMMU_TYPE1`
- `CONFIG_VFIO_MDEV`
- `CONFIG_VFIO_MDEV_DEVICE`
- `CONFIG_VFIO_PCI`

Your host kernel needs to be booted with `intel_iommu=on` on the kernel command
line.
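These requirements can be turned into a quick preflight check. A minimal sketch follows; it uses a sample config file so the snippet is self-contained, and on a real host you would point `CONFIG` at the running kernel's config, commonly `/boot/config-$(uname -r)` (the exact path varies by distribution and is an assumption):

```sh
# Sketch: check that the required VFIO options are enabled in a kernel config.
# A sample config stands in for /boot/config-$(uname -r).
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_VFIO=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
CONFIG_VFIO_PCI=y
EOF
MISSING=""
for opt in CONFIG_VFIO CONFIG_VFIO_IOMMU_TYPE1 CONFIG_VFIO_MDEV \
           CONFIG_VFIO_MDEV_DEVICE CONFIG_VFIO_PCI; do
    # Both built-in (=y) and module (=m) satisfy the requirement.
    grep -q "^${opt}=[ym]" "$CONFIG" || MISSING="$MISSING $opt"
done
[ -z "$MISSING" ] && echo "all required VFIO options enabled" || echo "missing:$MISSING"
```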
This section explains how to build an environment with Kata Containers bits supporting the GPU scenario. We first deploy and configure the regular Kata components, then describe how to build the guest kernel and root filesystem.
To use non-large BARs devices (for example, NVIDIA Tesla T4), you need Kata version 1.3.0 or above. Follow the Kata Containers setup instructions to install the latest version of Kata.
To use large BARs devices (for example, NVIDIA Tesla P100), you need Kata version 1.11.0 or above.
Either of the following configurations in the Kata `configuration.toml` file
can work:
Hotplug for PCI devices with small BARs by `acpi_pcihp` (Linux's ACPI PCI
Hotplug driver):

```toml
machine_type = "q35"
hotplug_vfio_on_root_bus = false
```
Hotplug for PCIe devices with large BARs by `pciehp` (Linux's PCIe Hotplug
driver):

```toml
machine_type = "q35"
hotplug_vfio_on_root_bus = true
pcie_root_port = 1
```
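Switching between the two variants can also be scripted. A minimal sketch, run here against a throwaway copy of the file; on a real host, point the edit at your actual `configuration.toml`, whose location depends on your installation:

```sh
# Sketch: flip configuration.toml from the small-BARs to the large-BARs
# hotplug variant. A temp file stands in for the real configuration.toml.
CONF=$(mktemp)
printf 'machine_type = "q35"\nhotplug_vfio_on_root_bus = false\n' > "$CONF"
sed -i 's/^hotplug_vfio_on_root_bus.*/hotplug_vfio_on_root_bus = true/' "$CONF"
# pcie_root_port is needed for pciehp hotplug; append it if absent.
grep -q '^pcie_root_port' "$CONF" || echo 'pcie_root_port = 1' >> "$CONF"
cat "$CONF"
```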
The default guest kernel installed with Kata Containers does not provide GPU support. To use an NVIDIA GPU with Kata Containers, you need to build a kernel with the necessary GPU support.
The following kernel config options need to be enabled:
```sh
# Support PCI/PCIe device hotplug (required for large BARs devices)
CONFIG_HOTPLUG_PCI_PCIE=y

# Support for loading modules (required for loading NVIDIA drivers)
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

# Enable the MMIO access method for PCIe devices (required for large BARs devices)
CONFIG_PCI_MMCONFIG=y
```
The following kernel config options need to be disabled:
```sh
# Disable the open source NVIDIA driver nouveau
# It conflicts with the official NVIDIA driver
CONFIG_DRM_NOUVEAU=n
```
Note:

`CONFIG_DRM_NOUVEAU` is normally disabled by default. It is worth checking that it is not enabled in your kernel configuration to prevent any conflicts.
Build the Kata Containers kernel with the previous config options, using the instructions described in Building Kata Containers kernel. For further details on building and installing guest kernels, see the developer guide.
There is an easy way to build a guest kernel that supports NVIDIA GPU:
```sh
## Build guest kernel with ../../tools/packaging/kernel

# Prepare (download guest kernel source, generate .config)
$ ./build-kernel.sh -v 5.15.23 -g nvidia -f setup

# Build guest kernel
$ ./build-kernel.sh -v 5.15.23 -g nvidia build

# Install guest kernel
$ sudo -E ./build-kernel.sh -v 5.15.23 -g nvidia install
```
However, there are special cases like Dragonball VMM. It directly supports device hot-plug/hot-unplug
via upcall to avoid ACPI virtualization and minimize VM overhead. Since upcall isn't upstream kernel
code, using Dragonball VMM for NVIDIA GPU hot-plug/hot-unplug requires applying the Upcall patchset in
addition to the above kernel configuration items. Follow these steps to build for NVIDIA GPU hot-[un]plug
for Dragonball:
```sh
# Prepare .config to support both upcall and NVIDIA GPU
$ ./build-kernel.sh -v 5.10.25 -e -t dragonball -g nvidia -f setup

# Build guest kernel to support both upcall and NVIDIA GPU
$ ./build-kernel.sh -v 5.10.25 -e -t dragonball -g nvidia build

# Install guest kernel to support both upcall and NVIDIA GPU
$ sudo -E ./build-kernel.sh -v 5.10.25 -e -t dragonball -g nvidia install
```
Note:
- `-v 5.10.25` specifies the guest kernel version.
- `-e` means experimental, mainly because the `upcall` patches are not in the upstream Linux kernel.
- `-t dragonball` specifies the hypervisor type.
- `-f` generates the `.config` file.
To build the NVIDIA driver in the Kata container, the Linux kernel headers are required.
This is a way to generate deb packages for the kernel headers:
Note: Run `make rpm-pkg` to build the rpm package. Run `make deb-pkg` to build the deb package.
```sh
$ cd kata-linux-5.15.23-89
$ make deb-pkg
```
Before using the new guest kernel, please update the `kernel` entry in
`configuration.toml`:

```toml
kernel = "/usr/share/kata-containers/vmlinuz-nvidia-gpu.container"
```
Consult the Developer-Guide on how to create a
rootfs base image for a distribution of your choice. This is going to be used as
a base for an NVIDIA enabled guest OS. Use the `EXTRA_PKGS` variable to install
all the packages needed to compile the drivers. Also copy the kernel development
packages from the previous `make deb-pkg` step into `$ROOTFS_DIR`.

```sh
export EXTRA_PKGS="gcc make curl gnupg"
```
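The copy step can be sketched as follows. The directory names below are placeholders for illustration, since the real `$ROOTFS_DIR` and the directory holding the `.deb` files from `make deb-pkg` depend on your setup:

```sh
# Sketch: stage the kernel headers .deb packages into the rootfs so they can
# be installed inside the chroot later. Temp dirs and the package file name
# are stand-ins for the real paths.
ROOTFS_DIR=$(mktemp -d)   # normally exported by the rootfs build scripts
DEB_DIR=$(mktemp -d)      # normally the parent dir of the kernel build tree
touch "$DEB_DIR/linux-headers-5.15.23-nvidia-gpu_1_amd64.deb"
cp "$DEB_DIR"/*.deb "$ROOTFS_DIR"/
ls "$ROOTFS_DIR"
```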
With `$ROOTFS_DIR` exported in the previous step, we can now install all the
needed parts in the guest OS. In this case, we have an Ubuntu-based rootfs.

First of all, mount the special filesystems into the rootfs:
```sh
$ sudo mount -t sysfs -o ro none ${ROOTFS_DIR}/sys
$ sudo mount -t proc -o ro none ${ROOTFS_DIR}/proc
$ sudo mount -t tmpfs none ${ROOTFS_DIR}/tmp
$ sudo mount -o bind,ro /dev ${ROOTFS_DIR}/dev
$ sudo mount -t devpts none ${ROOTFS_DIR}/dev/pts
```
Now we can enter the chroot:

```sh
$ sudo chroot ${ROOTFS_DIR}
```
Inside the rootfs we are going to install the drivers and the toolkit that enable easy creation of GPU containers with Kata. This rootfs can also be used for any other container, not only for GPU workloads.
As a prerequisite, install the copied kernel development packages:

```sh
$ sudo dpkg -i *.deb
```
Get the driver run file. Since we need to build the driver against a kernel that
is not running on the host, we need the ability to specify the exact version the
driver should be built against. Use the kernel version that was used for building
the NVIDIA kernel (`5.15.23-nvidia-gpu`).
```sh
$ wget https://us.download.nvidia.com/XFree86/Linux-x86_64/510.54/NVIDIA-Linux-x86_64-510.54.run
$ chmod +x NVIDIA-Linux-x86_64-510.54.run
# Extract the source files so we can run the installer with arguments
$ ./NVIDIA-Linux-x86_64-510.54.run -x
$ cd NVIDIA-Linux-x86_64-510.54
$ ./nvidia-installer -k 5.15.23-nvidia-gpu
```
With the drivers installed, we need to install the toolkit, which will take care of providing the right bits into the container.
```sh
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
$ curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ apt update
$ apt install nvidia-container-toolkit
```
Create the hook execution file for Kata:
```sh
# Content of $ROOTFS_DIR/usr/share/oci/hooks/prestart/nvidia-container-toolkit.sh
#!/bin/bash -x
/usr/bin/nvidia-container-toolkit -debug $@
```
Make sure the hook script is executable:

```sh
chmod +x $ROOTFS_DIR/usr/share/oci/hooks/prestart/nvidia-container-toolkit.sh
```
As the last step inside the chroot, one can do some cleanup of files or package caches. Then build the rootfs and configure it for use with Kata according to the development guide.
Enable the `guest_hook_path` in Kata's `configuration.toml`:

```toml
guest_hook_path = "/usr/share/oci/hooks"
```
Finally, remove the additional packages and files that were added
to the `$ROOTFS_DIR` to keep it as small as possible.
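Remember to also unmount the special filesystems that were mounted earlier, in the reverse order of mounting. A dry-run sketch (drop the `echo` to actually unmount; `$ROOTFS_DIR` must point at your rootfs):

```sh
# Sketch: unmount the rootfs special filesystems in reverse mount order.
# This is a dry run that only prints the commands it would execute.
ROOTFS_DIR=${ROOTFS_DIR:-/path/to/rootfs}
UNMOUNTS=""
for m in dev/pts dev tmp proc sys; do
    UNMOUNTS="$UNMOUNTS ${ROOTFS_DIR}/${m}"
    echo sudo umount "${ROOTFS_DIR}/${m}"
done
```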
Having built an NVIDIA rootfs and kernel, we can now run any GPU container
without installing the drivers into the container. Check the NVIDIA device status
with `nvidia-smi`:
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/nvidia/cuda:11.6.0-base-ubuntu20.04" cuda nvidia-smi
Fri Mar 18 10:36:59 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54       Driver Version: 510.54       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A30X         Off  | 00000000:02:00.0 Off |                    0 |
| N/A   38C    P0    67W / 230W |      0MiB / 24576MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
The following sections give usage examples for this based on the different modes.
Use the following steps to pass an NVIDIA GPU device in passthrough mode with Kata:
Find the Bus-Device-Function (BDF) for the GPU device on the host:
```sh
$ sudo lspci -nn -D | grep -i nvidia
0000:d0:00.0 3D controller [0302]: NVIDIA Corporation Device [10de:20b9] (rev a1)
```
- PCI address `0000:d0:00.0` is assigned to the hardware GPU device.
- `10de:20b9` is the device ID of the hardware GPU device.
Find the IOMMU group for the GPU device:
```sh
$ BDF="0000:d0:00.0"
$ readlink -e /sys/bus/pci/devices/$BDF/iommu_group
/sys/kernel/iommu_groups/192
```

The output shows that the GPU belongs to IOMMU group 192. The next step is to bind the GPU to the VFIO-PCI driver.
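The group number can also be captured in a variable for later use. A self-contained sketch that builds a fake sysfs tree so it runs without GPU hardware; on a real host, drop the setup lines and use `/sys/bus/pci/devices` directly:

```sh
# Sketch: derive the IOMMU group number for a BDF from sysfs. The fake tree
# below exists only so the snippet runs anywhere.
SYS=$(mktemp -d)
BDF="0000:d0:00.0"
mkdir -p "$SYS/kernel/iommu_groups/192" "$SYS/bus/pci/devices/$BDF"
ln -s "$SYS/kernel/iommu_groups/192" "$SYS/bus/pci/devices/$BDF/iommu_group"
# On a real host: GROUP=$(basename "$(readlink -e /sys/bus/pci/devices/$BDF/iommu_group)")
GROUP=$(basename "$(readlink -e "$SYS/bus/pci/devices/$BDF/iommu_group")")
echo "IOMMU group: $GROUP"   # the VFIO device will appear as /dev/vfio/$GROUP
```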
```sh
$ BDF="0000:d0:00.0"
$ DEV="/sys/bus/pci/devices/$BDF"
$ echo "vfio-pci" > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe

# To return the device to the standard driver, we simply clear the
# driver_override and reprobe the device, e.g.:
$ echo > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe
```
Check the IOMMU group number under `/dev/vfio`:

```sh
$ ls -l /dev/vfio
total 0
crw------- 1 zvonkok zvonkok 243,   0 Mar 18 03:06 192
crw-rw-rw- 1 root    root     10, 196 Mar 18 02:27 vfio
```
Start a Kata container with the GPU device:
```sh
# You may need to `modprobe vhost-vsock` if you get
# host system doesn't support vsock: stat /dev/vhost-vsock
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch uname -r
```
Run `lspci` within the container to verify the GPU device appears in the list
of PCI devices. Note the vendor-device ID of the GPU (`10de:20b9`) in the `lspci` output.
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -nn | grep '10de:20b9'"
```
Additionally, you can check the PCI BARs space of the NVIDIA GPU device in the container:

```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -s 02:00.0 -vv | grep Region"
```

Note: If the `Region` lines show assigned memory ranges, the BAR space of the NVIDIA GPU has been successfully allocated.
NVIDIA vGPU is a licensed product on all supported GPU boards. A software license is required to enable all vGPU features within the guest VM. NVIDIA vGPU manager needs to be installed on the host to configure GPUs in vGPU mode. See NVIDIA Virtual GPU Software Documentation v14.0 through 14.1 for more details.
In time-sliced mode, the GPU is not partitioned: the workloads use the whole GPU and share access to the GPU engines, and processes are scheduled in series. The best effort scheduler is the default and can be exchanged for other scheduling policies; see the documentation above for how to do that.
Beware: if MIG was enabled before, disable it on the GPU if you want
to use time-sliced vGPU:

```sh
$ sudo nvidia-smi -mig 0
```
Enable the virtual functions for the physical GPU in the sysfs file system:

```sh
$ sudo /usr/lib/nvidia/sriov-manage -e 0000:41:00.0
```
Get the BDF of the available virtual function on the GPU, and choose one for the
following steps.
```sh
$ cd /sys/bus/pci/devices/0000:41:00.0/
$ ls -l | grep virtfn
```
The following shell snippet walks the sysfs tree and prints only the instance
types that are still available, i.e. that can be created:
```sh
# The 00.0 is often the PF of the device. The VFs will have the function in the
# BDF incremented by some values so e.g. the very first VF is 0000:41:00.4
cd /sys/bus/pci/devices/0000:41:00.0/
for vf in $(ls -d virtfn*); do
    BDF=$(basename $(readlink -f $vf))
    for md in $(ls -d $vf/mdev_supported_types/*); do
        AVAIL=$(cat $md/available_instances)
        NAME=$(cat $md/name)
        DIR=$(basename $md)
        if [ $AVAIL -gt 0 ]; then
            echo "| BDF | INSTANCES | NAME | DIR |"
            echo "+--------------+-----------+----------------+------------+"
            printf "| %12s |%10d |%15s | %10s |\n\n" "$BDF" "$AVAIL" "$NAME" "$DIR"
        fi
    done
done
```
If there are available instances you will get output like the following (for the
first VF). Beware that the output is highly dependent on the GPU you have; if
there is no output, check again whether MIG is really disabled.
```
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 |  GRID A100D-4C | nvidia-692 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 |  GRID A100D-8C | nvidia-693 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-10C | nvidia-694 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-16C | nvidia-695 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-20C | nvidia-696 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-40C | nvidia-697 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-80C | nvidia-698 |
```
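When scripting vGPU creation, it is handy to resolve the mdev type directory (the DIR column) from the profile name. A sketch using a fake sysfs tree so it runs anywhere; the profile name and the `nvidia-692` directory are taken from the example listing and are assumptions for your GPU:

```sh
# Sketch: pick the mdev type directory whose profile name matches the wanted
# vGPU type. The fake tree stands in for /sys/bus/pci/devices/<PF BDF>.
PF=$(mktemp -d)
mkdir -p "$PF/virtfn0/mdev_supported_types/nvidia-692"
echo "GRID A100D-4C" > "$PF/virtfn0/mdev_supported_types/nvidia-692/name"

WANT="GRID A100D-4C"
CHOSEN=""
for md in "$PF"/virtfn0/mdev_supported_types/*; do
    if [ "$(cat "$md/name")" = "$WANT" ]; then
        CHOSEN=$(basename "$md")
    fi
done
echo "mdev type: $CHOSEN"
```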
Change to the mdev_supported_types directory for the virtual function on which
you want to create the vGPU. Taking the first output as an example:
```sh
$ cd virtfn0/mdev_supported_types/nvidia-692
$ UUIDGEN=$(uuidgen)
$ sudo bash -c "echo $UUIDGEN > create"
```
Confirm that the vGPU was created. You should see the UUID pointing to a
subdirectory of the sysfs space.
```sh
$ ls -l /sys/bus/mdev/devices/
```
Get the IOMMU group number and verify there is a VFIO device created to use
with Kata.
```sh
$ ls -l /sys/bus/mdev/devices/*/
$ ls -l /dev/vfio
```
Use the VFIO device created in the same way as in the passthrough use case.
Beware that the guest needs the NVIDIA guest drivers, so one would need to build
a new guest OS image.
We are not going into detail about what MIG is; briefly, it is a technology to
partition the hardware into independent instances with guaranteed quality of
service. For more details see the
NVIDIA Multi-Instance GPU User Guide.
First, enable MIG mode for the GPU. Depending on the platform you are running,
a reboot may be necessary; some platforms support GPU reset.

```sh
$ sudo nvidia-smi -mig 1
```
If the platform supports GPU reset, one can run the following; otherwise you will get a warning to reboot the server.

```sh
$ sudo nvidia-smi --gpu-reset
```
The driver by default provides a number of profiles that users can opt into when configuring the MIG feature.
```sh
$ sudo nvidia-smi mig -lgip
+-----------------------------------------------------------------------------+
| GPU instance profiles:                                                      |
| GPU   Name             ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                              Free/Total   GiB              CE    JPEG  OFA  |
|=============================================================================|
|   0  MIG 1g.10gb       19     7/7        9.50       No     14     0     0   |
|                                                             1     0     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.10gb+me    20     1/1        9.50       No     14     1     0   |
|                                                             1     1     1   |
+-----------------------------------------------------------------------------+
|   0  MIG 2g.20gb       14     3/3        19.50      No     28     1     0   |
|                                                             2     0     0   |
+-----------------------------------------------------------------------------+
...
```
Create the GPU instances that correspond to the vGPU types of the MIG-backed
vGPUs that you will create; see
NVIDIA A100 PCIe 80GB Virtual GPU Types.

```sh
# MIG 1g.10gb --> vGPU A100D-1-10C
$ sudo nvidia-smi mig -cgi 19
```
List the GPU instances and get the GPU instance ID to create the compute instance:

```sh
$ sudo nvidia-smi mig -lgi         # list the created GPU instances
$ sudo nvidia-smi mig -cci -gi 9   # each GPU instance can have several compute
                                   # instances. Instance -> Workload
```
Verify that the compute instances were created within the GPU instance:

```sh
$ nvidia-smi
... snip ...
+-----------------------------------------------------------------------------+
| MIG devices:                                                                |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|  0    9   0   0  |      0MiB /  9728MiB | 14      0 |  1   0    0    0    0 |
|                  |      0MiB /  4095MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
... snip ...
```
We can use the snippet from before to list the available vGPU instances, this
time MIG-backed:
```
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+------------------+------------+
| 0000:41:00.4 |         1 | GRID A100D-1-10C | nvidia-699 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+------------------+------------+
| 0000:41:00.5 |         1 | GRID A100D-1-10C | nvidia-699 |

| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+------------------+------------+
| 0000:41:01.6 |         1 | GRID A100D-1-10C | nvidia-699 |
... snip ...
```
Repeat the steps after the listing snippet to create the corresponding mdev
device, and use the guest OS created in the previous section for time-sliced
vGPUs.