Deploy with Kata Containers#
About the Operator with Kata Containers#
Kata Containers is an open source project that creates lightweight Virtual Machines (VMs) that feel and perform like traditional containers such as a Docker container. A traditional container packages software for user-space isolation from the host, but the container runs on the host and shares the operating system kernel with the host. Sharing the operating system kernel is a potential vulnerability.
A Kata container runs in a virtual machine on the host. The virtual machine has a separate operating system and operating system kernel. Hardware virtualization and a separate kernel provide improved workload isolation in comparison with traditional containers.
The NVIDIA GPU Operator works with the Kata container runtime. Kata uses a hypervisor, such as QEMU, to provide a lightweight virtual machine with a single purpose: to run a Kubernetes pod.
The following diagram shows the software components that Kubernetes uses to run a Kata container.
Software Components with Kata Container Runtime#
Tip
This page describes deploying with Kata containers only. Refer to the Confidential Containers documentation if you are interested in deploying Confidential Containers with Kata Containers and the GPU Operator.
Benefits of Using Kata Containers#
The primary benefits of Kata Containers are as follows:
Running untrusted workloads in a container. The virtual machine provides a layer of defense against the untrusted code.
Limiting access to hardware devices such as NVIDIA GPUs. The virtual machine is provided access to specific devices. This approach ensures that the workload cannot access additional devices.
Transparent deployment of unmodified containers.
Limitations and Restrictions#
For GPU passthrough workloads, all GPUs must be assigned to one Kata Container virtual machine. Configuring only some GPUs on a node for Kata Containers is not supported. vGPU is not supported.
Support for Kata Containers is limited to the implementation described on this page. The Operator offers Technology Preview support for Red Hat OpenShift Sandboxed Containers v1.12.
NVIDIA supports the Operator and Kata Containers with the containerd runtime only.
Cluster Topology Considerations#
You can configure all the worker nodes in your cluster for Kata Containers or you can configure some nodes for Kata Containers and others for traditional containers. Consider the following example where node A is configured to run traditional containers and node B is configured to run Kata Containers.
| Node A - Traditional Container nodes receive the following software components | Node B - Kata Container nodes receive the following software components |
|---|---|
| NVIDIA Datacenter Driver - to manage the GPU | VFIO Manager - to load vfio-pci and bind it to the GPUs on the node |
| NVIDIA Container Toolkit - to ensure containers can access GPUs | Sandbox Device Plugin - to discover and advertise the passthrough GPUs to kubelet |
| NVIDIA Device Plugin - to discover and advertise GPU resources to kubelet | Sandbox Validator - to validate the sandbox operands |
| NVIDIA DCGM and DCGM Exporter - to monitor GPUs | |
This configuration can be controlled through node labeling, as described in the Label Nodes section.
You can also set sandboxWorkloads.defaultWorkload=vm-passthrough when you install the GPU Operator to configure all nodes to run Kata Containers by default.
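As an alternative to passing `--set` flags, the same defaults can be expressed in a Helm values file. The following is a sketch only; it assumes the values keys mirror the `--set` paths used elsewhere on this page:

```yaml
# values.yaml sketch: configure all worker nodes for Kata by default.
# Keys mirror the --set flags used on this page; verify against the
# chart's values schema for your GPU Operator version.
sandboxWorkloads:
  enabled: true
  mode: kata
  defaultWorkload: vm-passthrough
```

Pass the file to the install command with `helm install ... -f values.yaml`.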
Configure the GPU Operator for Kata Containers#
To enable Kata Containers for GPUs on your cluster, you do the following:

1. Make sure your cluster meets the prerequisites.
2. Label the nodes you want to use for Kata Containers.
3. Install the upstream kata-deploy Helm chart, which deploys all Kata runtime classes, including the NVIDIA-specific runtime classes. The kata-qemu-nvidia-gpu runtime class is used with Kata Containers.
4. Install the NVIDIA GPU Operator with Kata sandbox mode enabled.

After installation, you can run a sample workload that uses the Kata runtime class.
Prerequisites#
Hardware and BIOS#
Ensure hosts are configured to enable hardware virtualization and Access Control Services (ACS). With some AMD CPUs and BIOSes, ACS might be grouped under Advanced Error Reporting (AER). Enabling these features is typically performed by configuring the host BIOS.
Configure hosts to support IOMMU. You can check if your host is configured for IOMMU by running the following command:
$ ls /sys/kernel/iommu_groups

If the output of this command includes 0, 1, and so on, then your host is configured for IOMMU.
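The check above can also be scripted. The following is a minimal sketch that counts the entries under /sys/kernel/iommu_groups; a nonzero count means the IOMMU is active:

```shell
# Count IOMMU groups; zero (or a missing directory) means the IOMMU
# is not enabled on this host.
count=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l | tr -d ' ')
if [ "$count" -gt 0 ]; then
    echo "IOMMU enabled: $count groups"
else
    echo "IOMMU not enabled; check the kernel command line"
fi
```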
If the host is not configured or if you are unsure, add the intel_iommu=on (or amd_iommu=on for AMD CPUs) Linux kernel command-line argument. For most Linux distributions, add the argument to the /etc/default/grub file:

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on modprobe.blacklist=nouveau"
...

On Ubuntu systems, run sudo update-grub after making the change to configure the bootloader. On other systems, you might need to run sudo dracut after making the change. Refer to the documentation for your operating system. Reboot the host after configuring the bootloader.

Note
After configuring IOMMU, you might see QEMU warnings about PCI P2P DMA when running GPU workloads. These are expected and can be safely ignored.
Ensure that no NVIDIA GPU drivers are installed on the host. Kata Containers uses VFIO to pass GPUs directly to the VM, and host-level GPU drivers interfere with VFIO device binding.
To check if NVIDIA GPU drivers are installed, run the following command:
$ lsmod | grep nvidia
If the output is empty, no NVIDIA GPU drivers are loaded. If modules such as nvidia, nvidia_uvm, or nvidia_modeset are listed, NVIDIA GPU drivers are present and must be removed before proceeding. Refer to Removing the Driver in the NVIDIA Driver Installation Guide.
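The driver check above can be wrapped in a small script. This is a sketch intended to run on the worker node:

```shell
# Report whether any NVIDIA kernel modules (nvidia, nvidia_uvm,
# nvidia_modeset, ...) are currently loaded on this host.
if lsmod 2>/dev/null | grep -q '^nvidia'; then
    status="loaded"
else
    status="absent"
fi
echo "NVIDIA kernel modules: $status"
```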
Kubernetes Cluster#
A Kubernetes cluster with cluster administrator privileges.
Helm installed on your cluster. Use the command below to install Helm or refer to the Helm documentation for installation instructions.
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
    && chmod 700 get_helm.sh \
    && ./get_helm.sh
Enable the KubeletPodResourcesGet Kubelet feature gate on your cluster. The Kata runtime uses this feature gate to query the Kubelet Pod Resources API and discover allocated GPU devices during sandbox creation.

For Kubernetes v1.34 and later, the KubeletPodResourcesGet feature gate is enabled by default.

For Kubernetes versions older than v1.34, you must explicitly enable the KubeletPodResourcesGet feature gate. Add the feature gate to your Kubelet configuration (typically /var/lib/kubelet/config.yaml):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletPodResourcesGet: true

If your config.yaml already has a featureGates section, add the gate to the existing section rather than creating a duplicate.

Restart the Kubelet service to apply the changes:
$ sudo systemctl restart kubelet
Refer to the Kata Containers documentation for more details on the Kata runtime and VFIO cold-plug.
Label Nodes to use Kata Containers#
Get a list of the nodes in your cluster:
$ kubectl get nodes

Example Output:

NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    <none>   10d   v1.34.0
node-02   Ready    <none>   10d   v1.34.0
Label the nodes you want to use for Kata Containers:
$ kubectl label node <node-name> nvidia.com/gpu.workload.config=vm-passthrough
The GPU Operator uses this label to determine which software components to deploy to a node. The nvidia.com/gpu.workload.config=vm-passthrough label specifies that the node receives the software components to run Kata Containers. A node can run only one container runtime at a time, so a labeled node runs only Kata container workloads and cannot run traditional GPU container workloads. The labeling approach is useful if you want to run Kata container workloads on some nodes and traditional GPU container workloads on other nodes in your cluster. Refer to the Cluster Topology Considerations section for more details on what gets deployed to a Kata Container node.

Tip

Skip this section if you plan to set sandboxWorkloads.defaultWorkload=vm-passthrough when you install the GPU Operator.

Verify the node label was added:
$ kubectl describe node <node-name> | grep nvidia.com/gpu.workload.config
Example Output:
nvidia.com/gpu.workload.config: vm-passthrough
After labeling the nodes, you can continue to the next steps to install Kata Containers and the NVIDIA GPU Operator.
Install the Kata Containers Helm Chart#
Install Kata Containers using the kata-deploy Helm chart.
The kata-deploy chart installs all required components from the Kata Containers project including the Kata Containers runtime binary, runtime configuration, UVM kernel, and images that NVIDIA uses for Kata Containers.
The minimum required version is 3.29.0.
Set the chart version and registry path:
$ export VERSION="3.29.0"
$ export CHART="oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy"
Install the kata-deploy Helm chart:
$ helm install kata-deploy "${CHART}" \
    --namespace kata-system --create-namespace \
    --set nfd.enabled=false \
    --wait --timeout 10m \
    --version "${VERSION}"
Example Output:
LAST DEPLOYED: Wed Apr 1 17:03:00 2026
NAMESPACE: kata-system
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
Note
The --wait flag in the install command instructs Helm to wait until the release is deployed before returning. It can take a few minutes to return output.

There is a known Helm issue on single-node clusters that can cause the Helm command to finish before all deployed pods have finished initializing. If you are deploying to a single-node cluster, you might need to wait a few additional minutes after the Helm command completes for the kata-deploy pod to reach the Running state.

Note
Both kata-deploy and the GPU Operator deploy Node Feature Discovery (NFD) by default. The install command includes --set nfd.enabled=false to prevent kata-deploy from deploying NFD. The GPU Operator deploys and manages NFD in the next step.

Optional: Verify that the kata-deploy pod is running:

$ kubectl get pods -n kata-system | grep kata-deploy
Example Output:
NAME                READY   STATUS    RESTARTS   AGE
kata-deploy-b2lzs   1/1     Running   0          6m37s
Optional: Verify that the kata-qemu-nvidia-gpu runtime class is available:

$ kubectl get runtimeclass | grep kata-qemu-nvidia-gpu
Example Output:
NAME                       HANDLER                    AGE
kata-qemu-nvidia-gpu       kata-qemu-nvidia-gpu       40s
kata-qemu-nvidia-gpu-snp   kata-qemu-nvidia-gpu-snp   40s
kata-qemu-nvidia-gpu-tdx   kata-qemu-nvidia-gpu-tdx   40s
Several runtime classes are installed by the kata-deploy chart. The kata-qemu-nvidia-gpu runtime class is used with Kata Containers. The kata-qemu-nvidia-gpu-snp and kata-qemu-nvidia-gpu-tdx runtime classes are used to deploy Confidential Containers.

Note
To manage the lifecycle of Kata Containers, including upgrades and day-two operations, install the Kata Lifecycle Manager. This Argo Workflows-based tool is the recommended way to manage Kata Containers deployments.
Optional: If you have an issue deploying the kata-deploy pod or are not seeing the expected runtime classes, get the pod name and view the logs:

$ kubectl get pods -n kata-system | grep kata-deploy
$ kubectl logs -n kata-system <pod-name>

Replace <pod-name> with the name of the kata-deploy pod from the first command's output.
Install the NVIDIA GPU Operator#
Install the NVIDIA GPU Operator and configure it to deploy Kata Container components.
Add and update the NVIDIA Helm repository:
$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \
    && helm repo update
Example Output:
"nvidia" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nvidia" chart repository
Update Complete. ⎈Happy Helming!⎈
Install the GPU Operator. The following configures the GPU Operator to deploy the operands that are required for Kata Containers. Refer to Common Chart Customization Options for more details on the additional configuration options you can specify when installing the GPU Operator.
$ helm install --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --version=v26.3.1 \
    --set sandboxWorkloads.enabled=true \
    --set sandboxWorkloads.mode=kata \
    --set nfd.enabled=true \
    --set nfd.nodefeaturerules=true
Example Output:
NAME: gpu-operator
LAST DEPLOYED: Wed Mar 25 17:21:34 2026
NAMESPACE: gpu-operator
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
Tip
Add --set sandboxWorkloads.defaultWorkload=vm-passthrough if every worker node should use Kata by default.

Optional: Verify that all GPU Operator pods, especially the Sandbox Device Plugin and VFIO Manager operands, are running:

$ kubectl get pods -n gpu-operator

Example Output:
NAME                                                              READY   STATUS    RESTARTS   AGE
gpu-operator-1766001809-node-feature-discovery-gc-75776475sxzkp   1/1     Running   0          86s
gpu-operator-1766001809-node-feature-discovery-master-6869lxq2g   1/1     Running   0          86s
gpu-operator-1766001809-node-feature-discovery-worker-mh4cv       1/1     Running   0          86s
gpu-operator-f48fd66b-vtfrl                                       1/1     Running   0          86s
nvidia-cc-manager-7z74t                                           1/1     Running   0          61s
nvidia-kata-sandbox-device-plugin-daemonset-d5rvg                 1/1     Running   0          30s
nvidia-sandbox-validator-6xnzc                                    1/1     Running   0          30s
nvidia-vfio-manager-h229x                                         1/1     Running   0          62s
Note
It can take several minutes for all GPU Operator pods to be in the Running state. If you are not seeing the expected output, you can view the logs for the GPU Operator pods:
$ kubectl logs -n gpu-operator <pod-name>

Replace <pod-name> with the name of the GPU Operator pod from kubectl get pods -n gpu-operator.

Note
The NVIDIA Confidential Computing (CC) Manager for Kubernetes (nvidia-cc-manager) is deployed to all nodes configured to run Kata containers, even if you do not plan to run Confidential Containers. If your GPU is capable of Confidential Computing, this manager sets the confidential computing mode on the GPU, but it is not used when you deploy Kata Containers only. Refer to Confidential Containers for more details.

Optional: If you have host access to the worker node, you can perform the following validation step:

Confirm that the host uses the vfio-pci device driver for GPUs:

$ lspci -nnk -d 10de:

Example Output:
65:00.0 3D controller [0302]: NVIDIA Corporation xxxxxxx [xxx] [10de:xxxx] (rev xx)
        Subsystem: NVIDIA Corporation xxxxxxx [xxx] [10de:xxxx]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
Optional: Configuring GPU or NVSwitch Resource Types Name#
By default, the NVIDIA GPU Operator creates one resource type for GPUs and one for NVSwitches: nvidia.com/pgpu and nvidia.com/nvswitch, respectively.
You can reference these names in your manifests to request GPU or NVSwitch resources for your workload.
If you want to use a different name, you can set the P_GPU_ALIAS or NVSWITCH_ALIAS environment variables in the Kata device plugin to your preferred name.
In clusters where all GPUs are the same model, a single resource type is typically sufficient.
In heterogeneous clusters, where you have different GPU types on your nodes, you might want to use specific GPU types for your workload.
To do this, specify an empty P_GPU_ALIAS environment variable in the Kata device plugin by adding the following to your GPU Operator installation: --set kataSandboxDevicePlugin.env[0].name=P_GPU_ALIAS and --set kataSandboxDevicePlugin.env[0].value="".
When this variable is set to "", the Kata device plugin creates GPU model-specific resource types, for example nvidia.com/GH100_H100L_94GB, instead of the default nvidia.com/pgpu type.
Request the exposed resource types in your pod specs by specifying the corresponding resource limits.
Similarly, you can set NVSWITCH_ALIAS to "" to advertise model-specific NVSwitch resource types.
The following example installs the GPU Operator with both P_GPU_ALIAS and NVSWITCH_ALIAS configured:
$ helm install --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
--version=v26.3.1 \
--set sandboxWorkloads.enabled=true \
--set sandboxWorkloads.mode=kata \
--set nfd.enabled=true \
--set nfd.nodefeaturerules=true \
--set kataSandboxDevicePlugin.env[0].name=P_GPU_ALIAS \
--set kataSandboxDevicePlugin.env[0].value="" \
--set kataSandboxDevicePlugin.env[1].name=NVSWITCH_ALIAS \
--set kataSandboxDevicePlugin.env[1].value=""
After installing the GPU Operator, you can view the GPU or NVSwitch resource types available on a node by running the following command:
$ kubectl get node <node-name> -o json | grep nvidia.com
Example Output:
"nvidia.com/GH100_H100L_94GB": "1"
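With model-specific resource types enabled, a pod requests the model-specific name instead of nvidia.com/pgpu. The following pod spec fragment is a sketch that reuses the example resource name shown above; the name advertised on your nodes depends on their GPU model:

```yaml
# Pod spec fragment: request one GPU by its model-specific resource type.
# The resource name matches the example output above; substitute the name
# advertised on your own nodes.
resources:
  limits:
    nvidia.com/GH100_H100L_94GB: "1"
```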
Run a Sample Workload#
A pod specification for a Kata container requires the following:
Specify a Kata runtime class.
Specify a passthrough GPU resource.
Create a file, such as cuda-vectoradd-kata.yaml, with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata
  namespace: default
spec:
  runtimeClassName: kata-qemu-nvidia-gpu
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0-ubuntu22.04"
    resources:
      limits:
        nvidia.com/pgpu: "1"
        memory: 16Gi
Create the pod:
$ kubectl apply -f cuda-vectoradd-kata.yaml

Example Output:

pod/cuda-vectoradd-kata created

Optional: Verify the pod is running:

$ kubectl get pod cuda-vectoradd-kata

Example Output:

NAME                  READY   STATUS    RESTARTS   AGE
cuda-vectoradd-kata   1/1     Running   0          10s

View the pod logs:

$ kubectl logs -n default cuda-vectoradd-kata

Example Output:

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
Delete the pod:
$ kubectl delete -f cuda-vectoradd-kata.yaml
Troubleshooting Workloads#
If the sample workload does not run, confirm that you labeled nodes to run virtual machines in containers:
$ kubectl get nodes -l nvidia.com/gpu.workload.config=vm-passthrough
Example Output:
NAME STATUS ROLES AGE VERSION
kata-worker-1 Ready <none> 10d v1.35.3
kata-worker-2 Ready <none> 10d v1.35.3
kata-worker-3 Ready <none> 10d v1.35.3
Alternatively, you might have configured vm-passthrough as the default sandbox workload in the ClusterPolicy resource. That setting applies the default sandbox workload cluster-wide, including to Kata nodes when mode is kata. Confirm in the ClusterPolicy that sandboxWorkloads is configured for Kata, as shown in the following example.
$ kubectl get clusterpolicy -o yaml | grep -A 3 "sandboxWorkloads:"

Example Output:

  sandboxWorkloads:
    defaultWorkload: vm-passthrough
    enabled: true
    mode: kata