# Support Matrix
This page is the version-pinned support matrix for NVIDIA AI Enterprise Infrastructure Release 4.10. Consult it before you install or upgrade to confirm that your GPUs, networking, system platform, operating system, hypervisor or cloud, and orchestration stack are supported on this release. It covers deployment prerequisites, supported NVIDIA infrastructure software components, qualified GPUs and networking products, and the operating systems, hypervisors, Kubernetes distributions, and cloud instance types validated for bare-metal, virtualized, and public cloud deployments.
If you need to compare configurations across releases (4.4 through 4.10) or filter by deployment type, OS, hypervisor, or orchestration, use the Interactive Support Matrix instead. For a summary of new features and updated component versions in this release, see the Release Notes.
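Conceptually, every row in this page's tables is a record of deployment attributes, and the Interactive Support Matrix is a filter over such records. The sketch below only illustrates that idea; the field names and sample rows are hypothetical, not the actual schema behind the interactive tool.

```python
# Hypothetical records mirroring a few configurations from this page.
MATRIX = [
    {"deployment": "bare metal", "orchestration": "Upstream Kubernetes",
     "os": "Ubuntu"},
    {"deployment": "virtualized", "orchestration": "Red Hat OpenShift",
     "os": "Red Hat CoreOS"},
    {"deployment": "cloud", "orchestration": "Amazon EKS",
     "os": "Ubuntu"},
]

def filter_matrix(**criteria):
    """Return records matching every supplied key=value criterion."""
    return [row for row in MATRIX
            if all(row.get(k) == v for k, v in criteria.items())]

print([r["orchestration"] for r in filter_matrix(deployment="bare metal")])
# ['Upstream Kubernetes']
```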
## Prerequisites
Supported configurations require all of the following.

**On-Premises Servers**

- A GPU supported by NVIDIA AI Enterprise (and, optionally, a supported networking card)
- Supported infrastructure management software for bare-metal or virtualized deployment (OS, orchestration platform, hypervisor)

**Cloud Servers**

- A supported orchestration platform
## Supported NVIDIA Infrastructure Software
| Component | Version | x86 | ARM |
|---|---|---|---|
| NVIDIA Data Center GPU Driver | | Supported | Supported |
| NVIDIA DOCA Driver for Networking | | Supported | Supported |
| NVIDIA Fabric Manager (integrated into NVIDIA AI Enterprise drivers) | | Supported | Supported |
| NVIDIA Virtual GPU Manager | | Supported | N/A |
| NVIDIA vGPU for Compute Guest Driver | | Supported | N/A |
| NVIDIA Container Toolkit | | Supported | Supported |
| NVIDIA GPU Operator | | Supported | Supported |
| NVIDIA Network Operator | | Supported | Supported |
| NVIDIA Base Command Manager (BCM) | | Supported | Supported |
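As an informal way to see which of these components are present on a node, you can probe for their usual command-line entry points with Python's standard library. This is a convenience sketch, not an official validation step; a missing binary only means that component is not installed on this machine.

```python
# Probe the PATH for the CLIs associated with the components above.
from shutil import which

TOOLS = ["nvidia-smi", "nvidia-ctk", "docker", "podman", "kubectl"]

def report(tools=TOOLS) -> dict:
    """Map each tool name to whether it is found on the PATH."""
    return {tool: which(tool) is not None for tool in tools}

for tool, present in report().items():
    print(f"{tool}: {'installed' if present else 'not found'}")
```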
## Supported NVIDIA GPUs and Networking
NVIDIA AI Enterprise is supported on the following NVIDIA GPUs with compatible third-party servers listed on the NVIDIA-Certified Systems page.
Individual NVIDIA AI Enterprise products may not support every operating system or GPU listed here. Refer to each product's release notes for exceptions.
> **Note**: For HGX support information, refer to the Supported Platforms section.
- NVIDIA H800
- NVIDIA H800 NVL
- NVIDIA H200 NVL
- NVIDIA H100
- NVIDIA H100 NVL
- NVIDIA H20
- NVIDIA L40
- NVIDIA L20
- NVIDIA L4
- NVIDIA L2
- NVIDIA RTX 6000 Ada Generation
- NVIDIA RTX 5880 Ada Generation
- NVIDIA RTX 5000 Ada Generation
- NVIDIA RTX 4000 SFF Ada Generation
- NVIDIA A800
- NVIDIA AX800
- NVIDIA A100X
- NVIDIA A100
- NVIDIA A40
- NVIDIA A30X
- NVIDIA A30
- NVIDIA A16
- NVIDIA A10
- NVIDIA A10G
- NVIDIA A10M
- NVIDIA A2
- NVIDIA RTX A6000
- NVIDIA RTX A5000
- NVIDIA RTX A4000
- NVIDIA T4
- NVIDIA T4G
- NVIDIA Quadro RTX 8000
- NVIDIA Quadro RTX 6000
- NVIDIA Quadro RTX 4000
- NVIDIA V100
Multi-node deployments require an Ethernet NIC that supports RoCE. NVIDIA recommends pairing an NVIDIA Mellanox ConnectX NIC with an NVIDIA GPU.
| Product Family | Architecture |
|---|---|
| NVIDIA ConnectX-6 NIC | NVIDIA ConnectX-6 |
| NVIDIA ConnectX-6 Dx NIC | NVIDIA ConnectX-6 Dx |
| NVIDIA ConnectX-7 NIC | NVIDIA ConnectX-7 |
| NVIDIA ConnectX-8 NIC | NVIDIA ConnectX-8 |
| NVIDIA BlueField-3 SuperNIC | NVIDIA BlueField-3 |
## Supported Platforms
NVIDIA AI Enterprise is supported on NVIDIA DGX servers in bare-metal deployments with the NVIDIA data center driver for Linux included in the DGX OS software.
> **Note**: DGX platforms, HGX platforms with KVM hypervisors, and IGX Orin are not supported with NVIDIA vGPU for Compute. For NVIDIA IGX licensing information, refer to the NVIDIA AI Enterprise Licensing Documentation.
| Accelerated Platform | Architecture |
|---|---|
| NVIDIA DGX H100 | NVIDIA Hopper |
| NVIDIA HGX H100 | NVIDIA Hopper |
| NVIDIA HGX H800 | NVIDIA Hopper |
| NVIDIA HGX H200 | NVIDIA Hopper |
| NVIDIA HGX H20 | NVIDIA Hopper |
| NVIDIA IGX Orin [1] | NVIDIA Ada Lovelace |
| NVIDIA DGX A100 | NVIDIA Ampere |
| NVIDIA HGX A100 | NVIDIA Ampere |
| NVIDIA HGX A800 | NVIDIA Ampere |
## Bare Metal Deployments

The following tables list the supported bare-metal combinations of orchestration platform, container engine, and operating system.
| Orchestration Platform | Versions | Container Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| HPE Ezmeral Runtime Enterprise | 5.6 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | N/A | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| Red Hat OpenShift | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Supported | Supported | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.30 - 1.34 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
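The Kubernetes version cells in these tables are inclusive ranges such as `1.30 - 1.34`. A minimal sketch of checking whether a given cluster version falls inside such a range, assuming comparison on `major.minor` only (which is how the ranges are expressed):

```python
def parse_minor(version: str) -> tuple:
    """Reduce a version string such as 'v1.32.4' to (1, 32)."""
    major, minor = version.strip().lstrip("v").split(".")[:2]
    return int(major), int(minor)

def in_range(version: str, supported: str) -> bool:
    """True if `version` lies inside an inclusive range like '1.30 - 1.34'."""
    low, high = (parse_minor(v) for v in supported.split("-"))
    return low <= parse_minor(version) <= high

print(in_range("1.32.4", "1.30 - 1.34"))  # True
print(in_range("1.29", "1.30 - 1.34"))    # False
```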
| Container | Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 7.9, 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | vGPU Guest/Data Center |
## Virtualized Deployments

The following tables cover virtualized deployments with NVIDIA vGPU for Compute.

> **Note**:
> - **vGPU Mode**: the NVIDIA Virtual GPU Manager is installed on the host, and the NVIDIA vGPU Guest Driver is installed in the VM.
> - **GPU Passthrough Mode**: the NVIDIA Data Center GPU Driver or the NVIDIA vGPU Guest Driver is installed in the guest VM.
The last two columns list the GPU driver supported inside the virtual machine for each mode.

| Orchestration Platform | Versions | Container Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | vGPU Mode | GPU Passthrough Mode [4] |
|---|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [6] | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Red Hat Enterprise Linux KVM | 8.10, 9.4, 9.5, 9.6, 9.7 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [6] | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | VMware vSphere | 8.0 and later | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.5, 9.6, 9.7 | Red Hat Enterprise Linux KVM | 8.10, 9.4, 9.5, 9.6, 9.7 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.5, 9.6, 9.7 | VMware vSphere | 8.0 and later | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.5, 9.6, 9.7 | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| VMware vSphere Kubernetes Service | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later | Supported | N/A | vGPU Guest | vGPU Guest/Data Center |
The last two columns list the GPU driver supported inside the virtual machine for each mode.

| Container | Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | vGPU Mode | GPU Passthrough Mode [4] |
|---|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.2, 9.4, 9.5, 9.6, 9.7 | Red Hat Enterprise Linux [5] | 8.10, 9.4, 9.5, 9.6, 9.7 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.2, 9.4, 9.5, 9.6, 9.7 | VMware vSphere | 8.0 and later | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | VMware vSphere | 8.0 and later | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Hypervisor | Hypervisor Versions | Guest Operating System | Guest OS Versions | vGPU |
|---|---|---|---|---|
| VMware ESXi | ESXi 8.0 and later | Debian | 12 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later | Red Hat Enterprise Linux | 8.10, 9.4, 9.5, 9.6, 9.7 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later | SUSE Linux Enterprise Server | 12 SP3+, 12 SP5, 15 SP2 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later | Microsoft Windows | Server 2022, Server 2025, Windows 10, Windows 11 | vGPU Guest |
| Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Red Hat Enterprise Linux KVM | 8.10, 9.4, 9.6, 9.7 | Red Hat Enterprise Linux | 8.10, 9.4, 9.5, 9.6, 9.7 | vGPU Guest |
| Red Hat Enterprise Linux KVM | 8.10, 9.4, 9.6, 9.7 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat OpenStack Platform, Red Hat OpenStack Services on OpenShift | | | | vGPU Guest |
| Nutanix AHV | AOS/AHV 6.5 - 6.10 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Nutanix AHV | AOS/AHV 6.5 - 6.10 | Debian | 12 | vGPU Guest |
| Nutanix AHV | AOS/AHV 6.5 - 6.10 | Red Hat Enterprise Linux | 8.8, 8.10, 9.2, 9.4, 9.6 | vGPU Guest |
| Nutanix AHV | AOS/AHV 6.5 - 6.10 | SUSE Linux Enterprise Server | 12, 15 | vGPU Guest |
## Base Command Manager
Base Command Manager handles cluster provisioning, workload management, and monitoring.
| Orchestration Platform | Versions | Container Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|
| Upstream Kubernetes | 1.31 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | Data Center |
| Upstream Kubernetes | 1.31 - 1.34 | Containerd | Red Hat Enterprise Linux | 8, 9 | Supported | Supported | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Red Hat Enterprise Linux | 8, 9 | N/A | N/A | Data Center |
## Public Cloud

### Managed Kubernetes
The following managed Kubernetes environments are supported in this release.
| Cloud Service Provider | Orchestration Platform | K8s Versions | Container Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|
| Amazon Web Services | Amazon Elastic Kubernetes Service (EKS) | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| Google Cloud Platform | Google Kubernetes Engine (GKE) | 1.30 - 1.34 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| Microsoft Azure | Azure Kubernetes Service (AKS) | 1.30 - 1.34 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| N/A | Red Hat OpenShift (Managed Service) | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Supported | N/A | vGPU Guest/Data Center |
### Standard GPU Instances
These standard GPU VM instance types support NVIDIA AI Enterprise with Kubernetes or standalone containers.
| Cloud Service Provider | Virtual Machine (VM) Instance with GPU | Product Family |
|---|---|---|
| Alibaba Cloud | gn7e, gn7i | NVIDIA A10 |
| Alibaba Cloud | gn7s | NVIDIA A30 |
| Alibaba Cloud | gn6i | NVIDIA T4 |
| Alibaba Cloud | gn6e, gn6v | NVIDIA V100 |
| Alibaba Cloud | ecs.ebmgn8v, ecs.gn8v | NVIDIA H20 |
| Alibaba Cloud | ecs.ebmgn8is, ecs.gn8is | NVIDIA L20 |
| Amazon Web Services | EC2 P3 | NVIDIA V100 |
| Amazon Web Services | EC2 P4 | NVIDIA A100 |
| Amazon Web Services | EC2 P5 | NVIDIA H100 |
| Amazon Web Services | EC2 G4 | NVIDIA T4 |
| Amazon Web Services | EC2 G5 | NVIDIA A10G |
| Amazon Web Services | EC2 G6 | NVIDIA L4 |
| Amazon Web Services | EC2 G6e | NVIDIA L40S |
| Microsoft Azure | NCads_H100_v5-series, NCCads_H100_v5-series | NVIDIA H100 |
| Microsoft Azure | NCv3-series | NVIDIA V100 |
| Microsoft Azure | NCasT4_v3-series | NVIDIA T4 |
| Microsoft Azure | NC_A100_v4-series | NVIDIA A100 |
| Google Cloud Platform | A3 VM | NVIDIA H100, H200 |
| Google Cloud Platform | A2 VM | NVIDIA A100 |
| Google Cloud Platform | G2 VM | NVIDIA L4 |
| Google Cloud Platform | N1 VM | NVIDIA T4, V100 |
| Oracle Cloud Infrastructure | BM.GPU3, VM.GPU3 | NVIDIA V100 |
| Oracle Cloud Infrastructure | BM.GPU4, BM.GPU.A100 | NVIDIA A100 |
| Oracle Cloud Infrastructure | BM.GPU.A10, VM.GPU.A10 | NVIDIA A10 |
| Oracle Cloud Infrastructure | BM.GPU.H100.8 | NVIDIA H100 |
| Tencent Cloud | PNV4 | NVIDIA A10 |
| Tencent Cloud | GT4 | NVIDIA A100 |
| Tencent Cloud | GN10Xp, GN10X | NVIDIA V100 |
| Tencent Cloud | GN7, GN7vi, GI3X | NVIDIA T4 |
| Volcano Engine | ecs.gni2 | NVIDIA A10 |
### NVIDIA GPU Optimized VMI on CSP Marketplace
The following marketplace virtual machine images (VMIs) are validated for NVIDIA AI Enterprise on AWS, Azure, Google Cloud, and OCI.
| Cloud Service Provider | VMI Name | GPUs | K8s Support | Standalone Container |
|---|---|---|---|---|
| Amazon Web Services | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Microsoft Azure | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Google Cloud Platform | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
## CPU-Only Server Support
CPU-only (no GPU) servers on the NVIDIA-Certified Systems list can run these CPU-enabled stacks:
- TensorFlow
- PyTorch
- Triton Inference Server with FIL backend
- NVIDIA RAPIDS with XGBoost and Dask
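A workload intended to run on both GPU-accelerated and CPU-only certified systems often probes for an accelerator-capable framework at startup and falls back to a CPU path. A standard-library-only sketch of that pattern (the framework name is an illustrative example, not a requirement of these stacks):

```python
from importlib.util import find_spec

def pick_backend(preferred: str = "torch") -> str:
    """Return `preferred` if that package is importable, else 'cpu'."""
    return preferred if find_spec(preferred) is not None else "cpu"

# On a CPU-only node without the framework installed, this selects "cpu";
# where the framework is present, the GPU-capable path can be taken.
mode = pick_backend()
```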
Footnotes