Support Matrix#
This page is the version-pinned support matrix for NVIDIA AI Enterprise Infrastructure Release 7.4. Use it before you install or upgrade to confirm that your GPUs, networking, system platform, operating system, hypervisor or cloud, and orchestration stack are supported on this release. It covers the deployment prerequisites, supported NVIDIA infrastructure software components and versions, qualified GPUs and networking products, and the operating systems, hypervisors, Kubernetes distributions, and cloud instance types validated for bare metal, virtualized, and public cloud deployments.
If you need to compare configurations across releases (4.4 through 7.4) or filter by deployment type, OS, hypervisor, or orchestration, use the Interactive Support Matrix instead. For a summary of new features and updated component versions in this release, see the Release Notes.
Prerequisites#
Supported configurations require all of the following.

On-Premises Servers

- GPU supported by NVIDIA AI Enterprise (and optionally a supported networking card)
- Supported infrastructure management software, bare metal or virtualized (OS, orchestration platform, hypervisor)

Important

GB200 NVL4, GB200 NVL72, and GB300 NVL72 do not require NVIDIA-Certified Systems; they require NVIDIA-Qualified server status. See the NVIDIA Qualified Systems Catalog.

Cloud Servers

- Supported orchestration platform
Supported NVIDIA Infrastructure Software#
| Component | Version | x86 | ARM | Government Ready |
|---|---|---|---|---|
| NVIDIA Data Center GPU Driver | | Supported | Supported | N/A |
| NVIDIA DOCA Driver for Networking | | Supported | Supported | Supported |
| NVIDIA Fabric Manager | | Supported | Supported | N/A |
| NVIDIA DOCA Microservices | | Supported | Supported | N/A |
| NVIDIA vGPU for Compute (Virtual GPU Manager and Guest Drivers) | | Supported | N/A | N/A |
| NVIDIA Container Toolkit | | Supported | Supported | N/A |
| NVIDIA Run:ai [17] | | Supported | Supported | N/A |
| NVIDIA DPU Operator (DPF) | | Supported | N/A | N/A |
| NVIDIA GPU Operator | | Supported | Supported (N/A for Government Ready) | Supported [11] |
| NVIDIA Network Operator | | Supported | Supported | Supported [12] |
| NVIDIA NIM Operator | | Supported | Supported | Supported |
| NVIDIA Base Command Manager (BCM) | | Supported | Supported | N/A |
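For scripted preflight checks, the architecture columns above can be encoded as a small lookup table. This is an illustrative sketch, not an NVIDIA tool; the dictionary transcribes only a few rows of the table ("gov" stands for the Government Ready column).

```python
# Hypothetical preflight helper; the entries below transcribe a subset of
# the infrastructure software table on this page.
COMPONENT_SUPPORT = {
    "NVIDIA Data Center GPU Driver":     {"x86": True, "arm": True,  "gov": False},
    "NVIDIA DOCA Driver for Networking": {"x86": True, "arm": True,  "gov": True},
    "NVIDIA vGPU for Compute":           {"x86": True, "arm": False, "gov": False},
    "NVIDIA NIM Operator":               {"x86": True, "arm": True,  "gov": True},
}

def is_supported(component: str, platform: str) -> bool:
    """True if the component is marked Supported for the given column."""
    return COMPONENT_SUPPORT.get(component, {}).get(platform, False)
```

For example, `is_supported("NVIDIA vGPU for Compute", "arm")` returns `False`, matching the N/A cell in the table.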
Supported NVIDIA GPUs and Networking#
NVIDIA AI Enterprise is supported on the following NVIDIA GPUs with compatible third-party servers listed on the NVIDIA-Certified Systems page.
Individual NVIDIA AI Enterprise products may not support every operating system or GPU listed here. Refer to each product's release notes for specifics.
Important
The NVIDIA-Certified Systems requirement does not apply to GB200 NVL4, GB200 NVL72, and GB300 NVL72 systems. For these platforms, NVIDIA-Qualified server status is the prerequisite for NVIDIA AI Enterprise support. For more information, refer to the NVIDIA Qualified Systems Catalog.
Note
For HGX support information, refer to the Supported Platforms section.
- NVIDIA RTX PRO 6000 Blackwell Server Edition
- NVIDIA H800
- NVIDIA H800 NVL
- NVIDIA H200 NVL
- NVIDIA H100
- NVIDIA H100 NVL
- NVIDIA H20
- NVIDIA L40S
- NVIDIA L40
- NVIDIA L20
- NVIDIA L4
- NVIDIA L2
- NVIDIA RTX 6000 Ada Generation
- NVIDIA RTX 5880 Ada Generation
- NVIDIA RTX 5000 Ada Generation
- NVIDIA RTX 4000 SFF Ada Generation
- NVIDIA A800
- NVIDIA AX800
- NVIDIA A100X
- NVIDIA A100
- NVIDIA A40
- NVIDIA A30X
- NVIDIA A30
- NVIDIA A16
- NVIDIA A10
- NVIDIA A10G
- NVIDIA A10M
- NVIDIA A2
- NVIDIA RTX A6000
- NVIDIA RTX A5000
- NVIDIA RTX A4000
- NVIDIA T4
- NVIDIA T4G
- NVIDIA Quadro RTX 8000
- NVIDIA Quadro RTX 6000
- NVIDIA Quadro RTX 4000
- NVIDIA V100
Note
Multiple GPU architectures can be deployed within the same NVIDIA AI Enterprise environment.
Multi-node deployments require an Ethernet NIC that supports RoCE. NVIDIA recommends pairing an NVIDIA ConnectX NIC with an NVIDIA GPU.
| Product Family | Architecture |
|---|---|
| NVIDIA ConnectX-6 NIC | NVIDIA ConnectX-6 |
| NVIDIA ConnectX-6 Dx NIC | NVIDIA ConnectX-6 Dx |
| NVIDIA ConnectX-7 NIC | NVIDIA ConnectX-7 |
| NVIDIA ConnectX-8 NIC | NVIDIA ConnectX-8 |
| NVIDIA BlueField-3 SuperNIC | NVIDIA BlueField-3 |
| NVIDIA BlueField-3 DPU | NVIDIA BlueField-3 |
Supported Platforms#
NVIDIA AI Enterprise is supported on NVIDIA DGX servers in bare-metal deployments with the NVIDIA data center driver for Linux included in the DGX OS software.
Note
DGX platforms, HGX platforms with KVM hypervisors, and IGX Orin are not supported with NVIDIA vGPU for Compute. For NVIDIA IGX, refer to the NVIDIA AI Enterprise - IGX Packaging, Pricing, and Licensing Guide.
| Accelerated Platform | Architecture |
|---|---|
| NVIDIA DGX B300 | NVIDIA Blackwell Ultra |
| NVIDIA HGX B300 | NVIDIA Blackwell Ultra |
| NVIDIA DGX GB300 NVL72 [5] | NVIDIA Grace Blackwell |
| NVIDIA GB300 NVL72 [5] | NVIDIA Grace Blackwell |
| NVIDIA DGX B200 | NVIDIA Blackwell |
| NVIDIA HGX B200 | NVIDIA Blackwell |
| NVIDIA DGX GB200 NVL72 [5] | NVIDIA Grace Blackwell |
| NVIDIA GB200 NVL72 [5] | NVIDIA Grace Blackwell |
| NVIDIA GB200 NVL4 [5] | NVIDIA Grace Blackwell |
| NVIDIA GH200 [5] | NVIDIA Grace Hopper |
| NVIDIA DGX H100 | NVIDIA Hopper |
| NVIDIA HGX H100 | NVIDIA Hopper |
| NVIDIA HGX H800 | NVIDIA Hopper |
| NVIDIA HGX H200 | NVIDIA Hopper |
| NVIDIA HGX H20 | NVIDIA Hopper |
| NVIDIA IGX Orin [2] | NVIDIA Ada Lovelace |
| NVIDIA DGX A100 | NVIDIA Ampere |
| NVIDIA HGX A100 | NVIDIA Ampere |
| NVIDIA HGX A800 | NVIDIA Ampere |
Bare Metal Deployments#
Use the following tables to verify compatibility for NVIDIA AI Enterprise on dedicated physical servers.
What’s Covered

- Kubernetes-orchestrated containers with supported distributions (Charmed Kubernetes, HPE Ezmeral Runtime Enterprise, Nutanix NKP, Red Hat OpenShift, SUSE RKE2, Upstream Kubernetes)
- Standalone Docker/Podman containers on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu
- NVIDIA AI Enterprise infrastructure support by platform, including NVIDIA GPU Operator, NVIDIA Network Operator, NVIDIA DPU Operator, and NVIDIA Run:ai
| Orchestration Platform | Versions | Engine | Operating System | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest/Data Center |
| HPE Ezmeral Runtime Enterprise | 5.6 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | N/A | N/A | N/A | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest/Data Center |
| Red Hat OpenShift | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Supported | Supported [12] | N/A | Supported | vGPU Guest/Data Center |
| SUSE RKE2 | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported | vGPU Guest/Data Center |
| SUSE RKE2 | 1.30 - 1.34 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | N/A | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | Supported | N/A | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | N/A | vGPU Guest/Data Center |
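Several rows above pin Kubernetes to the 1.30 - 1.34 range. A minimal sketch of such a version gate follows; the parsing is illustrative only, not an official NVIDIA check.

```python
# Illustrative gate for the Kubernetes "1.30 - 1.34" range listed above.
SUPPORTED_K8S_MINORS = range(30, 35)  # 1.30 through 1.34 inclusive

def k8s_version_supported(version: str) -> bool:
    """Accept versions such as '1.32' or '1.32.4'; reject anything outside 1.30-1.34."""
    parts = version.split(".")
    try:
        major, minor = int(parts[0]), int(parts[1])
    except (IndexError, ValueError):
        return False
    return major == 1 and minor in SUPPORTED_K8S_MINORS
```

Patch-level suffixes (for example `1.32.4`) are ignored, since the matrix pins only minor versions.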
| Container | Engine | Operating System | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 7.9, 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
Virtualized Deployments#
Use the following tables to verify compatibility for NVIDIA AI Enterprise in virtualized environments with NVIDIA vGPU for Compute.
What’s Covered

- Kubernetes-orchestrated containers on VMware vSphere, Red Hat KVM, Ubuntu KVM, and Nutanix AHV
- Standalone Docker/Podman containers with supported hypervisor and guest OS combinations
- Non-containerized applications on supported hypervisors with vGPU
- Both vGPU and GPU passthrough modes (where applicable)
- NVIDIA AI Enterprise infrastructure support by platform, including NVIDIA GPU Operator, NVIDIA Network Operator, NVIDIA DPU Operator, and NVIDIA Run:ai
| Orchestration Platform Name | Versions | Engine | Guest OS Name | Guest OS Versions | Hypervisor Name | Hypervisor Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support (vGPU Mode) | GPU Driver Support (GPU Passthrough Mode) [6] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Charmed Kubernetes | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [8] | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Red Hat KVM | 8.10, 9.4, 9.6 | Supported | Supported [12] | N/A | Supported | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [8] | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported [12] | N/A | Supported | vGPU Guest | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM (single-node only) [5] | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported (Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.30 - 1.34 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Red Hat KVM (single-node only) [5] | 8.10, 9.4, 9.6 | Supported | Supported | N/A | Supported (Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Red Hat KVM | 8.10, 9.4, 9.6 | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.30 - 1.34 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | N/A | Supported [16] | vGPU Guest | vGPU Guest/Data Center |
| VMware vSphere Kubernetes Service | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Container Name | Engine | Guest OS Name | Guest OS Versions | Hypervisor Name | Hypervisor Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support (vGPU) | GPU Driver Support (Passthrough) [6] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Red Hat Enterprise Linux [7] | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7 | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Hypervisor Name | Hypervisor Versions | Guest OS Name | Guest OS Versions | vGPU Driver Support |
|---|---|---|---|---|
| VMware ESXi | ESXi 8.0 and later, 9.0 | Debian | 12 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | SUSE Linux Enterprise Server | 12 SP3+, 12 SP5, 15 SP2 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Microsoft Windows | Server 2022, Server 2025, Windows 10, Windows 11 | vGPU Guest |
| Ubuntu KVM | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Red Hat KVM | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | vGPU Guest |
| Red Hat KVM | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat OpenStack Platform, Red Hat OpenStack Services on OpenShift | | | | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Debian | 12 | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Red Hat Enterprise Linux | 8.8, 8.10, 9.2, 9.4, 9.6 | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | SUSE Linux Enterprise Server | 12, 15 | vGPU Guest |
Base Command Manager#
NVIDIA Base Command Manager provides cluster provisioning, workload management, and infrastructure monitoring for data centers.
Supported Configurations

- Kubernetes orchestration
- Slurm workload manager (non-Kubernetes)
- Red Hat Enterprise Linux and Ubuntu operating systems
- NVIDIA AI Enterprise infrastructure support by platform, including NVIDIA GPU Operator, NVIDIA Network Operator, NVIDIA DPU Operator, and NVIDIA Run:ai
| Orchestration Platform Name | Versions | Engine | OS Name | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|
| Upstream Kubernetes | 1.31 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported | Data Center |
| Upstream Kubernetes | 1.31 - 1.34 | Containerd | Red Hat Enterprise Linux | 8, 9 | Supported | Supported | N/A | Supported | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Red Hat Enterprise Linux | 8, 9 | N/A | N/A | N/A | N/A | Data Center |
Public Cloud#
Managed Kubernetes#
Verify compatibility for NVIDIA AI Enterprise on cloud-managed Kubernetes services.
Supported Cloud Providers

- Amazon Web Services (EKS)
- Google Cloud Platform (GKE)
- Microsoft Azure (AKS)
- Red Hat OpenShift (Managed Service - multi-cloud)
| Cloud Service Provider | Orchestration Platform Name | K8s Versions | Engine | OS Name | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|---|
| AWS | Amazon Elastic Kubernetes Service (EKS) | 1.30 - 1.34 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| Google Cloud Platform | Google Kubernetes Engine (GKE) | 1.30 - 1.34 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| Microsoft Azure | Azure Kubernetes Service (AKS) | 1.30 - 1.34 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| N/A (multi-cloud) | Red Hat OpenShift (Managed Service) | 4.16 - 4.20 | CRI-O | Red Hat CoreOS | 4.16 - 4.20 | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
Standard GPU Instances#
Standard GPU virtual machine instances across major cloud providers support NVIDIA AI Enterprise for both Kubernetes and standalone containers.
| Cloud Service Provider | Virtual Machine (VM) Instance with GPU | Product Family |
|---|---|---|
| Alibaba Cloud | gn7e, gn7i | NVIDIA A10 |
| Alibaba Cloud | gn7s | NVIDIA A30 |
| Alibaba Cloud | gn6i | NVIDIA T4 |
| Alibaba Cloud | gn6e, gn6v | NVIDIA V100 |
| Alibaba Cloud | ecs.ebmgn8v, ecs.gn8v | NVIDIA H20 |
| Alibaba Cloud | ecs.ebmgn8is, ecs.gn8is | NVIDIA L20 |
| Amazon Web Services | EC2 P3 | NVIDIA V100 |
| Amazon Web Services | EC2 P4 | NVIDIA A100 |
| Amazon Web Services | EC2 P5 | NVIDIA H100 |
| Amazon Web Services | EC2 P5e and P5en | NVIDIA H200 |
| Amazon Web Services | EC2 G4 | NVIDIA T4 |
| Amazon Web Services | EC2 G5 | NVIDIA A10G |
| Amazon Web Services | EC2 G6 | NVIDIA L4 |
| Amazon Web Services | EC2 G6e | NVIDIA L40S |
| Microsoft Azure | ND GB200-v6 | NVIDIA GB200 |
| Microsoft Azure | ND-H200-v5 | NVIDIA H200 |
| Microsoft Azure | ND-H100-v5, NCads_H100_v5-series, NCCads_H100_v5-series | NVIDIA H100 |
| Microsoft Azure | NCv3-series | NVIDIA V100 |
| Microsoft Azure | NCasT4_v3-series | NVIDIA T4 |
| Microsoft Azure | NC_A100_v4-series | NVIDIA A100 |
| Google Cloud Platform | A4 VM | NVIDIA B200 |
| Google Cloud Platform | A3 VM | NVIDIA H100, H200 |
| Google Cloud Platform | A2 VM | NVIDIA A100 |
| Google Cloud Platform | G2 VM | NVIDIA L4 |
| Google Cloud Platform | N1 VM | NVIDIA T4, V100 |
| Oracle Cloud Infrastructure | BM.GPU3, VM.GPU3 | NVIDIA V100 |
| Oracle Cloud Infrastructure | BM.GPU4, BM.GPU.A100 | NVIDIA A100 |
| Oracle Cloud Infrastructure | BM.GPU.A10, VM.GPU.A10 | NVIDIA A10 |
| Oracle Cloud Infrastructure | BM.GPU.H100.8 | NVIDIA H100 |
| Tencent Cloud | PNV4 | NVIDIA A10 |
| Tencent Cloud | GT4 | NVIDIA A100 |
| Tencent Cloud | GN10Xp, GN10X | NVIDIA V100 |
| Tencent Cloud | GN7, GN7vi, GI3X | NVIDIA T4 |
| Volcano Engine | ecs.gni2 | NVIDIA A10 |
NVIDIA GPU Optimized VMI on CSP Marketplace#
NVIDIA provides pre-configured, validated Virtual Machine Instances (VMI) for rapid AI deployment on cloud marketplaces.
What’s Included

- Pre-installed NVIDIA GPU drivers and Container Toolkit
- Optimized for standalone NVIDIA AI containers
- Available on AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure
- Supports all standard GPU instance types listed in Standard GPU Instances
| Cloud Service Provider | VMI Name | GPUs | K8s Support | Standalone Container |
|---|---|---|---|---|
| Amazon Web Services | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Microsoft Azure | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Google Cloud Platform | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Oracle Cloud Infrastructure | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
CPU-Only Server Support#
NVIDIA AI Enterprise supports the following CPU-enabled frameworks:

- TensorFlow
- PyTorch
- Triton Inference Server with FIL backend
- NVIDIA RAPIDS with XGBoost and Dask
The CPU-enabled frameworks are supported on CPU-only servers that are part of the NVIDIA-Certified Systems list.
Footnotes