Infrastructure Support Matrix - Release 8.0#
Verify that your GPUs, OS, hypervisor or cloud, and orchestration stack match the NVIDIA AI Enterprise 8.0 support listed on this page. For the new features in this release, see the Release Notes.
Note
To compare across releases (7.0–8.0) and use live filters, open the Interactive Support Matrix.
Requirements#
Supported configurations require all of the following.
On-Premises Servers
GPU supported by NVIDIA AI Enterprise (and optionally a supported networking card)
Supported infrastructure management software - bare-metal or virtualized (OS, orchestration platform, hypervisor)
Important
GB200 NVL4, GB200 NVL72, and GB300 NVL72 do not require NVIDIA-Certified Systems; they require NVIDIA-Qualified server status. See the NVIDIA Qualified Systems Catalog.
Cloud Servers
Supported orchestration platform
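As an illustrative pre-flight sketch, a deployment plan can be checked against a hand-transcribed subset of this matrix before hardware is ordered. The `SUPPORTED_*` sets and the `check_on_prem_plan` helper below are hypothetical and cover only a few entries, not an official encoding of this page.

```python
# Hypothetical pre-flight check: the sets below hand-transcribe a small,
# illustrative subset of this support matrix (not the full matrix).
SUPPORTED_GPUS = {"NVIDIA H100", "NVIDIA L40S", "NVIDIA A100"}
SUPPORTED_OS = {"Ubuntu 22.04 LTS", "Red Hat Enterprise Linux 9.4"}

def check_on_prem_plan(gpu: str, os_name: str) -> list[str]:
    """Return a list of problems found in a planned bare-metal config."""
    problems = []
    if gpu not in SUPPORTED_GPUS:
        problems.append(f"unsupported GPU: {gpu}")
    if os_name not in SUPPORTED_OS:
        problems.append(f"unsupported OS: {os_name}")
    return problems

print(check_on_prem_plan("NVIDIA H100", "Ubuntu 22.04 LTS"))  # → []
```

A real check would also cover the orchestration platform and, for virtualized deployments, the hypervisor.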
Supported NVIDIA Infrastructure Software#
| Component | Version | x86 | ARM | Government Ready |
|---|---|---|---|---|
| NVIDIA Data Center GPU Driver | | Supported | Supported | N/A |
| NVIDIA DOCA Driver for Networking | | Supported | Supported | Supported |
| NVIDIA Fabric Manager (integrated into NVIDIA AI Enterprise drivers) | | Supported | Supported | N/A |
| NVIDIA DOCA Microservices | | Supported | Supported | N/A |
| NVIDIA vGPU for Compute (Virtual GPU Manager and Guest Drivers) | | Supported | N/A | N/A |
| NVIDIA Container Toolkit | | Supported | Supported | N/A |
| NVIDIA Run:ai [13] | | Supported | Supported | N/A |
| NVIDIA DPU Operator (DPF) | | Supported | N/A | N/A |
| NVIDIA GPU Operator | | Supported | Supported (N/A for Government Ready) | Supported [8] |
| NVIDIA Network Operator | | Supported | Supported | Supported [9] |
| NVIDIA NIM Operator | | Supported | Supported | Supported |
| NVIDIA Base Command Manager (BCM) | | Supported | Supported | N/A |
Supported NVIDIA GPUs, Platforms, and Networking#
The GPUs and accelerated platforms below are supported in servers on the NVIDIA-Certified Systems list. Product-specific support matrices may differ; see each product’s release notes.
For GB200 NVL4/NVL72 and GB300 NVL72 qualification requirements, see Requirements.
Blackwell
Discrete GPUs
NVIDIA RTX Pro 6000 Blackwell Server Edition
NVIDIA RTX Pro 4500 Blackwell Server Edition
Accelerated Platforms

| Platform | Architecture |
|---|---|
| NVIDIA DGX B300 | NVIDIA Blackwell Ultra |
| NVIDIA DGX GB300 NVL72 (bare metal only) | NVIDIA Grace Blackwell |
| NVIDIA GB300 NVL72 (bare metal only) | NVIDIA Grace Blackwell |
| NVIDIA DGX B200 | NVIDIA Blackwell |
| NVIDIA DGX GB200 NVL72 (bare metal only) | NVIDIA Grace Blackwell |
| NVIDIA GB200 NVL72 (bare metal only) | NVIDIA Grace Blackwell |
| NVIDIA GB200 NVL4 (bare metal only) | NVIDIA Grace Blackwell |
Hopper
Discrete GPUs
NVIDIA H800
NVIDIA H800 NVL
NVIDIA H200 NVL
NVIDIA H100
NVIDIA H100 NVL
NVIDIA H20
Accelerated Platforms

| Platform | Architecture |
|---|---|
| NVIDIA GH200 (bare metal only) | NVIDIA Grace Hopper |
| NVIDIA DGX H100 | NVIDIA Hopper |
| NVIDIA HGX H100 | NVIDIA Hopper |
| NVIDIA HGX H800 | NVIDIA Hopper |
| NVIDIA HGX H200 | NVIDIA Hopper |
| NVIDIA HGX H20 | NVIDIA Hopper |
Ada Lovelace
Discrete GPUs
NVIDIA L40S
NVIDIA L40
NVIDIA L20
NVIDIA L4
NVIDIA L2
NVIDIA RTX 6000 Ada Generation
NVIDIA RTX 5880 Ada Generation
NVIDIA RTX 5000 Ada Generation
NVIDIA RTX 4000 SFF Ada Generation
Accelerated Platforms

| Platform | Architecture |
|---|---|
| NVIDIA IGX Orin (with optional RTX 6000 Ada Lovelace GPU) | NVIDIA Ada Lovelace |
Ampere
Discrete GPUs
NVIDIA A800
NVIDIA AX800
NVIDIA A100X
NVIDIA A100
NVIDIA A40
NVIDIA A30X
NVIDIA A30
NVIDIA A16
NVIDIA A10
NVIDIA A10G
NVIDIA A10M
NVIDIA A2
NVIDIA RTX A6000
NVIDIA RTX A5000
NVIDIA RTX A4000
Accelerated Platforms

| Platform | Architecture |
|---|---|
| NVIDIA DGX A100 | NVIDIA Ampere |
| NVIDIA HGX A100 | NVIDIA Ampere |
| NVIDIA HGX A800 | NVIDIA Ampere |
Turing
Discrete GPUs
NVIDIA T4
NVIDIA T4G
NVIDIA Quadro RTX 8000
NVIDIA Quadro RTX 6000
NVIDIA Quadro RTX 4000
Volta
Discrete GPUs
NVIDIA V100
On DGX servers, bare-metal support uses the Linux data center driver shipped with DGX OS.
Multi-node setups require an Ethernet NIC with RoCE support. NVIDIA recommends pairing an NVIDIA ConnectX NIC with an NVIDIA GPU.
Table 6 Supported Ethernet NICs and SuperNICs#

| Product Family | Architecture |
|---|---|
| NVIDIA ConnectX-6 NIC | NVIDIA ConnectX-6 |
| NVIDIA ConnectX-6 Dx NIC | NVIDIA ConnectX-6 Dx |
| NVIDIA ConnectX-7 NIC | NVIDIA ConnectX-7 |
| NVIDIA ConnectX-8 NIC | NVIDIA ConnectX-8 |
| NVIDIA BlueField-3 SuperNIC | NVIDIA BlueField-3 |
| NVIDIA BlueField-3 DPU | NVIDIA BlueField-3 |
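A quick way to see whether a host carries one of the NIC families in Table 6 is to scan its device listing. The helper below is a hypothetical sketch that filters `lspci`-style text; the sample lines are fabricated for illustration.

```python
# Hypothetical helper: filter lspci-style output down to lines that
# mention a NIC family from Table 6. On a real host you would feed it
# the actual output of `lspci` (e.g. captured via subprocess.run).
SUPPORTED_NIC_KEYWORDS = ("ConnectX-6", "ConnectX-7", "ConnectX-8", "BlueField-3")

def find_supported_nics(lspci_output: str) -> list[str]:
    """Return the lines that mention a supported NIC family."""
    return [
        line
        for line in lspci_output.splitlines()
        if any(keyword in line for keyword in SUPPORTED_NIC_KEYWORDS)
    ]

sample = (
    "3b:00.0 Ethernet controller: Mellanox Technologies [ConnectX-7]\n"
    "5e:00.0 Ethernet controller: Intel Corporation I350\n"
)
print(find_supported_nics(sample))  # → only the ConnectX-7 line
```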
Bare Metal Deployments#
The tables below list the supported bare-metal combinations.
| Orchestration Platform | Versions | Engine | Operating System | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest/Data Center |
| HPE Ezmeral Runtime Enterprise | 5.6 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | N/A | N/A | N/A | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest/Data Center |
| Red Hat OpenShift | 4.18 - 4.21 | CRI-O | Red Hat CoreOS | 4.18 - 4.21 | Supported | Supported [9] | N/A | Supported | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported | vGPU Guest/Data Center |
| SUSE Rancher RKE2 | 1.33 - 1.35 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | N/A | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | Supported | N/A | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | N/A | vGPU Guest/Data Center |
| Container | Engine | Operating System | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 7.9, 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | vGPU Guest/Data Center |
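The bare-metal rows above lend themselves to a lookup table. As a sketch, with only a few rows transcribed and a hypothetical `is_listed` helper, a planned combination can be validated programmatically:

```python
# Partial, illustrative transcription of a few bare-metal rows above.
# A real check would encode every row, including version ranges.
BARE_METAL_ROWS = {
    ("Charmed Kubernetes", "Containerd", "Ubuntu"),
    ("SUSE Rancher RKE2", "Containerd", "Red Hat Enterprise Linux"),
    ("Upstream Kubernetes", "CRI-O", "Red Hat Enterprise Linux"),
}

def is_listed(platform: str, engine: str, os_name: str) -> bool:
    """True if (platform, engine, OS) appears in the transcribed subset."""
    return (platform, engine, os_name) in BARE_METAL_ROWS

print(is_listed("Charmed Kubernetes", "Containerd", "Ubuntu"))  # → True
```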
Virtualized Deployments#
The tables below cover virtualized deployments with NVIDIA vGPU for Compute.
| Orchestration Platform | Versions | Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support (vGPU) | GPU Driver Support (GPU Passthrough) [4] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu (single-node only) [5] | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Charmed Kubernetes | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Nutanix NKP | 2.12, 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | | 6.5 - 6.10 | Supported | Supported | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [6] | 4.18 - 4.21 | CRI-O | Red Hat CoreOS | 4.18 - 4.21 | Red Hat Enterprise Linux (single-node only) [5] | 8.10, 9.4, 9.6 | Supported | Supported [9] | N/A | Supported (Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [6] | 4.18 - 4.21 | CRI-O | Red Hat CoreOS | 4.18 - 4.21 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported [9] | N/A | Supported (Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Red Hat Enterprise Linux (single-node only) [5] | 8.10, 9.4, 9.6 | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | | 6.5 - 6.10 | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu (single-node only) [5] | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd, CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | | 6.5 - 6.10 | Supported | Supported | N/A | Supported (Containerd only, Passthrough only) | vGPU Guest | vGPU Guest/Data Center |
| VMware vSphere Kubernetes Service | VMware K8 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Container | Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support (vGPU) | GPU Driver Support (GPU Passthrough) [4] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Red Hat Enterprise Linux [5] | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7 | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu (single-node only) [5] | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | | 6.5 - 6.10 | N/A | N/A | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Hypervisor | Hypervisor Versions | Guest OS | Guest OS Versions | vGPU Driver Support |
|---|---|---|---|---|
| VMware ESXi | ESXi 8.0 and later, 9.0 | Debian | 12 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | SUSE Linux Enterprise Server | 12 SP3+, 12 SP5, 15 SP2 | vGPU Guest |
| VMware ESXi | ESXi 8.0 and later, 9.0 | Microsoft Windows | Server 2022, Server 2025, Windows 10, Windows 11 | vGPU Guest |
| Ubuntu (single-node only) [5] | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Red Hat Enterprise Linux (RHEL) with KVM (single-node only) [5] | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | vGPU Guest |
| Red Hat Enterprise Linux (RHEL) with KVM (single-node only) [5] | 8.10, 9.4, 9.6, 9.7, 10.0, 10.1 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat Enterprise Linux (RHEL) with KVM (single-node only) [5] | | Red Hat OpenStack Platform, Red Hat OpenStack Services on OpenShift | | vGPU Guest |
| | 6.5 - 6.10 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| | 6.5 - 6.10 | Debian | 12 | vGPU Guest |
| | 6.5 - 6.10 | Red Hat Enterprise Linux | 8.8, 8.10, 9.2, 9.4, 9.6 | vGPU Guest |
| | 6.5 - 6.10 | SUSE Linux Enterprise Server | 12, 15 | vGPU Guest |
Base Command Manager#
Base Command Manager handles cluster provisioning, workload management, and monitoring.
| Orchestration Platform | Versions | Engine | OS | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|
| Upstream Kubernetes | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | N/A | Supported | Data Center |
| Upstream Kubernetes | 1.33 - 1.35 | Containerd | Red Hat Enterprise Linux | 8, 9 | Supported | Supported | N/A | Supported | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | N/A | N/A | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Red Hat Enterprise Linux | 8, 9 | N/A | N/A | N/A | N/A | Data Center |
Public Cloud#
Managed Kubernetes#
Supported managed Kubernetes environments for this release:
| Cloud Service Provider | Orchestration Platform | K8s Versions | Engine | OS | OS Versions | GPU Operator | Network Operator | DPU Operator (DPF) | Run:ai | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|---|---|
| AWS | Amazon Elastic Kubernetes Service (EKS) | 1.33 - 1.35 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| Google | Google Kubernetes Engine (GKE) | 1.33 - 1.35 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| Microsoft | Azure Kubernetes Service (AKS) | 1.33 - 1.35 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
| N/A | Red Hat OpenShift (Managed Service) | 4.18 - 4.21 | CRI-O | Red Hat CoreOS | 4.18 - 4.21 | Supported | N/A | N/A | Supported | vGPU Guest/Data Center |
Standard GPU Instances#
These standard GPU VM instance types support NVIDIA AI Enterprise with Kubernetes or standalone containers.
| Cloud Service Provider | Virtual Machine (VM) Instance with GPU | Product Family |
|---|---|---|
| Alibaba Cloud | gn7e, gn7i | NVIDIA A10 |
| Alibaba Cloud | gn7s | NVIDIA A30 |
| Alibaba Cloud | gn6i | NVIDIA T4 |
| Alibaba Cloud | gn6e, gn6v | NVIDIA V100 |
| Alibaba Cloud | ecs.ebmgn8v, ecs.gn8v | NVIDIA H20 |
| Alibaba Cloud | ecs.ebmgn8is, ecs.gn8is | NVIDIA L20 |
| Amazon Web Services | EC2 P3 | NVIDIA V100 |
| Amazon Web Services | EC2 P4 | NVIDIA A100 |
| Amazon Web Services | EC2 P5 | NVIDIA H100 |
| Amazon Web Services | EC2 P5e and P5en | NVIDIA H200 |
| Amazon Web Services | EC2 G4 | NVIDIA T4 |
| Amazon Web Services | EC2 G5 | NVIDIA A10G |
| Amazon Web Services | EC2 G6 | NVIDIA L4 |
| Amazon Web Services | EC2 G6e | NVIDIA L40S |
| Microsoft Azure | ND GB200-v6 | NVIDIA GB200 |
| Microsoft Azure | ND-H200-v5 | NVIDIA H200 |
| Microsoft Azure | ND-H100-v5, NCads_H100_v5-series, NCCads_H100_v5-series | NVIDIA H100 |
| Microsoft Azure | NCv3-series | NVIDIA V100 |
| Microsoft Azure | NCasT4_v3-series | NVIDIA T4 |
| Microsoft Azure | NC_A100_v4-series | NVIDIA A100 |
| Google Cloud Platform | A4 VM | NVIDIA B200 |
| Google Cloud Platform | A3 VM | NVIDIA H100, H200 |
| Google Cloud Platform | A2 VM | NVIDIA A100 |
| Google Cloud Platform | G2 VM | NVIDIA L4 |
| Google Cloud Platform | N1 VM | NVIDIA T4, V100 |
| Oracle Cloud Infrastructure | BM.GPU3, VM.GPU3 | NVIDIA V100 |
| Oracle Cloud Infrastructure | BM.GPU4, BM.GPU.A100 | NVIDIA A100 |
| Oracle Cloud Infrastructure | BM.GPU.A10, VM.GPU.A10 | NVIDIA A10 |
| Oracle Cloud Infrastructure | BM.GPU.H100.8 | NVIDIA H100 |
| Tencent Cloud | PNV4 | NVIDIA A10 |
| Tencent Cloud | GT4 | NVIDIA A100 |
| Tencent Cloud | GN10Xp, GN10X | NVIDIA V100 |
| Tencent Cloud | GN7, GN7vi, GI3X | NVIDIA T4 |
| Volcano Engine | ecs.gni2 | NVIDIA A10 |
NVIDIA GPU Optimized VMI on CSP Marketplace#
Validated marketplace VMIs for NVIDIA AI workloads are available on AWS, Azure, Google Cloud, and OCI.
| Cloud Service Provider | VMI Name | GPUs | K8s Support | Standalone Container |
|---|---|---|---|---|
| Amazon Web Services | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Microsoft Azure | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Google Cloud Platform | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
| Oracle Cloud Infrastructure | NVIDIA AI Enterprise | Refer to Standard GPU Instances | N/A | Supported |
CPU-Only Server Support#
CPU-only (no GPU) servers on the NVIDIA-Certified Systems list can run the following CPU-enabled software stacks:
TensorFlow
PyTorch
Triton Inference Server with FIL backend
NVIDIA RAPIDS with XGBoost and Dask
Footnotes
On NVIDIA HGX platforms, vGPU for Compute VMs configured with 1, 2, 4, or 8 GPUs per VM are supported only with VMware vSphere.
On NVIDIA HGX platforms, GPU Operator is not supported with KubeVirt or OpenShift Virtualization.
The NVIDIA AI Enterprise license includes NVIDIA Run:ai self-hosted deployments only. NVIDIA Run:ai SaaS is not included in the NVIDIA AI Enterprise license and remains a separate offering.