NVIDIA AI Enterprise Infra 6.5 Support Matrix#
Use this support matrix to verify your infrastructure is compatible with NVIDIA AI Enterprise. This page provides comprehensive compatibility information for GPUs, operating systems, hypervisors, orchestration platforms, and cloud providers.
How to Use This Matrix
1. Identify your deployment type - on-premises (bare metal or virtualized) or public cloud
2. Verify your GPU - check the Supported GPUs and Networking section
3. Confirm your platform - match your OS, hypervisor, and orchestration platform
4. Check software versions - ensure your infrastructure software versions are supported
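The version checks in steps 1-4 can be automated. The sketch below is illustrative only (the function names are not part of any NVIDIA tooling); the supported ranges are copied from the bare-metal tables later in this matrix (upstream Kubernetes 1.29 - 1.33, Ubuntu 20.04/22.04/24.04 LTS).

```python
# Illustrative helper for checking versions against this matrix.
# Ranges are copied from the bare-metal tables below; the function
# names are made up for this sketch.

SUPPORTED_K8S_MINORS = (29, 33)   # upstream Kubernetes 1.29 - 1.33
SUPPORTED_UBUNTU = {"20.04", "22.04", "24.04"}

def k8s_supported(version: str) -> bool:
    """Return True if a 'major.minor' Kubernetes version is in range."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return major == 1 and SUPPORTED_K8S_MINORS[0] <= minor <= SUPPORTED_K8S_MINORS[1]

def ubuntu_supported(release: str) -> bool:
    """Check an Ubuntu release string such as '22.04' or '22.04 LTS'."""
    return release.split()[0] in SUPPORTED_UBUNTU

print(k8s_supported("1.31"))          # True
print(k8s_supported("1.28"))          # False
print(ubuntu_supported("24.04 LTS"))  # True
```

The same pattern extends to the other version columns (OpenShift 4.14 - 4.19, RHEL 8.x/9.x) if you maintain the ranges alongside your cluster inventory.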
Quick Navigation
- Supported GPUs and Networking - GPU and network card compatibility
- Bare Metal Deployments - Kubernetes and standalone containers on physical servers
- Virtualized Deployments - hypervisor and guest OS combinations
- Public Cloud - managed Kubernetes and standard GPU instances
Requirements#
For NVIDIA AI Enterprise to be supported, ensure you meet the following requirements.
On-Premises Servers

- A GPU supported by NVIDIA AI Enterprise (and, optionally, a supported networking card)
- Supported infrastructure management software, bare metal or virtualized (OS, orchestration platform, hypervisor)

Cloud Servers

- A supported orchestration platform
Supported NVIDIA Infrastructure Software#
| Product | Version | x86 | ARM |
|---|---|---|---|
| **Core Infrastructure Drivers** | | | |
| NVIDIA GPU Data Center Driver | | Supported | Supported |
| NVIDIA DOCA-OFED Driver for Networking | | Supported | Supported |
| **Virtualization** | | | |
| NVIDIA vGPU for Compute (Virtual GPU Manager and Guest Drivers) | | Supported | N/A |
| **Container Platform** | | | |
| NVIDIA Container Toolkit | | Supported | Supported |
| **Kubernetes Operators** | | | |
| NVIDIA GPU Operator | | Supported | Supported |
| NVIDIA Network Operator | | Supported | Supported |
| NVIDIA NIM Operator | | Supported | Supported |
| **Cluster Management & Orchestration** | | | |
| NVIDIA Base Command Manager (BCM) | | Supported | Supported |
Supported NVIDIA GPUs and Networking#
NVIDIA AI Enterprise is supported on the following NVIDIA GPUs in compatible third-party servers listed on the NVIDIA-Certified Systems page.
Individual NVIDIA AI Enterprise products may not support every operating system or GPU; refer to each product's release notes for any discrepancies.
For HGX support information, refer to the Supported Platforms section.
- NVIDIA H800
- NVIDIA H800 NVL
- NVIDIA H200 NVL
- NVIDIA H100
- NVIDIA H100 NVL
- NVIDIA H20
- NVIDIA A800
- NVIDIA AX800
- NVIDIA A100X
- NVIDIA A100
- NVIDIA A40
- NVIDIA A30X
- NVIDIA A30
- NVIDIA A16
- NVIDIA A10
- NVIDIA A10G
- NVIDIA A10M
- NVIDIA A2
- NVIDIA RTX A6000
- NVIDIA RTX A5000
- NVIDIA RTX A4000
- NVIDIA L40S
- NVIDIA L40
- NVIDIA L20
- NVIDIA L4
- NVIDIA L2
- NVIDIA RTX 6000 Ada Generation
- NVIDIA RTX 4000 SFF Ada Generation
- NVIDIA T4
- NVIDIA T4G
- NVIDIA Quadro RTX 8000
- NVIDIA Quadro RTX 6000
- NVIDIA Quadro RTX 4000
- NVIDIA V100
Multi-node deployments require an Ethernet NIC that supports RoCE. NVIDIA recommends pairing an NVIDIA ConnectX NIC with an NVIDIA GPU.
| Product Family | Architecture |
|---|---|
| NVIDIA ConnectX-6 NIC | NVIDIA ConnectX-6 |
| NVIDIA ConnectX-6 Dx NIC | NVIDIA ConnectX-6 Dx |
| NVIDIA ConnectX-7 NIC | NVIDIA ConnectX-7 |
| NVIDIA ConnectX-8 NIC | NVIDIA ConnectX-8 |
| NVIDIA BlueField-3 SuperNIC | NVIDIA BlueField-3 |
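One quick way to check a host against the NIC table above is to scan PCI device listings for the supported family names. The sketch below is a minimal illustration: the sample text is hypothetical, and on a real host you would feed in the output of `lspci` (or a similar inventory tool) instead.

```python
# Sketch: scan lspci-style text for NIC families from the table above.
# The sample line is hypothetical; feed in real `lspci` output in practice.

SUPPORTED_NIC_FAMILIES = [
    "ConnectX-6", "ConnectX-6 Dx", "ConnectX-7", "ConnectX-8", "BlueField-3",
]

def find_supported_nics(lspci_output: str) -> list:
    """Return the supported NIC family names mentioned in the text."""
    found = []
    for family in SUPPORTED_NIC_FAMILIES:
        if family in lspci_output and family not in found:
            found.append(family)
    return found

sample = "01:00.0 Ethernet controller: Mellanox Technologies [ConnectX-7]"
print(find_supported_nics(sample))  # ['ConnectX-7']
```

Note that a simple substring scan like this will report both "ConnectX-6" and "ConnectX-6 Dx" for a Dx device; a production check should match on exact PCI device IDs instead.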
Supported Platforms#
NVIDIA AI Enterprise is supported on NVIDIA DGX servers in bare-metal deployments with the NVIDIA data center driver for Linux, which is included in the DGX OS software.
| Accelerated Platform | Architecture |
|---|---|
| NVIDIA DGX B200 [10] | NVIDIA Blackwell |
| NVIDIA HGX B200 [10] | NVIDIA Blackwell |
| NVIDIA DGX GB200 NVL72 [4] | NVIDIA Grace Blackwell |
| NVIDIA GB200 NVL72 [4] | NVIDIA Grace Blackwell |
| NVIDIA GH200 [4] | NVIDIA Grace Hopper |
| NVIDIA DGX H100 | NVIDIA Hopper |
| NVIDIA HGX H100 | NVIDIA Hopper |
| NVIDIA HGX H800 | NVIDIA Hopper |
| NVIDIA HGX H200 | NVIDIA Hopper |
| NVIDIA HGX H20 | NVIDIA Hopper |
| NVIDIA IGX Orin [1] | NVIDIA Ada Lovelace |
| NVIDIA DGX A100 | NVIDIA Ampere |
| NVIDIA HGX A100 | NVIDIA Ampere |
| NVIDIA HGX A800 | NVIDIA Ampere |
Note
DGX platforms, HGX platforms with KVM hypervisors, and IGX Orin aren’t supported with NVIDIA vGPU (C-Series).
For NVIDIA IGX, refer to the NVIDIA AI Enterprise - IGX Packaging, Pricing, and Licensing Guide.
Note
HGX platforms only support VMs configured in full PCIe passthrough, that is, assigning the entire HGX board to a single VM on supported hypervisors. Partial-GPU passthrough isn’t supported. vGPU C-Series VMs with 1, 2, 4, or 8 GPUs per VM are only supported on VMware vSphere.
Bare Metal#
Refer to the following platform support matrix for NVIDIA AI Enterprise if you have a dedicated physical server on-premises.
| Orchestration Platform | Versions | Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| HPE Ezmeral Runtime Enterprise | 5.6 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | N/A | vGPU Guest/Data Center |
| Red Hat OpenShift [3] | 4.14 - 4.19 | CRI-O | Red Hat CoreOS | 4.14 - 4.19 | Supported | Supported | vGPU Guest/Data Center |
| N/A | 1.33 | N/A | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Nutanix NKP | 2.12 - 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| Container Name | Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 7.9, 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | N/A | N/A | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | vGPU Guest/Data Center |
Virtualized#
If a physical server is separated into multiple virtual servers on-premises, refer to the following platform support matrix for NVIDIA AI Enterprise.
| Orchestration Platform | Versions | Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | GPU Driver Support (vGPU) | GPU Driver Support (Passthrough) [5] |
|---|---|---|---|---|---|---|---|---|---|---|
| Charmed Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Charmed Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [7] | 4.14 - 4.19 | CRI-O | Red Hat CoreOS | 4.14 - 4.19 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Red Hat OpenShift [7] | 4.14 - 4.19 | CRI-O | Red Hat CoreOS | 4.14 - 4.19 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| VMware vSphere Kubernetes Service | VMware K8 1.29 - 1.32 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | N/A | vGPU Guest | vGPU Guest/Data Center |
| Nutanix NKP | 2.12 - 2.13 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | VMware vSphere | 8.0 and later, 9.0 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.4, 9.6 | Nutanix AHV | 6.5 - 6.10 | Supported | Supported | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | Supported | N/A | vGPU Guest | vGPU Guest/Data Center |
| Upstream Kubernetes | 1.29 - 1.33 | CRI-O | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | Supported | N/A | vGPU Guest | vGPU Guest/Data Center |
| Container Name | Engine | Guest OS | Guest OS Versions | Hypervisor | Hypervisor Versions | GPU Operator | Network Operator | GPU Driver Support (vGPU) | GPU Driver Support (Passthrough) [5] |
|---|---|---|---|---|---|---|---|---|---|
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | Red Hat Enterprise Linux [6] | 8.10, 9.4, 9.6, 10.0 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Red Hat Enterprise Linux | 8.6, 8.8, 8.10, 9.2, 9.4, 9.6 | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | SUSE Linux Enterprise Server | 15 SP2 and later | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | VMware vSphere | 8.0 and later, 9.0 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Non-Kubernetes (standalone containers) | Docker/Podman | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Nutanix AHV | 6.5 - 6.10 | N/A | N/A | vGPU Guest | vGPU Guest/Data Center |
| Hypervisor Name | Hypervisor Versions | Guest Operating System | Guest OS Versions | vGPU Support |
|---|---|---|---|---|
| VMware ESX | ESXi 9.0 | Debian | 12 | vGPU Guest |
| VMware ESX | ESXi 9.0 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | vGPU Guest |
| VMware ESX | ESXi 9.0 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| VMware ESX | ESXi 9.0 | SUSE Linux Enterprise Server | 12 SP3+, 12 SP5, 15 SP2 | vGPU Guest |
| VMware ESX | ESXi 9.0 | Microsoft Windows | Server 2022, Windows 11, Windows 10 | vGPU Guest |
| VMware ESX | ESXi 8.0 and later | Debian | 12 | vGPU Guest |
| VMware ESX | ESXi 8.0 and later | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | vGPU Guest |
| VMware ESX | ESXi 8.0 and later | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| VMware ESX | ESXi 8.0 and later | SUSE Linux Enterprise Server | 12 SP3+, 12 SP5, 15 SP2 | vGPU Guest |
| VMware ESX | ESXi 8.0 and later | Microsoft Windows | Server 2022, Windows 11, Windows 10 | vGPU Guest |
| Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Red Hat Enterprise Linux | 8.10 | Red Hat Enterprise Linux | 8.10 | vGPU Guest |
| Red Hat Enterprise Linux | 8.10 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat Enterprise Linux | 9.4 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | vGPU Guest |
| Red Hat Enterprise Linux | 9.4 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat Enterprise Linux | 9.6 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6 | vGPU Guest |
| Red Hat Enterprise Linux | 9.6 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat Enterprise Linux | 10.0 | Red Hat Enterprise Linux | 8.10, 9.4, 9.6, 10.0 | vGPU Guest |
| Red Hat Enterprise Linux | 10.0 | Microsoft Windows | Server 2022 | vGPU Guest |
| Red Hat OpenStack Platform, Red Hat OpenStack Services on OpenShift | | | | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Debian | 12 | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | Red Hat Enterprise Linux | 8.8, 8.10, 9.2, 9.4, 9.6 | vGPU Guest |
| Nutanix AHV | 6.5 - 6.10 | SUSE Linux Enterprise Server | 12, 15 | vGPU Guest |
Base Command Manager#
| Orchestration Platform | Versions | Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|
| Upstream Kubernetes | 1.31 - 1.32 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | Supported | Data Center |
| Upstream Kubernetes | 1.31 - 1.32 | Containerd | Red Hat Enterprise Linux | 8, 9 | Supported | Supported | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | N/A | N/A | Data Center |
| Slurm (non-Kubernetes) | 24.05, 24.11, 25.05 | N/A | Red Hat Enterprise Linux | 8, 9 | N/A | N/A | Data Center |
Public Cloud#
Managed Kubernetes#
If you have a virtual server that runs in a cloud computing environment and is accessible remotely, refer to the following platform support matrix for NVIDIA AI Enterprise.
| Cloud Service Provider | Orchestration Platform | K8s Versions | Engine | Operating System | OS Versions | GPU Operator | Network Operator | GPU Driver Support |
|---|---|---|---|---|---|---|---|---|
| AWS | Amazon Elastic Kubernetes Service (EKS) | 1.29 - 1.33 | Containerd | Ubuntu | 20.04 LTS, 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| Google | Google Kubernetes Engine (GKE) | 1.29 - 1.33 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| Microsoft | Azure Kubernetes Service (AKS) | 1.29 - 1.33 | Containerd | Ubuntu | 22.04 LTS, 24.04 LTS | Supported | N/A | vGPU Guest/Data Center |
| N/A | Red Hat OpenShift (Managed Service) | 4.14 - 4.19 | CRI-O | Red Hat CoreOS | 4.14 - 4.19 | Supported | N/A | vGPU Guest/Data Center |
Standard GPU Instances#
Standard GPU virtual machine instances across major cloud providers support NVIDIA AI Enterprise for both Kubernetes and standalone containers.
| Cloud Service Provider | Virtual Machine (VM) Instance with GPU | Product Family |
|---|---|---|
| Alibaba | gn7e | NVIDIA A10 |
| Alibaba | gn7i | NVIDIA A10 |
| Alibaba | gn7s | NVIDIA A30 |
| Alibaba | gn6i | NVIDIA T4 |
| Alibaba | gn6e | NVIDIA V100 |
| Alibaba | gn6v | NVIDIA V100 |
| Alibaba | ecs.ebmgn8v | NVIDIA H20 |
| Alibaba | ecs.gn8v | NVIDIA H20 |
| Alibaba | ecs.ebmgn8is | NVIDIA L20 |
| Alibaba | ecs.gn8is | NVIDIA L20 |
| Amazon Web Services (AWS) | EC2 P3 | NVIDIA V100 |
| Amazon Web Services (AWS) | EC2 P4 | NVIDIA A100 |
| Amazon Web Services (AWS) | EC2 P5 | NVIDIA H100 |
| Amazon Web Services (AWS) | EC2 P5e and P5en | NVIDIA H200 |
| Amazon Web Services (AWS) | EC2 G4 | NVIDIA T4 |
| Amazon Web Services (AWS) | EC2 G5 | NVIDIA A10G |
| Amazon Web Services (AWS) | EC2 G6 | NVIDIA L4 |
| Amazon Web Services (AWS) | EC2 G6e | NVIDIA L40S |
| Azure | ND GB200-v6 | NVIDIA GB200 |
| Azure | ND-H200-v5 | NVIDIA H200 |
| Azure | ND-H100-v5 | NVIDIA H100 |
| Azure | NCads_H100_v5-series | NVIDIA H100 |
| Azure | NCCads_H100_v5-series | NVIDIA H100 |
| Azure | NCv3-series | NVIDIA V100 |
| Azure | NCasT4_v3-series | NVIDIA T4 |
| Azure | NC_A100_v4-series | NVIDIA A100 |
| Google Cloud Platform (GCP) | A4 VM | NVIDIA B200 |
| Google Cloud Platform (GCP) | A3 VM | NVIDIA H100, NVIDIA H200 |
| Google Cloud Platform (GCP) | A2 VM | NVIDIA A100 |
| Google Cloud Platform (GCP) | G2 VM | NVIDIA L4 |
| Google Cloud Platform (GCP) | N1 VM | NVIDIA T4, NVIDIA V100 |
| Oracle Cloud Infrastructure (OCI) | BM.GPU3 | NVIDIA V100 |
| Oracle Cloud Infrastructure (OCI) | BM.GPU4, BM.GPU.A100 | NVIDIA A100 |
| Oracle Cloud Infrastructure (OCI) | BM.GPU.A10 | NVIDIA A10 |
| Oracle Cloud Infrastructure (OCI) | BM.GPU.H100.8 | NVIDIA H100 |
| Oracle Cloud Infrastructure (OCI) | VM.GPU3 | NVIDIA V100 |
| Oracle Cloud Infrastructure (OCI) | VM.GPU.A10 | NVIDIA A10 |
| Tencent Cloud | PNV4 | NVIDIA A10 |
| Tencent Cloud | GT4 | NVIDIA A100 |
| Tencent Cloud | GN10Xp, GN10X | NVIDIA V100 |
| Tencent Cloud | GN7, GN7vi, GI3X | NVIDIA T4 |
| Volcano Engine | ecs.gni2 | NVIDIA A10 |
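For automation, the instance table above can be kept as a programmatic lookup. The excerpt below is illustrative only: it covers a handful of rows, and the dictionary and function names are hypothetical, not part of any cloud or NVIDIA SDK.

```python
# Illustrative excerpt of the Standard GPU Instances table as a lookup.
# Only a few rows are included; extend with the remaining rows as needed.
# The dict and function names are made up for this sketch.

INSTANCE_GPU = {
    ("AWS", "EC2 P5"): "NVIDIA H100",
    ("AWS", "EC2 G6e"): "NVIDIA L40S",
    ("Azure", "ND-H100-v5"): "NVIDIA H100",
    ("GCP", "G2 VM"): "NVIDIA L4",
    ("OCI", "BM.GPU.H100.8"): "NVIDIA H100",
}

def gpu_for(csp, instance):
    """Look up the GPU family for a (provider, instance) pair, if listed."""
    return INSTANCE_GPU.get((csp, instance))

print(gpu_for("AWS", "EC2 P5"))   # NVIDIA H100
print(gpu_for("GCP", "Z9 VM"))    # None
```

A `None` result here means only that the pair is not in the excerpt; always confirm against the full table above.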
NVIDIA GPU Optimized VMI on CSP Marketplace#
For ease of use in the cloud, NVIDIA provides compute-optimized and validated base Virtual Machine Instances (VMI) to run standalone NVIDIA AI containers through CSP marketplaces. Each VMI includes key technologies and software from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.
| Cloud Service Provider | VMI Name | GPUs | K8s Support | Standalone Container |
|---|---|---|---|---|
| AWS | NVIDIA AI Enterprise | Listed in Standard GPU Instances | N/A | Supported |
| Azure | NVIDIA AI Enterprise | Listed in Standard GPU Instances | N/A | Supported |
| GCP | NVIDIA AI Enterprise | Listed in Standard GPU Instances | N/A | Supported |
| OCI | NVIDIA AI Enterprise | Listed in Standard GPU Instances | N/A | Supported |
CPU-Only Server Support#
NVIDIA AI Enterprise supports the following CPU-enabled frameworks:

- TensorFlow
- PyTorch
- Triton Inference Server with FIL backend
- NVIDIA RAPIDS with XGBoost and Dask

These CPU-enabled frameworks are supported on CPU-only servers that are on the NVIDIA-Certified Systems list.
Footnotes