Prerequisites#
Added in version 2.0.
NVIDIA AI Enterprise on bare metal has the following prerequisites:
At least one NVIDIA data center GPU in a single NVIDIA AI Enterprise Compatible NVIDIA-Certified System. NVIDIA recommends the following configurations, depending on your infrastructure:
Adding AI to Mainstream-level servers (single to 4-GPU NVLink):
1-8x L4, L40S, H100 NVL, H200 NVL
Large Model Inference in a Single Server (NVL2 High-Capacity AI Server):
2x H200 or Blackwell GPU
Large Model Training and Inference (HGX Scale-Up and Out Server):
4x or 8x H200, or 8x Blackwell GPU
NVIDIA AI Enterprise License
Ubuntu Server 20.04 LTS, 22.04 LTS, or Red Hat Enterprise Linux 8.4 ISO. Refer to the latest NVIDIA AI Enterprise Product Support Matrix for the full list of supported operating systems.
NVIDIA AI Enterprise Software:
NVIDIA AI Enterprise Driver
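Once the operating system is installed, you can confirm the release before checking it against the Product Support Matrix. A minimal sketch, assuming a standard `/etc/os-release` file is present:

```shell
# Print the distribution name and version for comparison against the
# NVIDIA AI Enterprise Product Support Matrix.
grep -E '^(NAME|VERSION)=' /etc/os-release
```

On Ubuntu Server 22.04 LTS, for example, this reports `NAME="Ubuntu"` and `VERSION="22.04 LTS (Jammy Jellyfish)"`.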
You can use the NVIDIA System Management Interface (nvidia-smi) management and monitoring tool for testing and benchmarking.
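As a sketch of typical monitoring usage, the following commands query driver and per-GPU statistics. They assume the NVIDIA AI Enterprise driver is already installed and at least one NVIDIA GPU is present:

```shell
# One-shot query of driver version plus per-GPU utilization and memory.
nvidia-smi --query-gpu=driver_version,utilization.gpu,memory.used,memory.total --format=csv

# Continuous device monitoring, sampling every 5 seconds (Ctrl-C to stop).
nvidia-smi dmon -d 5
```

The `--query-gpu`/`--format=csv` form is convenient for logging benchmark runs, while `dmon` gives a rolling view of utilization, power, and clocks.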
The following server configuration details are considered best practices:
Hyperthreading - Enabled
Power Setting or System Profile - High Performance
CPU Performance (if applicable) - Enterprise or High Throughput
Memory-Mapped I/O above 4 GB - Enabled (if applicable)
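Some of these settings can be verified from within the OS after boot. A minimal sketch for the hyperthreading setting, assuming the `lscpu` utility (part of util-linux) is available:

```shell
# A "Thread(s) per core" value greater than 1 indicates that
# hyperthreading (SMT) is enabled in the BIOS.
lscpu | grep 'Thread(s) per core'
```

Power and memory-mapped I/O settings, by contrast, are configured in the server's BIOS/UEFI setup and are not generally visible this way.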