NVIDIA AI Enterprise Documentation
NVIDIA AI Enterprise is a cloud-native suite of AI tools, libraries, and frameworks for production AI deployments. This documentation covers deployment, configuration, and optimization across bare-metal, virtualized, and cloud environments.
Quick Start
New to NVIDIA AI Enterprise? → Start with the Quick Start Guide to deploy your first AI workload in 30–60 minutes
Upgrading from 4.8 or earlier? → Refer to What's New in 4.9 in the following section
Need help with a specific task? → Jump to the Deployment Guide
What's New in NVIDIA AI Enterprise Infra 4.9
Latest Release Highlights
Multi-Architecture GPU Support - GPU Data Center Driver 535.288.01 with support for Blackwell, Hopper, Ada Lovelace, and Ampere GPU architectures
NVSwitch Fabric Management - NVIDIA Fabric Manager 535.288.01 enables high-bandwidth, low-latency GPU-to-GPU communication for multi-GPU AI workloads
Enhanced vGPU Virtualization - vGPU for Compute 16.13 with improved MIG-backed vGPU configurations and enhanced live migration capabilities
Kubernetes Automation - GPU Operator 25.10.1 and Network Operator 25.10.0 for streamlined GPU infrastructure management (a Helm-based install is sketched after this list)
High-Performance Networking - DOCA-OFED Driver 3.2.0 for enhanced networking and infrastructure acceleration
Enterprise Management - Base Command Manager 11.31.0 for cluster provisioning and workload orchestration at scale
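As a quick orientation for the Kubernetes Automation item above, the sketch below shows a typical Helm-based GPU Operator installation. It is a minimal illustration, assuming a Kubernetes cluster with GPU worker nodes, kubectl, and Helm already available; the release name, namespace, and chart options are illustrative, and the Deployment Guide remains the authoritative reference for supported versions and flags.

```
# Add NVIDIA's Helm repository and refresh the local chart index
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into a dedicated namespace and wait for rollout
# (release name and namespace are illustrative)
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace --wait

# Confirm the operator, driver, and toolkit pods are running
kubectl get pods -n gpu-operator
```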
What You'll Find Here
Getting Started - Account activation, software installation, and first workload deployment
Infrastructure Software - NVIDIA vGPU for Compute configuration, licensing, and management (a quick license check is sketched below)
Support - Platform compatibility matrices and release information
Overview - Release notes and version information
Glossary - Key terms and concepts explained
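For the Infrastructure Software item above, the following is a minimal sketch of how a licensed vGPU guest is commonly verified from the command line. It assumes the vGPU guest driver is installed and a license is being served to the guest; the exact output fields can vary by driver branch, so see the licensing documentation for details.

```
# Query the driver and filter for licensing details on a vGPU guest
nvidia-smi -q | grep -i license
# Expected fields include the licensed product name and a License Status
# line reporting "Licensed" once a license has been acquired.
```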