# Release Notes
These release notes describe NVIDIA AI Enterprise Infrastructure Release 7.4. Use them to identify the supported infrastructure software components and versions in this release, review compatibility and support information, and locate the per-component release notes for new features, fixed issues, and known limitations.
For deployment guidance, refer to the Quick Start Guide. For a full list of supported platforms, hypervisors, operating systems, and orchestration software, refer to the Support Matrix or the interactive support matrix linked under Compatibility and Support.
## Latest Release Highlights
- Blackwell Architecture Support - NVIDIA GPU Data Center Driver 580.126.09 adds support for the latest Blackwell GPU architecture
- vGPU for Compute Updates - Enhancements and bug fixes based on vGPU Software 19.4
- Updated Kubernetes Operators - GPU Operator 25.10.1, Network Operator 25.10.0, DPU Operator 25.10.1, and NIM Operator 3.0.2 deliver improved lifecycle automation and streamlined deployment for GPU workloads
- Run:ai Updates - NVIDIA Run:ai 2.24 provides AI workload and GPU orchestration capabilities for self-hosted deployments
- DOCA Ecosystem Updates - DOCA Driver 3.2.0 and DOCA Microservices 3.2.1 provide enhanced networking performance and infrastructure acceleration for data-intensive workloads
- Enterprise Management - Base Command Manager 11.31.0 offers refined cluster provisioning and workload orchestration for large-scale AI infrastructure
- Fabric Manager Support - NVIDIA Fabric Manager is now supported in GPU Passthrough and vGPU for Compute deployment modes
- Interactive Support Matrix - New web-based support matrix tool for exploring infrastructure compatibility across releases 7.0-7.4 with progressive filtering, cross-version comparison, and dynamic search capabilities
- Lifecycle and Compatibility Explorer - New interactive tool for verifying cross-stack compatibility between infrastructure components, with query modes for browsing by branch, release, component, or full stack validation
## What is Included in NVIDIA AI Enterprise Infra 7.4
Complete list of infrastructure components and their versions in this release (version numbers below are those called out in the release highlights; refer to the per-component release notes for full details):

| Component | Description | Version |
|---|---|---|
| NVIDIA Data Center GPU Driver | Provides hardware support for NVIDIA GPUs. | 580.126.09 |
| NVIDIA DOCA Driver for Networking | Provides hardware support for NVIDIA BlueField DPUs and SuperNICs. Installing DOCA on the host provides all necessary drivers and tools to manage BlueField and ConnectX devices. | 3.2.0 |
| NVIDIA Fabric Manager | Manages the NVSwitch fabric to enable high-bandwidth, low-latency GPU-to-GPU communication for multi-GPU AI workloads. | |
| NVIDIA DOCA Microservices | Infrastructure acceleration and offload services for NVIDIA BlueField, enabling accelerated networking, storage, and security workloads. | 3.2.1 |
| NVIDIA Virtual GPU Manager | GPU driver deployed in the hypervisor for virtualized environments. Enables multi-tenant GPU sharing, live migration, and monitoring. | |
| NVIDIA vGPU for Compute Guest Driver [2] | GPU driver deployed in the VM or on a bare-metal OS to enable multiple VMs to have simultaneous, direct access to a single physical GPU. | |
| NVIDIA Container Toolkit | Enables GPU-accelerated containers by providing runtime components and utilities for container engines (Docker, containerd, CRI-O). | |
| NVIDIA Run:ai [1] | Provides a Kubernetes-native orchestration and management platform that maximizes GPU utilization for AI workloads through advanced scheduling and resource management. | 2.24 |
| NVIDIA DPU Operator (DPF) | Enables cluster administrators to automate provisioning, orchestration, and lifecycle management of BlueField DPUs and DOCA Microservices, enabling DPU-accelerated North-South networking in Kubernetes. | 25.10.1 |
| NVIDIA GPU Operator | Simplifies deployment of NVIDIA AI Enterprise by automating management of all NVIDIA software components needed to provision GPUs in Kubernetes. | 25.10.1 |
| NVIDIA Network Operator | Simplifies deployment of high-speed networking by automating management of the NVIDIA ConnectX NICs and SuperNICs required to optimize East-West traffic and RDMA transfers in Kubernetes. | 25.10.0 |
| NVIDIA NIM Operator | Enables cluster administrators to operate the software components and services required to run LLM, embedding, and other models using NVIDIA NIM microservices in Kubernetes. | 3.0.2 |
| NVIDIA Base Command Manager (BCM) | Cluster management and provisioning tool for NVIDIA DGX systems. | 11.31.0 |
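To illustrate what the GPU Operator and Container Toolkit enable together, the following is a minimal sketch of a Kubernetes pod that requests one GPU through the `nvidia.com/gpu` resource advertised by the NVIDIA device plugin (which the GPU Operator deploys). The pod name and container image tag are illustrative placeholders, not part of this release:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # hypothetical name for illustration
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    # Example CUDA base image; substitute a tag available in your registry.
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]       # prints GPU info if the stack is healthy
    resources:
      limits:
        nvidia.com/gpu: 1         # GPU resource exposed by the NVIDIA device plugin
```

Once the GPU Operator has finished provisioning a node, scheduling a pod like this and checking its logs is a common smoke test that the driver, Container Toolkit, and device plugin are working end to end.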
## Compatibility and Support
### Support Matrix
Use the Interactive Support Matrix to compare NVIDIA AI Enterprise infrastructure compatibility across releases 4.4 through 7.4. The web tool lets you:
- Filter by deployment type, operating system, hypervisor, or orchestration platform.
- Search by GPU architecture, platform, Kubernetes distribution, or cloud provider.
- Inspect per-configuration release badges and footnote tooltips.
For deep linking, printing, or offline reference, the same information is also available as the static Support Matrix. Both forms cover supported GPU architectures, operating systems, hypervisor and orchestration platform versions, cloud provider instance types, and networking hardware.
### Lifecycle and Compatibility Explorer
Use the Interactive Lifecycle and Compatibility Explorer in the NVIDIA AI Enterprise Lifecycle Policy documentation to:
- Query by Infrastructure Branch, by release, or by component type and version.
- Run a full-stack check from a GPU driver version to validate compatibility and plan upgrades.
## Footnotes