NVIDIA AI Enterprise Infra 7.4 Release Notes#

This document covers the NVIDIA AI Enterprise Infrastructure Layer software for release 7.4. NVIDIA AI Enterprise is a software suite for building and running AI applications across cloud, data center, and edge environments with optimized performance, security, and stability.

NVIDIA AI Enterprise Components

  • Infrastructure Software (this documentation): Tools for managing your compute resources, including GPU and network drivers, Kubernetes operators for container orchestration, and NVIDIA Run:ai (self-hosted) for AI workload management and optimization. Refer to Infrastructure Software.

  • Application Software: Tools for building AI solutions, including generative AI, AI agents, 3D applications and digital twins with NVIDIA Omniverse, and domain-specific SDKs for speech AI, computer vision, cybersecurity, and more. Refer to Application Software.

  • Enterprise Support: Direct access to NVIDIA AI experts who provide technical guidance, performance optimization assistance, onboarding and training services, and infrastructure troubleshooting across bare-metal, virtualized, and containerized environments—backed by service-level agreements. Refer to Support.

Release Highlights#

  • Blackwell Architecture Support - NVIDIA GPU Data Center Driver 580.126.09 adds support for the latest Blackwell GPU architecture

  • vGPU for Compute Updates - Enhancements and bug fixes based on vGPU Software 19.4

  • Updated Kubernetes Operators - GPU Operator 25.10.1, Network Operator 25.10.0, DPU Operator 25.10.1, and NIM Operator 3.0.2 deliver improved lifecycle automation and streamlined deployment for GPU workloads

  • Run:ai Updates - NVIDIA Run:ai 2.24 provides AI workload and GPU orchestration capabilities for self-hosted deployments

  • DOCA Ecosystem Updates - DOCA Driver 3.2.0 and DOCA Microservices 3.2.1 provide enhanced networking performance and infrastructure acceleration for data-intensive workloads

  • Enterprise Management - Base Command Manager 11.31.0 offers refined cluster provisioning and workload orchestration for large-scale AI infrastructure

  • Fabric Manager Support - NVIDIA Fabric Manager is now supported in the GPU Passthrough and vGPU for Compute deployment modes

  • Interactive Support Matrix - New web-based support matrix tool for exploring infrastructure compatibility across releases 7.0-7.4 with progressive filtering, cross-version comparison, and dynamic search capabilities

  • Lifecycle and Compatibility Explorer - New interactive tool for verifying cross-stack compatibility between infrastructure components, with query modes for browsing by branch, release, component, or full stack validation

What’s Included in NVIDIA AI Enterprise Infra 7.4#

The following table lists the infrastructure components included in this release, with their versions:

Table 1 Supported Infrastructure Software#

| Component | Description | Version |
|-----------|-------------|---------|
| **Core Infrastructure Drivers** | | |
| NVIDIA Data Center GPU Driver | Provides hardware support for NVIDIA GPUs. | 580.126.09 |
| NVIDIA DOCA Driver for Networking | Provides hardware support for NVIDIA BlueField DPUs and SuperNICs. Installing DOCA on the host provides all necessary drivers and tools to manage BlueField and ConnectX devices. | 3.2.0 |
| NVIDIA Fabric Manager | Manages the NVSwitch fabric to enable high-bandwidth, low-latency GPU-to-GPU communication for multi-GPU AI workloads. | 580.126.09 |
| **Data Center Services** | | |
| NVIDIA DOCA Microservices | Infrastructure acceleration and offload services for NVIDIA BlueField, enabling accelerated networking, storage, and security workloads. | 3.2.1 |
| **Virtualization [2]** | | |
| NVIDIA Virtual GPU Manager | GPU driver deployed in the hypervisor for virtualized environments. Enables multi-tenant GPU sharing, live migration, and monitoring. | 19.4 |
| NVIDIA vGPU for Compute Guest Driver | GPU driver deployed in the VM guest OS (or on a bare-metal OS) that enables multiple VMs to have simultaneous, direct access to a single physical GPU. | 19.4 |
| **Container Platform** | | |
| NVIDIA Container Toolkit | Enables GPU-accelerated containers by providing runtime components and utilities for container engines (Docker, containerd, CRI-O). | 1.18.1 |
| **GPU Orchestration** | | |
| NVIDIA Run:ai [1] | Provides a Kubernetes-native orchestration and management platform that maximizes GPU utilization for AI workloads through advanced scheduling and resource management. | 2.24 |
| **Kubernetes Operators** | | |
| NVIDIA DPU Operator (DPF) | Enables cluster administrators to automate provisioning, orchestration, and lifecycle management of BlueField DPUs and DOCA Microservices for DPU-accelerated North-South networking in Kubernetes. | 25.10.1 |
| NVIDIA GPU Operator | Simplifies deployment of NVIDIA AI Enterprise by automating management of all NVIDIA software components needed to provision GPUs in Kubernetes. | 25.10.1 |
| NVIDIA Network Operator | Simplifies deployment of high-speed networking by automating management of the NVIDIA ConnectX NICs and SuperNICs required to optimize East-West traffic and RDMA transfers in Kubernetes. | 25.10.0 |
| NVIDIA NIM Operator | Enables cluster administrators to operate the software components and services required to run LLM, embedding, and other models using NVIDIA NIM microservices in Kubernetes. | 3.0.2 |
| **Cluster Management** | | |
| NVIDIA Base Command Manager (BCM) | Cluster management and provisioning tool for NVIDIA DGX systems. | 11.31.0 |
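As a concrete illustration of how the table's versions fit together in a Kubernetes deployment, the GPU Operator's Helm chart lets you pin the driver container to a specific version. The sketch below is illustrative only, not an official example; the key names follow the public `gpu-operator` chart, but verify them against the defaults of the chart version you deploy:

```yaml
# Illustrative GPU Operator Helm values (sketch, not an official example):
# pin the driver container to this release's data center driver.
driver:
  enabled: true
  version: "580.126.09"   # NVIDIA Data Center GPU Driver (Table 1)
toolkit:
  enabled: true           # installs the NVIDIA Container Toolkit
```

Passing such a values file to `helm install` or `helm upgrade` keeps the node driver aligned with the versions validated for Infra 7.4.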

Compatibility and Support#

New: Interactive Support Matrix

Starting with release 7.4, an interactive web-based support matrix tool is available to explore infrastructure compatibility across releases 7.0-7.4. The interactive tool provides:

  • Cross-Version Comparison - Compare supported configurations across multiple releases in a single view

  • Progressive Filtering - Cascading filters guide you through deployment type, operating system, hypervisor, and orchestration options

  • Dynamic Search - Filter by GPU architecture, platform type, Kubernetes distribution, cloud provider, and more

  • Interactive Footnotes - Hover over footnote markers for instant context without scrolling

  • Version Badges - Visual indicators show exactly which releases support each configuration

Access the interactive tool at: https://docs.nvidia.com/ai-enterprise/release-7/latest/support-matrix/

The traditional static support matrix remains available for deep linking, printing, and offline reference at NVIDIA AI Enterprise Infrastructure Support Matrix.

Support Matrix Contents

Both the interactive tool and static reference provide comprehensive compatibility information for:

  • Supported GPU architectures

  • Operating system compatibility

  • Hypervisor and orchestration platform versions

  • Cloud provider instance types

  • Networking hardware

New: Interactive Lifecycle and Compatibility Explorer

An interactive lifecycle and compatibility explorer is now available in the NVIDIA AI Enterprise Lifecycle Policy documentation. The explorer provides four query modes:

  • By Software Branch — Select an Infrastructure Branch to see all releases and compatible component versions

  • By Release — Select a specific release to view the complete compatibility matrix

  • By Component — Enter a component type and version to find which releases include it

  • Full Stack Check — Enter your GPU driver version to identify your release and validate compatibility

Use this tool to verify cross-stack compatibility between infrastructure components, plan upgrades, and identify which releases support your current software versions.
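The Full Stack Check described above can be approximated offline in a few lines of Python. This is a minimal sketch, not NVIDIA tooling: the function name is a hypothetical helper, and the version data is taken from Table 1 of these release notes (only the 7.4 manifest is included; extend the dictionary with other releases as needed).

```python
# Minimal offline sketch of a "full stack check": map an installed GPU
# driver version to its NVIDIA AI Enterprise release, then compare the
# rest of the stack against that release's manifest.
# Versions below come from Table 1 of these release notes (Infra 7.4).

RELEASE_MANIFESTS = {
    "7.4": {
        "gpu_driver": "580.126.09",
        "fabric_manager": "580.126.09",
        "doca_driver": "3.2.0",
        "doca_microservices": "3.2.1",
        "vgpu_software": "19.4",
        "container_toolkit": "1.18.1",
        "runai": "2.24",
        "gpu_operator": "25.10.1",
        "network_operator": "25.10.0",
        "dpu_operator": "25.10.1",
        "nim_operator": "3.0.2",
        "bcm": "11.31.0",
    },
}

def full_stack_check(driver_version: str, installed: dict) -> list:
    """Identify the release matching driver_version and report every
    installed component whose version differs from that release's
    manifest. Returns a list of human-readable mismatch strings."""
    release = next(
        (rel for rel, m in RELEASE_MANIFESTS.items()
         if m["gpu_driver"] == driver_version),
        None,
    )
    if release is None:
        return [f"driver {driver_version} does not match a known release"]
    manifest = RELEASE_MANIFESTS[release]
    return [
        f"{name}: installed {ver}, release {release} expects {manifest[name]}"
        for name, ver in installed.items()
        if name in manifest and manifest[name] != ver
    ]

# Example: GPU Operator lags one patch behind the 7.4 manifest.
issues = full_stack_check(
    "580.126.09",
    {"gpu_operator": "25.10.0", "container_toolkit": "1.18.1"},
)
```

In this example, `issues` reports the single mismatch (GPU Operator 25.10.0 installed where 7.4 expects 25.10.1) while the matching Container Toolkit version passes silently; the interactive explorer performs the same kind of lookup against the full set of releases.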

Footnotes