NVIDIA AI Enterprise Infra 8.0 Release Notes

This documentation covers the NVIDIA AI Enterprise Infrastructure Layer software — GPU and network drivers, Kubernetes operators, NVIDIA vGPU for Compute, and NVIDIA Run:ai (self-hosted) for AI workload management.

For application-layer software (NIM, NeMo, Omniverse, domain SDKs), refer to Application Software. For enterprise support services, refer to Support.

Latest Release Highlights

  • New GPU Driver Branch (R595) — NVIDIA Data Center GPU Driver 595.58.03 introduces the R595 driver branch with support for new Blackwell platforms.

  • NVIDIA B300 NVL8 Support (HGX Blackwell) — NVSwitch-connected 8-GPU topology with NVLink multicast support. Supports MIG-backed and time-sliced vGPU configurations.

  • NVIDIA RTX Pro 4500 Support — New hardware SKU supported across bare metal and virtualized deployments.

  • vGPU for Compute Updates — NVIDIA vGPU Manager and Guest Driver 20.0. This release introduces vGPU for Compute support for HGX B300 (Linux KVM) and RTX Pro 4500 (Linux KVM and vSphere).

  • Updated Kubernetes Operators — GPU Operator 26.3.0, Network Operator 26.1.0, DPU Operator 25.10.1, and NIM Operator 3.1.0 for GPU workload lifecycle and deployment in Kubernetes.

  • DOCA Ecosystem Updates — DOCA Driver 3.3.0 and DOCA Microservices 3.3.0 provide enhanced networking performance and infrastructure acceleration.

  • Container Toolkit — NVIDIA Container Toolkit 1.19.0 with updated runtime components for GPU-accelerated containers.

  • Enterprise Management — Base Command Manager 11.32.1 offers refined cluster provisioning and workload orchestration for large-scale AI infrastructure

  • Fabric Manager Integration — Fabric Manager and Fabric Manager development binaries are now integrated into the NVIDIA AI Enterprise drivers, eliminating the need for separate installation. NVLSM continues as a standalone utility.
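Once the GPU Operator listed above is deployed, GPUs surface to the Kubernetes scheduler as the `nvidia.com/gpu` extended resource. As a minimal illustration (the pod name and container image tag below are placeholders, not part of this release), a workload requests a GPU like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test            # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda   # placeholder; pin a supported CUDA image tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1        # one full GPU
```

With MIG-backed configurations such as those noted for HGX B300, the operator instead advertises per-profile resource names of the form `nvidia.com/mig-<profile>`, depending on the configured MIG strategy.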

What Is Included in NVIDIA AI Enterprise Infra 8.0

Complete list of infrastructure components with versions and documentation links:

Table 1: Supported Infrastructure Software

| Component | Description | Version |
| --- | --- | --- |
| NVIDIA Data Center GPU Driver | Provides hardware support for NVIDIA GPUs. | 595.58.03 |
| NVIDIA DOCA Driver for Networking | Provides hardware support for NVIDIA BlueField DPUs and SuperNICs. Installing DOCA on the host provides all necessary drivers and tools to manage BlueField and ConnectX devices. | 3.3.0 |
| NVIDIA Fabric Manager (integrated into NVIDIA AI Enterprise drivers) | Manages NVSwitch fabric to enable high-bandwidth, low-latency GPU-to-GPU communication for multi-GPU AI workloads. Starting with NVIDIA AI Enterprise Infra 8.0, Fabric Manager binaries are included in the NVIDIA AI Enterprise drivers and no longer require separate installation. | 595.58.03 |
| NVIDIA DOCA Microservices | Infrastructure acceleration and offload services for NVIDIA BlueField, enabling accelerated networking, storage, and security workloads. | 3.3.0 |
| NVIDIA Virtual GPU Manager | GPU driver deployed in the hypervisor for virtualized environments. Enables multi-tenant GPU sharing, live migration, and monitoring. | 20.0 |
| NVIDIA vGPU for Compute Guest Driver [2] | GPU driver deployed in the VM or on the bare-metal OS to enable multiple VMs to have simultaneous, direct access to a single physical GPU. | 20.0 |
| NVIDIA Container Toolkit | Enables GPU-accelerated containers by providing runtime components and utilities for container engines (Docker, containerd, CRI-O). | 1.19.0 |
| NVIDIA Run:ai [1] | Kubernetes-native orchestration and management platform that maximizes GPU utilization for AI workloads through advanced scheduling and resource management. | 2.24 |
| NVIDIA DPU Operator (DPF) | Enables cluster administrators to automate provisioning, orchestration, and lifecycle management of BlueField DPUs and DOCA Microservices to enable DPU-accelerated North-South networking in Kubernetes. | 25.10.1 |
| NVIDIA GPU Operator | Simplifies deployment of NVIDIA AI Enterprise by automating management of all NVIDIA software components needed to provision GPUs in Kubernetes. | 26.3.0 |
| NVIDIA Network Operator | Simplifies deployment of high-speed networking by automating management of NVIDIA ConnectX NICs and SuperNICs required to optimize East-West traffic and RDMA transfers in Kubernetes. | 26.1.0 |
| NVIDIA NIM Operator | Enables cluster administrators to operate the software components and services required to run LLM, embedding, and other models using NVIDIA NIM microservices in Kubernetes. | 3.1.0 |
| NVIDIA Base Command Manager (BCM) | Cluster management and provisioning tool for NVIDIA DGX systems. | 11.32.1 |
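Several of the components above (GPU Operator, Container Toolkit, MIG support) are typically deployed together via the GPU Operator's Helm chart. A hedged sketch of a values-file override follows; the flag names match the public gpu-operator chart, but verify them against the chart version you actually deploy:

```yaml
# values.yaml -- illustrative overrides for the gpu-operator Helm chart
driver:
  enabled: true     # set false on nodes with a pre-installed host GPU driver
toolkit:
  enabled: true     # deploys the NVIDIA Container Toolkit on worker nodes
mig:
  strategy: single  # "single" or "mixed"; relevant for MIG-backed configurations
```

Applied with something like `helm install gpu-operator nvidia/gpu-operator -n gpu-operator --create-namespace -f values.yaml`, the operator then manages the driver, toolkit, and device-plugin lifecycle on each GPU node.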

Compatibility and Support

Interactive Support Matrix

A web-based matrix of infrastructure compatibility across releases 7.0–8.0, offering cross-release comparison, filters (deployment type, OS, hypervisor, orchestration), search (GPU architecture, platform, Kubernetes distribution, cloud provider), footnote tooltips, and per-configuration release badges.

Access the interactive tool at: https://docs.nvidia.com/ai-enterprise/release-8/latest/support-matrix/

The traditional static support matrix remains available for deep linking, printing, and offline reference at NVIDIA AI Enterprise Infrastructure Support Matrix.

Support Matrix Contents

Both the interactive tool and static reference cover:

  • Supported GPU architectures

  • Operating system compatibility

  • Hypervisor and orchestration platform versions

  • Cloud provider instance types

  • Networking hardware

Interactive Lifecycle and Compatibility Explorer

Available in the NVIDIA AI Enterprise Lifecycle Policy documentation: query by Infrastructure Branch, by release, or by component type and version, or run a full-stack check from a GPU driver version to validate compatibility and plan upgrades.

Footnotes