Release Notes#
These release notes describe NVIDIA AI Enterprise Infrastructure Release 8.1. Use them to identify the supported infrastructure software components and versions in this release, review compatibility and support information, and locate the per-component release notes for new features, fixed issues, and known limitations.
Note
NVIDIA AI Enterprise Infra 8.1 is a Production Branch (PB) following the standard release cadence with feature additions and quarterly support. For branch lifecycle, support windows, and Long-Term Support Branch options, refer to the NVIDIA AI Enterprise Lifecycle Policy.
For deployment guidance, refer to the Quick Start Guide. For a full list of supported platforms, hypervisors, operating systems, and orchestration software, refer to the Support Matrix or the interactive support matrix linked under Compatibility and Support. If you are upgrading from 8.0, refer to Upgrading from 8.0 to 8.1.
Latest Release Highlights
NVIDIA Run:ai SaaS Now Included — In addition to NVIDIA Run:ai self-hosted, the NVIDIA-managed NVIDIA Run:ai SaaS offering is now included in the NVIDIA AI Enterprise license under the same enterprise SLA, so you can choose the deployment model that fits your environment. Refer to the NVIDIA Run:ai SaaS Documentation for details on the NVIDIA-managed cloud-service option.
NVIDIA Run:ai 2.25 — Updated from 2.24 in 8.0. The same 2.25 release applies to both NVIDIA Run:ai self-hosted and NVIDIA Run:ai SaaS. Refer to the NVIDIA Run:ai release notes for scheduling, GPU-utilization, and platform updates in this version.
NVIDIA Data Center GPU Driver 595.71.05 — Maintenance update within the R595 production driver branch (from 595.58.03 in 8.0). Refer to the 595.71.05 release notes for fixes and platform-support details.
NVIDIA vGPU Software 20.1 — NVIDIA Virtual GPU Manager and the NVIDIA vGPU for Compute Guest Driver are both updated from 20.0 to 20.1, refreshing the full vGPU stack in a coordinated release. New in 8.1 for vGPU for Compute:
Newly supported hypervisor: Ubuntu 26.04 LTS
Newly supported guest operating systems:
SUSE Linux Enterprise Server 15 SP6, 15 SP7, and 16
Ubuntu 26.04 LTS
Kubernetes Operator Patch Updates — NVIDIA GPU Operator 26.3.1 (from 26.3.0 in 8.0) and NVIDIA Network Operator 26.1.1 (from 26.1.0 in 8.0). NVIDIA DPU Operator (DPF) 25.10.1, NVIDIA NIM Operator 3.1.0, and NVIDIA Container Toolkit 1.19.0 carry forward from 8.0.
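As a hedged illustration, patch-level operator updates of this kind are typically applied with Helm. The release names, namespaces, and chart versions below are assumptions; confirm them with `helm list -A` and each operator's release notes before upgrading:

```shell
# Sketch of a patch-level operator upgrade with Helm. Release names
# (gpu-operator, network-operator) and namespaces are assumptions --
# verify against your own installation before running.
helm repo update

helm upgrade gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator \
  --version v26.3.1

helm upgrade network-operator nvidia/network-operator \
  --namespace nvidia-network-operator \
  --version 26.1.1
```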
Important
Deprecations and Removals
NVIDIA Tesla V100 (Volta architecture) is no longer supported for vGPU for Compute starting in NVIDIA AI Enterprise Infra 8.0. All Volta-based GPUs — Tesla V100 SXM2 (16 GB / 32 GB), Tesla V100 PCIe (16 GB / 32 GB), Tesla V100S PCIe 32 GB, and Tesla V100 FHHL — are removed from the supported vGPU GPU list as of 8.0 and from the NVIDIA Virtual GPU Manager release that ships with 8.0. Customers running Volta-class GPUs should migrate to Turing-class or later GPUs (T4, A-series, L-series, H-series, B-series), or remain on an NVIDIA AI Enterprise Infra LTSB branch (7.5 LTSB or 4.10 LTSB) where these GPUs continue to be supported.
Five workstation-class GPUs are no longer supported starting in NVIDIA AI Enterprise Infra 8.0. The following GPUs are removed from the supported GPU list as of 8.0: NVIDIA RTX 4000 SFF Ada Generation (Ada Lovelace), NVIDIA RTX A4000 (Ampere), NVIDIA Quadro RTX 8000 (Turing), NVIDIA Quadro RTX 6000 (Turing), and NVIDIA Quadro RTX 4000 (Turing). They remain supported on NVIDIA AI Enterprise Infra LTSB branches (7.5 LTSB and 4.10 LTSB). Customers running these cards should migrate to a currently supported workstation or data center GPU, or remain on an NVIDIA AI Enterprise Infra LTSB branch.
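Before planning a migration off a deprecated GPU, it can help to inventory what is actually installed. A minimal check with `nvidia-smi`, available wherever the GPU driver is installed:

```shell
# List each GPU's product name and the installed driver version so that
# deprecated models (e.g. Tesla V100, Quadro RTX 6000/8000) stand out.
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```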
What is Included in NVIDIA AI Enterprise Infra 8.1#
Complete list of infrastructure components with versions and documentation links:
| Component | Description | Version |
|---|---|---|
| NVIDIA Data Center GPU Driver | Provides hardware support for NVIDIA GPUs. | 595.71.05 |
| NVIDIA DOCA Driver for Networking | Provides hardware support for NVIDIA BlueField DPUs and SuperNICs. Installing DOCA on the host provides all necessary drivers and tools to manage BlueField and ConnectX devices. | |
| NVIDIA Fabric Manager (integrated into NVIDIA AI Enterprise drivers) | Manages NVSwitch fabric to enable high-bandwidth, low-latency GPU-to-GPU communication for multi-GPU AI workloads. Starting with NVIDIA AI Enterprise Infra 8.0, Fabric Manager binaries are included in the NVIDIA AI Enterprise drivers and no longer require separate installation. | Included with the GPU Driver |
| NVIDIA DOCA Microservices | Infrastructure acceleration and offload services for NVIDIA BlueField, enabling accelerated networking, storage, and security workloads. | |
| NVIDIA Virtual GPU Manager | GPU driver deployed alongside the hypervisor in virtualized environments. Enables multi-tenant GPU sharing, live migration, and monitoring. | 20.1 |
| NVIDIA vGPU for Compute Guest Driver [1] | GPU driver deployed in the guest VM to enable multiple VMs to have simultaneous, direct access to a single physical GPU. | 20.1 |
| NVIDIA Container Toolkit | Enables GPU-accelerated containers by providing runtime components and utilities for container engines (Docker, containerd, CRI-O). | 1.19.0 |
| NVIDIA Run:ai | Provides a Kubernetes-native orchestration and management platform that maximizes GPU utilization for AI workloads through advanced scheduling and resource management. | 2.25 |
| NVIDIA DPU Operator (DPF) | Enables cluster administrators to automate provisioning, orchestration, and lifecycle management of BlueField DPUs and DOCA Microservices to enable DPU-accelerated North-South networking in Kubernetes. | 25.10.1 |
| NVIDIA GPU Operator | Simplifies deployment of NVIDIA AI Enterprise by automating management of all NVIDIA software components needed to provision GPUs in Kubernetes. | 26.3.1 |
| NVIDIA Network Operator | Simplifies deployment of high-speed networking by automating management of NVIDIA ConnectX NICs and SuperNICs required to optimize East-West traffic and RDMA transfers in Kubernetes. | 26.1.1 |
| NVIDIA NIM Operator | Enables cluster administrators to operate the software components and services required to run LLM, embedding, and other models using NVIDIA NIM microservices in Kubernetes. | 3.1.0 |
| NVIDIA Base Command Manager (BCM) | Cluster management and provisioning tool for NVIDIA DGX systems. | |
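As one concrete illustration of the Container Toolkit's role, a GPU container smoke test with Docker might look like the sketch below. The CUDA image tag is an assumption; substitute one that matches your installed driver branch:

```shell
# Verify that the Container Toolkit exposes GPUs to containers.
# The image tag is illustrative; any CUDA base image compatible with
# the installed driver will do.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```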
Upgrading from 8.0 to 8.1#
Use this checklist when upgrading existing 8.0 deployments to 8.1. Run each step in a maintenance window and validate before proceeding.
| Step | Action | Reference |
|---|---|---|
| 1 | Read the component version delta in the table above and review per-component release notes for breaking changes, deprecations, and feature additions. | Component release note links above |
| 2 | Confirm hardware, hypervisor (for virtualized deployments), and operating-system compatibility against the 8.1 support matrix. | Support Matrix |
| 3 | In virtualized environments, snapshot or back up VMs and capture current driver, operator, and licensing configuration before upgrading. Document the rollback path for each component. | N/A |
| 4 | Upgrade the host software for your deployment type: the NVIDIA Data Center GPU Driver (bare-metal deployments), the NVIDIA Virtual GPU Manager (vGPU for Compute deployments only), and the NVIDIA DOCA Driver. Fabric Manager is included in the NVIDIA AI Enterprise drivers and updates with the GPU Driver package. | |
| 5 | Upgrade NVIDIA GPU Operator, Network Operator, NIM Operator, DPU Operator (DPF), and NVIDIA Run:ai (self-hosted) following each operator’s documented upgrade path. NVIDIA Run:ai SaaS upgrades are managed by NVIDIA and require no customer-side upgrade. | Operator release note links above |
| 6 | For virtualized deployments, upgrade the NVIDIA Data Center GPU Driver (GPU passthrough), the NVIDIA vGPU for Compute Guest Driver (vGPU deployments), and/or the NVIDIA Container Toolkit in tenant VMs. | |
| 7 | Confirm each licensed vGPU VM reaches the NVIDIA License System and shows a valid license. | |
| 8 | Run a representative CUDA or AI/ML workload to confirm performance, feature parity, and operator-managed scheduling behavior. | N/A |
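The verification steps in the checklist above can be sketched as a few spot checks. The `gpu-operator` namespace is an assumption and may differ in your cluster:

```shell
# Host: confirm the expected driver version is loaded.
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Kubernetes: confirm operator-managed pods are healthy
# (the namespace is an assumption -- check with `helm list -A`).
kubectl get pods -n gpu-operator

# vGPU guest VM: confirm the license state reported by the driver.
nvidia-smi -q | grep -i license
```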
For lifecycle policy, branch support windows, and migration windows beyond 8.x, refer to the NVIDIA AI Enterprise Lifecycle Policy.
Compatibility and Support#
Support Matrix
Use the Interactive Support Matrix to compare NVIDIA AI Enterprise infrastructure compatibility across releases 4.4 through 8.1. The web tool lets you:
Filter by deployment type, operating system, hypervisor, or orchestration platform.
Search by GPU architecture, platform, Kubernetes distribution, or cloud provider.
Inspect per-configuration release badges and footnote tooltips.
For deep linking, printing, or offline reference, the same information is also available as the static Support Matrix. Both forms cover supported GPU architectures, operating systems, hypervisor and orchestration platform versions, cloud provider instance types, and networking hardware.
Lifecycle and Compatibility Explorer
Use the Interactive Lifecycle and Compatibility Explorer in the NVIDIA AI Enterprise Lifecycle Policy documentation to:
Query by Infrastructure Branch, by release, or by component type and version.
Run a full-stack check from a GPU driver version to validate compatibility and plan upgrades.
Footnotes