NVIDIA AI Enterprise is an end-to-end software platform for developing, deploying, and managing AI applications across cloud, data center, and edge environments. It delivers AI frameworks, NIM microservices, and SDKs through its Application Layer, and GPU drivers, Kubernetes operators, and cluster management tools through its Infrastructure Layer — each with independent release branches and lifecycle — plus enterprise support backed by SLAs.
Highlights
NVIDIA AI Enterprise is organized into two independently versioned software layers. The Application Software layer provides AI frameworks, SDKs, NVIDIA NIM microservices, NeMo, and pre-trained models from NVIDIA, the open-source community, and partners. The Infrastructure Software layer delivers GPU, networking, and virtualization drivers, Kubernetes operators, and cluster management tools. Together, the two layers form a composable, full-stack AI platform that deploys across cloud, data center, edge, and workstation environments.
Where to Go Next
Find the right documentation based on where you are in your journey:
| If you need to… | Go to | Why |
|---|---|---|
| Start: Set up your account, install drivers, and run your first workload | NVIDIA AI Enterprise Quick Start Guide | Walks you through account activation, driver installation, and running your first GPU-accelerated workload |
| Discover: Browse the software components included in your license | NVIDIA AI Enterprise Software | Lists every component in your license with version details and direct links to NGC Catalog entries |
| Research: Choose a release branch, review support timelines, and validate version compatibility | NVIDIA AI Enterprise Lifecycle Policy | Compares release branch types (FB, PB, LTSB, Infrastructure), support timelines, and EOL schedules |
| Plan: Design your deployment for bare metal, virtualized, or cloud | Planning & Deployment | Provides reference architectures and step-by-step deployment guides for bare metal, virtualized, or cloud environments |
| Serve: Deploy models and configure inference and training runtimes | NVIDIA NIM · NVIDIA NeMo · NVIDIA NGC Catalog | Covers model deployment, API references, and container configuration for optimized inference and training workloads |
| Orchestrate: Deploy and manage GPU, network, and DPU operators in Kubernetes | NVIDIA GPU Operator · NVIDIA Network Operator · NVIDIA DOCA Platform Framework (DPF) | Covers installation, Helm chart configuration, and upgrade procedures for GPU, network, and DPU orchestration in Kubernetes |
| Support: Check licensing or open a technical support case | Support | Provides support tier details, SLA response times, and instructions for opening a technical support case |
Start here to onboard, explore, and plan your NVIDIA AI Enterprise deployment.
Walks you through the end-to-end onboarding process for NVIDIA AI Enterprise — from receiving your entitlement certificate and registering your NVIDIA Enterprise Account, to accessing the NGC Catalog and installing software components on bare metal, virtualized, or public cloud infrastructure. Includes steps for linking evaluation accounts, generating NGC API keys, and verifying GPU-accelerated containers are operational.
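The last onboarding step — verifying that GPU-accelerated containers are operational — can be sketched as follows. This is a minimal example, assuming Docker and the NVIDIA Container Toolkit are already installed, your NGC API key is in the `NGC_API_KEY` environment variable, and the CUDA image tag shown is illustrative (pick one matching your driver branch):

```shell
# Log in to the NVIDIA container registry (nvcr.io) with your NGC API key.
# The username is the literal string "$oauthtoken"; the API key is the password.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Run a CUDA base container and call nvidia-smi from inside it.
# If the GPUs are listed, container GPU access is working.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` inside the container reports the same GPUs as the host, the driver, runtime, and registry access are all functioning and you can proceed to pull NVIDIA AI Enterprise containers from the NGC Catalog.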
Catalogs the Application Layer and Infrastructure Layer components included in your NVIDIA AI Enterprise license, linking each to its NGC Catalog entry and product documentation. Explains the two-layer architecture, independent release cadences, and provides infrastructure support matrices for verifying hardware and software compatibility across deployment types.
Defines the release branch strategy, support timelines, and end-of-life schedule for every NVIDIA AI Enterprise component — from choosing between Feature Branch (FB), Production Branch (PB), and Long-Term Support Branch (LTSB) releases, to validating cross-stack compatibility using the Interactive Lifecycle and Compatibility Explorer. Includes infrastructure stack alignment diagrams, EOL notices with migration paths, and archived branch references.
Application Layer Software
Application software includes AI frameworks, NVIDIA NIM microservices, domain SDKs, and pre-trained models — all included in your NVIDIA AI Enterprise license. Production Branches (PB) deliver production-ready AI frameworks with 9-month support. Long-Term Support Branches (LTSB) provide 36 months of API stability for highly regulated industries. Refer to the Lifecycle Policy, including the Choosing the Right Release Branch section, for more information.
For the full list of Application Layer components with NGC Catalog links and version details, refer to the Application Layer Software page.
Active Release Branches
| Software Branch | Compatible Infra | First Release | Last Release | Planned EOL |
|---|---|---|---|---|
| Production Branch - October 2025 (PB 25h2) | Infra Release 7 | October 2025 | June 2026 | July 2026 |
| Long-Term Support Branch 2 (LTSB 2) | Infra Release 4 and 7 | November 2024 | August 2027 | October 2027 |
Archived Release Branches (End of Life)
| Software Branch | Compatible Infra | First Release | Last Release | Planned EOL |
|---|---|---|---|---|
| Production Branch - May 2025 (PB 25h1) | Infra Release 6 and 7 | May 2025 | December 2025 | January 2026 |
| Production Branch - October 2024 (PB 24h2) | Infra Release 5 and 6 | October 2024 | June 2025 | July 2025 |
| Production Branch - May 2024 (PB 24h1) | Infra Release 4 and 5 | May 2024 | December 2024 | January 2025 |
| Production Branch - October 2023 (PB 23h2) | Infra Release 3 | October 2023 | June 2024 | July 2024 |
| Long-Term Support Branch 1 (LTSB 1) | Infra Release 1 | August 2021 | February 2024 | June 2024 |
Infrastructure Layer Software
Infrastructure software includes GPU drivers, Kubernetes operators (NVIDIA GPU Operator, NVIDIA Network Operator, NVIDIA NIM Operator), NVIDIA Container Toolkit, and cluster management tools — all included in your NVIDIA AI Enterprise license. Infrastructure Branches provide regular updates with 1-year support windows. Branches designated as Long-Term Support (LTSB) receive extended 3-year support. Refer to the Lifecycle Policy, including the Choosing the Right Release Branch section, for more information.
For the full list of Infrastructure Layer components with NGC Catalog links, version details, and the stack alignment diagram, refer to the Infrastructure Layer Software page.
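As an illustration of how the Kubernetes pieces of the Infrastructure Layer are typically installed, the following sketch deploys the NVIDIA GPU Operator with Helm. It assumes a working cluster and kubeconfig; the release and namespace names are arbitrary, and entitled NVIDIA AI Enterprise charts may come from a different (authenticated) NGC Helm repository than the public one shown here:

```shell
# Add the public NVIDIA Helm repository and refresh the index.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace. By default it also
# manages the GPU driver and NVIDIA Container Toolkit on each GPU node.
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace --wait
```

Refer to the NVIDIA GPU Operator documentation for the chart values that control driver versions, so the deployed driver branch matches the Infrastructure Release you selected above.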
Active Release Branches
| Software Branch | Driver | Latest Release | Latest Update | Planned EOL |
|---|---|---|---|---|
| NVIDIA AI Enterprise Infra 7 (LTSB) | R580 (580.126.09) | 7.4 | January 2026 | July 2028 |
| NVIDIA AI Enterprise Infra 6 (FB, PB) | R570 (570.211.01) | 6.7 | January 2026 | March 2026 |
| NVIDIA AI Enterprise Infra 4 (LTSB) | R535 (535.288.01) | 4.9 | January 2026 | July 2026 |
Archived Release Branches (End of Life)
| Software Branch | Driver | Latest Release | Latest Update | Planned EOL |
|---|---|---|---|---|
| NVIDIA AI Enterprise Infra 5 (FB, PB) | R550 (550.144.02) | 5.3 | January 2025 | April 2025 |
| NVIDIA AI Enterprise Infra 3 (FB, PB) | R525 (525.147.05) | 3.3 | November 2023 | December 2023 |
| NVIDIA AI Enterprise Infra 2 (FB) | R520 (520.61.05) | 2.3 | October 2022 | November 2022 |
| NVIDIA AI Enterprise Infra 1 (LTSB) | R470 (470.256.02) | 1.9 | July 2024 | September 2024 |
Deployment Guides
Step-by-step deployment instructions for installing NVIDIA AI Enterprise on your target infrastructure — whether bare metal, virtualized, cloud, or co-engineered partner platforms. Each guide covers prerequisites, driver and software installation, validation steps, and environment-specific configuration. Select the guide that matches your deployment environment.
Deploy NVIDIA AI Enterprise directly on bare metal servers with step-by-step instructions covering prerequisites, driver installation, licensing, Docker setup, and AI framework configuration. Includes GPU partitioning options, Kubernetes deployment, and GPUDirect Storage setup.
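The Docker-setup portion of a bare metal deployment can be sketched in two commands. This assumes the NVIDIA data center driver and the NVIDIA Container Toolkit package are already installed on the host:

```shell
# Register the NVIDIA runtime with Docker, then restart the daemon
# so the new runtime configuration takes effect.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: the driver should enumerate the host's GPUs.
nvidia-smi
```

With the runtime registered, containers started with `--gpus all` can see the host GPUs, which is the prerequisite for the AI framework configuration steps in the guide.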
Deploy NVIDIA AI Enterprise on VMware vSphere with end-to-end instructions covering ESXi, vCenter, NVIDIA host software, vGPU configuration, and VM creation. Includes Docker setup, AI framework installation, and a CPU-only deployment path.
Deploy NVIDIA AI Enterprise in the cloud with instructions for supported CSPs, instance types, and licensing options. Covers standard instances, NVIDIA Virtual Machine Images, managed Kubernetes, and Red Hat OpenShift across AWS, Google Cloud, Azure, Oracle Cloud, and more.
Deploy the co-engineered Red Hat AI Factory with NVIDIA solution covering hardware requirements, network configuration, NVIDIA AI Enterprise Software integration, and Red Hat OpenShift AI installation. Includes end-to-end guidance through deploying NVIDIA NIM microservices.
Reference Architectures
Validated hardware and software blueprints for designing production-grade AI infrastructure before deployment. Use these reference architectures to determine compute node specifications, networking topologies, and software stack configurations for your target workloads and scale requirements.
Full-stack hardware and software recommendations for building production-grade AI infrastructure, covering compute node hardware, networking topologies, and software stack configurations. Designed for OEMs and partners building single-tenant systems for inference, fine-tuning, and retrieval-augmented generation workloads.
White Papers
Technical white papers covering security, compliance, and architecture guidance for deploying NVIDIA AI Enterprise in government, regulated, and enterprise environments. Use these resources to understand container security practices, VM optimization strategies, and partner integration patterns before finalizing your deployment architecture.
A purpose-built, full-stack architecture that enables federal agencies to deploy secure, scalable AI in mission-critical environments. Integrates NVIDIA accelerated computing, high-performance networking, NVIDIA-Certified Systems, Nemotron models, and government-ready software with a broad partner ecosystem.
A security and compliance framework that introduces a new "Government Ready" baseline for NVIDIA AI Enterprise software, enabling deployment in FedRAMP High and equivalent sovereign environments. Covers hardened containers, distroless images, and Red Hat UBI-STIG images for regulated AI workloads.
A comprehensive overview of the security development lifecycle that protects the NVIDIA AI Enterprise software stack from development through production deployment. Covers container image security, vulnerability scanning, NIM microservice security controls, and continuous monitoring.
A full-stack reference architecture for deploying single-tenant AI solutions from infrastructure provisioning through agentic AI workloads. Covers accelerated computing platforms, high-performance networking, Kubernetes-based deployment, observability, security, and partner integrations.
A practitioner's guide for configuring virtual machines on HGX systems to achieve near bare-metal performance for ML training and AI inference. Covers NUMA-aware VM topology, GPU and NIC passthrough, PCIe and NVLink device mapping, and day-2 best practices for RHEL KVM.
Browse ›NVIDIA AI Enterprise support resources cover enterprise-level support services, product-specific licensing terms, and version lifecycle policies. All NVIDIA AI Enterprise components are covered under the same support services and SLAs.
Enterprise Support and Services
Access NVIDIA Enterprise Support, including support tiers, response times, escalation processes, and how to open a support case. These resources apply to all NVIDIA AI Enterprise licensed products.
Access support tiers, response-time SLAs, escalation processes, and instructions for opening a technical support case. Covers all NVIDIA AI Enterprise licensed products.
Explore available enterprise service offerings, including onboarding assistance, deployment guidance, and strategic advisory services for NVIDIA AI Enterprise environments.
- Business Standard — Access to NVIDIA AI experts during local business hours with SLA-backed response times
- Business Critical (add-on) — 24/7 support coverage with accelerated response times for production issues
- Technical Account Manager (add-on) — Dedicated NVIDIA expert for strategic guidance and deployment planning
Product Support and Licensing
Product-specific licensing agreements, support policies, and version lifecycle timelines for NVIDIA AI Enterprise. Review these to understand support levels, end-of-support dates, and license terms for individual products.
Covers NVIDIA AI Enterprise licensing models, entitlement activation, license server configuration, and subscription management for all deployment types.