NVIDIA AI Enterprise Lifecycle Policy#
Getting Started#
📋 Which branch should I use? → Refer to Choosing the Right Release Branch for a decision guide with industry-specific recommendations
🔍 Check version compatibility → Use the Interactive Lifecycle and Compatibility Explorer (details below)
📦 View active releases → Jump to Application Layer Software or Infrastructure Layer Software for current and archived release branches
⚠️ Check EOL dates → Review End of Life Notices to plan migrations before branches reach end-of-support
Interactive Lifecycle and Compatibility Explorer#
🔍 Validate Your Infrastructure Stack Before You Deploy
The Interactive Lifecycle and Compatibility Explorer is a web-based tool. Use it to answer critical questions before upgrading, deploying, or planning your infrastructure. The explorer replaces manual cross-referencing of compatibility tables across multiple pages. Instead of checking driver release notes, operator documentation, and NVIDIA Run:ai support matrices separately, you get a single answer in seconds.
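The kind of lookup the explorer automates can be sketched as a simple matrix query. Everything below is illustrative: the branch numbers, component versions, and matrix entries are invented placeholders, not published NVIDIA compatibility data.

```python
# Hypothetical compatibility matrix: each entry pairs a GPU driver branch with
# the operator and NVIDIA Run:ai versions validated against it.
# All version numbers here are illustrative placeholders, NOT official data.
COMPAT_MATRIX = {
    "570": {"gpu_operator": {"25.3"}, "network_operator": {"25.4"}, "runai": {"2.20"}},
    "580": {"gpu_operator": {"25.10"}, "network_operator": {"25.7"}, "runai": {"2.21"}},
}

def stack_is_compatible(driver_branch: str, **components: str) -> bool:
    """Return True if every named component version is validated for the driver branch."""
    validated = COMPAT_MATRIX.get(driver_branch)
    if validated is None:
        return False
    return all(version in validated.get(name, set())
               for name, version in components.items())

print(stack_is_compatible("580", gpu_operator="25.10", runai="2.21"))  # True with this placeholder matrix
print(stack_is_compatible("570", gpu_operator="25.10"))                # False: not validated here
```

The explorer performs this check across the real, maintained matrix, which is why it replaces manual cross-referencing of separate release notes and support matrices.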
🆕 What’s New in Lifecycle Policy#
Lifecycle Policy Updates
NVIDIA Omniverse added to Application Layer — NVIDIA Omniverse is now documented in the NVIDIA AI Enterprise Lifecycle Policy as an Application Layer component with Feature Branch (FB) and Production Branch (PB) releases starting with PB 25h2. NVIDIA Omniverse PB is accessible to NVIDIA Developer Program members for research and testing, with enterprise support reserved for paid subscribers.
NVIDIA Run:ai added to Infrastructure Layer — NVIDIA Run:ai (self-hosted) is now documented in the NVIDIA AI Enterprise Lifecycle Policy as an Infrastructure Layer component. NVIDIA Run:ai follows a phase-based lifecycle model: 12 months in the Supported phase, followed by 6 months in the Deprecated phase (critical fixes only; 18 months in total). NVIDIA Run:ai SaaS is not covered under the NVIDIA AI Enterprise license.
Interactive Lifecycle and Compatibility Explorer — A new web-based tool embedded in the Lifecycle Policy page that consolidates infrastructure compatibility checks into a single interface. Refer to the dedicated section above for details.
Decoupled Application and Infrastructure Layers — Starting with NVIDIA AI Enterprise 4.0, application software (NVIDIA NIMs, SDKs, frameworks) and infrastructure software (drivers, operators) follow independent release cadences and branch types.
Infrastructure Stack Alignment Diagram — A visual diagram showing how GPU Drivers, NVIDIA GPU Operator, NVIDIA Network Operator, and NVIDIA Run:ai versions align across infrastructure branches, with staggered release cadences and overlap windows.
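The 12-months-Supported plus 6-months-Deprecated model described above reduces to simple date arithmetic. The sketch below uses an invented GA date for illustration; it is not an actual NVIDIA Run:ai release date, and the month math ignores the day of month for simplicity.

```python
from datetime import date

SUPPORTED_MONTHS = 12   # full support, per the phase-based model above
DEPRECATED_MONTHS = 6   # critical fixes only; 18 months total

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, returning the 1st of the resulting month."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

def lifecycle_phase(ga: date, today: date) -> str:
    """Classify a release as Supported, Deprecated, or End of Life."""
    if today < add_months(ga, SUPPORTED_MONTHS):
        return "Supported"
    if today < add_months(ga, SUPPORTED_MONTHS + DEPRECATED_MONTHS):
        return "Deprecated"
    return "End of Life"

ga = date(2025, 1, 1)  # hypothetical GA date, for illustration only
print(lifecycle_phase(ga, date(2025, 6, 1)))   # Supported
print(lifecycle_phase(ga, date(2026, 3, 1)))   # Deprecated
print(lifecycle_phase(ga, date(2026, 8, 1)))   # End of Life
```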
Before You Begin#
The Lifecycle Policy sits between initial setup and production deployment. The sections below explain where it fits and how to use it, depending on where you are in that process.
✅ What you should have completed before arriving here:
Account activated — You’ve redeemed your Entitlement Certificate, created your NVIDIA Enterprise Account, and have access to NVIDIA NGC, the Support Portal, and the Licensing Portal. If not, start with the Quick Start Guide.
Infrastructure running — You’ve installed GPU drivers, the NVIDIA Container Toolkit, and optionally Kubernetes operators. You’ve verified GPU access with a basic workload. If not, the Quick Start Guide covers three deployment paths: Bare Metal, Virtualized (vGPU), and Public Cloud.
Know what’s included — You’ve reviewed the NVIDIA AI Enterprise Software document and understand that your NVIDIA AI Enterprise license includes the full Application Layer (AI frameworks, NVIDIA NIMs, SDKs, NVIDIA Omniverse) and Infrastructure Layer (drivers, operators, NVIDIA Run:ai self-hosted). If not, review the Software document first — it maps every component across both layers.
📍 What to do here (Lifecycle Policy):
Choose your release branch — Use Choosing the Right Release Branch to determine whether Feature Branch (FB), Production Branch (PB), or Long-Term Support Branch (LTSB) fits your use case, industry, and update cadence.
Validate your infrastructure stack — Use the Interactive Lifecycle and Compatibility Explorer to confirm that your NVIDIA GPU driver, NVIDIA GPU Operator, NVIDIA Network Operator, and NVIDIA Run:ai versions are compatible before deploying.
Plan your upgrade cycle — Review the Infrastructure Layer Software stack alignment diagram to understand how components release on staggered cadences and where overlap windows allow zero-downtime upgrades.
Check support timelines — Verify that your current versions are still under Supported or Deprecated, and review End of Life Notices for any upcoming deprecations.
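One way to reason about the overlap windows mentioned above is to intersect the support intervals of the outgoing and incoming branches: the upgrade must land inside the window where both are supported. The dates below are invented for illustration, not real branch timelines.

```python
from datetime import date

def overlap_window(a_start: date, a_end: date, b_start: date, b_end: date):
    """Return the (start, end) window where both branches are supported, or None."""
    start, end = max(a_start, b_start), min(a_end, b_end)
    return (start, end) if start < end else None

# Hypothetical support windows for an outgoing and an incoming branch.
current = (date(2025, 1, 1), date(2026, 1, 1))
next_pb = (date(2025, 7, 1), date(2026, 7, 1))
window = overlap_window(*current, *next_pb)
print(window)  # (datetime.date(2025, 7, 1), datetime.date(2026, 1, 1)): upgrade inside this window
```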
➡️ Where to go next:
| If you need to… | Go to | Why |
|---|---|---|
| Start: Get started from scratch (account, drivers, first workload) | Quick Start Guide | Guides you through account activation, software installation, and running your first AI workload |
| Discover: Browse available software components | NVIDIA AI Enterprise Software | Complete catalog of Application and Infrastructure Layer components with links to NGC Catalog entries |
| Plan: Choose a release branch or check version compatibility | This Document | Defines branch types (FB, PB, LTSB, Infrastructure), support periods, and includes the Interactive Lifecycle and Compatibility Explorer |
| Plan: Plan a deployment architecture | Planning & Deployment on the NVIDIA AI Enterprise Docs Hub | Reference architectures, sizing guides, and deployment blueprints |
| Validate: Validate your infrastructure stack | Interactive Lifecycle and Compatibility Explorer (this document) | Confirm that your NVIDIA GPU driver, NVIDIA GPU Operator, NVIDIA Network Operator, and NVIDIA Run:ai versions are compatible before deploying |
| Application layer: Deploy or configure NVIDIA Omniverse | NVIDIA Omniverse Documentation | Standalone documentation site — NVIDIA Omniverse is included in your NVIDIA AI Enterprise license, but is documented separately |
| Kubernetes operators: Deploy NVIDIA GPU Operator, NVIDIA Network Operator, or NVIDIA DPU Operator (DPF) | NVIDIA GPU Operator Documentation · NVIDIA Network Operator Documentation · NVIDIA DOCA Platform Framework (DPF) Documentation | Standalone documentation sites for Kubernetes operator installation and configuration |
| AI frameworks and NIMs: Set up NVIDIA NIM, NVIDIA NeMo, or other AI frameworks | NVIDIA NIM Documentation · NVIDIA NeMo Documentation · NVIDIA NGC Catalog | Standalone documentation sites; the NVIDIA NGC Catalog lists all supported software |
| Infrastructure orchestration: Deploy NVIDIA Run:ai | NVIDIA Run:ai Documentation | Standalone documentation site — NVIDIA Run:ai self-hosted is included in your NVIDIA AI Enterprise license (NVIDIA Run:ai SaaS is separate) |
| Support: Check licensing or open a support case | Support on the NVIDIA AI Enterprise Docs Hub | NVIDIA Enterprise Support Services portal for licensing and technical support |