NVIDIA AI Enterprise Software#
Getting Started#
📦 What’s in my license? → Refer to Overview for a high-level architecture of the Application and Infrastructure Layers
🧰 Application Layer components → Jump to Application Layer Software for AI frameworks, NVIDIA NIMs, SDKs, NVIDIA Omniverse, and pre-trained models
🔧 Infrastructure Layer components → Jump to Infrastructure Layer Software for GPU drivers, Kubernetes operators, NVIDIA Run:ai, and cluster management tools
📋 Check supported configurations → Refer to the Infrastructure Support Matrix to verify hardware and software compatibility for your deployment
🆕 What’s New in NVIDIA AI Enterprise Software#
Software Documentation Updates
NVIDIA Omniverse added to Application Layer — NVIDIA Omniverse is now listed as an Application Layer component with Feature Branch (FB) and Production Branch (PB) releases. NVIDIA Omniverse provides 3D collaboration, simulation, OpenUSD workflows, and digital twin capabilities. Included in your NVIDIA AI Enterprise license.
NVIDIA Run:ai added to Infrastructure Layer — NVIDIA Run:ai (self-hosted) is now listed as an Infrastructure Layer component for GPU orchestration and workload scheduling on Kubernetes. Included in your NVIDIA AI Enterprise license (SaaS is separate).
Decoupled Layers — Application and Infrastructure Layers follow independent release cadences. You can adopt new AI frameworks without changing your infrastructure, and upgrade drivers without affecting application workloads.
Interactive Support Matrix — Infrastructure software support matrices for the 7.x branch are now available as an interactive tool with filtering, version comparison, and dynamic badges.
Before You Begin#
The NVIDIA AI Enterprise Software documentation maps every component included in your license. Here’s where it fits in your journey and how to use it.
✅ What you should have completed before arriving here:
Account activated — You’ve redeemed your Entitlement Certificate, created your NVIDIA Enterprise Account, and have access to the NVIDIA NGC Catalog, the Support Portal, and the Licensing Portal. If not, start with the NVIDIA AI Enterprise Quick Start Guide.
Infrastructure running (or planned) — You’ve installed NVIDIA GPU drivers and the NVIDIA Container Toolkit, or you’re planning your deployment. If not, the Quick Start Guide covers three deployment paths: Bare Metal, Virtualized (vGPU), and Public Cloud.
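If you want to sanity-check those prerequisites from a shell before continuing, the sketch below reports whether the NVIDIA GPU driver and NVIDIA Container Toolkit are visible on the current host. It is an illustrative check, not part of the official guide; run it on the target machine.

```shell
# Report whether the NVIDIA GPU driver and Container Toolkit are installed.
# Prints one status line per component; safe to run on hosts without a GPU.
check_nvidia_stack() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    # Driver present: print GPU name and driver version
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
  else
    echo "driver: nvidia-smi not found (install the NVIDIA GPU driver)"
  fi
  if command -v nvidia-ctk >/dev/null 2>&1; then
    # Container Toolkit present: print its version
    nvidia-ctk --version
  else
    echo "toolkit: nvidia-ctk not found (install the NVIDIA Container Toolkit)"
  fi
}
check_nvidia_stack
```

If either line reports "not found", follow the installation steps in the Quick Start Guide for your deployment path before proceeding.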
📍 What to do here (NVIDIA AI Enterprise Software):
Discover what’s included — Review the Overview to understand the two-layer architecture and how Application and Infrastructure components work together.
Browse Application Layer components — Use the Application Layer Software tables to find AI frameworks, NVIDIA NIMs, domain SDKs, NVIDIA Omniverse, and pre-trained models — with direct links to NGC Catalog entries and product documentation.
Browse Infrastructure Layer components — Use the Infrastructure Layer Software tables to find GPU drivers, Kubernetes operators (NVIDIA GPU Operator, NVIDIA Network Operator, NVIDIA NIM Operator), NVIDIA Run:ai, and cluster management tools.
Check supported configurations — Use the Infrastructure Support Matrix to verify that your hardware, OS, hypervisor, and orchestration platform are supported for your target infrastructure release.
Find NGC Catalog links — Every component table includes direct links to the corresponding NGC Catalog page for downloading containers, Helm charts, and collections.
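The compatibility check in the steps above can also be automated if you capture the relevant support-matrix rows in a machine-readable form. The sketch below uses placeholder release and version numbers (not real supported versions from the Infrastructure Support Matrix); the point is the lookup pattern, not the data.

```python
# Hypothetical, illustrative support matrix: maps an infrastructure release
# to the component versions validated for it. All versions are placeholders.
SUPPORT_MATRIX = {
    "7.x": {
        "gpu_driver": {"580.65", "570.124"},
        "gpu_operator": {"25.3.0"},
        "network_operator": {"25.1.0"},
    },
}

def is_supported(release: str, component: str, version: str) -> bool:
    """Return True if `version` of `component` is validated for `release`."""
    return version in SUPPORT_MATRIX.get(release, {}).get(component, set())

# Example: flag any component in a planned stack that is not in the matrix.
planned = {"gpu_driver": "580.65", "gpu_operator": "25.3.0"}
unsupported = {c: v for c, v in planned.items()
               if not is_supported("7.x", c, v)}
print(unsupported)  # an empty dict means every planned version is validated
```

Always confirm against the published Infrastructure Support Matrix itself; a local copy like this can drift out of date.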
➡️ Where to go next:
| If you need to… | Go to | Why |
|---|---|---|
| Start: Get started from scratch (account, drivers, first workload) | NVIDIA AI Enterprise Quick Start Guide | Guides you through account activation, software installation, and running your first AI workload |
| Discover: Browse available software components | This document | Complete catalog of Application and Infrastructure Layer components with links to NGC Catalog entries |
| Research: Choose a release branch or check version compatibility | Release Branches | Defines branch types (FB, PB, LTSB, Infrastructure), support periods, and includes the Interactive Lifecycle and Compatibility Explorer |
| Plan: Plan a deployment architecture | Planning & Deployment on the NVIDIA AI Enterprise Docs Hub | Reference architectures, sizing guides, and deployment blueprints |
| Validate: Validate your infrastructure stack | Infrastructure Support Matrix | Confirm that your NVIDIA GPU driver, NVIDIA GPU Operator, NVIDIA Network Operator, and NVIDIA Run:ai versions are compatible before deploying |
| Application layer: Deploy or configure NVIDIA Omniverse | NVIDIA Omniverse Documentation | Standalone documentation site; NVIDIA Omniverse is included in your NVIDIA AI Enterprise license but is documented separately |
| Kubernetes operators: Deploy the NVIDIA GPU Operator, NVIDIA Network Operator, or NVIDIA DPU Operator (DPF) | NVIDIA GPU Operator Documentation · NVIDIA Network Operator Documentation · NVIDIA DOCA Platform Framework (DPF) Documentation | Standalone documentation sites for Kubernetes operator installation and configuration |
| AI frameworks and NIMs: Set up NVIDIA NIM, NVIDIA NeMo, or other AI frameworks | NVIDIA NIM Documentation · NVIDIA NeMo Documentation · NVIDIA NGC Catalog | Standalone documentation sites; the NVIDIA NGC Catalog lists all supported software |
| Infrastructure orchestration: Deploy NVIDIA Run:ai | NVIDIA Run:ai Documentation | Standalone documentation site; NVIDIA Run:ai self-hosted is included in your NVIDIA AI Enterprise license (NVIDIA Run:ai SaaS is separate) |
| Support: Check licensing or open a support case | Support on the NVIDIA AI Enterprise Docs Hub | NVIDIA Enterprise Support Services portal for licensing and technical support |
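For the Kubernetes operator rows above, installation is typically driven by Helm from NVIDIA's chart repository. The sketch below shows the common NVIDIA GPU Operator flow, guarded so it reports rather than fails on hosts without Helm or cluster access; treat the chart name, namespace, and flags as assumptions and confirm them against the NVIDIA GPU Operator documentation for your release.

```shell
# Sketch: install the NVIDIA GPU Operator via Helm.
# Guards make this a no-op status report where helm or a cluster is absent.
install_gpu_operator() {
  if ! command -v helm >/dev/null 2>&1; then
    echo "helm not found: install Helm and configure kubectl access first"
    return 0
  fi
  if ! kubectl cluster-info >/dev/null 2>&1; then
    echo "no Kubernetes cluster reachable: configure kubectl first"
    return 0
  fi
  # Add NVIDIA's Helm repository and install into a dedicated namespace
  helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
  helm repo update
  helm install --wait gpu-operator \
    --namespace gpu-operator --create-namespace \
    nvidia/gpu-operator
}
install_gpu_operator
```

The Network Operator and DPF follow a similar Helm-based pattern; see their standalone documentation sites for chart names and per-release values.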