Glossary#
Key terms used throughout the NVIDIA AI Enterprise Software documentation.
- Application Layer#
The set of NVIDIA AI Enterprise software components for building and deploying AI solutions, including AI frameworks, SDKs, NVIDIA NIM microservices, NVIDIA Omniverse, and pre-trained models.
- Feature Branch (FB)#
A release branch containing the latest software versions. Designed for development, testing, and research environments with a rapid release cadence.
- Infrastructure Branch#
A release branch for GPU drivers and infrastructure operators, with standard and long-term support options available.
- Infrastructure Layer#
The set of NVIDIA AI Enterprise software components for managing GPU compute resources, including GPU drivers, Kubernetes operators, NVIDIA Run:ai, container runtime tools, and cluster management software.
- Long-Term Support Branch (LTSB)#
A release branch designed for highly regulated industries with extended certification cycles. Provides the longest support window with periodic security patches.
- NVIDIA Base Command Manager#
Cluster management and provisioning software for NVIDIA DGX systems and AI infrastructure at scale.
- NVIDIA Container Toolkit#
A set of runtime components and utilities that enable GPU-accelerated containers across container engines such as Docker, containerd, and CRI-O.
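As a quick check that the toolkit is wired into a container engine, `nvidia-smi` can be run inside a CUDA base container. A minimal sketch in Python, assuming Docker and the toolkit are installed locally; the image tag is illustrative and any CUDA base image will do:

```python
import subprocess

# The --gpus flag is honored by Docker once the NVIDIA Container
# Toolkit is installed; the image tag below is an assumption.
result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.4.1-base-ubuntu22.04", "nvidia-smi"],
    capture_output=True,
    text=True,
)
print(result.stdout if result.returncode == 0 else result.stderr)
```

If the toolkit is configured correctly, the output lists the GPUs visible inside the container.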
- NVIDIA DOCA Microservices#
A set of prebuilt, composable microservices that run natively on NVIDIA BlueField to accelerate AI factories with high-performance networking, cybersecurity, data acceleration, and orchestration.
- NVIDIA DPU Operator (DPF)#
A Kubernetes operator and framework that automates the provisioning, orchestration, and lifecycle management of NVIDIA BlueField DPUs and DOCA Microservices, enabling optimized North-South networking in Kubernetes environments.
- NVIDIA GPU Operator#
A Kubernetes operator that automates the management of all NVIDIA software components needed to provision and manage GPUs in Kubernetes clusters.
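Once the operator's device plugin is running, each GPU node advertises an `nvidia.com/gpu` allocatable resource. A minimal sketch using the official `kubernetes` Python client to confirm this, assuming a reachable cluster and a local kubeconfig:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a pod,
# config.load_incluster_config() would be used instead.
config.load_kube_config()

# Nodes managed by the GPU Operator advertise an "nvidia.com/gpu"
# resource once the device plugin is deployed.
for node in client.CoreV1Api().list_node().items:
    gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```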
- NVIDIA NeMo#
An end-to-end platform for building, customizing, and deploying generative AI models, including large language models, multimodal models, speech AI, and vision models.
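As a taste of the API, a pretrained model can be pulled from NGC and used directly. A minimal sketch, assuming the `nemo_toolkit` package is installed; the model name is illustrative and the audio file is hypothetical:

```python
from nemo.collections.asr.models import ASRModel

# Download a pretrained speech recognition model from NGC; the model
# name is an assumption and available names vary by NeMo release.
model = ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

# Transcribe a local audio file (hypothetical path).
print(model.transcribe(["sample.wav"]))
```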
- NVIDIA Network Operator#
A Kubernetes operator that automates the provisioning and management of NVIDIA ConnectX NICs and SuperNICs, optimizing high-speed East-West networking and RDMA-based data transfers within Kubernetes clusters.
- NVIDIA NGC Catalog#
NVIDIA’s hub for GPU-optimized software. All NVIDIA AI Enterprise-supported components are listed in the NGC Catalog and identified by the “NVIDIA AI Enterprise Supported” designation.
- NVIDIA NIM#
Optimized microservices for accelerating foundation model deployment. Available on the NVIDIA NGC Catalog with enterprise support.
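NIM microservices for LLMs expose an OpenAI-compatible HTTP API. A minimal sketch against a locally deployed NIM; the host, port, and model name are assumptions and depend on the deployment:

```python
import requests

# Query a locally running NIM through its OpenAI-compatible endpoint;
# the address and model name below are deployment-specific assumptions.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```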
- NVIDIA NIM Operator#
A Kubernetes operator that enables cluster administrators to deploy and manage NVIDIA NIM microservices in Kubernetes.
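The operator manages NIM deployments through custom resources. A minimal sketch that lists them with the `kubernetes` Python client; the CRD group, version, and plural here are assumptions, so check the operator's documentation for the exact names:

```python
from kubernetes import client, config

config.load_kube_config()

# List NIM service custom resources cluster-wide; the group, version,
# and plural below are assumptions, not confirmed API coordinates.
api = client.CustomObjectsApi()
services = api.list_cluster_custom_object(
    group="apps.nvidia.com", version="v1alpha1", plural="nimservices"
)
for item in services.get("items", []):
    print(item["metadata"]["namespace"], item["metadata"]["name"])
```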
- NVIDIA Triton Inference Server#
An inference serving platform that supports multiple frameworks and model formats, dynamic batching, and optimized backends for deploying AI models at scale.
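Clients talk to Triton over HTTP or gRPC. A minimal sketch using the `tritonclient` package against a local server; the model name and tensor names are assumptions for illustration:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a local Triton instance (default HTTP port 8000).
triton = httpclient.InferenceServerClient(url="localhost:8000")

# Build an input tensor; "INPUT0"/"OUTPUT0" and the model name are
# placeholders for whatever the deployed model's config declares.
inp = httpclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 4).astype(np.float32))

result = triton.infer(model_name="example_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```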
- Production Branch (PB)#
A release branch optimized for production deployments. Provides API stability, regular security patches, and a longer support window than Feature Branches.
- Support Matrix#
A reference that lists supported hardware, operating systems, hypervisors, and orchestration platforms for each NVIDIA AI Enterprise infrastructure release.