Production Branch (PB)#
A Production Branch (PB) contains production-ready AI frameworks and SDKs that provide API stability and a secure environment for building mission-critical AI applications.
Learn more about NVIDIA AI Enterprise release branches in the NVIDIA AI Enterprise Release Branches document.
Production Branch - October 2025 (PB 25h2)#
| PB Collection on NGC | |
|---|---|
| Government Ready PB Collection | Production Branch Government Ready - October 2025 (PB 25h2) |
| First planned release [3] | October 2025 |
| Last planned release | June 2026 |
| Planned End of Life (EOL) | July 2026 |
| Government Ready Versions | STIG hardened, FIPS enabled versions are available for some products. Refer to the table below. |
| Compatible Infrastructure Release | Use the latest NVIDIA AI Enterprise Infra Release 7 on the NGC Catalog. |
Support Matrix

| Name | Version | Government Ready Versions | Product Documentation |
|---|---|---|---|
| CUDA Deep Learning | 25.08 | Yes for x86 | |
| Multi-LLM NIM | 1.14 | Yes for x86 | |
| NVIDIA Holoscan SDK | 3.3 | No | |
| NVIDIA TensorRT | 25.08-py | Yes for x86 | |
| NVIDIA Triton Inference Server | | Yes for x86 | |
| NVIDIA TAO | | Yes for x86 | |
| PyTorch | 25.08-py | Yes for x86 | |
New Features - Government Ready
This release introduces a significant new security baseline, Government Ready, for most x86 container images. This designation indicates that the software:
- Meets software security requirements for use in FedRAMP High or equivalent sovereign use cases.
- Provides matching functionality with the corresponding NVIDIA software that does not carry the Government Ready designation.
Technical Implementation
Security Technical Implementation Guides (STIGs) are configuration standards consisting of cybersecurity requirements for specific products, developed by the U.S. Department of Defense. STIGs provide a methodology for the standardized, secure installation and maintenance of DoD information assurance (IA) and IA-enabled devices and systems, helping organizations harden their systems against security vulnerabilities through detailed technical configuration guidance.
FIPS 140-3 is the U.S. government computer security standard used to approve cryptographic modules, with FIPS 140-3 superseding FIPS 140-2 for new submissions as of April 1, 2022. The goal of the Cryptographic Module Validation Program (CMVP) is to promote the use of validated cryptographic modules and provide Federal agencies with a security metric to use in procuring equipment containing validated cryptographic modules. These standards ensure that cryptographic implementations meet rigorous security requirements for government and regulated environments.
Our containers are built on top of Canonical’s Ubuntu 24.04 STIG-hardened base image and include FIPS versions of common cryptography libraries, such as OpenSSL. These containers can be deployed in the same way as standard containers. To make use of FIPS mode, the host machine must run a FIPS-enabled Linux kernel.
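As a rough illustration of that deployment model, assuming a placeholder image path rather than a real NGC address, running one of the Government Ready containers might look like the following:

```bash
# Sketch only: pull and run a Government Ready container from the PB collection.
# The image path and tag below are placeholders; substitute the actual entry
# from the PB 25h2 collection on the NGC Catalog.
docker pull nvcr.io/nvidia/example-image:25.08-py
docker run --rm --gpus all nvcr.io/nvidia/example-image:25.08-py nvidia-smi
```

No special flags are needed for the STIG hardening itself; whether the FIPS libraries actually run in FIPS mode depends on the host kernel, as described below.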
If you run into problems integrating your application with FIPS-enabled libraries, check each library’s documentation to see whether FIPS mode can be toggled. For example, for OpenSSL you can set OPENSSL_FORCE_FIPS_MODE=0 to disable FIPS mode if needed for testing.
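For instance, a minimal sketch of disabling FIPS mode for OpenSSL while testing inside one of these containers could look like this; the image path and the Python check are placeholders, not an official workflow:

```bash
# Sketch only: pass OPENSSL_FORCE_FIPS_MODE=0 into the container to disable
# FIPS mode for OpenSSL during testing. The image path is a placeholder.
docker run --rm --gpus all \
  -e OPENSSL_FORCE_FIPS_MODE=0 \
  nvcr.io/nvidia/example-image:25.08-py \
  python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
```

Remove the variable once testing is complete so that production workloads run with FIPS mode enabled.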
Verifying FIPS Mode on Your Host System
To verify that your host machine is running in FIPS mode, check the /proc/sys/crypto/fips_enabled file and ensure it is set to 1. If it is set to 0, the FIPS modules will not run in FIPS mode. If the file is missing, the FIPS kernel is not installed. You can verify this with the shell command:
```bash
cat /proc/sys/crypto/fips_enabled
```
Additionally, you can check your kernel version using uname -a to confirm you’re running a FIPS-enabled kernel. Refer to Canonical’s FIPS documentation as an example of setting up a FIPS kernel. Any Linux distribution with a FIPS-enabled kernel should provide similar verification methods through the /proc/sys/crypto/fips_enabled flag.
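The two checks above can be combined into a short pre-flight script; this is a convenience sketch rather than an official tool:

```bash
# Sketch only: confirm the host kernel is FIPS-enabled before deploying
# the Government Ready containers in FIPS mode.
if [ "$(cat /proc/sys/crypto/fips_enabled 2>/dev/null)" = "1" ]; then
    echo "FIPS mode enabled on kernel: $(uname -r)"
else
    echo "FIPS mode is not enabled, or the FIPS kernel is not installed."
fi
```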
Learn more about NVIDIA’s hardened images in the AI Software for Regulated Environments documentation.
Production Branch - May 2025 (PB 25h1)#
| PB Collection on NGC | |
|---|---|
| First planned release | May 2025 |
| Last planned release | December 2025 |
| Planned End of Life (EOL) | January 2026 |
| Compatible Infrastructure Release | Use the latest NVIDIA AI Enterprise Infra Release 6 on the NGC Catalog. |
Support Matrix

| Name | Version | Product Documentation |
|---|---|---|
| NVIDIA NIM Llama-3.1-8b-instruct [2] | 1.10 | |
| NVIDIA NIM Llama-3.1-70b-instruct [2] | 1.10 | |
| NVIDIA DeepStream SDK [2] | N/A | |
| NVIDIA Holoscan SDK [2] | 3.3.0 | |
| NVIDIA Morpheus | 25.02-runtime | |
| NVIDIA TensorRT | 25.03-py | |
| NVIDIA Triton Inference Server | | |
| NVIDIA NIM Retrieval QA E5 Embedding v5 [2] | 1.8.0 | |
| PyTorch | 25.03-py | |
| RAPIDS | 25.02-runtime | |
| RAPIDS Accelerator for Apache Spark | 25.02.1 | |
Production Branch - October 2024 (PB 24h2) - EOL#
Important
This branch is end-of-life (EOL).
| PB Collection on NGC | |
|---|---|
| First planned release | October 2024 |
| Last planned release | June 2025 |
| Planned End of Life (EOL) | July 2025 |
| Compatible Infrastructure Release | Use the latest NVIDIA AI Enterprise Infra Release 6 or NVIDIA AI Enterprise Infra Release 5 on the NGC Catalog. |
Support Matrix

| Name | Version | Product Documentation |
|---|---|---|
| Deep Graph Library (DGL) | 24.08-py3 | |
| NVIDIA NIM Llama-3.1-8b-instruct [1] | 1.3 | |
| NVIDIA NIM Llama-3.1-70b-instruct [1] | 1.3 | |
| NVIDIA DeepStream SDK | 7.1-triton-x86 | |
| NVIDIA Holoscan SDK | 24.08 | |
| NVIDIA Morpheus | 24.06-runtime | |
| NVIDIA TensorRT | 24.08-py3 | |
| NVIDIA Triton Inference Server | | |
| NVIDIA NIM Retrieval QA E5 Embedding v5 [1] | 1.2 | |
| PyTorch | 24.08-py3 | |
| PyTorch Geometric (PyG) | 24.08-py3 | |
| RAPIDS | 24.06-runtime | |
| RAPIDS Accelerator for Apache Spark | 24.06.02 | |
| TensorFlow 2 | 24.08-tf2-py3 | |
Production Branch - May 2024 (PB 24h1) - EOL#
Important
This branch is end-of-life (EOL).
| PB Collection on NGC | |
|---|---|
| First planned release | May 2024 |
| Last planned release | December 2024 |
| Planned End of Life (EOL) | January 2025 |
| Compatible Infrastructure Release | Use the latest NVIDIA AI Enterprise Infra Release 5 as a supported configuration. |
Support Matrix

| Name | Version | Product Documentation |
|---|---|---|
| NVIDIA MONAI Toolkit | 24.03-py3 | |
| NVIDIA Morpheus | 24.02-runtime | |
| NVIDIA TensorRT | 24.03-py3 | |
| NVIDIA Triton Inference Server | | |
| PyTorch | 24.03-py3 | |
| RAPIDS | 24.02-runtime | |
| RAPIDS Accelerator for Apache Spark | 24.02 | |
| TensorFlow 2 | 24.03-tf2-py3 | |
Production Branch - October 2023 (PB 23h2) - EOL#
Important
This branch is end-of-life (EOL).
| PB Collection on NGC | |
|---|---|
| First planned release | October 2023 |
| Last planned release | June 2024 |
| Planned End of Life (EOL) | July 2024 |
| Compatible Infrastructure Release | Use the latest NVIDIA AI Enterprise Infra Release 5 as a supported configuration. |
| Name | Version | Product Documentation |
|---|---|---|
| NVIDIA Holoscan SDK | 23.10 | |
| NVIDIA TensorRT | 23.08-py3 | |
| NVIDIA Triton Inference Server | | |
| PyTorch | 23.08-py3 | |
| RAPIDS | 23.06-runtime | |
| TensorFlow 2 | 23.08-tf2-py3 | |
Footnotes