PyTorch Release 24.07
The NVIDIA container image for PyTorch, release 24.07, is available on NGC.
Contents of the PyTorch container
This container image contains the complete source of the version of PyTorch in /opt/pytorch. It is prebuilt and installed in the default Python environment (/usr/local/lib/python3.10/dist-packages/torch) in the container image.
The container also includes the following:
- Ubuntu 22.04 including Python 3.10
- NVIDIA CUDA 12.5.1
- NVIDIA cuBLAS 12.5.3.2
- NVIDIA cuDNN 9.2.1.18
- NVIDIA NCCL 2.22.3
- NVIDIA RAPIDS™ 24.04
- rdma-core 39.0
- NVIDIA HPC-X 2.19
- OpenMPI 4.1.7
- GDRCopy 2.3
- TensorBoard 2.9.0
- Nsight Compute 2024.2.1.2
- Nsight Systems 2024.4.2.133
- NVIDIA TensorRT™ 10.2.0.19
- Torch-TensorRT 2.4.0a0
- NVIDIA DALI® 1.39
- nvImageCodec 0.2.0.7
- MAGMA 2.6.2
- JupyterLab 2.3.2 including Jupyter-TensorBoard
- TransformerEngine 1.8
- PyTorch quantization wheel 2.1.2
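As a quick sanity check, several of these versions can be confirmed from the container's default Python environment; the snippet below is a minimal sketch and the printed values are illustrative.
# Confirm a few of the bundled component versions from inside the container
import torch

print(torch.__version__)                # PyTorch build, e.g. 2.4.0a0+...
print(torch.version.cuda)               # CUDA version the build targets
print(torch.backends.cudnn.version())   # cuDNN version as an integer
print(torch.cuda.nccl.version())        # NCCL version tuple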
Driver Requirements
Release 24.07 is based on CUDA 12.5.1 which requires NVIDIA Driver release 555 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).
The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, R450, R460, R510, R520 and R545 drivers, which are not forward-compatible with CUDA 12.5. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This PyTorch release includes the following key features and enhancements.
- PyTorch container image version 24.07 is based on PyTorch 2.4.0a0+3bcc3cddb5.
- The torch.cuda.MemPool() experimental API enables mixing multiple CUDA system allocators in the same PyTorch program.
Announcements
- Starting with the 24.06 release, the NVIDIA Optimized PyTorch container release ships with TensorRT Model Optimizer; use pip list | grep modelopt to check version details.
- Starting with the 24.06 release, the NVIDIA Optimized PyTorch container release builds PyTorch with cuSPARSELt turned on, similar to stock PyTorch.
- Starting with the 24.03 release, the NVIDIA Optimized PyTorch container release provides access to lightning-thunder (/opt/pytorch/lightning-thunder).
- Starting with the 23.11 release, NVIDIA Optimized PyTorch containers supporting iGPU architectures are published and can run on Jetson devices. Refer to the Frameworks Support Matrix for information about which iGPU hardware/software is supported by which container.
- Starting with the 23.06 release, the NVIDIA Optimized Deep Learning Framework containers are no longer tested on Pascal GPU architectures.
- Transformer Engine is a library for accelerating Transformer models on NVIDIA GPUs. It includes support for 8-bit floating point (FP8) precision on Hopper GPUs, which provides better training and inference performance with lower memory utilization. Transformer Engine also includes a collection of highly optimized modules for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your PyTorch code (a minimal usage sketch follows this list).
- A preview of Torch-TensorRT (1.4.0dev0) is now included. Torch-TRT is the TensorRT integration for PyTorch and brings the capabilities of TensorRT directly to Torch through one-line Python and C++ APIs.
- Deep learning framework containers 19.11 and later include experimental support for Singularity v3.0.
- Starting with the 22.11 PyTorch NGC container, miniforge is removed and all Python packages are installed in the default Python environment. If you depend on Conda-specific packages that might not be available on PyPI, we recommend building these packages from source. A workaround is to manually install a Conda package manager and add the Conda path to your PYTHONPATH, for example using export PYTHONPATH="/opt/conda/lib/python3.10/site-packages" if your Conda package manager was installed in /opt/conda.
- Starting with the 24.05 release, torchtext and torchdata have been removed from the NGC PyTorch container.
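The following is a minimal sketch of the Transformer Engine FP8 API referenced above; the layer size and scaling recipe are placeholder choices, and FP8 execution requires a Hopper (or newer) GPU.
# Sketch: a single Transformer Engine layer run under FP8 autocast (placeholder sizes)
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

layer = te.Linear(768, 768, bias=True).cuda()
inp = torch.randn(16, 768, device="cuda")

# DelayedScaling is Transformer Engine's standard FP8 scaling recipe
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)
out.sum().backward()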
NVIDIA PyTorch Container Versions
The following table shows what versions of Ubuntu, CUDA, PyTorch, and TensorRT are supported in each of the NVIDIA containers for PyTorch. For earlier container versions, refer to the Frameworks Support Matrix.
Automatic Mixed Precision (AMP)
Automatic Mixed Precision (AMP) for PyTorch is available in this container through the native implementation (torch.cuda.amp). AMP enables users to try mixed precision training by adding only three lines of Python to an existing FP32 (default) script. AMP will select an optimal set of operations to cast to FP16. FP16 operations require 2X reduced memory bandwidth (resulting in a 2X speedup for bandwidth-bound operations like most pointwise ops) and 2X reduced memory storage for intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and convolutions with FP16 inputs can run on Tensor Cores, which provide an 8X increase in computational throughput over FP32 arithmetic.
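The following is a minimal sketch of that pattern using torch.cuda.amp; the model, optimizer, and synthetic data are placeholders rather than code from the linked examples.
# Minimal mixed precision training loop with torch.cuda.amp (placeholder model and data)
import torch
from torch import nn

device = torch.device("cuda")
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

scaler = torch.cuda.amp.GradScaler()        # 1) create a gradient scaler once

for step in range(10):                      # stands in for a real data loader
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():         # 2) run the forward pass under autocast
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()           # 3) scale the loss, then step through the scaler
    scaler.step(optimizer)
    scaler.update()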
APEX AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative and offers a number of advantages over APEX AMP.
- Guidance and examples demonstrating torch.cuda.amp can be found here.
- APEX AMP examples can be found here.
For more information about AMP, see the Training With Mixed Precision Guide.
Tensor Core Examples
The Tensor Core examples provided in GitHub and NGC focus on achieving the best performance and convergence from NVIDIA Volta™ Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores on NVIDIA Volta and NVIDIA Turing™, so you can get results much faster than training without Tensor Cores. Each model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
- ResNeXt101-32x4d model: This model was introduced in the Aggregated Residual Transformations for Deep Neural Networks paper.
It is based on the regular ResNet model but replaces the 3x3 convolutions inside the bottleneck block with 3x3 grouped convolutions. This model script is available on GitHub.
- SE-ResNext model: This ResNeXt101-32x4d model has an added Squeeze-and-Excitation (SE) module that was introduced in the Squeeze-and-Excitation Networks paper.
This model script is available on GitHub.
- TransformerXL model: This transformer-based language model has a segment-level recurrence and a novel relative positional encoding.
The enhancements that were introduced in Transformer-XL help capture better long-term dependencies by attending to tokens from multiple previous segments. Our implementation is based on the codebase that was published by the authors of the Transformer-XL paper. Our implementation uses modified model architecture hyperparameters; these modifications were made to achieve better hardware usage and to take advantage of Tensor Cores.
This model script is available on GitHub.
- Jasper model: This repository provides an implementation of the Jasper model in PyTorch from the Jasper: An End-to-End Convolutional Neural Acoustic Model paper.
The Jasper model is an end-to-end neural acoustic model for automatic speech recognition (ASR) that provides near state-of-the-art results on LibriSpeech among end-to-end ASR models without external data.
- BERT model: Bidirectional Encoder Representations from Transformers (BERT) is a new method of pretraining language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.
This model is based on the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper. The NVIDIA BERT implementation is an optimized version of the Hugging Face implementation that leverages mixed-precision arithmetic and Tensor Cores on V100 GPUs for faster training times while maintaining target accuracy.
- Mask R-CNN model: Mask R-CNN is a convolution-based neural network that is used for object instance segmentation.
The paper describing the model can be found here. NVIDIA’s Mask R-CNN model is an optimized version of Facebook’s implementation, which leverages mixed precision arithmetic by using Tensor Cores on NVIDIA V100 GPUs for 1.3x faster training time while maintaining target accuracy.
- Tacotron 2 and WaveGlow v1.1 model: This text-to-speech (TTS) system is a combination of the following neural network models:
- A modified Tacotron 2 model from the Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions paper
- A flow-based neural network model from the WaveGlow: A Flow-based Generative Network for Speech Synthesis paper.
- SSD300 v1.1 model: This model is based on the SSD: Single Shot MultiBox Detector paper.
The main difference between this model and the model described in the paper is in the backbone. Specifically, the VGG model is obsolete and has been replaced by the ResNet50 model.
- Neural Collaborative Filtering (NCF) model: This model focuses on providing recommendations, which is also known as collaborative filtering with implicit feedback.
The training data for this model should contain binary information about whether a user interacted with a specific item. NCF was first described by Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu and Tat-Seng Chua in the Neural Collaborative Filtering paper.
- ResNet50 v1.5 model: This model is a modified version of the original ResNet50 v1 model.
- GNMT v2 model: This model is similar to the model that is discussed in the Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation paper.
torch.cuda.MemPool() experimental API
This is an early preview of the API that NVIDIA is working on with the upstream PyTorch community. It is tracked through https://github.com/pytorch/pytorch/issues/124807, and future release notes will include more details.
torch.cuda.MemPool() enables usage of multiple CUDA system allocators in the same PyTorch program. The following example enables NVLink SHARP (NVLS) reductions for part of a PyTorch program by using the ncclMemAlloc allocator and registering user buffers with ncclCommRegister.
# Run with NCCL_ALGO=NVLS NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=NVLS torchrun --nproc-per-node 4 mempool_example.py
import os
import torch
import torch.distributed as dist
from torch.cuda.memory import CUDAPluggableAllocator
from torch.distributed.distributed_c10d import _get_default_group
from torch.utils import cpp_extension
# create allocator
nccl_allocator_source = """
#include <nccl.h>
#include <iostream>
extern "C" {
void* nccl_alloc_plug(size_t size, int device, void* stream) {
  std::cout << "Using ncclMemAlloc" << std::endl;
  void* ptr;
  ncclResult_t err = ncclMemAlloc(&ptr, size);
  return ptr;
}

void nccl_free_plug(void* ptr) {
  std::cout << "Using ncclMemFree" << std::endl;
  ncclResult_t err = ncclMemFree(ptr);
}
}
"""
nccl_allocator_libname = "nccl_allocator"
nccl_allocator = torch.utils.cpp_extension.load_inline(
    name=nccl_allocator_libname,
    cpp_sources=nccl_allocator_source,
    with_cuda=True,
    extra_ldflags=["-lnccl"],
    verbose=True,
    is_python_module=False,
    build_directory="./",
)
allocator = CUDAPluggableAllocator(
    f"./{nccl_allocator_libname}.so", "nccl_alloc_plug", "nccl_free_plug"
)
# setup distributed
rank = int(os.getenv("RANK"))
local_rank = int(os.getenv("LOCAL_RANK"))
world_size = int(os.getenv("WORLD_SIZE"))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
device = torch.device(f"cuda:{local_rank}")
default_pg = _get_default_group()
backend = default_pg._get_backend(device)
# create pool
pool = torch.cuda.MemPool(allocator.allocator())
with torch.cuda.use_mem_pool(pool):
    # tensor gets allocated with ncclMemAlloc passed in the pool
    tensor = torch.arange(1024 * 1024 * 2, device=device)
    print(f"tensor ptr on rank {rank} is {hex(tensor.data_ptr())}")

# register user buffers using ncclCommRegister (called under the hood)
backend.register_user_buffers(device)

# Collective uses Zero Copy NVLS
dist.all_reduce(tensor[0:4])
torch.cuda.synchronize()
print(tensor[0:4])

# release memory to system
del tensor
pool.release()
pool.empty_cache()
Known Issues
- Inference performance of BERT, Conformer, and ViT models has regressed; a fix is in progress.