NVIDIA Optimized Frameworks
NVIDIA Optimized Frameworks (Latest Release)

PyTorch Release 24.08

The NVIDIA container image for PyTorch, release 24.08 is available on NGC.

Contents of the PyTorch container

This container image contains the complete source of the version of PyTorch in /opt/pytorch. It is prebuilt and installed in the default Python environment (/usr/local/lib/python3.10/dist-packages/torch) in the container image. The container also includes the following:

Driver Requirements

Release 24.08 is based on CUDA 12.6, which requires NVIDIA Driver release 560 or later. However, if you are running on a data center GPU (for example, a T4 or any other data center GPU), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).

The CUDA driver's compatibility package supports only particular drivers. Thus, users should upgrade from all R418, R440, R450, R460, R510, R520, R530, R545, and R555 drivers, which are not forward-compatible with CUDA 12.6. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
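To confirm what the container actually sees, you can query the installed driver and the CUDA toolkit that PyTorch was built against. This is a minimal sketch, assuming nvidia-smi is on the PATH inside the container (the NVIDIA container runtime normally provides it); the script name is hypothetical:

# check_driver.py -- hypothetical helper, not part of the container
import subprocess

import torch

# query the host driver version through nvidia-smi
driver = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"NVIDIA driver:          {driver}")
print(f"CUDA toolkit (PyTorch): {torch.version.cuda}")
print(f"GPU visible to PyTorch: {torch.cuda.is_available()}")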

Key Features and Enhancements

This PyTorch release includes the following key features and enhancements.

Announcements

  • Starting with the 24.06 release, the NVIDIA Optimized PyTorch container ships with TensorRT Model Optimizer; use pip list | grep modelopt to check the installed version. At this point, TensorRT Model Optimizer supports the x86_64 architecture only; support for other architectures (for example, ARM64) is experimental.
  • Starting with the 24.06 release, the NVIDIA Optimized PyTorch container builds PyTorch with cuSPARSELt enabled, matching stock PyTorch.
  • Starting with the 24.03 release, the NVIDIA Optimized PyTorch container release provides access to lightning-thunder (/opt/pytorch/lightning-thunder).
  • Starting with the 23.11 release, NVIDIA Optimized PyTorch containers supporting iGPU architectures are published and can run on Jetson devices. Refer to the Frameworks Support Matrix for information about which iGPU hardware/software is supported by which container.
  • Starting with the 23.06 release, the NVIDIA Optimized Deep Learning Framework containers are no longer tested on Pascal GPU architectures.
  • Transformer Engine is a library for accelerating Transformer models on NVIDIA GPUs. It includes support for 8-bit floating point (FP8) precision on Hopper GPUs, which provides better training and inference performance with lower memory utilization. Transformer Engine also includes a collection of highly optimized modules for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your PyTorch code (a minimal usage sketch follows this list).
  • A preview of Torch-TensorRT (1.4.0dev0) is now included. Torch-TRT is the TensorRT integration for PyTorch and brings the capabilities of TensorRT directly to Torch through one-line Python and C++ APIs.
  • Deep learning framework containers 19.11 and later include experimental support for Singularity v3.0. Starting with the 22.11 PyTorch NGC container, miniforge is removed and all Python packages are installed in the default Python environment. If you depend on Conda-specific packages that might not be available on PyPI, we recommend building those packages from source. As a workaround, you can manually install a Conda package manager and add the Conda path to your PYTHONPATH, for example, export PYTHONPATH="/opt/conda/lib/python3.10/site-packages" if your Conda package manager was installed in /opt/conda.
  • Starting with the 24.05 release, torchtext and torchdata have been removed from the NGC PyTorch container.
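As a rough illustration of the Transformer Engine API mentioned above, the following sketch swaps an FP32 linear layer for te.Linear and runs it under FP8 autocast. The layer sizes and recipe settings are illustrative only, and FP8 execution requires a Hopper-class GPU:

# minimal Transformer Engine FP8 sketch; shapes and recipe values are illustrative
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# delayed-scaling recipe controls how FP8 scaling factors are computed
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

model = te.Linear(768, 768, bias=True).cuda()
inp = torch.randn(16, 768, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

out.sum().backward()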

NVIDIA PyTorch Container Versions

The following table shows what versions of Ubuntu, CUDA, PyTorch, and TensorRT are supported in each of the NVIDIA containers for PyTorch. For earlier container versions, refer to the Frameworks Support Matrix.
Container Version | Ubuntu | CUDA Toolkit | PyTorch | TensorRT
24.08 | 22.04 | NVIDIA CUDA 12.6 | 2.5.0a0+872d972e41 | TensorRT 10.3.0.26
24.07 | 22.04 | NVIDIA CUDA 12.5.1 | 2.4.0a0+3bcc3cddb5 | TensorRT 10.2.0.19
24.06 | 22.04 | NVIDIA CUDA 12.5.0.23 | 2.4.0a0+f70bd71a48 | TensorRT 10.1.0.27
24.05 | 22.04 | NVIDIA CUDA 12.4.1 | 2.4.0a0+07cecf4168 | TensorRT 10.0.1.6
24.04 | 22.04 | NVIDIA CUDA 12.4.1 | 2.3.0a0+6ddf5cf85e | TensorRT 8.6.3
24.03 | 22.04 | NVIDIA CUDA 12.4.0.41 | 2.3.0a0+40ec155e58 | TensorRT 8.6.3
24.02 | 22.04 | NVIDIA CUDA 12.3.2 | 2.3.0a0+ebedce2 | TensorRT 8.6.3
24.01 | 22.04 | NVIDIA CUDA 12.3.2 | 2.2.0a0+81ea7a4 | TensorRT 8.6.1.6
23.12 | 22.04 | NVIDIA CUDA 12.3.2 | 2.2.0a0+81ea7a4 | TensorRT 8.6.1.6
23.11 | 22.04 | NVIDIA CUDA 12.3.0 | 2.2.0a0+6a974bec | TensorRT 8.6.1.6
23.10 | 22.04 | NVIDIA CUDA 12.2.2 | 2.1.0a0+32f93b1 | TensorRT 8.6.1.6
23.09 | 22.04 | NVIDIA CUDA 12.2.1 | 2.1.0a0+32f93b1 | TensorRT 8.6.1.6
23.08 | 22.04 | NVIDIA CUDA 12.2.1 | 2.1.0a0+29c30b1 | TensorRT 8.6.1.6
23.07 | 22.04 | NVIDIA CUDA 12.1.1 | 2.1.0a0+b5021ba | TensorRT 8.6.1.6
23.06 | 22.04 | NVIDIA CUDA 12.1.1 | 2.1.0a0+4136153 | TensorRT 8.6.1.6
23.05 | 22.04 | NVIDIA CUDA 12.1.1 | 2.0.0 | TensorRT 8.6.1.2
23.04 | 20.04 | NVIDIA CUDA 12.1.0 | 2.1.0a0+fe05266f | TensorRT 8.6.1
23.03 | 20.04 | NVIDIA CUDA 12.1.0 | 2.0.0a0+1767026 | TensorRT 8.5.3
23.02 | 20.04 | NVIDIA CUDA 12.0.1 | 1.14.0a0+44dac51 | TensorRT 8.5.3
23.01 | 20.04 | NVIDIA CUDA 12.0.1 | 1.14.0a0+44dac51 | TensorRT 8.5.2.2
22.12 | 20.04 | NVIDIA CUDA 11.8.0 | 1.14.0a0+410ce96 | TensorRT 8.5.1
22.11 | 20.04 | NVIDIA CUDA 11.8.0 | 1.13.0a0+936e930 | TensorRT 8.5.1
22.10 | 20.04 | NVIDIA CUDA 11.8.0 | 1.13.0a0+d0d6b1f | TensorRT 8.5.0.12
22.09 | 20.04 | NVIDIA CUDA 11.8.0 | 1.13.0a0+d0d6b1f | TensorRT 8.5.0.12
22.08 | 20.04 | NVIDIA CUDA 11.7.1 | 1.13.0a0+d321be6 | TensorRT 8.4.2.4
22.07 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | 1.13.0a0+08820cb | TensorRT 8.4.1
22.06 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | 1.13.0a0+340c412 | TensorRT 8.2.5
22.05 | 20.04 | NVIDIA CUDA 11.7.0 | 1.12.0a0+8a1a93a | TensorRT 8.2.5
22.04 | 20.04 | NVIDIA CUDA 11.6.2 | 1.12.0a0+bd13bc6 | TensorRT 8.2.4.2
22.03 | 20.04 | NVIDIA CUDA 11.6.1 | 1.12.0a0+2c916ef | TensorRT 8.2.3
22.02 | 20.04 | NVIDIA CUDA 11.6.0 | 1.11.0a0+17540c5c | TensorRT 8.2.3
22.01 | 20.04 | NVIDIA CUDA 11.6.0 | 1.11.0a0+bfe5ad28 | TensorRT 8.2.2
21.12 | 20.04 | NVIDIA CUDA 11.5.0 | 1.11.0a0+b6df043 | TensorRT 8.2.1.8
21.11 | 20.04 | NVIDIA CUDA 11.5.0 | 1.11.0a0+b6df043 | TensorRT 8.0.3.4 for x64 Linux; TensorRT 8.0.2.2 for Arm SBSA Linux
21.10 | 20.04 | NVIDIA CUDA 11.4.2 with cuBLAS 11.6.5.2 | 1.10.0a0+0aef44c | TensorRT 8.0.3.4 for x64 Linux; TensorRT 8.0.2.2 for Arm SBSA Linux
21.09 | 20.04 | NVIDIA CUDA 11.4.2 | 1.10.0a0+3fd9dcf | TensorRT 8.0.3
21.08 | 20.04 | NVIDIA CUDA 11.4.1 | 1.10.0a0+3fd9dcf | TensorRT 8.0.1.6
21.07 | 20.04 | NVIDIA CUDA 11.4.0 | 1.10.0a0+ecc3718 | TensorRT 8.0.1.6
21.06 | 20.04 | NVIDIA CUDA 11.3.1 | 1.9.0a0+c3d40fd | TensorRT 7.2.3.4
21.05 | 20.04 | NVIDIA CUDA 11.3.0 | 1.9.0a0+2ecb2c7 | TensorRT 7.2.3.4
21.04 | 20.04 | NVIDIA CUDA 11.3.0 | 1.9.0a0+2ecb2c7 | TensorRT 7.2.3.4
21.03 | 20.04 | NVIDIA CUDA 11.2.1 | 1.9.0a0+df837d0 | TensorRT 7.2.2.3
21.02 | 20.04 | NVIDIA CUDA 11.2.0 | 1.8.0a0+52ea372 | TensorRT 7.2.2.3+cuda11.1.0.024
20.12 | 20.04 | NVIDIA CUDA 11.1.1 | 1.8.0a0+1606899 | TensorRT 7.2.2
20.11 | 18.04 | NVIDIA CUDA 11.1.0 | 1.8.0a0+17f8c32 | TensorRT 7.2.1
20.10 | 18.04 | NVIDIA CUDA 11.1.0 | 1.7.0a0+7036e91 | TensorRT 7.2.1
20.09 | 18.04 | NVIDIA CUDA 11.0.3 | 1.7.0a0+8deb4fe | TensorRT 7.1.3
20.08 | 18.04 | NVIDIA CUDA 11.0.3 | 1.7.0a0+6392713 | TensorRT 7.1.3
20.07 | 18.04 | NVIDIA CUDA 11.0.194 | 1.6.0a0+9907a3e | TensorRT 7.1.3
20.06 | 18.04 | NVIDIA CUDA 11.0.167 | 1.6.0a0+9907a3e | TensorRT 7.1.2
20.03 | 18.04 | NVIDIA CUDA 10.2.89 | 1.5.0a0+8f84ded | TensorRT 7.0.0
20.02 | 18.04 | NVIDIA CUDA 10.2.89 | 1.5.0a0+3bbb36e | TensorRT 7.0.0
20.01 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+a5b4d78 | TensorRT 7.0.0
19.12 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+a5b4d78 | TensorRT 6.0.1
19.11 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+174e1ba | TensorRT 6.0.1
19.10 | 18.04 | NVIDIA CUDA 10.1.243 | 1.3.0a0+24ae9b5 | TensorRT 6.0.1
19.09 | 18.04 | NVIDIA CUDA 10.1.243 | 1.2.0 | TensorRT 6.0.1
19.08 | 18.04 | NVIDIA CUDA 10.1.243 | 1.2.0a0 including upstream commits up through commit 9130ab38 from July 31, 2019, as well as cherry-picks | TensorRT 5.1.5

Automatic Mixed Precision (AMP)

Automatic Mixed Precision (AMP) for PyTorch is available in this container through the native implementation (torch.cuda.amp). AMP enables users to try mixed precision training by adding only three lines of Python to an existing FP32 (default) script. AMP selects an optimal set of operations to cast to FP16. FP16 operations require half the memory bandwidth (resulting in up to a 2X speedup for bandwidth-bound operations such as most pointwise ops) and half the memory storage for intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and convolutions with FP16 inputs can run on Tensor Cores, which provide an 8X increase in computational throughput over FP32 arithmetic.
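The following is a minimal sketch of the native torch.cuda.amp training-loop pattern; the model, optimizer, and data are placeholders rather than anything shipped in the container:

import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler()

for _ in range(10):
    data = torch.randn(64, 512, device="cuda")
    optimizer.zero_grad()
    # autocast runs eligible ops in FP16 and keeps the rest in FP32
    with autocast():
        loss = model(data).pow(2).mean()
    # scale the loss so small FP16 gradients do not underflow
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()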

APEX AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative and offers a number of advantages over APEX AMP.

  • Guidance and examples demonstrating torch.cuda.amp can be found here.
  • APEX AMP examples can be found here; a minimal APEX AMP sketch follows this list.
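For reference, legacy APEX AMP follows a different pattern (amp.initialize plus amp.scale_loss). This is a minimal sketch with a placeholder model and optimizer; prefer torch.cuda.amp for new code:

import torch
from apex import amp

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# patch the model and optimizer for mixed precision at opt level O1
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

data = torch.randn(64, 512, device="cuda")
loss = model(data).float().pow(2).mean()

# scale the loss to avoid FP16 gradient underflow, then step as usual
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()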

For more information about AMP, see the Training With Mixed Precision Guide.

Tensor Core Examples

The Tensor Core examples provided in GitHub and NGC focus on achieving the best performance and convergence from NVIDIA Volta™ Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision on NVIDIA Volta and NVIDIA Turing™ Tensor Cores, so you can get results much faster than training without Tensor Cores. These models are tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

torch.cuda.MemPool() experimental API

This is an early preview of the API that NVIDIA is working on with the upstream PyTorch community. It is tracked through https://github.com/pytorch/pytorch/issues/124807, and future release notes will include more details.

torch.cuda.MemPool() enables the use of multiple CUDA system allocators in the same PyTorch program. The following example enables NVLink SHARP (NVLS) reductions for part of a PyTorch program by allocating memory with ncclMemAlloc and registering the buffers with ncclCommRegister.

# Run with NCCL_ALGO=NVLS NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=NVLS torchrun --nproc-per-node 4 mempool_example.py

import os

import torch
import torch.distributed as dist
from torch.cuda.memory import CUDAPluggableAllocator
from torch.distributed.distributed_c10d import _get_default_group
from torch.utils import cpp_extension

# create allocator
nccl_allocator_source = """
#include <nccl.h>
#include <iostream>
extern "C" {

void* nccl_alloc_plug(size_t size, int device, void* stream) {
  std::cout << "Using ncclMemAlloc" << std::endl;
  void* ptr;
  ncclResult_t err = ncclMemAlloc(&ptr, size);
  return ptr;
}

void nccl_free_plug(void* ptr) {
  std::cout << "Using ncclMemFree" << std::endl;
  ncclResult_t err = ncclMemFree(ptr);
}

}
"""

nccl_allocator_libname = "nccl_allocator"
nccl_allocator = torch.utils.cpp_extension.load_inline(
    name=nccl_allocator_libname,
    cpp_sources=nccl_allocator_source,
    with_cuda=True,
    extra_ldflags=["-lnccl"],
    verbose=True,
    is_python_module=False,
    build_directory="./",
)
allocator = CUDAPluggableAllocator(
    f"./{nccl_allocator_libname}.so", "nccl_alloc_plug", "nccl_free_plug"
)

# setup distributed
rank = int(os.getenv("RANK"))
local_rank = int(os.getenv("LOCAL_RANK"))
world_size = int(os.getenv("WORLD_SIZE"))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
device = torch.device(f"cuda:{local_rank}")
default_pg = _get_default_group()
backend = default_pg._get_backend(device)

# create pool
pool = torch.cuda.MemPool(allocator.allocator())

with torch.cuda.use_mem_pool(pool):
    # tensor gets allocated with ncclMemAlloc passed in the pool
    tensor = torch.arange(1024 * 1024 * 2, device=device)
    print(f"tensor ptr on rank {rank} is {hex(tensor.data_ptr())}")

    # register user buffers using ncclCommRegister (called under the hood)
    backend.register_user_buffers(device)

# Collective uses Zero Copy NVLS
dist.all_reduce(tensor[0:4])
torch.cuda.synchronize()
print(tensor[0:4])

# release memory to system
del tensor
pool.release()
pool.empty_cache()


Known Issues

  • The use of the environment variable TORCH_SHOW_CPP_STACKTRACES=1 might cause a stack overflow and a segmentation fault on ARM servers. See the stock PyTorch issue for details.
© Copyright 2024, NVIDIA. Last updated on Sep 30, 2024.