NVIDIA Optimized Frameworks

PyTorch Release 22.08

The NVIDIA container image for PyTorch, release 22.08, is available on NGC.
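You can pull the image with, for example, `docker pull nvcr.io/nvidia/pytorch:22.08-py3`.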

Contents of the PyTorch container

This container image contains the complete source of the version of PyTorch in /opt/pytorch. It is prebuilt and installed in the Conda default environment (/opt/conda/lib/python3.8/site-packages/torch/) in the container image. The container also includes additional software components; the Ubuntu, CUDA, PyTorch, and TensorRT versions are listed in the table below.

Driver Requirements

Release 22.08 is based on CUDA 11.7.1, which requires NVIDIA Driver release 515 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 450.51 (or later R450), 470.57 (or later R470), or 510.47 (or later R510). The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, and R460 drivers, which are not forward-compatible with CUDA 11.7. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.

GPU Requirements

Release 22.08 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, and NVIDIA Ampere Architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
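To confirm from inside the container that a GPU meets this compute capability floor, here is a minimal sketch using standard PyTorch calls:

```python
import torch

# Minimal sketch: verify that GPU 0 meets the compute capability 6.0
# floor required by release 22.08.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"GPU 0: {name}, compute capability {major}.{minor}")
    assert (major, minor) >= (6, 0), "Release 22.08 requires compute capability 6.0+"
else:
    print("No CUDA device is visible to PyTorch.")
```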

Key Features and Enhancements

This PyTorch release includes the following key features and enhancements.

  • PyTorch container image version 22.08 is based on 1.13.0a0+d321be6.
  • CUDA module loading is set to LAZY starting with the 22.08 container. To restore the default eager loading behavior, use `export CUDA_MODULE_LOADING=EAGER` or `unset CUDA_MODULE_LOADING`; see the sketch below. Refer to the CUDA C++ Programming Guide for more information about this environment variable.
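
A minimal Python sketch of the same toggle; it assumes, as the CUDA documentation describes, that the variable is read when the CUDA context is created, so it must be set before CUDA is initialized:

```python
import os

# Set (or delete) the variable before anything initializes CUDA.
os.environ["CUDA_MODULE_LOADING"] = "EAGER"   # or: del os.environ["CUDA_MODULE_LOADING"]

import torch

torch.cuda.init()  # CUDA modules for this context now load eagerly
print("CUDA initialized with CUDA_MODULE_LOADING =",
      os.environ.get("CUDA_MODULE_LOADING", "<unset>"))
```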

Announcements

  • NVIDIA Deep Learning Profiler (DLProf) v1.8, which was included in the 21.12 container, was the final release of DLProf.

    Starting with the 22.01 container, DLProf is no longer included, but it can still be installed manually by using the pip wheel from nvidia-pyindex.

  • A preview of Torch-TensorRT (1.1.0a0) is now included.

    Torch-TRT is the TensorRT integration for PyTorch and brings the capabilities of TensorRT directly to Torch through one-line Python and C++ APIs; see the sketch after this list.

  • Starting with the 22.05 release, the PyTorch container is available for the Arm SBSA platform.
  • Deep learning framework containers 19.11 and later include experimental support for Singularity v3.0.
  • Starting with the 21.06 container, PyProf is no longer included in the NVIDIA PyTorch container.

    To profile models in PyTorch, use DLProf.

    DLProf helps data scientists, engineers, and researchers understand and improve the performance of their models, either visually through the DLProf Viewer in a web browser or by analyzing text reports. DLProf is available on NGC or through a Python pip wheel installation.

  • The Tensor Core example models are no longer provided in the core PyTorch container (previously shipped in /workspace/nvidia-examples).

    You can obtain the models from GitHub or the NVIDIA GPU Cloud (NGC) instead. Some Python packages that were included in previous containers to support these example models have also been removed. Depending on your specific use case, you might need to add packages that were previously preinstalled.
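
As a rough sketch of the Torch-TensorRT one-line workflow mentioned above (the model and input shape are illustrative placeholders, not part of this release's documentation):

```python
import torch
import torchvision
import torch_tensorrt

# Placeholder model; any traceable FP32 module works similarly.
model = torchvision.models.resnet50().eval().cuda()

# The "one line": compile the Torch module into a TensorRT-accelerated one.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # also allow FP16 kernels on Tensor Cores
)

x = torch.randn(1, 3, 224, 224, device="cuda")
print(trt_model(x).shape)  # torch.Size([1, 1000])
```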

NVIDIA PyTorch Container Versions

The following table shows what versions of Ubuntu, CUDA, PyTorch, and TensorRT are supported in each of the NVIDIA containers for PyTorch. For earlier container versions, refer to the Frameworks Support Matrix.

| Container Version | Ubuntu | CUDA Toolkit | PyTorch | TensorRT |
| --- | --- | --- | --- | --- |
| 22.08 | 20.04 | NVIDIA CUDA 11.7.1 | 1.13.0a0+d321be6 | TensorRT 8.4.2.4 |
| 22.07 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | 1.13.0a0+08820cb | TensorRT 8.4.1 |
| 22.06 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | 1.13.0a0+340c412 | TensorRT 8.2.5 |
| 22.05 | 20.04 | NVIDIA CUDA 11.7.0 | 1.12.0a0+8a1a93a | TensorRT 8.2.5 |
| 22.04 | 20.04 | NVIDIA CUDA 11.6.2 | 1.12.0a0+bd13bc6 | TensorRT 8.2.4.2 |
| 22.03 | 20.04 | NVIDIA CUDA 11.6.1 | 1.12.0a0+2c916ef | TensorRT 8.2.3 |
| 22.02 | 20.04 | NVIDIA CUDA 11.6.0 | 1.11.0a0+17540c5c | TensorRT 8.2.3 |
| 22.01 | 20.04 | NVIDIA CUDA 11.6.0 | 1.11.0a0+bfe5ad28 | TensorRT 8.2.2 |
| 21.12 | 20.04 | NVIDIA CUDA 11.5.0 | 1.11.0a0+b6df043 | TensorRT 8.2.1.8 |
| 21.11 | 20.04 | NVIDIA CUDA 11.5.0 | 1.11.0a0+b6df043 | TensorRT 8.0.3.4 for x64 Linux; TensorRT 8.0.2.2 for Arm SBSA Linux |
| 21.10 | 20.04 | NVIDIA CUDA 11.4.2 with cuBLAS 11.6.5.2 | 1.10.0a0+0aef44c | TensorRT 8.0.3.4 for x64 Linux; TensorRT 8.0.2.2 for Arm SBSA Linux |
| 21.09 | 20.04 | NVIDIA CUDA 11.4.2 | 1.10.0a0+3fd9dcf | TensorRT 8.0.3 |
| 21.08 | 20.04 | NVIDIA CUDA 11.4.1 | 1.10.0a0+3fd9dcf | TensorRT 8.0.1.6 |
| 21.07 | 20.04 | NVIDIA CUDA 11.4.0 | 1.10.0a0+ecc3718 | TensorRT 8.0.1.6 |
| 21.06 | 20.04 | NVIDIA CUDA 11.3.1 | 1.9.0a0+c3d40fd | TensorRT 7.2.3.4 |
| 21.05 | 20.04 | NVIDIA CUDA 11.3.0 | 1.9.0a0+2ecb2c7 | TensorRT 7.2.3.4 |
| 21.04 | 20.04 | NVIDIA CUDA 11.3.0 | 1.9.0a0+2ecb2c7 | TensorRT 7.2.3.4 |
| 21.03 | 20.04 | NVIDIA CUDA 11.2.1 | 1.9.0a0+df837d0 | TensorRT 7.2.2.3 |
| 21.02 | 20.04 | NVIDIA CUDA 11.2.0 | 1.8.0a0+52ea372 | TensorRT 7.2.2.3+cuda11.1.0.024 |
| 20.12 | 20.04 | NVIDIA CUDA 11.1.1 | 1.8.0a0+1606899 | TensorRT 7.2.2 |
| 20.11 | 18.04 | NVIDIA CUDA 11.1.0 | 1.8.0a0+17f8c32 | TensorRT 7.2.1 |
| 20.10 | 18.04 | NVIDIA CUDA 11.1.0 | 1.7.0a0+7036e91 | TensorRT 7.2.1 |
| 20.09 | 18.04 | NVIDIA CUDA 11.0.3 | 1.7.0a0+8deb4fe | TensorRT 7.1.3 |
| 20.08 | 18.04 | NVIDIA CUDA 11.0.3 | 1.7.0a0+6392713 | TensorRT 7.1.3 |
| 20.07 | 18.04 | NVIDIA CUDA 11.0.194 | 1.6.0a0+9907a3e | TensorRT 7.1.3 |
| 20.06 | 18.04 | NVIDIA CUDA 11.0.167 | 1.6.0a0+9907a3e | TensorRT 7.1.2 |
| 20.03 | 18.04 | NVIDIA CUDA 10.2.89 | 1.5.0a0+8f84ded | TensorRT 7.0.0 |
| 20.02 | 18.04 | NVIDIA CUDA 10.2.89 | 1.5.0a0+3bbb36e | TensorRT 7.0.0 |
| 20.01 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+a5b4d78 | TensorRT 7.0.0 |
| 19.12 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+174e1ba | TensorRT 6.0.1 |
| 19.11 | 18.04 | NVIDIA CUDA 10.2.89 | 1.4.0a0+174e1ba | TensorRT 6.0.1 |
| 19.10 | 18.04 | NVIDIA CUDA 10.1.243 | 1.3.0a0+24ae9b5 | TensorRT 6.0.1 |
| 19.09 | 18.04 | NVIDIA CUDA 10.1.243 | 1.2.0 | TensorRT 6.0.1 |
| 19.08 | 18.04 | NVIDIA CUDA 10.1.243 | 1.2.0a0 including upstream commits up through commit 9130ab38 from July 31, 2019 as well as a cherry-picked | TensorRT 5.1.5 |

Automatic Mixed Precision (AMP)

Automatic Mixed Precision (AMP) for PyTorch is available in this container through the native torch.cuda.amp implementation and a preinstalled release of Apex. AMP enables users to try mixed precision training by adding only three lines of Python to an existing FP32 (default) script. AMP selects an optimal set of operations to cast to FP16. FP16 operations need half the memory bandwidth of FP32 (yielding up to a 2X speedup for bandwidth-bound operations like most pointwise ops) and half the memory for storing intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and convolutions with FP16 inputs can run on Tensor Cores, which provide an 8X increase in computational throughput over FP32 arithmetic.

APEX AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative and offers a number of advantages over APEX AMP.
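
As a minimal sketch of the native workflow, the additions to a standard FP32 training loop amount to creating a gradient scaler, autocasting the forward pass, and scaling the backward pass (the model, data, and settings below are illustrative placeholders):

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()              # addition 1: gradient scaler

for step in range(10):
    inputs = torch.randn(32, 1024, device="cuda")
    targets = torch.randn(32, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # addition 2: autocast the forward pass
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()                 # addition 3: scale the loss,
    scaler.step(optimizer)                        # step through the scaler,
    scaler.update()                               # and update the scale factor
```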

  • Guidance and examples demonstrating torch.cuda.amp can be found here.
  • APEX AMP examples can be found here.

For more information about AMP, see the Training With Mixed Precision Guide.

Tensor Core Examples

The Tensor Core examples provided in GitHub and NGC focus on achieving the best performance and convergence from NVIDIA Volta™ Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores on NVIDIA Volta and NVIDIA Turing™, so you can get results much faster than training without Tensor Cores. Each model is tested against every NGC monthly container release to ensure consistent accuracy and performance over time.

Known Issues

  • A performance regression of up to 18% on Volta and NVIDIA Ampere architecture GPUs for ResNet-like model inference use cases.
  • A performance regression of up to 35% on Volta and NVIDIA Ampere architecture GPUs for Tacotron2+WaveGlow inference use cases.
  • The default `antialiasing` argument for resizing operations in DALI 1.16.0 was changed to `True`, which can cause performance regressions in CPU-limited use cases. Set this argument to `False` to restore the previous behavior; see the sketch below.
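
A minimal DALI sketch of that workaround, assuming the flag in question is the `antialias` argument of `fn.resize` (the data path is a placeholder):

```python
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn

# "/data/images" is a placeholder directory of JPEG files.
@pipeline_def(batch_size=16, num_threads=4, device_id=0)
def resize_pipe():
    jpegs, labels = fn.readers.file(file_root="/data/images")
    images = fn.decoders.image(jpegs, device="mixed")
    # antialias=False restores the pre-1.16 resizing behavior.
    images = fn.resize(images, resize_x=224, resize_y=224, antialias=False)
    return images, labels

pipe = resize_pipe()
pipe.build()
images, labels = pipe.run()
```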