NVIDIA Optimized Frameworks

PyTorch Release 21.12

The NVIDIA container image for PyTorch, release 21.12, is available on NGC.

Contents of the PyTorch container

This container image contains the complete source of the version of PyTorch in /opt/pytorch. It is prebuilt and installed in the default Conda environment (/opt/conda/lib/python3.8/site-packages/torch/) in the container image. The container also includes the following:
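As a quick sanity check, you can confirm the preinstalled build from inside the container. This is a minimal sketch; the expected values come from these release notes.

```python
# Confirm the preinstalled PyTorch build inside the 21.12 container.
import torch

print(torch.__version__)  # expected: 1.11.0a0+b6df043
print(torch.__file__)     # expected: under /opt/conda/lib/python3.8/site-packages/torch/
```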

Driver Requirements

Release 21.12 is based on NVIDIA CUDA 11.5.0, which requires NVIDIA Driver release 495 or later. However, if you are running on a Data Center GPU (for example, a T4 or any other Tesla board), you may use NVIDIA driver release 418.40 (or later R418), 440.33 (or later R440), 450.51 (or later R450), 460.27 (or later R460), or 470.57 (or later R470). The CUDA driver's compatibility package supports only specific drivers. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades and NVIDIA CUDA and Drivers Support.

GPU Requirements

Release 21.12 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, Turing, and NVIDIA Ampere GPU architecture families. For the list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
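A minimal runtime check using the standard torch.cuda API (a sketch; it assumes at least one visible CUDA device):

```python
# Verify that the visible GPU meets the minimum compute capability (6.0).
import torch

major, minor = torch.cuda.get_device_capability(0)
assert (major, minor) >= (6, 0), f"compute capability {major}.{minor} is below 6.0"
```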

Key Features and Enhancements

This PyTorch release includes the following key features and enhancements.

  • PyTorch container image version 21.12 is based on 1.11.0a0+b6df043.
  • The 21.12 container ships with a preview of the cuDNN v8 API, which can be enabled via `export CUDNN_V8_API_ENABLED=1`. To use the new neural-network-based heuristics, set `export USE_HEURISTIC_MODE_B=1` in addition to `export CUDNN_V8_API_ENABLED=1` (see the sketch after this list). For more information about this heuristic mode, refer to the cuDNN API documentation (https://docs.nvidia.com/deeplearning/cudnn/api/index.html).
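For reference, the same flags can also be set from Python. This is a hedged sketch: it assumes the variables are read when torch initializes, so they are set before the import.

```python
# Opt in to the cuDNN v8 API preview before importing torch (assumed to be
# read at library initialization).
import os

os.environ["CUDNN_V8_API_ENABLED"] = "1"  # enable the cuDNN v8 API preview
os.environ["USE_HEURISTIC_MODE_B"] = "1"  # also enable the NN-based heuristics

import torch  # import after the environment is configured
```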

Announcements

  • DLProf v1.8, which is included in the 21.12 container, will be the last release of DLProf. Starting with the 22.01 container, DLProf will no longer be included. It can still be manually installed via a pip wheel from the nvidia-pyindex repository.
  • A preview of Torch-TensorRT (1.1.0a0) is now included. Torch-TensorRT is the TensorRT integration for PyTorch, bringing the capabilities of TensorRT directly to Torch through one-line Python and C++ APIs (see the sketch after this list).
  • Starting with the 21.10 release, a beta version of the PyTorch container is available for the ARM SBSA platform.
  • Deep learning framework containers 19.11 and later include experimental support for Singularity v3.0.
  • Starting in 21.06, PyProf is no longer included in the NVIDIA PyTorch container. To profile models in PyTorch, please use the NVIDIA Deep Learning Profiler (DLProf). DLProf can help data scientists, engineers, and researchers understand and improve the performance of their models, either with visualization via the DLProf Viewer in the web browser or by analyzing text reports. DLProf is available on NGC and as a Python pip wheel.
  • The Tensor Core example models are no longer provided in the core PyTorch container (they previously shipped in /workspace/nvidia-examples). Instead, they can be obtained from GitHub or the NVIDIA GPU Cloud (NGC). Some Python packages, included in previous containers to support these example models, have also been removed. Depending on their specific use cases, users may need to install some packages that were previously preinstalled.
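As a rough illustration of the one-line compile path, here is a sketch using the public torch_tensorrt.compile API; the ResNet-18 model and input shape are arbitrary placeholders.

```python
# Compile a TorchScript/TensorRT-accelerated module in one call.
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet18().eval().cuda()  # placeholder model
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],  # example input shape
    enabled_precisions={torch.half},                  # allow TensorRT to use FP16
)

out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
```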

NVIDIA PyTorch Container Versions

The following table shows what versions of Ubuntu, CUDA, PyTorch, and TensorRT are supported in each of the NVIDIA containers for PyTorch. For older container versions, refer to the Frameworks Support Matrix.

Automatic Mixed Precision (AMP)

Automatic Mixed Precision (AMP) for PyTorch is available in this container through the native implementation as well as a preinstalled release of Apex. AMP enables users to try mixed precision training by adding only three lines of Python to an existing FP32 (default) script. AMP will choose an optimal set of operations to cast to FP16. FP16 operations require half the memory bandwidth of FP32 (resulting in up to a 2x speedup for bandwidth-bound operations like most pointwise ops) and half the memory storage for intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and convolutions with FP16 inputs can run on Tensor Cores, which provide an 8x increase in computational throughput over FP32 arithmetic.

Apex AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative, and offers a number of advantages over Apex AMP.
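A minimal sketch of a native torch.cuda.amp training step; the tiny model and random batch are placeholders for illustration.

```python
# One training step with native AMP (torch.cuda.amp).
import torch

model = torch.nn.Linear(128, 10).cuda()              # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(32, 128, device="cuda")          # placeholder batch
targets = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():                       # run eligible ops in FP16
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()                         # scale to avoid FP16 gradient underflow
scaler.step(optimizer)
scaler.update()
```

The AMP-specific additions are the autocast context, the scaled backward pass, and the scaler step/update, which matches the "three lines of Python" described above.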

Guidance and examples demonstrating torch.cuda.amp can be found here. Apex AMP examples can be found here.

For more information about AMP, see the Training With Mixed Precision Guide.

Tensor Core Examples

The Tensor Core examples provided in GitHub and NVIDIA GPU Cloud (NGC) focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision on Volta and Turing Tensor Cores, so you can get results much faster than training without Tensor Cores. These models are tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

Known Issues

  • The version of OpenUCX included with PyTorch container image version 21.11 has known issues with RAPIDS UCX-Py. When using Dask with this container version, pass protocol="tcp" to LocalCUDACluster(), not protocol="ucx", to work around these issues (see the sketch after this list). Additionally, the UCX-specific LocalCUDACluster options must be left unspecified; they are: enable_tcp_over_ucx, enable_nvlink, enable_infiniband, enable_rdmacm, and ucx_net_devices.
  • ARM
    • Passing external CUDA Streams to PyTorch via `torch.cuda.streams.ExternalStream(stream_v)` might fail and is being debugged.
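A workaround sketch for the Dask issue above, forcing TCP and leaving every UCX-specific option unset:

```python
# Create a Dask cluster over TCP instead of UCX in this container.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster(protocol="tcp")  # do not pass protocol="ucx"
client = Client(cluster)
```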