NVIDIA Optimized Frameworks

PyTorch Release 20.07

The NVIDIA container image for PyTorch, release 20.07, is available on NGC.

Contents of the PyTorch container

This container image contains the complete source of the version of PyTorch in /opt/pytorch. It is prebuilt and installed in the default Conda environment (/opt/conda/lib/python3.6/site-packages/torch/) in the container image. The container also includes the following:
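
A quick way to confirm which build the container's default Python environment picks up is a minimal sketch like the one below; the exact version strings it prints depend on the 20.07 image.

    import torch

    # Report the preinstalled PyTorch build and the CUDA/cuDNN versions it was built against.
    print("PyTorch version:", torch.__version__)
    print("CUDA version:", torch.version.cuda)
    print("cuDNN version:", torch.backends.cudnn.version())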

Driver Requirements

Release 20.07 is based on NVIDIA CUDA 11.0.194, which requires NVIDIA Driver release 450 or later. However, if you are running on Tesla (for example, T4 or any other Tesla board), you may use NVIDIA driver release 418.xx or 440.30. The CUDA driver's compatibility package only supports particular drivers. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.

GPU Requirements

Release 20.07 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, and Turing families. For a list of the GPUs to which these compute capabilities correspond, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
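
For example, a minimal check that the visible GPU meets the compute capability 6.0 requirement, run from inside the container, might look like the following sketch:

    import torch

    # Release 20.07 requires CUDA compute capability 6.0 (Pascal) or higher.
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
        assert (major, minor) >= (6, 0), "This release requires compute capability 6.0 or higher."
    else:
        print("No CUDA device visible; check the NVIDIA driver installation.")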

Key Features and Enhancements

This PyTorch release includes the following key features and enhancements.

Announcements

  • Deep learning framework containers 19.11 and later include experimental support for Singularity v3.0.
  • Transformer has been removed.

NVIDIA PyTorch Container Versions

The following table shows what versions of Ubuntu, CUDA, PyTorch, and TensorRT are supported in each of the NVIDIA containers for PyTorch. For older container versions, refer to the Frameworks Support Matrix.

Automatic Mixed Precision (AMP)

Automatic Mixed Precision (AMP) for PyTorch is available in this container through the native implementation as well as a preinstalled release of Apex. AMP enables users to try mixed precision training by adding only three lines of Python to an existing FP32 (default) script. AMP chooses an optimal set of operations to cast to FP16. FP16 operations need half the memory bandwidth (yielding up to a 2X speedup for bandwidth-bound operations such as most pointwise ops) and half the memory for storing intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and convolutions with FP16 inputs can run on Tensor Cores, which provide an 8X increase in computational throughput over FP32 arithmetic.

Apex AMP is included to support models that currently rely on it, but torch.cuda.amp is the future-proof alternative, and offers a number of advantages over Apex AMP.

Guidance and examples demonstrating torch.cuda.amp can be found here. Apex AMP examples can be found here.
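
As an illustration, a minimal training-loop sketch using the native torch.cuda.amp API is shown below. The model, optimizer, and data here are placeholders, not part of this container's example scripts.

    import torch

    # Placeholder model, optimizer, and data; substitute your own FP32 training setup.
    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()

    for step in range(10):
        inputs = torch.randn(64, 1024, device="cuda")
        targets = torch.randn(64, 1024, device="cuda")
        optimizer.zero_grad()
        # Run the forward pass under autocast so eligible ops execute in FP16 on Tensor Cores.
        with torch.cuda.amp.autocast():
            outputs = model(inputs)
            loss = torch.nn.functional.mse_loss(outputs, targets)
        # Scale the loss to avoid FP16 gradient underflow, then step through the scaler.
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()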

For more information about AMP, see the Training With Mixed Precision Guide.

Tensor Core Examples

The Tensor Core examples provided in GitHub and NVIDIA GPU Cloud (NGC) focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores on Volta and Turing, so you can get results much faster than training without Tensor Cores. These models are tested against each NGC monthly container release to ensure consistent accuracy and performance over time. This container includes the following Tensor Core examples.

Known Issues

  • There is up to a 5% performance drop on Transformer-XL mixed precision training in the 20.07 container compared to 19.11. Disabling the profiling executor at the beginning of your script might reduce this effect:

    torch._C._jit_set_profiling_executor(False)
    torch._C._jit_set_profiling_mode(False)

  • A workaround for the WaveGlow training regression from our past containers is to use a fake batch dimension when calculating the log determinant via torch.logdet(W.unsqueeze(0).float()).squeeze(), as is done in this release (see the sketch after this list).

  • Known Turing performance regressions in 20.07 vs. 20.03 container:
    • Up to 10% performance drop on InceptionV3 for mixed precision training
    • Up to 10% performance drop on NCF FP32 training
    • Up to 25% performance drop on MaskRCNN FP32 training
    • Up to 10% performance drop on ResNet50 for mixed precision training.
  • Known Volta performance regressions in 20.07 vs. 20.03 container:
    • Up to 30% performance drop on WaveGlow for FP32 training
    • Up to 11% performance drop on ResNet101 and ResNet152 mixed precision training
    • Up to 10% performance drop on full FP16 VGG16 training
  • Known Pascal performance regressions in 20.07 vs. 20.03 container:
    • Up to 19% performance drop on MaskRCNN for FP32 training
  • When the FFT Tiled algorithm is used with 3D convolutions, an intermittent silent failure might happen due to a dependency on the order of stream execution. In some cases this manifests as NaNs in the output; we recommend disabling cuDNN via torch.backends.cudnn.enabled = False (see the sketch after this list).
  • Channels-last memory format is experimental in the 20.07 container. Potential convergence issues for ResNet variants are being investigated. On NVIDIA Ampere architecture based GPUs, unexpected NaN values due to a race condition in a cuDNN kernel might be observed. We recommend using the default memory format if you run into these issues.
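
The sketch below illustrates, with placeholder names, two of the workarounds referenced in the list above: the fake batch dimension for torch.logdet from the WaveGlow item, and disabling cuDNN for the FFT Tiled 3D-convolution issue. W is a stand-in for the square weight matrix of an invertible 1x1 convolution, not code taken from the WaveGlow scripts.

    import torch

    # WaveGlow workaround: add a fake batch dimension before torch.logdet, then remove it.
    # W stands in for the square weight matrix of an invertible 1x1 convolution.
    W = torch.randn(8, 8, device="cuda")
    log_det_W = torch.logdet(W.unsqueeze(0).float()).squeeze()

    # FFT Tiled 3D-convolution workaround: disable cuDNN so PyTorch falls back to its
    # native convolution kernels for the affected 3D convolutions.
    torch.backends.cudnn.enabled = False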