The container image for NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, release 19.07, is available on NGC.
Contents of the Optimized Deep Learning Framework container
This container image contains the complete source of the version of NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, in /opt/mxnet. It is pre-built and installed to the Python path.
The container also includes the following:
- Ubuntu 18.04 including Python 3.6
- NVIDIA CUDA 10.1.168 including cuBLAS 10.2.0.168
- NVIDIA cuDNN 7.6.1
- NVIDIA NCCL 2.4.7 (optimized for NVLink™)
- ONNX exporter 0.1 for CNN classification models
The ONNX exporter is being continuously improved. You can try the latest changes by pulling from the main branch.
- Amazon Labs Sockeye sequence-to-sequence framework 1.18.99 (for machine translation)
- MLNX_OFED +3.4
- OpenMPI 3.1.3
- Horovod 0.16.2
- TensorRT 5.1.5
- DALI 0.11.0 Beta
- Tensor Core optimized example:
  - ResNet50 v1.5
- Jupyter and JupyterLab:
  - Jupyter Client 5.3.1
  - Jupyter Core 4.5.0
  - JupyterLab 1.0.0
  - JupyterLab Server 1.0.0
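Because the framework is pre-built and installed to the Python path, you can sanity-check the installation from inside the container. A minimal sketch (the version string should reflect the 1.5.0 pre-release noted under Key Features and Enhancements):

```python
# Quick sanity check inside the container: the framework is already
# on the Python path, so it can be imported directly.
import mxnet as mx

print(mx.__version__)          # expect a 1.5.0 pre-release build
print(mx.context.num_gpus())   # number of GPUs visible to the container
```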
Release 19.07 is based on NVIDIA CUDA 10.1.168, which requires NVIDIA Driver release 418.67. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384.111+ or 410. The CUDA driver's compatibility package only supports particular drivers. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
Release 19.07 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, and Turing families. For a list of the GPUs that these compute capabilities correspond to, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
Key Features and Enhancements
- NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet container image version 19.07 is based on Apache MXNet 1.5.0.rc2 and includes upstream commits up through commit 75a9e187d from June 27, 2019.
- NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet container image version 19.07 now uses OpenBLAS instead of Atlas.
- Pointwise operator fusion was introduced to improve the performance of training and inference. It is controlled by the MXNET_USE_FUSION environment variable (on by default); see the sketch after this list.
- Latest version of NVIDIA cuDNN 7.6.1
- Latest version of MLNX_OFED +3.4
- Latest versions of Jupyter Client 5.3.1, Jupyter Core 4.5.0, JupyterLab 1.0.0 and JupyterLab Server 1.0.0, including Jupyter-TensorBoard integration.
- Latest version of DALI 0.11.0 Beta
- Latest version of Amazon Labs Sockeye sequence-to-sequence framework 1.18.99
- Latest version of Ubuntu 18.04
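As referenced in the pointwise fusion item above, here is a minimal sketch of toggling the feature, assuming (as is typical for MXNET_* variables) that MXNET_USE_FUSION is read from the process environment and that the fusion pass applies to compiled (hybridized or symbolic) graphs:

```python
# Sketch: pointwise operator fusion is controlled by an environment
# variable, so set it before the framework executes any graphs.
import os
os.environ["MXNET_USE_FUSION"] = "1"   # "1" (default) enables fusion, "0" disables it

import mxnet as mx
from mxnet import gluon

net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(128, activation="relu"),
        gluon.nn.Dense(10))
net.initialize(ctx=mx.gpu(0))
net.hybridize()  # compiles a graph in which adjacent pointwise ops can be fused

out = net(mx.nd.random.uniform(shape=(32, 64), ctx=mx.gpu(0)))
print(out.shape)
```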
Tensor Core Examples
These examples focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks for training. Each example model trains with mixed precision on Volta Tensor Cores, so you can get results much faster than training without Tensor Cores. Each model is tested against every NGC monthly container release to ensure consistent accuracy and performance over time. This container includes the following Tensor Core example.
- ResNet50 v1.5 model. The ResNet50 v1.5 model is a slightly modified version of the original ResNet50 v1 model that trains to a greater accuracy.
Automatic Mixed Precision (AMP)
Training deep learning networks is a very computationally intensive task. Novel model architectures tend to have an increasing number of layers and parameters, which slows down training. Fortunately, new generations of training hardware as well as software optimizations make training these new models a feasible task.
Most of the hardware and software training optimization opportunities involve exploiting lower precision like FP16 in order to utilize the Tensor Cores available on new Volta and Turing GPUs. While training in FP16 showed great success in image classification tasks, other more complicated neural networks typically stayed in FP32 due to difficulties in applying the FP16 training guidelines that are needed to ensure proper model training.
That is where AMP (Automatic Mixed Precision) comes into play: it automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping operations that are unsafe to do in FP16 in full FP32 precision.
The NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet AMP tutorial, located in /opt/mxnet/nvidia-examples/AMP/AMP_tutorial.md inside this container, shows how to get started with mixed precision training using AMP for Apache MXNet, using the SSD network from GluonCV as an example.
For more information about AMP, see the Training With Mixed Precision Guide.
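The tutorial walks through the full SSD example; as a smaller sketch of the workflow, the following uses the mxnet.contrib.amp API with a model-zoo network (the network choice and hyperparameters here are illustrative, not taken from the tutorial):

```python
# Minimal AMP sketch with Gluon: initialize AMP before building the
# model, wrap the trainer for dynamic loss scaling, and scale the loss
# before backward so small FP16 gradients do not underflow.
import mxnet as mx
from mxnet import autograd, gluon
from mxnet.contrib import amp

amp.init()  # patch the framework so safe ops run in FP16

ctx = mx.gpu(0)
net = gluon.model_zoo.vision.resnet50_v1()
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})
amp.init_trainer(trainer)  # enable dynamic loss scaling

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
data = mx.nd.random.uniform(shape=(8, 3, 224, 224), ctx=ctx)
label = mx.nd.zeros((8,), ctx=ctx)

with autograd.record():
    loss = loss_fn(net(data), label)
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
trainer.step(data.shape[0])
```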
Known Issues
- The Apache MXNet KVStore GPU peer-to-peer communication tree discovery, as of release 18.09, is not compatible with DGX-1V. Only users that set the environment variable MXNET_KVSTORE_USETREE=1 will experience issues, which will be resolved in a subsequent release. Issue tracked under 13341.
- The default setting of the environment variable MXNET_GPU_COPY_NTHREADS=1 in the container may not be optimal for all networks. Networks with a high ratio of parameters to computation, like AlexNet, may achieve greater multi-GPU training speeds with the setting MXNET_GPU_COPY_NTHREADS=2. Users are encouraged to try this setting for their own use case; see the sketch after this list.
- There is a known issue with the pointwise fusion when calculating the gradient of the erf function. When training a network containing the erf operator, set the MXNET_USE_FUSION environment variable to 0 to disable pointwise fusion, as shown in the sketch after this list.
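As referenced in the two items above, a minimal sketch of applying both environment-variable workarounds, assuming the MXNET_* variables are read from the process environment before any graphs execute:

```python
# Sketch: apply the known-issue workarounds before importing the framework.
import os

# Work around the erf-gradient fusion issue by disabling pointwise fusion.
os.environ["MXNET_USE_FUSION"] = "0"

# Optional tuning for networks with a high parameter-to-computation
# ratio (such as AlexNet) during multi-GPU training.
os.environ["MXNET_GPU_COPY_NTHREADS"] = "2"

import mxnet as mx
# ... train as usual ...
```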