Release 18.11
The container image for NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, release 18.11, is available.
Contents of the Optimized Deep Learning Framework container
This container image contains the complete source of this version of the NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, in /opt/mxnet. It is pre-built and installed into the Python path.
The container also includes the following:
- Ubuntu 16.04 including Python 3.5
- NVIDIA CUDA 10.0.130 including CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS) 10.0.130
- NVIDIA CUDA® Deep Neural Network library™ (cuDNN) 7.4.1
- NCCL 2.3.7 (optimized for NVLink™)
- ONNX exporter 0.1 for CNN classification models
Note:
The ONNX exporter is being continuously improved. You can try the latest changes by pulling from the main branch (a minimal export sketch follows this list).
- Amazon Labs Sockeye sequence-to-sequence framework 1.18.28 (for machine translation)
- OpenMPI 3.1.2
- Horovod 0.15.1
- TensorRT 5.0.2
- DALI 0.4.1 Beta
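To illustrate the ONNX exporter listed above, here is a minimal sketch that exports a trained CNN classifier using the upstream mxnet.contrib.onnx API; the bundled exporter's interface may differ, and the checkpoint file names, input shape, and output path are placeholders.

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Placeholder checkpoint files produced by save_checkpoint (symbol JSON + params).
sym_file = "resnet-50-symbol.json"
params_file = "resnet-50-0000.params"

# Export to ONNX; the (batch, channels, height, width) input shape is an example.
onnx_path = onnx_mxnet.export_model(
    sym_file,
    params_file,
    input_shape=[(1, 3, 224, 224)],
    input_type=np.float32,
    onnx_file_path="resnet-50.onnx",
)
print("Wrote", onnx_path)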
Driver Requirements
Release 18.11 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This Optimized Deep Learning Framework release includes the following key features and enhancements.
- The NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, container image version 18.11 is based on Apache MXNet 1.3.0, with all upstream changes from the Apache MXNet main branch up to and including PR 12537.
- Added the fused BatchNormAddRelu operator to the Apache MXNet Symbol package (accessible via mx.sym.BatchNormAddRelu), which performs a BatchNorm operation on the data, sums the result with a second tensor, and applies a ReLU activation to the result of the sum. It is currently limited to the FP16 data type and the NHWC data layout (see the sketch after this list).
- Added the MXNET_EXEC_ENABLE_ADDTO environment variable, which, when set to 1, increases performance for some networks.
- Increased performance of the BatchNorm and BatchNorm+ReLU operators in FP16 and the NHWC data format.
- Added support for multi-node training via Horovod integration. Currently you can use it by specifying the horovod type of KVStore (see the sketch after this list).
- Added the MXNET_UPDATE_ON_KVSTORE environment variable, which controls whether parameters are updated using the KVStore (the default is 1 for the device KVStore and 0 for the horovod KVStore).
- Added aggregation of SGD updates, which increases performance when update on KVStore is disabled.
- Increased performance when training with small batch sizes.
- Fixed a bug that prevented matrix multiplications from overlapping with other computation, which increases performance for some networks.
- Fixed an issue that prevented the score function from respecting non-full batches of data.
- Added resnet-v1b as a possible network in the train_imagenet_runner script.
- Latest version of NCCL 2.3.7.
- Latest version of NVIDIA cuDNN 7.4.1.
- Latest version of TensorRT 5.0.2.
- Latest version of DALI 0.4.1 Beta.
- Ubuntu 16.04 with October 2018 updates.
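A minimal symbolic sketch of the fused BatchNormAddRelu operator described in the list above. mx.sym.BatchNormAddRelu comes from the notes; the parameter names used here (data, addend, eps, momentum) are assumptions, since the exact signature is not documented in this section.

import mxnet as mx

# The fused op currently requires FP16 data in NHWC layout, e.g. (N, H, W, C).
data = mx.sym.Variable("data")      # main branch, float16, NHWC
addend = mx.sym.Variable("addend")  # tensor summed with the BatchNorm output before the ReLU

# BatchNorm -> add -> ReLU in a single fused operator; parameter names are assumptions.
out = mx.sym.BatchNormAddRelu(data=data, addend=addend, eps=1e-5, momentum=0.9)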
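And a minimal multi-node sketch combining the horovod KVStore type and the environment variables mentioned above. It assumes the bundled Horovod build exposes the horovod.mxnet module with init() and local_rank(); the variable settings are illustrative (MXNET_UPDATE_ON_KVSTORE already defaults to 0 for the horovod KVStore).

import os

# Optional toggles from the notes above; set before MXNet is imported.
os.environ.setdefault("MXNET_EXEC_ENABLE_ADDTO", "1")   # may increase performance for some networks
os.environ.setdefault("MXNET_UPDATE_ON_KVSTORE", "0")   # already the default for the horovod KVStore

import mxnet as mx
import horovod.mxnet as hvd  # assumes the container's Horovod provides MXNet bindings

hvd.init()
ctx = mx.gpu(hvd.local_rank())   # pin each process to its local GPU

# Request the Horovod-backed KVStore named in the release notes and pass it to
# training, e.g. Module.fit(..., kvstore=kv).
kv = mx.kv.create("horovod")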
Known Issues
- The Apache MXNet KVStore GPU peer-to-peer communication tree discovery, as of release 18.09, is not compatible with DGX-1V. Only users that set the environment variable MXNET_KVSTORE_USETREE=1 will experience issues, which will be resolved in a subsequent release.
- Apache MXNet ResNet50 regresses in FP32 performance. This issue should be fixed in a later release.