The container image for NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, release 18.11, is available.
Contents of the Optimized Deep Learning Framework container
This container image contains the complete source of the version of NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, in /opt/mxnet. It is pre-built and installed to the Python path.
The container also includes the following:
- Ubuntu 16.04 including Python 3.5
- NVIDIA CUDA 10.0.130 including CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS) 10.0.130
- NVIDIA CUDA® Deep Neural Network library™ (cuDNN) 7.4.1
- NCCL 2.3.7 (optimized for NVLink™ )
- ONNX exporter 0.1 for CNN classification models
Note: The ONNX exporter is being continuously improved. You can try the latest changes by pulling from the main branch.
- Amazon Labs Sockeye sequence-to-sequence framework 1.18.28 (for machine translation)
- OpenMPI 3.1.2
- Horovod 0.15.1
- TensorRT 5.0.2
- DALI 0.4.1 Beta
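To use the container described above, it can be pulled from the NGC registry. The exact tag below follows the usual NGC naming scheme (`<release>-py<version>`) and is an assumption; confirm the available tags on ngc.nvidia.com before pulling.

```shell
# Assumed NGC tag for this release; verify on ngc.nvidia.com
TAG="nvcr.io/nvidia/mxnet:18.11-py3"

docker pull "$TAG"

# GPU access requires nvidia-docker (or docker configured with the NVIDIA runtime)
# nvidia-docker run -it --rm "$TAG"
```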
Driver Requirements
Release 18.11 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This Optimized Deep Learning Framework release includes the following key features and enhancements.
- NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet container image version 18.11 is based on Apache MXNet 1.3.0, with all upstream changes from the Apache MXNet main branch up to and including PR 12537.
- Added a fused `BatchNormAddRelu` operator to the Apache MXNet Symbol package (accessible via `mx.sym.BatchNormAddRelu`), which performs the BatchNorm operation on data, sums the result with a tensor, and applies ReLU activation to the result of the sum. Currently it is limited to the FP16 data type and the `NHWC` data layout.
- Added the `MXNET_EXEC_ENABLE_ADDTO` environment variable, which, when set to `1`, increases performance for some networks.
- Increased performance of the `BatchNorm` and `BatchNorm+ReLU` operators in FP16 and the `NHWC` data format.
- Added support for multi-node training via Horovod integration. Currently you can use it by specifying the `horovod` type of KVStore.
- Added the `MXNET_UPDATE_ON_KVSTORE` environment variable, which controls whether to update parameters using KVStore (default is `1` for KVStore `device` and `0` for KVStore `horovod`).
- Added aggregation of SGD updates, which increases performance when the update on KVStore is disabled.
- Increased performance when training with small batch sizes.
- Fixed a bug that prevented matrix multiplications from overlapping with other computation, which increases performance for some networks.
- Fixed an issue that prevented the score function from respecting partial (not-full) batches of data.
- Added `resnet-v1b` as a possible network in the `train_imagenet_runner` script.
- Latest version of NCCL 2.3.7.
- Latest version of NVIDIA cuDNN 7.4.1.
- Latest version of TensorRT 5.0.2.
- Latest version of DALI 0.4.1 Beta.
- Ubuntu 16.04 with October 2018 updates.
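The two new environment variables above can be set from Python before the framework is imported. This is a minimal sketch: the variable names come from these release notes, while the assumption (noted in the comments) is that they must be set before `import mxnet` so the engine picks them up.

```python
import os

# Assumption: these must be set before mxnet is imported, since the
# framework typically reads such variables at engine start-up.
os.environ["MXNET_EXEC_ENABLE_ADDTO"] = "1"   # may increase performance for some networks
os.environ["MXNET_UPDATE_ON_KVSTORE"] = "0"   # disable KVStore updates to enable SGD update aggregation

# Inside the 18.11 container, training would then proceed normally, e.g.:
# import mxnet as mx
# ...create a Module/Trainer with kvstore='horovod' for multi-node training
```

Setting `MXNET_UPDATE_ON_KVSTORE=0` is what allows the aggregated SGD update path mentioned above to take effect.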
Known Issues
- The Apache MXNet KVStore GPU peer-to-peer communication tree discovery, as of release 18.09, is not compatible with DGX-1V. Only users that set the environment variable `MXNET_KVSTORE_USETREE=1` will experience issues, which will be resolved in a subsequent release.
- Apache MXNet ResNet50 regresses in FP32 performance. This issue should be fixed in a later release.
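Since the tree-discovery issue only affects users who opt in via the environment variable, a minimal workaround sketch on DGX-1V is simply to make sure the variable is not set in the training environment:

```shell
# Workaround sketch for DGX-1V: ensure tree discovery is not requested,
# so the default (non-tree) KVStore communication path is used.
unset MXNET_KVSTORE_USETREE
```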