DALI Release 0.1.1 Beta
Key Features and Enhancements
This DALI release includes the following key features and enhancements.
- Performance
- On dense GPU systems, deep learning applications can be significantly bottlenecked by the CPU, limiting the overall performance and scalability of training and inference tasks. DALI enables offloading key augmentation steps onto the GPU, alleviating the CPU bottleneck in deep learning preprocessing pipelines. The result is better out-of-the-box performance for the overall training workflow and more efficient utilization of multi-GPU resources on the system (see the pipeline sketch after this list).
- Drop-in Integration
- DALI comes with built-in plugins for key frameworks such as MXNet, TensorFlow, and PyTorch. These plugins integrate DALI directly with each framework so that researchers and developers can get up and running with DALI quickly and easily (see the plugin sketch after this list).
- Flexibility
- DALI supports multiple input data formats that are commonly used in computer vision deep learning applications, for example, JPEG images, raw formats, Lightning Memory-Mapped Database (LMDB), RecordIO, and TFRecord. This flexibility in input formats makes training workflows portable across different frameworks and models, and helps avoid intermediate data conversion steps. With its optimized building blocks and support for different data formats, DALI enables better code reuse and maintainability (see the reader sketch after this list).
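The following is a minimal sketch of a pipeline that offloads JPEG decoding and normalization onto the GPU, written against the class-based API used in early DALI releases. The operator names and arguments (FileReader, nvJPEGDecoder, CropMirrorNormalize) and the /data/images path are illustrative assumptions; check them against the DALI Developer Guide for this release.

```python
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types

class TrainPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(TrainPipeline, self).__init__(batch_size, num_threads, device_id)
        # Reads (image, label) pairs from a directory tree (placeholder path)
        self.reader = ops.FileReader(file_root="/data/images", random_shuffle=True)
        # "mixed" decoding starts on the CPU and places decoded images in GPU memory
        self.decode = ops.nvJPEGDecoder(device="mixed", output_type=types.RGB)
        # Fixed-size crop and normalization run entirely on the GPU
        self.cmn = ops.CropMirrorNormalize(device="gpu",
                                           output_dtype=types.FLOAT,
                                           crop=(224, 224),
                                           image_type=types.RGB,
                                           mean=[128., 128., 128.],
                                           std=[64., 64., 64.])

    def define_graph(self):
        jpegs, labels = self.reader()
        images = self.decode(jpegs)
        images = self.cmn(images)
        return images, labels

pipe = TrainPipeline(batch_size=32, num_threads=2, device_id=0)
pipe.build()                 # construct the execution graph
images, labels = pipe.run()  # run one iteration; outputs are DALI TensorLists
```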
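As an illustration of the framework plugins, the sketch below feeds the pipeline above into PyTorch through DALIGenericIterator, as it appears in early DALI examples; the iterator arguments and the epoch size used here are assumptions to verify against the plugin documentation.

```python
from nvidia.dali.plugin.pytorch import DALIGenericIterator

# Reusing the TrainPipeline sketched above
pipe = TrainPipeline(batch_size=32, num_threads=2, device_id=0)
pipe.build()

# output_map names the two pipeline outputs; size is the number of
# samples per epoch (a placeholder value here)
train_loader = DALIGenericIterator([pipe], ["data", "label"], size=10000)

for batch in train_loader:
    data = batch[0]["data"]    # batch of images as a GPU torch.Tensor
    label = batch[0]["label"]  # matching labels
    # ...training step goes here...
```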
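The reader sketch below shows how the supported storage formats map onto interchangeable reader operators; the reader names, their arguments, the placeholder paths, and the TFRecord feature schema follow early DALI documentation and should be confirmed for this release.

```python
import nvidia.dali.ops as ops
import nvidia.dali.tfrecord as tfrec

# Raw image files (for example, JPEG) in a directory tree
file_reader = ops.FileReader(file_root="/data/images", random_shuffle=True)

# MXNet RecordIO files with their index
recordio_reader = ops.MXNetReader(path=["/data/train.rec"],
                                  index_path=["/data/train.idx"])

# Caffe-style LMDB database
lmdb_reader = ops.CaffeReader(path="/data/lmdb")

# TFRecord files, with a feature schema describing the stored fields
tfrecord_reader = ops.TFRecordReader(
    path=["/data/train.tfrecord"],
    index_path=["/data/train.tfrecord.idx"],
    features={
        "image/encoded": tfrec.FixedLenFeature((), tfrec.string, ""),
        "image/class/label": tfrec.FixedLenFeature([1], tfrec.int64, -1),
    })
```

Because each reader returns tensors that the rest of the pipeline consumes the same way, swapping storage formats only requires replacing the reader operator, not the downstream processing steps.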
Using DALI 0.1.1 Beta
Ensure you are familiar with the following notes when using this release.
- To install DALI, see the DALI Quick Start Guide.
Note: If you are using the 18.07 NGC optimized container for MXNet, PyTorch, or TensorFlow, you do not need to install DALI separately; it now comes included in the container. Instead, start with the Getting Started Tutorial.
- To interact with the code via GitHub, see the Getting Started Tutorial.
- To learn how to define, build, and run a DALI pipeline, see the DALI Developer Guide.