Compiling DALI from source


Compiling DALI from source (bare metal)

Prerequisites

  • Linux x64

  • GCC 4.9.2 or later

  • Boost 1.66 or later (modules: preprocessor)

  • NVIDIA CUDA 9.0 (CUDA 8.0 compatibility is provided unofficially)

  • nvJPEG library (can be unofficially disabled; see below)

  • protobuf (supported version: 3.11.1)

  • CMake 3.11 or later

  • libjpeg-turbo 1.5.x or later (can be unofficially disabled; see below)

  • libtiff 4.0.x or later (can be unofficially disabled; see below; note that libtiff should be built with zlib support)

  • FFmpeg 4.2.1 or later (we recommend using version 4.2.1 compiled following the instructions below)

  • libsnd 1.0.28 or later (we recommend using version 1.0.28 compiled following the instructions below)

  • OpenCV 3 or later (supported version: 3.4)

  • (Optional) liblmdb 0.9.x or later

  • One or more of the following Deep Learning frameworks:
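Before configuring the build, it can save time to check the installed tools against the minimum versions above. A minimal sketch in POSIX shell; the `meets_min` helper is hypothetical (not part of DALI) and relies on GNU `sort -V` for version-aware ordering:

```shell
# Hypothetical helper: succeed if version $1 is at least version $2.
# Relies on GNU `sort -V` to order version strings.
meets_min() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the minimums listed above.
meets_min "$(cmake --version 2>/dev/null | head -n1 | awk '{print $3}')" 3.11 \
  || echo "CMake 3.11 or later is required"
meets_min "$(gcc -dumpversion 2>/dev/null)" 4.9.2 \
  || echo "GCC 4.9.2 or later is required"
```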

Note

TensorFlow installation is required to build the TensorFlow plugin for DALI.

Note

Items marked “unofficial” are community contributions that are believed to work but are not officially tested or maintained by NVIDIA.

Note

This software uses FFmpeg code licensed under the LGPLv2.1. Its source can be downloaded from here.

FFmpeg was compiled using the following command line:

./configure \
--prefix=/usr/local \
--disable-static \
--disable-all \
--disable-autodetect \
--disable-iconv \
--enable-shared \
--enable-avformat \
--enable-avcodec \
--enable-avfilter \
--enable-protocol=file \
--enable-demuxer=mov,matroska,avi \
--enable-bsf=h264_mp4toannexb,hevc_mp4toannexb,mpeg4_unpack_bframes  && \
make

Note

This software uses libsnd, licensed under the LGPLv2.1. Its source can be downloaded from here.

libsnd was compiled using the following command line:

./configure && make

Get the DALI source

git clone --recursive https://github.com/NVIDIA/dali
cd dali

Make the build directory

mkdir build
cd build

Compile DALI

Building DALI without LMDB support:

cmake ..
make -j"$(nproc)"

Building DALI with LMDB support:

cmake -DBUILD_LMDB=ON ..
make -j"$(nproc)"

Building DALI using Clang (experimental):

Note

This build is experimental. It is neither maintained nor tested. It is not guaranteed to work. We recommend using GCC for production builds.

cmake -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang  ..
make -j"$(nproc)"

Optional CMake build parameters:

  • BUILD_PYTHON - build Python bindings (default: ON)

  • BUILD_TEST - include building test suite (default: ON)

  • BUILD_BENCHMARK - include building benchmarks (default: ON)

  • BUILD_LMDB - build with support for LMDB (default: OFF)

  • BUILD_NVTX - build with NVTX profiling enabled (default: OFF)

  • BUILD_NVJPEG - build with nvJPEG support (default: ON)

  • BUILD_NVOF - build with NVIDIA OPTICAL FLOW SDK support (default: ON)

  • BUILD_NVDEC - build with NVIDIA NVDEC support (default: ON)

  • BUILD_LIBSND - build with libsnd support (default: ON)

  • BUILD_NVML - build with NVIDIA Management Library (NVML) support (default: ON)

  • BUILD_FFTS - build with ffts support (default: ON)

  • VERBOSE_LOGS - enable verbose logging in DALI (default: OFF)

  • WERROR - treat all build warnings as errors (default: OFF)

  • BUILD_WITH_ASAN - build with ASAN support (default: OFF). To run a binary built with ASAN, issue:

LD_LIBRARY_PATH=. ASAN_OPTIONS=symbolize=1:protect_shadow_gap=0 \
ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer) \
LD_PRELOAD=*PATH_TO_LIB_ASAN*/libasan.so.*X* *PATH_TO_BINARY*

Where *X* depends on the compiler version in use; for example, GCC 7.x uses 4. Tested with GCC 7.4, CUDA 10.0, and libasan.4. Earlier versions may not work.

  • DALI_BUILD_FLAVOR - allows specifying a custom name suffix (e.g. ‘nightly’) for the nvidia-dali wheel package

  • (Unofficial) BUILD_JPEG_TURBO - build with libjpeg-turbo (default: ON)

  • (Unofficial) BUILD_LIBTIFF - build with libtiff (default: ON)
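As a concrete illustration of the switches above, a trimmed-down build that turns off the unofficially disableable image codecs from the prerequisites might be configured as follows. This is a sketch, not an officially tested combination:

```shell
# Sketch: configure a reduced DALI build with the unofficial codec
# switches off. Run from the build directory; not officially tested.
cmake -DBUILD_NVJPEG=OFF \
      -DBUILD_JPEG_TURBO=OFF \
      -DBUILD_LIBTIFF=OFF \
      -DBUILD_BENCHMARK=OFF \
      ..
make -j"$(nproc)"
```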

Note

DALI release packages are built with the options listed above set to ON and NVTX turned OFF. Testing is done with the same configuration. We ensure that DALI compiles with all of those options turned OFF, but there may exist cross-dependencies between some of those features.

The following CMake parameters can be helpful in setting the right paths:

  • FFMPEG_ROOT_DIR - path to installed FFmpeg

  • NVJPEG_ROOT_DIR - where nvJPEG can be found (starting with CUDA 10.0 it is shipped with the CUDA toolkit, so this option is not needed there)

  • libjpeg-turbo options can be obtained from the libjpeg-turbo CMake docs page

  • protobuf options can be obtained from protobuf CMake docs page
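Putting the path hints together, a configure line for dependencies installed under custom prefixes might look like this (the /opt paths are placeholders, not defaults):

```shell
# Sketch: point CMake at custom dependency locations (placeholder paths).
cmake -DBUILD_LMDB=ON \
      -DFFMPEG_ROOT_DIR=/opt/ffmpeg \
      -DNVJPEG_ROOT_DIR=/opt/nvjpeg \
      ..
```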

Install Python bindings

pip install dali/python

Cross-compiling DALI C++ API for aarch64 Linux (Docker)

Note

Support for the aarch64 Linux platform is experimental. Some features are available only on the x86-64 target and are turned off in this build. There is no support for the DALI Python library on aarch64 yet. Some operators may not work as intended due to x86-64-specific implementations.

Build the aarch64 Linux Build Container

docker build -t nvidia/dali:builder_aarch64-linux -f docker/Dockerfile.build.aarch64-linux .

Compile

From the root of the DALI source tree

docker run -v $(pwd):/dali nvidia/dali:builder_aarch64-linux

The relevant artifacts will be in build/install and build/dali/python/nvidia/dali.

Cross-compiling DALI C++ API for aarch64 QNX (Docker)

Note

Support for the aarch64 QNX platform is experimental. Some features are available only on the x86-64 target and are turned off in this build. There is no support for the DALI Python library on aarch64 yet. Some operators may not work as intended due to x86-64-specific implementations.

Setup

After acquiring the QNX Toolchain, place it in a directory called qnx in the root of the DALI tree. Then, using the SDK Manager for NVIDIA DRIVE, select QNX as the Target Operating System and select the DRIVE OS 5.1.0.0 SDK.

In STEP 02, under Download & Install Options, select Download Now. Install Later. and agree to the Terms and Conditions. Once downloaded, move the cuda-repo-cross-qnx Debian package into the qnx directory you created in the DALI tree.

Build the aarch64 Build Container

docker build -t nvidia/dali:tools_aarch64-qnx -f docker/Dockerfile.cuda_qnx.deps .
docker build -t nvidia/dali:builder_aarch64-qnx --build-arg "QNX_CUDA_TOOL_IMAGE_NAME=nvidia/dali:tools_aarch64-qnx" -f docker/Dockerfile.build.aarch64-qnx .

Compile

From the root of the DALI source tree

docker run -v $(pwd):/dali nvidia/dali:builder_aarch64-qnx

The relevant artifacts will be in build/install and build/dali/python/nvidia/dali.