Compiling DALI from source¶
Compiling DALI from source (using Docker builder) - recommended¶
Following these steps, it is possible to recreate Python wheels similar to the official prebuilt binaries that we provide.
Prerequisites¶
Required Component | Notes |
---|---|
Linux x64 | |
Docker | Follow installation guide and manual at the link (version 17.05 or later is required). |
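To confirm that the installed Docker meets the version requirement, you can check it before building:
docker --version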
Building Python wheel and (optionally) Docker image¶
Change directory (cd) into the docker directory and run ./build.sh. If needed, set the following environment variables:
- PYVER - Python version. The default is 3.6.
- CUDA_VERSION - CUDA toolkit version (10 for 10.0 or 11 for 11.0). The default is 11. If the version is prefixed with `.`, then any value XX can be passed, and the user needs to make sure that Dockerfile.cudaXX.deps is present in the docker/ directory.
- NVIDIA_BUILD_ID - Custom ID of the build. The default is 1234.
- CREATE_WHL - Create a standalone wheel. The default is YES.
- BUILD_TF_PLUGIN - Create a DALI TensorFlow plugin wheel as well. The default is NO.
- PREBUILD_TF_PLUGINS - Whether to prebuild the DALI TensorFlow plugin. It should be used together with the BUILD_TF_PLUGIN option. If both options are set to YES, the DALI TensorFlow plugin package is built with prebuilt plugin binaries inside. If PREBUILD_TF_PLUGINS is set to NO, the wheel is still built, but without prebuilt binaries - in that case the user needs to make sure that a compiler version aligned with the one used to build the installed TensorFlow is present, so the plugin can be built during the installation of the DALI TensorFlow plugin package. If BUILD_TF_PLUGIN is set to NO, the PREBUILD_TF_PLUGINS value is disregarded. The default is YES.
- CREATE_RUNNER - Create a Docker image with cuDNN, CUDA, and DALI installed inside. It will create the Docker_run_cuda image, which needs to be run using nvidia-docker, and the DALI wheel in the wheelhouse directory.
- DALI_BUILD_FLAVOR - Adds a suffix to the DALI package name and puts a note about it in the whl package description; e.g. nightly will result in the nvidia-dali-nightly package.
- CMAKE_BUILD_TYPE - Build type; available options: Debug, DevDebug, Release, RelWithDebInfo. The default is Release.
- STRIP_BINARY - When used with CMAKE_BUILD_TYPE equal to Debug, DevDebug, or RelWithDebInfo, it produces a bare wheel binary without any debug information and a second wheel, named *_debug.whl, with this information included. For the other build configurations, these two wheels will be identical.
- BUILD_INHOST - Ask Docker to mount the source code instead of copying it. Thanks to that, consecutive builds reuse existing object files and are faster for development. Uses $DALI_BUILD_DIR as the directory for build objects. The default is YES.
- REBUILD_BUILDERS - Whether the builder Docker images need to be rebuilt or can be reused from the previous build. The default is NO.
- DALI_BUILD_DIR - Where the DALI build should happen. It matters only for the in-host build, where the user may provide a different path for every Python/CUDA version. The default is build-docker-${CMAKE_BUILD_TYPE}-${PYV}-${CUDA_VERSION}.
- ARCH - Architecture that DALI is built for; currently only x86_64 is supported. The default is x86_64.
- WHL_PLATFORM_NAME - The name of the Python wheel platform tag. The default is manylinux1_x86_64.
It is worth mentioning that build.sh accepts the same set of environment variables as the project's CMake.
The recommended command line is:
PYVER=X.Y CUDA_VERSION=Z ./build.sh
For example:
PYVER=3.6 CUDA_VERSION=10 ./build.sh
This will build CUDA 10 based DALI for Python 3.6 and place the resulting Python wheel inside DALI_root/wheelhouse.
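As a sketch of a more customized invocation, using only the variables described above (adjust the versions to your setup), the same script can, for example, produce a debug build together with the TensorFlow plugin wheel:
PYVER=3.6 CUDA_VERSION=11 CMAKE_BUILD_TYPE=Debug BUILD_TF_PLUGIN=YES ./build.sh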
Compiling DALI from source (bare metal)¶
Prerequisites¶
Required Component | Notes |
---|---|
Linux x64 | |
GCC 5.3.1 or later | |
Boost 1.66 or later | Modules: preprocessor. |
nvJPEG library | This can be unofficially disabled. See below. |
protobuf | Supported version: 3.11.1 |
CMake 3.13 or later | |
libjpeg-turbo 2.0.4 (2.0.3 for conda due to availability) or later | This can be unofficially disabled. See below. |
libtiff 4.1.0 or later | This can be unofficially disabled. See below. Note: libtiff should be built with zlib support. |
FFmpeg 4.2.2 or later | We recommend using version 4.2.2, compiled following the instructions below. |
libsnd 1.0.28 or later | We recommend using version 1.0.28, compiled following the instructions below. |
OpenCV 4 or later | Supported version: 4.3.0 |
(Optional) liblmdb 0.9.x or later | |
Note
TensorFlow installation is required to build the TensorFlow plugin for DALI.
Note
Items marked “unofficial” are community contributions that are believed to work but are not officially tested or maintained by NVIDIA.
Note
This software uses FFmpeg code licensed under the LGPLv2.1. Its source can be downloaded from here.
FFmpeg was compiled using the following command line:
./configure \
--prefix=/usr/local \
--disable-static \
--disable-all \
--disable-autodetect \
--disable-iconv \
--enable-shared \
--enable-avformat \
--enable-avcodec \
--enable-avfilter \
--enable-protocol=file \
--enable-demuxer=mov,matroska,avi \
--enable-bsf=h264_mp4toannexb,hevc_mp4toannexb,mpeg4_unpack_bframes && \
make
Note
This software uses libsnd, licensed under the LGPLv2.1. Its source can be downloaded from here.
libsnd was compiled using the following command line:
./configure && make
Build DALI¶
Get DALI source code:
git clone --recursive https://github.com/NVIDIA/DALI
cd DALI
Create a directory for the CMake-generated Makefiles. This is the directory that DALI will be built in.
mkdir build
cd build
Run CMake. For additional options you can pass to CMake, refer to Optional CMake build parameters.
cmake -D CMAKE_BUILD_TYPE=Release ..
Build. You can use the -j option to run the build on several threads:
make -j"$(nproc)"
Install Python bindings¶
In order to run DALI using the Python API, you need to install the Python bindings:
cd build
pip install dali/python
Note
Although you can create a wheel here by calling pip wheel dali/python, we don't recommend doing so. Such a wheel is not self-contained (it doesn't include all the dependencies) and will work only on the system where you built DALI bare metal. To build a wheel that contains the dependencies and can therefore be used on other systems, follow Compiling DALI from source (using Docker builder) - recommended.
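As a quick sanity check that the bindings were installed (a minimal sketch; it assumes the nvidia.dali package exposes a __version__ attribute), you can try:
python -c "import nvidia.dali as dali; print(dali.__version__)"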
Verify the build (optional)¶
Obtain test data¶
You can verify the build by running GTest and Nose tests. To do so, you'll need the DALI_extra repository, which contains the test data. To download it, follow the DALI_extra README. Keep in mind that you need git-lfs to properly clone the DALI_extra repo. To install git-lfs, follow this tutorial.
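A minimal sketch of obtaining the data, assuming git-lfs is already installed (the DALI_extra README remains the authoritative reference):
git lfs install
git clone https://github.com/NVIDIA/DALI_extra.git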
Set test data path¶
DALI uses the DALI_EXTRA_PATH environment variable to locate the test data. You can set it by invoking:
$ export DALI_EXTRA_PATH=<path_to_DALI_extra>
e.g. export DALI_EXTRA_PATH=/home/yourname/workspace/DALI_extra
Run tests¶
DALI tests consist of two parts: C++ (GTest) and Python (usually Nose, but not always). There are convenient Make targets that you can run after the build has finished:
cd <path_to_DALI>/build
make check-gtest check-python
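For example, with the test data path set inline (the paths are placeholders for your own locations):
cd <path_to_DALI>/build
DALI_EXTRA_PATH=/home/yourname/workspace/DALI_extra make check-gtest check-python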
Building DALI using Clang (experimental)¶
Note
This build is experimental. It is neither maintained nor tested. It is not guaranteed to work. We recommend using GCC for production builds.
cmake -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang ..
make -j"$(nproc)"
Optional CMake build parameters¶
- BUILD_PYTHON - build Python bindings (default: ON)
- BUILD_TEST - include building the test suite (default: ON)
- BUILD_BENCHMARK - include building benchmarks (default: ON)
- BUILD_LMDB - build with support for LMDB (default: OFF)
- BUILD_NVTX - build with NVTX profiling enabled (default: OFF)
- BUILD_NVJPEG - build with nvJPEG support (default: ON)
- BUILD_LIBTIFF - build with libtiff support (default: ON)
- BUILD_NVOF - build with NVIDIA OPTICAL FLOW SDK support (default: ON)
- BUILD_NVDEC - build with NVIDIA NVDEC support (default: ON)
- BUILD_LIBSND - build with libsnd support (default: ON)
- BUILD_NVML - build with NVIDIA Management Library (NVML) support (default: ON)
- BUILD_FFTS - build with ffts support (default: ON)
- VERBOSE_LOGS - enable verbose logging in DALI (default: OFF)
- WERROR - treat all build warnings as errors (default: OFF)
- BUILD_WITH_ASAN - build with ASAN support (default: OFF). To run, issue:

  LD_LIBRARY_PATH=. ASAN_OPTIONS=symbolize=1:protect_shadow_gap=0 ASAN_SYMBOLIZER_PATH=$(shell which llvm-symbolizer) \
  LD_PRELOAD=PATH_TO_LIB_ASAN/libasan.so.X PATH_TO_BINARY

  where X depends on the compiler version used; for example, GCC 7.x uses 4. Tested with GCC 7.4, CUDA 10.0 and libasan.4. Any earlier version may not work.
- DALI_BUILD_FLAVOR - allows specifying a custom name suffix (e.g. 'nightly') for the nvidia-dali whl package
- (Unofficial) BUILD_JPEG_TURBO - build with libjpeg-turbo (default: ON)
- (Unofficial) BUILD_LIBTIFF - build with libtiff (default: ON)
Note
DALI release packages are built with the options listed above set to ON and NVTX turned OFF. Testing is done with the same configuration. We ensure that DALI compiles with all of those options turned OFF, but there may exist cross-dependencies between some of those features.
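For example, a build with LMDB support enabled and benchmarks disabled could be configured as follows (a sketch that only combines the options listed above):
cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_LMDB=ON -D BUILD_BENCHMARK=OFF ..
make -j"$(nproc)"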
The following CMake parameters could be helpful in setting the right paths:
- FFMPEG_ROOT_DIR - path to installed FFmpeg
- NVJPEG_ROOT_DIR - where nvJPEG can be found (from CUDA 10.0 it is shipped with the CUDA toolkit, so this option is not needed there)
- libjpeg-turbo options can be obtained from the libjpeg-turbo CMake docs page
- protobuf options can be obtained from the protobuf CMake docs page
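A hypothetical example, assuming FFmpeg was installed with the --prefix=/usr/local shown earlier:
cmake -D CMAKE_BUILD_TYPE=Release -D FFMPEG_ROOT_DIR=/usr/local ..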
Cross-compiling DALI C++ API for aarch64 Linux (Docker)¶
Note
Support for the aarch64 Linux platform is experimental. Some of the features are available only for the x86-64 target and are turned off in this build. There is no support for the DALI Python library on aarch64 yet. Some operators may not work as intended due to x86-64-specific implementations.
Build the aarch64 Linux Build Container¶
docker build -t nvidia/dali:builder_aarch64-linux -f docker/Dockerfile.build.aarch64-linux .
Compile¶
From the root of the DALI source tree, run:
docker run -v $(pwd):/dali nvidia/dali:builder_aarch64-linux
The relevant artifacts will be in build/install and build/dali/python/nvidia/dali.
Cross-compiling DALI C++ API for aarch64 QNX (Docker)¶
Note
Support for the aarch64 QNX platform is experimental. Some of the features are available only for the x86-64 target and are turned off in this build. There is no support for the DALI Python library on aarch64 yet. Some operators may not work as intended due to x86-64-specific implementations.
Setup¶
After acquiring the QNX Toolchain, place it in a directory called qnx in the root of the DALI tree.
Then, using the SDK Manager for NVIDIA DRIVE, select QNX as the Target Operating System and select the DRIVE OS 5.1.0.0 SDK.
In STEP 02 under Download & Install Options, select "Download Now. Install Later." and agree to the Terms and Conditions. Once downloaded, move the cuda-repo-cross-qnx Debian package into the qnx directory you created in the DALI tree.
Build the aarch64 Build Container¶
docker build -t nvidia/dali:tools_aarch64-qnx -f docker/Dockerfile.cuda_qnx.deps .
docker build -t nvidia/dali:builder_aarch64-qnx --build-arg "QNX_CUDA_TOOL_IMAGE_NAME=nvidia/dali:tools_aarch64-qnx" -f docker/Dockerfile.build.aarch64-qnx .
Compile¶
From the root of the DALI source tree, run:
docker run -v $(pwd):/dali nvidia/dali:builder_aarch64-qnx
The relevant artifacts will be in build/install and build/dali/python/nvidia/dali.