Quickstart Guide¶
NVIDIA® DeepStream Software Development Kit (SDK) is an accelerated AI framework for building intelligent video analytics (IVA) pipelines. DeepStream runs on NVIDIA® T4 and NVIDIA® Ampere architecture GPUs, and on platforms such as NVIDIA® Jetson™ Nano, NVIDIA® Jetson AGX Xavier™, NVIDIA® Jetson Xavier NX™, and NVIDIA® Jetson™ TX1 and TX2.
Jetson Setup¶
This section explains how to prepare a Jetson device before installing the DeepStream SDK.
Install Jetson SDK components¶
Download NVIDIA SDK Manager from https://developer.nvidia.com/embedded/jetpack. You will use this to install JetPack 4.6 GA (corresponding to the L4T 32.6.1 release).
NVIDIA SDK Manager is a graphical application which flashes and installs the JetPack packages.
The flashing procedure takes approximately 10-30 minutes, depending on the host system.
Note
If you are using Jetson Nano or Jetson Xavier NX developer kit, you can download the SD card image from https://developer.nvidia.com/embedded/jetpack. This comes packaged with CUDA, TensorRT and cuDNN.
Install Dependencies¶
Enter the following commands to install the prerequisite packages:
$ sudo apt install \
libssl1.0.0 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstrtspserver-1.0-0 \
libjansson4=2.11-1
Install librdkafka (to enable Kafka protocol adaptor for message broker)¶
Clone the librdkafka repository from GitHub:
$ git clone https://github.com/edenhill/librdkafka.git
Configure and build the library:
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
Copy the generated libraries to the deepstream directory:
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.0/lib
Install latest NVIDIA BSP packages¶
Open the apt source configuration file in a text editor, for example:
$ sudo vi /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
Change the repository name and download URL in the deb commands shown below:
deb https://repo.download.nvidia.com/jetson/common r32.6 main
deb https://repo.download.nvidia.com/jetson/<platform> r32.6 main
<platform> identifies the platform’s processor:
t186 for Jetson TX2 series
t194 for Jetson AGX Xavier series or Jetson Xavier NX
t210 for Jetson Nano or Jetson TX1
For example, if your platform is Jetson Xavier NX:
deb https://repo.download.nvidia.com/jetson/common r32.6 main
deb https://repo.download.nvidia.com/jetson/t194 r32.6 main
Save and close the source configuration file.
Enter the commands:
$ sudo apt update
Install the latest NVIDIA V4L2 GStreamer plugin using the following command:
$ sudo apt install --reinstall nvidia-l4t-gstreamer
If apt prompts you to choose a configuration file, reply Y for yes (to use the NVIDIA updated version of the file).
Install the latest L4T Multimedia and L4T Core packages using the following commands:
$ sudo apt install --reinstall nvidia-l4t-multimedia
$ sudo apt install --reinstall nvidia-l4t-core
Note
You must update the NVIDIA V4L2 GStreamer plugin after flashing Jetson OS from SDK Manager.
Install the DeepStream SDK¶
Method 1: Using SDK Manager
Select DeepStreamSDK from the Additional SDKs section along with the JP 4.6 software components for installation.
Method 2: Using the DeepStream tar package: https://developer.nvidia.com/deepstream_sdk_v6.0.0_jetsontbz2
Download the DeepStream 6.0 Jetson tar package deepstream_sdk_v6.0.0_jetson.tbz2 to the Jetson device.
Enter the following commands to extract and install the DeepStream SDK:
$ sudo tar -xvf deepstream_sdk_v6.0.0_jetson.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-6.0
$ sudo ./install.sh
$ sudo ldconfig
Method 3: Using the DeepStream Debian package: https://developer.nvidia.com/deepstream-6.0_6.0.0-1_arm64deb
Download the DeepStream 6.0 Jetson Debian package deepstream-6.0_6.0.0-1_arm64.deb to the Jetson device. Enter the following command:
$ sudo apt-get install ./deepstream-6.0_6.0.0-1_arm64.deb
Note
If you install the DeepStream SDK Debian package using the dpkg command, you must install the following packages before installing the Debian package:
libgstrtspserver-1.0-0
libgstreamer-plugins-base1.0-dev
Method 4: Use Docker container
DeepStream docker containers are available on NGC. See the Docker Containers section to learn about developing and deploying DeepStream using docker containers.
Run deepstream-app (the reference application)¶
Navigate to the samples directory on the development kit.
Enter the following command to run the reference application:
$ deepstream-app -c <path_to_config_file>
Where <path_to_config_file> is the pathname of one of the reference application’s configuration files, found in configs/deepstream-app/. See Package Contents for a list of the available files.
Config files that can be run with deepstream-app:
source30_1080p_dec_infer-resnet_tiled_display_int8.txt
source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt (dGPU only)
source1_usb_dec_infer_resnet_int8.txt
source1_csi_dec_infer_resnet_int8.txt (Jetson only)
source2_csi_usb_dec_infer_resnet_int8.txt (Jetson only)
source6_csi_dec_infer_resnet_int8.txt (Jetson only)
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt (Jetson Nano only)
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt (Jetson TX1 only)
source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt (Jetson TX2 only)
source2_1080p_dec_infer-resnet_demux_int8.txt
Note
You can find sample configuration files under the /opt/nvidia/deepstream/deepstream-6.0/samples directory. Enter this command to see application usage:
$ deepstream-app --help
To save TensorRT Engine/Plan file, run the following command:
$ sudo deepstream-app -c <path_to_config_file>
For the Jetson Nano, TX1, and TX2 config files mentioned above, you can set the number of streams, the inference interval, and the tracker config file as required.
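As a sketch (the group and key names follow the standard deepstream-app config format; the values and the tracker file name are illustrative), lowering the stream count and running inference only on every fifth frame might look like:

```ini
[source0]
num-sources=4          # fewer streams than the stock config

[primary-gie]
interval=4             # skip inference on 4 of every 5 frames

[tracker]
enable=1
ll-config-file=tracker_config.yml   # tracker config file (name is illustrative)
```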
To show labels in 2D Tiled display view, expand the source of interest with mouse left-click on the source. To return to the tiled display, right-click anywhere in the window.
Keyboard selection of a source is also supported. On the console where the application is running, press the z key followed by the desired row index (0 to 9), then the column index (0 to 9), to expand the source. To restore the 2D tiled display view, press z again.
Boost the clocks¶
After you have installed DeepStream SDK, run these commands on the Jetson device to boost the clocks:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
Note
For Jetson Xavier NX, run sudo nvpmodel -m 2 instead of 0.
Run precompiled sample applications¶
Navigate to the chosen application directory inside sources/apps/sample_apps.
Follow the directory’s README file to run the application.
Note
If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.aarch64.bin
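The cache file name is architecture-specific (registry.aarch64.bin on Jetson, registry.x86_64.bin on x86). A small bash sketch that derives the right path instead of hard-coding it (the path layout is the standard GStreamer per-user cache):

```shell
#!/bin/bash
# Derive the per-architecture GStreamer registry cache path and remove it.
ARCH=$(uname -m)                                    # aarch64 on Jetson, x86_64 on dGPU hosts
CACHE="${HOME}/.cache/gstreamer-1.0/registry.${ARCH}.bin"
rm -f "$CACHE"                                      # -f: no error if the cache is already absent
echo "removed $CACHE"
```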
When the application is run for a model which does not have an existing engine file, it may take up to a few minutes (depending on the platform and the model) for the file generation and the application launch. For later runs, these generated engine files can be reused for faster loading.
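As an illustration, a generated engine can be referenced explicitly so later runs skip the rebuild. The key name below is from the standard deepstream-app config format; the engine file name is illustrative, since it encodes your model, batch size, GPU, and precision:

```ini
[primary-gie]
enable=1
# Reuse a previously generated engine instead of rebuilding at startup.
# Adjust the file name to match your model, batch size, and precision.
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b8_gpu0_int8.engine
```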
dGPU Setup for Ubuntu¶
This section explains how to prepare an Ubuntu x86_64 system with NVIDIA dGPU devices before installing the DeepStream SDK.
Note
This document uses the term dGPU (“discrete GPU”) to refer to NVIDIA GPU expansion card products such as NVIDIA Tesla® T4 and P4, NVIDIA GeForce® GTX 1080, and NVIDIA GeForce® RTX 2080. This version of DeepStream SDK runs on specific dGPU products on x86_64 platforms supported by NVIDIA driver 470.63.01 and NVIDIA TensorRT™ 8.0.1 and later versions.
You must install the following components:
Ubuntu 18.04
GStreamer 1.14.5
NVIDIA driver 470.63.01
CUDA 11.4
TensorRT 8.0.1
Remove all previous DeepStream installations¶
Enter the following commands to remove DeepStream 3.0 or earlier installations:
$ sudo rm -rf /usr/local/deepstream /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstnv* /usr/bin/deepstream* /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libnvdsgst* /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream* /opt/nvidia/deepstream/deepstream*
$ sudo rm -rf /usr/lib/x86_64-linux-gnu/libv4l/plugins/libcuvidv4l2_plugin.so
To remove DeepStream 4.0 or later installations:
Open the uninstall.sh file in /opt/nvidia/deepstream/deepstream/
Set PREV_DS_VER as 4.0
Run the script as sudo: ./uninstall.sh
Install Dependencies¶
Enter the following commands to install the necessary packages before installing the DeepStream SDK:
$ sudo apt install \
libssl1.0.0 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstrtspserver-1.0-0 \
libjansson4 \
gcc \
make \
git \
python3
Install NVIDIA driver 470.63.01¶
Download and install NVIDIA driver 470.63.01 from NVIDIA Unix drivers page at: https://www.nvidia.com/Download/driverResults.aspx/179599/en-us
Run the following commands:
$ chmod 755 NVIDIA-Linux-x86_64-470.63.01.run
$ sudo ./NVIDIA-Linux-x86_64-470.63.01.run
Install CUDA Toolkit 11.4.1 (CUDA 11.4 Update 1)¶
Download and install CUDA Toolkit 11.4.1 from: https://developer.nvidia.com/cuda-11-4-1-download-archive
That page mentions NVIDIA Linux GPU driver 470.57.02, but the current DeepStream release uses driver 470.63.01.
Install TensorRT 8.0.1¶
Following are the steps to install TensorRT 8.0.1:
Run the following commands:
$ echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" | sudo tee /etc/apt/sources.list.d/cuda-repo.list
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
$ sudo apt-key add 7fa2af80.pub
$ sudo apt-get update
Download the TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package from: https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.0.1/local_repos/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
Run the following commands to install TensorRT 8.0.1:
$ sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
$ sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get install libnvinfer8=8.0.1-1+cuda11.3 libnvinfer-plugin8=8.0.1-1+cuda11.3 libnvparsers8=8.0.1-1+cuda11.3 libnvonnxparsers8=8.0.1-1+cuda11.3 libnvinfer-bin=8.0.1-1+cuda11.3 libnvinfer-dev=8.0.1-1+cuda11.3 libnvinfer-plugin-dev=8.0.1-1+cuda11.3 libnvparsers-dev=8.0.1-1+cuda11.3 libnvonnxparsers-dev=8.0.1-1+cuda11.3 libnvinfer-samples=8.0.1-1+cuda11.3 libnvinfer-doc=8.0.1-1+cuda11.3
Note
Since TensorRT 8.0.1 depends on a few packages of CUDA 11.3, those extra CUDA packages will be automatically installed when the above command is executed.
Install librdkafka (to enable Kafka protocol adaptor for message broker)¶
Clone the librdkafka repository from GitHub:
$ git clone https://github.com/edenhill/librdkafka.git
Configure and build the library:
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
Copy the generated libraries to the deepstream directory:
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.0/lib
Install the DeepStream SDK¶
Method 1: Using the DeepStream Debian package: https://developer.nvidia.com/deepstream-6.0_6.0.0-1_amd64deb
Download the DeepStream 6.0 dGPU Debian package deepstream-6.0_6.0.0-1_amd64.deb. Enter the command:
$ sudo apt-get install ./deepstream-6.0_6.0.0-1_amd64.deb
Method 2: Download the DeepStream tar package: https://developer.nvidia.com/deepstream_sdk_v6.0.0_x86_64tbz2
Navigate to the location of the downloaded DeepStream package to extract and install the DeepStream SDK:
$ sudo tar -xvf deepstream_sdk_v6.0.0_x86_64.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-6.0/
$ sudo ./install.sh
$ sudo ldconfig
Method 3: Use Docker container
DeepStream docker containers are available on NGC. See the Docker Containers section to learn about developing and deploying DeepStream using docker containers.
Run the deepstream-app (the reference application)¶
Go to the samples directory and enter this command:
$ deepstream-app -c <path_to_config_file>
Where <path_to_config_file> is the pathname of one of the reference application’s configuration files, found in configs/deepstream-app. See Package Contents for a list of the available files.
Note
To save the TensorRT engine (plan) file, run the following command:
$ sudo deepstream-app -c <path_to_config_file>
You can find sample configuration files under the /opt/nvidia/deepstream/deepstream-6.0/samples directory. Enter this command to see application usage:
$ deepstream-app --help
To show labels in 2D tiled display view, expand the source of interest with a mouse left-click on the source. To return to the tiled display, right-click anywhere in the window.
Keyboard selection of a source is also supported. On the console where the application is running, press the z key followed by the desired row index (0 to 9), then the column index (0 to 9), to expand the source. To restore the 2D tiled display view, press z again.
Run precompiled sample applications¶
Navigate to the chosen application directory inside sources/apps/sample_apps.
Follow that directory’s README file to run the application.
Note
If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
When the application is run for a model which does not have an existing engine file, it may take up to a few minutes (depending on the platform and the model) for the file generation and application launch. For later runs, these generated engine files can be reused for faster loading.
dGPU Setup for RedHat Enterprise Linux (RHEL)¶
This section explains how to prepare an RHEL system with NVIDIA dGPU devices before installing the DeepStream SDK.
Note
This document uses the term dGPU (“discrete GPU”) to refer to NVIDIA GPU expansion card products such as NVIDIA Tesla T4 and P4, NVIDIA GeForce GTX 1080, and NVIDIA GeForce RTX 2080. This version of DeepStream SDK runs on specific dGPU products on x86_64 platforms supported by NVIDIA driver 470+ and NVIDIA® TensorRT™ 8.0.1 and later versions.
You must install the following components:
RHEL 8.4
GStreamer 1.16.1
NVIDIA driver 470.63.01 (https://www.nvidia.com/Download/driverResults.aspx/179599/en-us)
CUDA 11.4
TensorRT 8.0.1
Remove all previous DeepStream installations¶
To remove DeepStream 4.0 or later installations:
Open the uninstall.sh file in /opt/nvidia/deepstream/deepstream/
Set PREV_DS_VER as 4.0
Run the script as sudo: ./uninstall.sh
Install Dependencies¶
Enter the following commands to install the necessary packages before installing the DeepStream SDK:
$ yum install \
gstreamer1 \
gstreamer1-plugins-base \
gstreamer1-plugins-good \
gstreamer1-plugins-bad-free \
gstreamer1-plugins-ugly-free \
gstreamer1-svt-av1 \
json-glib \
openssl \
libuuid \
gstreamer1-plugins-base-devel \
json-glib-devel \
opencv-devel \
jansson-devel \
openssl-devel \
libuuid-devel
gst-rtsp-server: the gst-rtsp-server-devel package, which is required to compile deepstream-app, is not available for RHEL.
Download the sources from https://gstreamer.freedesktop.org/src/gst-rtsp-server/gst-rtsp-server-1.14.5.tar.xz, then extract, build, and install:
$ tar -xf gst-rtsp-server-1.14.5.tar.xz
$ cd gst-rtsp-server-1.14.5
$ ./configure
$ make
$ sudo make install
$ sudo cp -r /usr/local/include/gstreamer-1.0/gst/rtsp-server/ /usr/include/gstreamer-1.0/gst/
$ sudo cp /usr/local/lib/libgstrtspserver-1.0.so /usr/local/lib/libgstrtspserver-1.0.so.0 \
/usr/local/lib/libgstrtspserver-1.0.so.0.1601.0 /usr/lib64/
Note
Packages required for RHEL 8.x are also listed in the README.rhel file in the DeepStream package.
Install NVIDIA driver 470.63.01¶
Download and install NVIDIA driver 470.63.01 from NVIDIA unix drivers page at https://www.nvidia.com/Download/driverResults.aspx/179599/en-us
Run the following commands:
$ chmod 755 NVIDIA-Linux-x86_64-470.63.01.run
$ sudo ./NVIDIA-Linux-x86_64-470.63.01.run
Install CUDA Toolkit 11.4.1 (CUDA 11.4 Update 1)¶
Download and install CUDA Toolkit 11.4.1 from: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#redhat8-installation
That page mentions NVIDIA Linux GPU driver R470.57.02, but the current DeepStream release uses driver R470.63.01.
Install TensorRT 8.0.1¶
Refer to https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install-rpm to download and install TensorRT 8.0.1.
Note
Since TensorRT 8.0.1 depends on a few packages of CUDA 11.3, those extra CUDA packages will be automatically installed when TensorRT 8.0.1 is installed.
Install librdkafka (to enable Kafka protocol adaptor for message broker)¶
Clone the librdkafka repository from GitHub:
$ git clone https://github.com/edenhill/librdkafka.git
Configure and build the library:
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
Copy the generated libraries to the deepstream directory:
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.0/lib
Install the DeepStream SDK¶
Method 1: Download the DeepStream tar package: https://developer.nvidia.com/deepstream_sdk_v6.0.0_x86_64tbz2
Navigate to the location of the downloaded DeepStream package, then extract and install the DeepStream SDK:
$ sudo tar -xvf deepstream_sdk_v6.0.0_x86_64.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-6.0/
$ sudo ./install.sh
$ sudo ldconfig
Method 2: Use Docker container
DeepStream docker containers are available on NGC. See the Docker Containers section to learn about developing and deploying DeepStream using docker containers.
Run the deepstream-app (the reference application)¶
Go to the samples directory and enter this command:
$ deepstream-app -c <path_to_config_file>
Where <path_to_config_file> is the pathname of one of the reference application’s configuration files, found in configs/deepstream-app. See Package Contents for a list of the available files.
Note
To save the TensorRT engine (plan) file, run the following command:
$ sudo deepstream-app -c <path_to_config_file>
You can find sample configuration files in the /opt/nvidia/deepstream/deepstream-6.0/samples directory. Enter this command to see application usage:
$ deepstream-app --help
To show labels in 2D tiled display view, expand the source of interest with a mouse left-click on the source. To return to the tiled display, right-click anywhere in the window.
Keyboard selection of a source is also supported. On the console where the application is running, press the z key followed by the desired row index (0 to 9), then the column index (0 to 9), to expand the source. To restore the 2D tiled display view, press z again.
Run precompiled sample applications¶
Navigate to the chosen application directory inside sources/apps/sample_apps.
Follow that directory’s README file to run the application.
Note
If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
When the application is run for a model which does not have an existing engine file, it may take up to a few minutes (depending on the platform and the model) for the file generation and application launch. For later runs, these generated engine files can be reused for faster loading.
Running without an X server¶
The default configuration files provided with the SDK use the EGL-based nveglglessink as the default renderer (indicated by type=2 in the [sink] groups). The renderer requires a running X server and fails without one.
In the absence of an X server, the DeepStream reference applications can instead stream the output over RTSP. Enable this by adding an RTSP out sink group in the configuration file; refer to the [sink2] group in the source30_1080p_dec_infer-resnet_tiled_display_int8.txt file for an example. Remember to disable the nveglglessink renderer by setting enable=0 for the corresponding sink group.
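A minimal sketch of such an RTSP out sink group, modeled on the [sink2] group found in the stock configs (the port and bitrate values are illustrative and may need tuning for your deployment):

```ini
[sink2]
enable=1
type=4            # 4 = RTSP streaming
codec=1           # 1 = H.264
bitrate=4000000   # encoder bitrate in bits/sec
rtsp-port=8554    # port clients connect to
udp-port=5400     # internal UDP port used by the RTSP server
```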
Platform and OS Compatibility¶
The following tables provide information about platform and operating system compatibility in the current and earlier versions of DeepStream.
Jetson:

| DS release | DS 2.0 | DS 3.0 | DS 4.0.2 (Unified) | DS 5.0 GA, 5.0.1, 5.1 (Unified) | DS 6.0 |
|---|---|---|---|---|---|
| Jetson platforms | Not supported | AGX Xavier | Nano, AGX Xavier, TX2, TX1 | Nano, AGX Xavier, TX2, TX1, Jetson NX | Nano, AGX Xavier, TX2, TX1, Jetson NX |
| OS | Not supported | L4T Ubuntu 18.04/16.04 | L4T Ubuntu 18.04 | L4T Ubuntu 18.04 | L4T Ubuntu 18.04 |
| JetPack release | Not supported | 4.1.1 | 4.3 | 4.4 GA (4.5.1 GA for DS 5.1) | 4.6 GA |
| L4T release | Not supported | 31.1 | 32.3.1 | 32.4.3 (32.5.1 for DS 5.1) | 32.6.1 |
| CUDA release | Not supported | CUDA 10.0 | CUDA 10.0 | CUDA 10.2 | CUDA 10.2 |
| cuDNN release | Not supported | cuDNN 7.3 | cuDNN 7.6.3 | cuDNN 8.0.0.x | cuDNN 8.2.1.32 |
| TensorRT release | Not supported | TRT 5.0 | TRT 6.0.1 | TRT 7.1.3 | TRT 8.0.1 |
| OpenCV release | Not supported | OpenCV 3.3.1 | OpenCV 4.1 | OpenCV 4.1.1 | OpenCV 4.1.1 |
| VisionWorks | Not supported | VisionWorks 1.6 | VisionWorks 1.6 | VisionWorks 1.6 | VisionWorks 1.6.502 |
| GStreamer | Not supported | GStreamer 1.8.3 | GStreamer 1.14.1 | GStreamer 1.14.1 | GStreamer 1.14.5 |
| Docker image | Not available | Not available | deepstream-l4t:4.0.2 | deepstream-l4t:5.0, deepstream-l4t:5.0.1, deepstream-l4t:5.1 | deepstream-l4t:6.0-ga |
dGPU:

| DS release | DS 2.0 | DS 3.0 | DS 4.0.2 (Unified) | DS 5.0 GA, 5.0.1 (Unified), 5.1 | DS 6.0 |
|---|---|---|---|---|---|
| GPU platforms | P4, P40 | P4, P40, V100, T4 | P4, T4, V100 | P4, T4, V100, GA100 (DS 5.1) | P4, T4, V100, GA100 |
| OS | Ubuntu 16.04 | Ubuntu 16.04 | Ubuntu 18.04 | Ubuntu 18.04, RHEL 8.x | Ubuntu 18.04, RHEL 8.x |
| GCC | GCC 5.4 | GCC 5.4 | GCC 7.3.0 | GCC 7.3.0 | GCC 7.3.0 |
| CUDA release | CUDA 9.2 | CUDA 10.0 | CUDA 10.1 | CUDA 10.2 (CUDA 11.1 for DS 5.1) | CUDA 11.4.1 |
| cuDNN release | cuDNN 7.1 | cuDNN 7.3 | cuDNN 7.6.5+ | cuDNN 7.6.5+ (cuDNN 8.0+ for DS 5.1) | cuDNN 8.2+ |
| TRT release | TRT 4.0 | TRT 5.0 | TRT 6.0.1 | TRT 7.0.0 (TRT 7.2.2 for DS 5.1) | TRT 8.0.1 |
| Display Driver | R396+ | R410+ | R418+ | R450.51 (R460.32 for DS 5.1) | R470.63.01 |
| VideoSDK release | SDK 7.9 | SDK 8.2 | SDK 9.0 | SDK 9.1 | SDK 9.1 |
| OFSDK release | Not available | Not available | 1.0.10 | 1.0.10 | 2.0.23 |
| GStreamer release | GStreamer 1.8.3 | GStreamer 1.8.3 | GStreamer 1.14.1 | GStreamer 1.14.1 | GStreamer 1.14.5 |
| OpenCV release | OpenCV 3.4.x | OpenCV 3.4.x | OpenCV 3.3.1 | OpenCV 3.4.0 | OpenCV 3.4.0 |
| Docker image | Not available | deepstream:3.0 | deepstream:4.0.2 | deepstream:5.0, deepstream:5.0.1, deepstream:5.1 | deepstream:6.0-ga |
Note
OpenCV is deprecated and disabled by default. However, OpenCV can be enabled in plugins such as nvinfer (nvdsinfer) and dsexample (gst-dsexample) by setting WITH_OPENCV=1 in the Makefile of the respective component. Refer to the component README for instructions.
DeepStream Triton Inference Server Usage Guidelines¶
To migrate the Triton version in a DeepStream 6.0 deployment (Triton 21.08) to a newer version (say Triton 21.09 or newer), follow the instructions at DeepStream Triton Migration Guide.
dGPU¶
Pull the DeepStream Triton Inference Server docker
docker pull nvcr.io/nvidia/deepstream:6.0-triton
Start the docker
docker run --gpus "device=0" -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -e CUDA_CACHE_DISABLE=0 nvcr.io/nvidia/deepstream:6.0-triton
Note
The Triton docker for x86 is based on the tritonserver 21.08 docker and uses Ubuntu 20.04. The Jetson docker uses libraries from tritonserver 21.08.
When the Triton docker is launched for the first time, it might take a few minutes to start, since it must generate its compute cache.
Jetson¶
DeepStream Triton container image (nvcr.io/nvidia/deepstream-l4t:6.0-triton) has Triton Inference Server and supported backend libraries pre-installed.
To run the Triton Inference Server directly on the device, that is, without docker, you must first set up the Triton Server and its backends.
Go to samples directory and run the following commands to set up the Triton Server and backends.
$ cd /opt/nvidia/deepstream/deepstream/samples/
$ sudo ./triton_backend_setup.sh
Note
By default, the script downloads Triton Server version 2.13. To set up any other version, change the package path in the script accordingly.
The script installs the Triton backends into /opt/nvidia/deepstream/deepstream/lib/triton_backends by default. You can update the infer_config settings to point to a different folder as follows:
model_repo {
  backend_dir: "/opt/nvidia/tritonserver/backends/"
}
Using DLA for inference¶
DLA is the Deep Learning Accelerator present on the Jetson AGX Xavier and Jetson Xavier NX platforms. Both platforms have two DLA engines. DeepStream can be configured to run inference on either DLA engine through the Gst-nvinfer plugin. One instance of the Gst-nvinfer plugin, and thus a single instance of a model, can be configured to execute on a single DLA engine or on the GPU. However, multiple Gst-nvinfer plugin instances can be configured to use the same DLA. To configure Gst-nvinfer to use a DLA engine for inference, modify the corresponding properties in the nvinfer component configuration file (for example, samples/configs/deepstream-app/config_infer_primary.txt):
Set enable-dla=1 in the [property] group.
Set use-dla-core=0 or use-dla-core=1 depending on the DLA engine to use.
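Applied to config_infer_primary.txt, the two settings above would look like this (only the relevant keys of the [property] group are shown; the rest of the group is unchanged):

```ini
[property]
enable-dla=1       # run this nvinfer instance on a DLA engine instead of the GPU
use-dla-core=0     # use 1 to select the second DLA engine
```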
DeepStream supports inferencing on the GPU and the DLAs in parallel, either in separate processes or in a single process. You need three separate sets of configs, configured to run on the GPU, DLA0, and DLA1 respectively:
Separate processes¶
When GPU and DLA are run in separate processes, set the environment variable CUDA_DEVICE_MAX_CONNECTIONS
as 1
from the terminal where DLA config is running.
Single process¶
DeepStream reference application supports multiple configs in the same process. To run DLA and GPU in same process, set environment variable CUDA_DEVICE_MAX_CONNECTIONS
as 32
:
$ deepstream-app -c <gpuconfig> -c <dla0config> -c <dla1config>