Quick Start Guide

NVIDIA® DeepStream Software Development Kit (SDK) is an accelerated AI framework for building intelligent video analytics (IVA) pipelines. DeepStream runs on NVIDIA® T4 GPUs and on Jetson platforms such as NVIDIA® Jetson™ Nano, NVIDIA® Jetson AGX Xavier, NVIDIA® Jetson Xavier NX, and NVIDIA® Jetson™ TX1 and TX2.

Jetson Setup

This section explains how to prepare a Jetson device before installing the DeepStream SDK.
To install Jetson SDK components
Download NVIDIA SDK Manager, which you will use to install JetPack 4.4 Developer Preview (DP) (corresponding to the L4T 32.4.2 release), from:
https://developer.nvidia.com/embedded/jetpack
NVIDIA SDK Manager is a graphical application which flashes and installs the JetPack packages.
The flashing procedure takes approximately 10-30 minutes, depending on the host system.
To install additional packages
Enter the following command to install the prerequisite packages for installing the DeepStream SDK:
$ sudo apt install \
libssl1.0.0 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstrtspserver-1.0-0 \
libjansson4=2.11-1
To install librdkafka
Install librdkafka by running apt-get on the Jetson device:
$ sudo apt-get install librdkafka1=0.11.3-1build1
To install latest NVIDIA V4L2 GStreamer plugin
1. Open the apt source configuration file in a text editor, for example:
$ sudo vi /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
2. Change the repository name and download URL in the deb commands as shown below:
deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/<platform> r32.4 main
Where <platform> identifies the platform’s processor:
t186 for Jetson TX2 series
t194 for Jetson AGX Xavier series or Jetson Xavier NX
t210 for Jetson Nano or Jetson TX1
If your platform is Jetson Xavier NX, for example:
deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t194 r32.4 main
3. Save and close the source configuration file.
4. Enter the commands:
$ sudo apt update
$ sudo apt install --reinstall nvidia-l4t-gstreamer
If apt prompts you to choose a configuration file, reply Y for yes (to use the NVIDIA updated version of the file).
Note:
This step of updating the NVIDIA V4L2 GStreamer plugin must be performed after flashing the Jetson OS with SDK Manager.
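The platform-to-tag mapping above can be sketched as a small helper script; repo_tag and platform_deb_lines are hypothetical names, not part of the SDK:

```shell
#!/bin/sh
# Sketch only: map a Jetson module to the <platform> tag used in the apt
# source list above. Function names are hypothetical, not part of the SDK.
repo_tag() {
  case "$1" in
    tx2)                  echo "t186" ;;  # Jetson TX2 series
    agx-xavier|xavier-nx) echo "t194" ;;  # Jetson AGX Xavier / Xavier NX
    nano|tx1)             echo "t210" ;;  # Jetson Nano / TX1
    *)                    echo "unknown"; return 1 ;;
  esac
}

# Emit the two deb lines for the chosen platform:
platform_deb_lines() {
  tag="$(repo_tag "$1")" || return 1
  echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main"
  echo "deb https://repo.download.nvidia.com/jetson/${tag} r32.4 main"
}

platform_deb_lines xavier-nx
```

The emitted lines can then be reviewed and written into /etc/apt/sources.list.d/nvidia-l4t-apt-source.list as described in the steps above.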
To install the DeepStream SDK
Method 1: Using SDK Manager
Select DeepStreamSDK from the “Additional SDKs” section, along with the JetPack 4.4 software components, for installation.
Method 2: Using the DeepStream tar package
1. Download the DeepStream 5.0 Jetson tar package, deepstream_sdk_v5.0.0_jetson.tbz2, to the Jetson device.
2. Enter this command to extract and install DeepStream SDK:
 
$ sudo tar -xvpf deepstream_sdk_v5.0.0_jetson.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-5.0
$ sudo ./install.sh
$ sudo ldconfig
Method 3: Using the DeepStream Debian package
Download the DeepStream 5.0 Jetson Debian package, deepstream-5.0_5.0.0-1_arm64.deb, to the Jetson device. Then enter the command:
$ sudo apt-get install ./deepstream-5.0_5.0.0-1_arm64.deb
Note:
If you install the DeepStream SDK Debian package using the dpkg command, you must install these packages first, then install the DeepStream deb package:
libgstrtspserver-1.0-0
libgstreamer-plugins-base1.0-dev
To boost the clocks
After you have installed DeepStream SDK, run these commands on the Jetson device to boost the clocks:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
To run deepstream-app (the reference application)
1. Navigate to the samples directory on the development kit.
2. Enter this command to run the reference application:
$ deepstream-app -c <path_to_config_file>
Where <path_to_config_file> is the pathname of one of the reference application’s configuration files, found in configs/deepstream-app/. See Package Contents for a list of the available files.
Note:
You can find sample configuration files under the /opt/nvidia/deepstream/deepstream-5.0/samples directory.
Enter this command to see application usage:
$ deepstream-app --help
To show labels in the 2D tiled display view, expand the source of interest by left-clicking it. To return to the tiled display, right-click anywhere in the window.
Keyboard selection of a source is also supported. On the console where the application is running, press the ‘z’ key followed by the desired row index (0 to 9), then the column index (0 to 9), to expand the source. To restore the 2D tiled display view, press ‘z’ again.
To run precompiled sample applications
1. Navigate to the chosen application directory inside sources/apps/sample_apps.
2. Follow the directory’s README file to run the application.
Note:
If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.aarch64.bin
When the application is run for a model which does not have an existing engine file, it may take a few minutes (depending on the platform and the model) for the engine file to be generated and for the application to start playing. On later runs, the generated engine files are reused for faster loading.
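To force regeneration, previously generated engine files can be located and removed; list_engines is a hypothetical helper, and the default path and the *.engine pattern are assumptions about where deepstream-app writes the TensorRT engines it builds next to the models:

```shell
#!/bin/sh
# Sketch: list_engines is a hypothetical helper. The default path and the
# *.engine pattern are assumptions about where deepstream-app writes the
# TensorRT engine files it generates next to the models.
list_engines() {
  find "${1:-/opt/nvidia/deepstream/deepstream-5.0/samples/models}" \
       -name '*.engine' 2>/dev/null
}

# Intended use:
#   list_engines                    # inspect the generated engine files
#   list_engines | xargs -r rm -f   # remove them to force regeneration
```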

dGPU Setup

This section explains how to prepare an Ubuntu x86_64 system with NVIDIA dGPU devices before installing the DeepStream SDK.
Note:
This document uses the term dGPU (“discrete GPU”) to refer to NVIDIA GPU expansion card products such as NVIDIA® Tesla® T4 and P4, NVIDIA® GeForce® GTX 1080, and NVIDIA® GeForce® RTX 2080.
This version of DeepStream SDK runs on specific dGPU products on x86_64 platforms supported by NVIDIA driver 440+ and NVIDIA® TensorRT™ 7.0 and later versions.
You must install the following components:
Ubuntu 18.04
GStreamer 1.14.1
NVIDIA driver 440+
CUDA 10.2
TensorRT 7.0 or later
To remove all previous DeepStream installations
To remove DeepStream 3.0 or earlier installations, enter the command:
$ sudo rm -rf /usr/local/deepstream /usr/bin/deepstream* \
    /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstnv* \
    /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libnvdsgst* \
    /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream* \
    /opt/nvidia/deepstream/deepstream*
$ sudo rm -rf /usr/lib/x86_64-linux-gnu/libv4l/plugins/libcuvidv4l2_plugin.so
To remove DeepStream 4.0 or later installations:
1. Open the uninstall.sh script, found in /opt/nvidia/deepstream/deepstream/, in a text editor.
2. Set PREV_DS_VER to 4.0.
3. Run the script as root:
$ sudo ./uninstall.sh
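The steps above can be scripted; set_prev_ds_ver is a hypothetical helper, and the sed pattern assumes uninstall.sh defines PREV_DS_VER on a line of its own, so verify the file before running:

```shell
#!/bin/sh
# Sketch: set_prev_ds_ver is a hypothetical helper that rewrites the
# PREV_DS_VER line in uninstall.sh non-interactively. It assumes the script
# contains a line beginning with "PREV_DS_VER=".
set_prev_ds_ver() {
  sed -i "s/^PREV_DS_VER=.*/PREV_DS_VER=$2/" "$1"
}

# Intended use (on the target system):
#   set_prev_ds_ver /opt/nvidia/deepstream/deepstream/uninstall.sh 4.0
#   cd /opt/nvidia/deepstream/deepstream/ && sudo ./uninstall.sh
```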
To install packages
Enter the following command to install the necessary packages before installing the DeepStream SDK:
$ sudo apt install \
libssl1.0.0 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstrtspserver-1.0-0 \
libjansson4
To install CUDA Toolkit 10.2
Download and install CUDA Toolkit 10.2 from the NVIDIA Developer site.
To install TensorRT 7.0
Download and install TensorRT 7.0 from the NVIDIA Developer site.
To install librdkafka (to enable Kafka protocol adaptor for message broker)
1. Clone the librdkafka repository from GitHub:
$ git clone https://github.com/edenhill/librdkafka.git
2. Configure and build the library:
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
3. Copy the generated libraries to the deepstream directory:
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-5.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-5.0/lib
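The copy step above can be wrapped so the source and destination paths are overridable; copy_rdkafka_libs is a hypothetical helper name, not an NVIDIA tool:

```shell
#!/bin/sh
# Sketch: copy_rdkafka_libs is a hypothetical wrapper around the copy step
# above, with overridable source and destination directories.
copy_rdkafka_libs() {
  src="${1:-/usr/local/lib}"
  dst="${2:-/opt/nvidia/deepstream/deepstream-5.0/lib}"
  mkdir -p "$dst" && cp "$src"/librdkafka* "$dst"/
}

# Intended use (as root): copy_rdkafka_libs
```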
To install the DeepStream SDK
Method 1: Using the DeepStream Debian package
Download the DeepStream 5.0 dGPU Debian package, deepstream-5.0_5.0.0-1_amd64.deb, then enter the command:
$ sudo apt-get install ./deepstream-5.0_5.0.0-1_amd64.deb
Method 2: Using the DeepStream tar package
1. Navigate to the location where the DeepStream package was downloaded, then extract and install the DeepStream SDK:
$ sudo tar -xvpf deepstream_sdk_v5.0.0_x86_64.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-5.0/
$ sudo ./install.sh
$ sudo ldconfig
To run the deepstream-app (the reference application)
Go to the samples directory and enter this command:
$ deepstream-app -c <path_to_config_file>
Where <path_to_config_file> is the pathname of one of the reference application’s configuration files, found in configs/deepstream-app. See Package Contents for a list of the available files.
Note:
You can find sample configuration files under the /opt/nvidia/deepstream/deepstream-5.0/samples directory.
Enter this command to see application usage:
$ deepstream-app --help
To show labels in the 2D tiled display view, expand the source of interest by left-clicking it. To return to the tiled display, right-click anywhere in the window.
Keyboard selection of a source is also supported. On the console where the application is running, press the ‘z’ key followed by the desired row index (0 to 9), then the column index (0 to 9), to expand the source. To restore the 2D tiled display view, press ‘z’ again.
To run precompiled sample applications
1. Navigate to the chosen application directory inside sources/apps/sample_apps.
2. Follow that directory’s README file to run the application.
Note:
If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
When the application is run for a model which does not have an existing engine file, it may take a few minutes (depending on the platform and the model) for the engine file to be generated and for the application to start playing. On later runs, the generated engine files are reused for faster loading.

DeepStream Triton Inference Server Usage Guidelines

dGPU:

1. Pull the DeepStream Triton Inference Server docker image:
docker pull nvcr.io/nvidia/deepstream:5.0.0-20.04-triton
2. Start the docker container:
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:5.0.0-20.04-triton

Jetson:

The Triton Inference Server shared libraries come pre-installed as part of DeepStream on Jetson. No extra steps are required for installing the Triton Inference Server.
For both platforms, to run the samples, follow the steps in the “Running the Triton Inference Server samples” section of the README at /opt/nvidia/deepstream/deepstream-5.0.

Configuration File Usage Guidelines

The number of streams your platform can process depends on its capabilities. See Package Contents to find a suitable configuration file for your platform.

Expected Output for the DeepStream Reference Application (deepstream-app)

The image below shows the expected output:

Package Contents

The DeepStream SDK package includes the following archives. These archives contain plugins, libraries, applications, and source code.
The sources directory is located at /opt/nvidia/deepstream/deepstream-5.0/sources for Debian installations (on Jetson or dGPU) and for installations by SDK Manager. For tar packages, the source files are in the extracted deepstream package.
DeepStream Python bindings and sample applications are available as a separate package. More information and documentation can be found at https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.

Plugin and Library Source Details

The following table describes the contents of the sources directory except for the reference test applications, which are listed separately below.
Plugin or library
Path inside
sources directory
Description
DsExample GStreamer plugin
gst-plugins/gst-dsexample
Template plugin for integrating custom algorithms into DeepStream SDK graph.
GStreamer Gst-nvmsgconv plugin
gst-plugins/gst-nvmsgconv
Source code for the GStreamer Gst-nvmsgconv plugin for converting metadata to schema format.
GStreamer Gst-nvmsgbroker plugin
gst-plugins/gst-nvmsgbroker
Source code for the GStreamer Gst-nvmsgbroker plugin for sending data to the server.
GStreamer Gst-nvinfer plugin
gst-plugins/gst-nvinfer
Source code for the GStreamer Gst-nvinfer plugin for inference.
NvDsInfer library
libs/nvdsinfer
Source code for the NvDsInfer library, used by the Gst-nvinfer GStreamer plugin.
NvMsgConv library
libs/nvmsgconv
Source code for the NvMsgConv library, required by the Gst-nvmsgconv GStreamer plugin.
Kafka protocol adapter
libs/kafka_protocol_adapter
Protocol adapter for Kafka.
nvdsinfer_customparser
libs/nvdsinfer_customparser
Custom model output parsing example for detectors and classifiers.
Gst-v4l2
See the note below *
Source code for v4l2 codecs.
* Gst-v4l2 sources are not present in the DeepStream package. To download them, follow these steps:
1. Go to the NVIDIA Developer embedded downloads page.
2. In the “Search filter” field, enter “L4T sources.”
3. Select the appropriate item for L4T Release 32.4.2.
4. Download the file and un-tar it, yielding a .tbz2 file.
5. Expand the .tbz2 file. The Gst-v4l2 source files are in gst-nvvideo4linux2_src.tbz2.

Sample Application Source Details

The following table shows the location of the sample test applications.
Reference
test application
Path inside
sources directory
Description
Simple test application 1
apps/sample_apps/deepstream-test1
Simple example of how to use DeepStream elements for a single H.264 stream: filesrc→ decode→ nvstreammux→ nvinfer (primary detector)→ nvdsosd→ renderer.
Simple test application 2
apps/sample_apps/deepstream-test2
Simple example of how to use DeepStream elements for a single H.264 stream: filesrc→ decode→ nvstreammux→ nvinfer (primary detector)→ nvtracker→ nvinfer (secondary classifier)→ nvdsosd → renderer.
Simple test application 3
apps/sample_apps/deepstream-test3
Builds on deepstream-test1 (simple test application 1) to demonstrate how to:
Use multiple sources in the pipeline
Use a uridecodebin to accept any type of input (e.g. RTSP/File), any GStreamer supported container format, and any codec
Configure Gst-nvstreammux to generate a batch of frames and infer on it for better resource utilization
Extract the stream metadata, which contains useful information about the frames in the batched buffer
Simple test application 4
apps/sample_apps/deepstream-test4
Builds on deepstream-test1 for a single H.264 stream: filesrc, decode, nvstreammux, nvinfer, nvdsosd, renderer to demonstrate how to:
Use the Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline
Create NVDS_META_EVENT_MSG type metadata and attach it to the buffer
Use NVDS_META_EVENT_MSG for different types of objects, e.g. vehicle and person
Implement “copy” and “free” functions for use if metadata is extended through the extMsg field
Simple test application 5
apps/sample_apps/deepstream-test5
Builds on top of deepstream-app. Demonstrates:
Use of Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline for multistream
How to configure Gst-nvmsgbroker plugin from the config file as a sink plugin (for KAFKA, Azure, etc.)
How to handle the RTCP sender reports from RTSP servers or cameras and translate the Gst Buffer PTS to a UTC timestamp.
For more details, refer to the RTCP Sender Report callback function test5_rtcp_sender_report_callback() registration and usage in deepstream_test5_app_main.c.
GStreamer callback registration with rtpmanager element’s “handle-sync” signal is documented in apps-common/src/deepstream_source_bin.c.
AMQP protocol test application
libs/amqp_protocol_adaptor
Application to test AMQP protocol.
Azure MQTT test application
libs/azure_protocol_adaptor
Test application to show Azure IoT device2edge messaging and device2cloud messaging using MQTT.
DeepStream reference application
apps/sample_apps/deepstream-app
Source code for the DeepStream reference application.
UFF SSD detector
sources/objectDetector_SSD
Configuration files and custom library implementation for the SSD detector model.
Faster RCNN detector
sources/objectDetector_FasterRCNN
Configuration files and custom library implementation for the FasterRCNN model.
Yolo detector
sources/objectDetector_Yolo
Configuration files and custom library implementation for the Yolo models, currently Yolo v2, v2 tiny, v3, and v3 tiny.
Dewarper example
apps/sample_apps/deepstream-dewarper-test
Demonstrates dewarper functionality for single or multiple 360-degree camera streams. Reads camera calibration parameters from a CSV file and renders aisle and spot surfaces on the display.
Optical flow example
apps/sample_apps/deepstream-nvof-test
Demonstrates optical flow functionality for single or multiple streams. This example uses two GStreamer plugins (Gst-nvof and Gst-nvofvisual). The Gst-nvof element generates the MV (motion vector) data and attaches it as user metadata. The Gst-nvofvisual element visualizes the MV data using a predefined color wheel matrix.
Custom meta data example
apps/sample_apps/deepstream-user-metadata-test
Demonstrates how to add custom or user-specific metadata to any component of DeepStream. The test code attaches a 16-byte array filled with user data to the chosen component. The data is retrieved in another component.
MJPEG and JPEG decoder and inferencing example
apps/sample_apps/deepstream-image-decode-test
Builds on deepstream-test3 to demonstrate image decoding instead of video. This example uses a custom decode bin so the MJPEG codec can be used as input.
Image/Video segmentation example
apps/sample_apps/deepstream-segmentation-test
Demonstrates segmentation of multi-stream video or images using a semantic or industrial neural network and rendering output to a display.
Handling metadata before Gst-nvstreammux
apps/sample_apps/deepstream-gst-metadata-test
Demonstrates how to set metadata before the Gst-nvstreammux plugin in the DeepStream pipeline, and how to access it after Gst-nvstreammux.
Gst-nvinfer tensor meta flow example
apps/sample_apps/deepstream-infer-tensor-meta-app
Demonstrates how to flow and access nvinfer tensor output as metadata.
Performance demo
apps/sample_apps/deepstream-perf-demo
Performs single channel cascaded inferencing and object tracking sequentially on all streams in a directory.
Analytics example
apps/sample_apps/deepstream-nvdsanalytics-test
Demonstrates batched analytics such as ROI filtering, line crossing, direction detection, and overcrowding detection.
OpenCV example
apps/sample_apps/deepstream-opencv-test
Demonstrates the use of OpenCV in the dsexample plugin.
Image as Metadata example
apps/sample_apps/deepstream-image-meta-test
Demonstrates how to attach an encoded image as metadata and save the images in JPEG format.
Appsrc and Appsink example
apps/sample_apps/deepstream-appsrc-test
Demonstrates how to use AppSrc to feed data into, and AppSink to consume data from, a DeepStream pipeline from non-DeepStream code.

Python Sample Application Source Details

The following table shows the location of the Python sample applications under https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Reference
test application
Path inside the GitHub repo
Description
Simple test application 1
apps/deepstream-test1
Simple example of how to use DeepStream elements for a single H.264 stream: filesrc→ decode→ nvstreammux→ nvinfer (primary detector)→ nvdsosd→ renderer.
Simple test application 2
apps/deepstream-test2
Simple example of how to use DeepStream elements for a single H.264 stream: filesrc→ decode→ nvstreammux→ nvinfer (primary detector)→ nvtracker→ nvinfer (secondary classifier)→ nvdsosd → renderer.
Simple test application 3
apps/deepstream-test3
Builds on deepstream-test1 (simple test application 1) to demonstrate how to:
Use multiple sources in the pipeline
Use a uridecodebin to accept any type of input (e.g. RTSP/File), any GStreamer supported container format, and any codec
Configure Gst-nvstreammux to generate a batch of frames and infer on it for better resource utilization
Extract the stream metadata, which contains useful information about the frames in the batched buffer
Simple test application 4
apps/deepstream-test4
Builds on deepstream-test1 for a single H.264 stream: filesrc, decode, nvstreammux, nvinfer, nvdsosd, renderer to demonstrate how to:
Use the Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline
Create NVDS_META_EVENT_MSG type metadata and attach it to the buffer
Use NVDS_META_EVENT_MSG for different types of objects, e.g. vehicle and person
Implement “copy” and “free” functions for use if metadata is extended through the extMsg field
 
USB camera source application
apps/deepstream-test1-usbcam
Simple test application 1 modified to process a single stream from a USB camera.
RTSP output application
apps/deepstream-test1-rtsp-out
Simple test application 1 modified to output visualization stream over RTSP.
Image data access application
apps/deepstream-imagedata-multistream
Builds on simple test application 3 to demonstrate how to:
Access decoded frames as NumPy arrays in the pipeline
Check detection confidence of detected objects (DBSCAN or NMS clustering required)
Use OpenCV to annotate the frames and save them to file
SSD detector output parser application
apps/deepstream-ssd-parser
Demonstrates how to perform custom post-processing for inference output from the Triton Inference Server:
Use SSD model on Triton Inference Server for object detection
Enable custom post-processing and raw tensor export for Triton Inference Server via configuration file settings
Access inference output tensors in the pipeline for post-processing in Python
Add detected objects to the metadata
Output the OSD visualization to MP4 file
 
Contents of the package:
samples: Directory containing sample configuration files, streams, and models to run the sample applications.
samples/configs/deepstream-app: Configuration files for the reference application:
source30_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates 30 stream decodes with primary inferencing. (For dGPU and Jetson AGX Xavier platforms only.)
source4_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers. (For dGPU and Jetson AGX Xavier platforms only.)
source4_1080p_resnet_dec_infer_tracker_sgie_tiled_display_int8_gpu1.txt: Demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers on GPU 1 (for systems that have multiple GPU cards). For dGPU platforms only.
config_infer_primary.txt: Configures a nvinfer element as primary detector.
config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, config_infer_secondary_vehicletypes.txt: Configure a nvinfer element as secondary classifier.
iou_config.txt: Configures a low-level IOU (Intersection over Union) tracker.
tracker_config.yml: Configures the NvDCF tracker.
source1_usb_dec_infer_resnet_int8.txt: Demonstrates one USB camera as input.
source1_csi_dec_infer_resnet_int8.txt: Demonstrates one CSI camera as input; for Jetson only.
source2_csi_usb_dec_infer_resnet_int8.txt: Demonstrates one CSI camera and one USB camera as inputs; for Jetson only.
source6_csi_dec_infer_resnet_int8.txt: Demonstrates six CSI cameras as inputs; for Jetson only.
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt: Demonstrates 8 Decode + Infer + Tracker; for Jetson Nano only.
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt: Demonstrates 8 Decode + Infer + Tracker; for Jetson TX1 only.
source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt: Demonstrates 12 Decode + Infer + Tracker; for Jetson TX2 only.
samples/configs/deepstream-app-trtis: Configuration files for the reference application for inferencing using Triton Inference Server
source30_1080p_dec_infer-resnet_tiled_display_int8.txt (30 Decode + Infer)
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt (4 Decode + Infer + SGIE + Tracker)
source1_primary_classifier.txt (Single source + full frame classification)
Note:
Other classification models can be used by changing the nvinferserver config file referenced in the [*-gie] group of the application config file.
source1_primary_detector.txt (Single source + object detection using ssd)
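As a sketch of the note above: in a deepstream-app configuration, the [primary-gie] group selects the inference plugin and points at its config file. plugin-type=1 selecting nvinferserver is an assumption about the deepstream-app config schema in this release; the file name is one of the shipped Triton configs.

```
[primary-gie]
enable=1
# Assumption: plugin-type=1 selects nvinferserver (Triton); 0 selects nvinfer
plugin-type=1
config-file=config_infer_plan_engine_primary.txt
```

Swapping config-file to another nvinferserver configuration changes the model without touching the rest of the pipeline.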
Configuration files for the nvinferserver element, in configs/deepstream-app-trtis/:
config_infer_plan_engine_primary.txt (Primary Object Detector)
config_infer_secondary_plan_engine_carcolor.txt (Secondary Car Color Classifier)
config_infer_secondary_plan_engine_carmake.txt (Secondary Car Make Classifier)
config_infer_secondary_plan_engine_vehicletypes.txt (Secondary Vehicle Type Classifier)
config_infer_primary_classifier_densenet_onnx.txt (DenseNet-121 v1.2 classifier)
config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt (TensorFlow Inception v3 classifier - post-processing in Triton)
config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt (TensorFlow Inception v3 classifier - post-processing in DeepStream)
config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt (TensorFlow SSD Inception V2 Object Detector)
NVIDIA Transfer Learning Toolkit (TLT) pretrained models:
samples/configs/tlt_pretrained_models: Reference application configuration files for the pre-trained models provided by the NVIDIA Transfer Learning Toolkit (TLT)
deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt (Demonstrates object detection using DashCamNet model with VehicleMakeNet and VehicleTypeNet as secondary classification models on one source)
deepstream_app_source1_faceirnet.txt (Demonstrates face detection for IR camera using FaceDetectIR object detection model on one source)
deepstream_app_source1_peoplenet.txt (Demonstrates object detection using PeopleNet object detection model on one source)
deepstream_app_source1_trafficcamnet.txt (Demonstrates object detection using TrafficCamNet object detection model on one source)
nvinfer element configuration files and label files, in configs/tlt_pretrained_models:
config_infer_primary_dashcamnet.txt, labels_dashcamnet.txt (DashCamNet – Resnet18 based object detection model for Vehicle, Bicycle, Person, Roadsign)
config_infer_secondary_vehiclemakenet.txt, labels_vehiclemakenet.txt (VehicleMakeNet – Resnet18 based classification model for make of the vehicle)
config_infer_secondary_vehicletypenet.txt, labels_vehicletypenet.txt (VehicleTypeNet – Resnet18 based classification model for type of the vehicle)
config_infer_primary_faceirnet.txt, labels_faceirnet.txt (FaceIRNet – Resnet18 based face detection model for IR images)
config_infer_primary_peoplenet.txt, labels_peoplenet.txt (PeopleNet – Resnet18 based object detection model for Person, Bag, Face)
config_infer_primary_trafficcamnet.txt, labels_trafficnet.txt (TrafficCamNet – Resnet18 based object detection model for Vehicle, Bicycle, Person, Roadsign for traffic camera viewpoint)
samples/streams: The following streams are provided with the DeepStream SDK:
Stream
Type of Stream
sample_1080p_h264.mp4
H264 containerized stream
sample_1080p_h265.mp4
H265 containerized stream
sample_720p.h264
H264 elementary stream
sample_720p.jpg
JPEG image
sample_720p.mjpeg
MJPEG stream
sample_cam6.mp4
H264 containerized stream (360D camera stream)
sample_industrial.jpg
JPEG image
samples/models: The following sample models are provided with the SDK:
DeepStream Reference Application
Model
Model Type
Number of Classes
Resolution
Primary Detector
Resnet10
4
640 × 368
Secondary Car Color Classifier
Resnet18
12
224 × 224
Secondary Car Make Classifier
Resnet18
6
224 × 224
Secondary Vehicle Type Classifier
Resnet18
20
224 × 224
 
Segmentation example
Model
Model Type
Number of Classes
Resolution
Industrial
Resnet18 + UNet
1
512 x 512
Semantic
Resnet18 + UNet
4
512 x 512
Scripts included with the package:
samples/prepare_classification_test_video.sh: Downloads ImageNet test images and creates a video from them to test with classification models like TensorFlow Inception, ONNX DenseNet, etc.
samples/prepare_ds_trtis_model_repo.sh: Prepares the model repository for the Triton Inference Server:
1. Creates engine files for the Caffe- and UFF-based models provided as part of the SDK.
2. Downloads model files for ONNX DenseNet, SSD Inception V2 COCO, and Inception v3.
For additional information on the above models, refer to:
Inception V3 - https://github.com/tensorflow/models/tree/master/research/slim
uninstall.sh: Used to clean up a previous DeepStream installation.