NVIDIA Tegra
NVIDIA DeepStream Plugin Manual

Application Note
4.0.2 Release


 
Frequently Asked Questions
How do I uninstall DeepStream 3.0?
You must remove the DeepStream 3.0 libraries and binaries. Use one of these methods:
For dGPU: Enter this command:
$ sudo rm -rf /usr/local/deepstream /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libnvdsgst_* /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstnv* /usr/bin/deepstream* /usr/lib/x86_64-linux-gnu/libv4l/plugins/libcuvidv4l2_plugin.so
For Jetson: Flash the target device with the latest release of JetPack.
What types of input streams does DeepStream 4.0.1 support?
It supports H.264, H.265, JPEG, and MJPEG streams.
What’s the throughput of H.264 and H.265 decode on dGPU (Tesla)?
See https://developer.nvidia.com/nvidia-video-codec-sdk for information.
How can I run the DeepStream sample application in debug mode?
Enter this command:
$ deepstream-app -c <config> --gst-debug=<debug#>
Where:
<config> is the pathname of the configuration file
<debug#> is a number specifying the amount of detail in the debugging output
For information about debugging tools, see:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html
Where can I find the DeepStream sample applications?
The DeepStream sample applications are located at:
<DeepStream installation dir>/sources/apps/sample_apps/
The configuration files for the sample applications are located at:
<DeepStream installation dir>/samples/configs/deepstream-app
For more information, see the NVIDIA DeepStream SDK Development Guide.
How can I verify that CUDA was installed correctly?
Check the CUDA version:
$ nvcc --version
How can I interpret the frames-per-second (FPS) information displayed on the console?
The FPS number shown on the console when deepstream-app runs is an average over the most recent five seconds. The number in brackets is average FPS over the entire run. The numbers are displayed per stream. The performance measurement interval is set by the perf-measurement-interval-sec setting in the configuration file.
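The measurement interval is configured in the [application] group of the deepstream-app configuration file. The entries below are the keys used in the shipped sample configurations:

```
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
```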
My DeepStream performance is lower than expected. How can I determine the reason?
See the “Troubleshooting” chapter of DeepStream 4.0.1 Plugin Manual.
How can I specify RTSP streaming of DeepStream output?
You can enable remote display by adding an RTSP sink in the application configuration file. The sample configuration file source30_720p_dec_infer-resnet_tiled_display_int8.txt has an example of this in the [sink2] section. You must set the enable flag to 1.
Once you enable remote display, the application prints the RTSP URL, which you can open in a media player such as VLC.
What is the official DeepStream Docker image and where do I get it?
You can download the official DeepStream Docker image from NVIDIA GPU Cloud (NGC): https://ngc.nvidia.com/containers/nvidia:deepstream.
What is the recipe for creating my own Docker image?
Use the DeepStream container as the base image and add your own custom layers on top of it using standard Docker techniques.
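A minimal sketch of such a Dockerfile follows. The base-image tag is an example only, so pick the tag that matches your release from the NGC page; the copied application path is hypothetical:

```dockerfile
# Base image: official DeepStream container from NGC (example tag).
FROM nvcr.io/nvidia/deepstream:4.0.2-19.12-devel

# Custom layers: extra packages plus your own application files.
RUN apt-get update && \
    apt-get install -y --no-install-recommends vim && \
    rm -rf /var/lib/apt/lists/*
COPY my_app/ /opt/my_app/
```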
How can I display graphical output remotely over VNC? How can I determine whether X11 is running?
If the host machine is running X, starting VNC is trivial. Otherwise you must start X, then start VNC.
To determine whether X is running, check the DISPLAY environment variable.
If X is not running, you must start it first and then run DeepStream with a GUI, or set type to 1 or 3 under the sink groups to select fakesink or output to a file. If you are using an NVIDIA® Tesla® V100 or P100 GPU accelerator (both compute-only cards without a display), you must set type to 4 to stream DeepStream output over RTSP. See the NVIDIA DeepStream SDK Development Guide for sink settings.
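A quick way to check is to test whether the DISPLAY variable is set in your session. This sketch only prints a status line; it does not start X for you:

```shell
# If DISPLAY is set (e.g. ":0"), an X server is available to this session;
# otherwise X must be started before running DeepStream with a GUI sink.
if [ -n "${DISPLAY:-}" ]; then
  echo "X appears to be running on display ${DISPLAY}"
else
  echo "DISPLAY is not set; start X first or use a non-display sink type"
fi
```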
Why does the deepstream-nvof-test application show the error message “Device Does NOT support Optical Flow Functionality” if run with NVIDIA Tesla P4 or NVIDIA Jetson Nano, Jetson TX2, or Jetson TX1?
Optical flow functionality is supported only on NVIDIA® Jetson AGX Xavier™ and on GPUs with Turing architecture (NVIDIA® T4, NVIDIA® GeForce® RTX 2080 etc.).
Why is a Gst-nvstreammux plugin required in DeepStream 4.0.1?
Multiple source components, such as decoders and cameras, are connected to the Gst-nvstreammux plugin to form a batch.
This plugin is responsible for creating batch metadata, which is stored in the structure NvDsBatchMeta. This is the primary form of metadata in DeepStream 4.0.1.
All plugins downstream from Gst-nvstreammux work on NvDsBatchMeta to access metadata and fill in the metadata they generate.
Why is a Gst-nvegltransform plugin required on a Jetson platform upstream from Gst-nveglglessink?
On a Jetson platform, Gst-nveglglessink works on EGLImage structures. Gst-nvegltransform is required to convert incoming data (wrapped in an NVMM structure) to an EGLImage instance. On a dGPU platform, Gst-nveglglessink works directly on data wrapped in an NVMM structure.
How do I profile a DeepStream pipeline?
You can use NVIDIA® Nsight™ Systems, a system-wide performance analysis tool. See https://developer.nvidia.com/nsight-systems for more details.
How can I check GPU and memory utilization on a dGPU system?
Enter nvidia-smi or nvidia-settings on the console.
What is the approximate memory utilization for 1080p streams on dGPU?
Use the table below as a guide to memory utilization in this case.
Note:
Width and height in Gst-nvstreammux are set to the input stream resolution specified in the configuration file.
The pipeline is: decoder → nvstreammux → nvinfer → fakesink.
 
Batch size (streams)   Decode memory   Gst-nvinfer memory   Gst-nvstreammux memory
 1                      32 MB           333 MB               0 MB
 2                      64 MB           341 MB               0 MB
 4                     128 MB           359 MB               0 MB
 8                     256 MB           391 MB               0 MB
16                     512 MB           457 MB               0 MB
If the input stream resolution and the Gst-nvstreammux resolution (set in the configuration file) are the same, no additional GPU memory is allocated in Gst-nvstreammux.
If the input stream resolution differs from the Gst-nvstreammux resolution, Gst-nvstreammux allocates memory of size:
buffers * (1.5 * width * height) * mismatches bytes
Where:
buffers is the number of Gst-nvstreammux output buffers (set to 4).
1.5 is the number of bytes per pixel for the NV12 format.
width and height are the mux output width and height.
mismatches is the number of sources with a resolution mismatch.
This table shows some examples:

Example                                           Mux width × height   Gst-nvstreammux GPU memory
16 sources at 1920×1080 resolution                1280×720             4 * (1.5 * 1280 * 720) * 16 ≈ 84 MB
15 sources at 1280×720 resolution &
one source at 1920×1080 resolution                1280×720             4 * (1.5 * 1280 * 720) * 1 ≈ 5.2 MB
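These sizes follow from the formula buffers * (1.5 * width * height) * mismatches bytes, where 1.5 is the bytes per pixel of NV12. A small helper (the function name is ours) reproduces the table:

```shell
# Estimate Gst-nvstreammux GPU memory (in MiB) for mismatched sources.
nvstreammux_mem_mb() {
  buffers=$1; width=$2; height=$3; mismatches=$4
  awk -v b="$buffers" -v w="$width" -v h="$height" -v m="$mismatches" \
    'BEGIN { printf "%.1f\n", b * 1.5 * w * h * m / (1024 * 1024) }'
}

nvstreammux_mem_mb 4 1280 720 16   # 16 mismatched sources -> prints 84.4
nvstreammux_mem_mb 4 1280 720 1    # 1 mismatched source  -> prints 5.3
```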
What trackers are included in DeepStream and which one should I choose for my application?
DeepStream ships with three trackers: KLT, IOU, and NvDCF. The trackers vary from high performance to high accuracy. The trade-off table below can help you choose the best tracker for your applications. For more information about the trackers, read the “Gst-nvtracker” chapter in the DeepStream 4.0.1 Plugin Manual.
IOU
Computational load: GPU: none; CPU: very low.
Pros: Lightweight.
Cons: No visual features for matching, so prone to frequent tracker ID switches and failures. Not suitable for fast-moving scenes.
Best use cases: Objects are sparsely located and have distinct sizes; the detector is expected to run every frame or very frequently (for example, every alternate frame).

KLT
Computational load: GPU: none; CPU: high.
Pros: Works reasonably well for simple scenes.
Cons: High CPU utilization. Susceptible to changes in visual appearance due to noise and perturbations, such as shadows, non-rigid deformation, out-of-plane rotation, and partial occlusion. Cannot work on objects with low texture.
Best use cases: Objects with strong textures and a simpler background; ideal when plenty of CPU resources are available.

NvDCF
Computational load: GPU: medium; CPU: low.
Pros: Highly robust against partial occlusions, shadows, and other transient visual changes. Less frequent ID switches.
Cons: Slower than KLT and IOU due to increased computational complexity. Reduces the total number of streams processed.
Best use cases: Multi-object, complex scenes with partial occlusion.
When deepstream-app is run in a loop on Jetson AGX Xavier using “while true; do deepstream-app -c <config_file>; done;”, why do I see low FPS on some iterations?
This can happen when you run thirty 1080p streams at 30 frames/second. The issue is caused by the initial load: I/O operations bog down the CPU, and because qos=1 is the default property of the [sink0] group, decodebin starts dropping frames. To avoid this, set qos=0 in the [sink0] group in the configuration file.
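For reference, the change is a single key in the [sink0] group; the other keys shown are illustrative values taken from the shipped sample configurations:

```
[sink0]
enable=1
type=2
sync=1
qos=0
```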
Why do I get the error “Makefile:13: *** "CUDA_VER is not set". Stop” when I compile DeepStream sample applications?
Export this environment variable:
For dGPU: CUDA_VER=10.1
For Jetson: CUDA_VER=10.0
Then compile again.
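For example, on dGPU (a Jetson build would export CUDA_VER=10.0 instead):

```shell
# Make the CUDA version visible to the sample Makefiles, then rebuild.
export CUDA_VER=10.1
echo "CUDA_VER=${CUDA_VER}"   # prints CUDA_VER=10.1
# make   # run from the sample application directory
```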
How can I construct the DeepStream GStreamer pipeline?
Here are a few examples of how to construct the pipeline. To run these example pipelines as-is, run the applications from the samples directory:
V4l2 decoder → nvinfer → nvtracker → nvinfer (secondary) → nvmultistreamtiler → nvdsosd → nveglglessink
For multistream (4×1080p) operation on dGPU:
$ gst-launch-1.0 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=4 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nveglglessink filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3
For multistream (4×1080p) operation on Jetson:
$ gst-launch-1.0 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=4 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3
For single stream (1080p) operation on dGPU:
$ gst-launch-1.0 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nveglglessink
For single stream (1080p) operation on Jetson:
$ gst-launch-1.0 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
JPEG decode
Using nvv4l2decoder on Jetson:
$ gst-launch-1.0 filesrc location= ./streams/sample_720p.jpg ! jpegparse ! nvv4l2decoder ! nvegltransform ! nveglglessink
Using nvv4l2decoder on dGPU:
$ gst-launch-1.0 filesrc location= ./streams/sample_720p.jpg ! jpegparse ! nvv4l2decoder ! nveglglessink
Using nvjpegdec on Jetson:
$ gst-launch-1.0 filesrc location= ./streams/sample_720p.jpg ! nvjpegdec ! nvegltransform ! nveglglessink
Using nvjpegdec on dGPU:
$ gst-launch-1.0 filesrc location= ./streams/sample_720p.jpg ! nvjpegdec ! nveglglessink
Dewarper
On dGPU:
$ gst-launch-1.0 uridecodebin uri= file://`pwd`/../../../../samples/streams/sample_cam6.mp4 ! nvvideoconvert ! nvdewarper source-id=6 num-output-buffers=4 config-file=config_dewarper.txt ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=4 batched-push-timeout=100000 num-surfaces-per-frame=4 ! nvmultistreamtiler rows=1 columns=1 width=720 height=576 ! nvvideoconvert ! nveglglessink
On Jetson:
$ gst-launch-1.0 uridecodebin uri= file://`pwd`/../../../../samples/streams/sample_cam6.mp4 ! nvvideoconvert ! nvdewarper source-id=6 num-output-buffers=4 config-file=config_dewarper.txt ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=4 batched-push-timeout=100000 num-surfaces-per-frame=4 ! nvmultistreamtiler rows=1 columns=1 width=720 height=576 ! nvvideoconvert ! nvegltransform ! nveglglessink
Note:
This Gst pipeline must be run from the dewarper test application directory, sources/apps/sample_apps/deepstream-dewarper-test.
This pipeline runs only for four surfaces. To run for one, two, or three surfaces, use the dewarper test application.
Dsexample
On dGPU:
$ gst-launch-1.0 filesrc location = ./streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path= ./configs/deepstream-app/config_infer_primary.txt ! dsexample full-frame=1 ! nvvideoconvert ! nvdsosd ! nveglglessink sync=0
On Jetson:
$ gst-launch-1.0 filesrc location = ./streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path= ./configs/deepstream-app/config_infer_primary.txt ! dsexample full-frame=1 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0
Why do I get “ImportError: No module named google.protobuf.internal” when running convert_to_uff.py on Jetson AGX Xavier?
If you set up TensorFlow using https://elinux.org/Jetson_Zoo#TensorFlow, use Python 3 to run convert_to_uff.py:
$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py
When I set source type to 4 (RTSP) in the deepstream-app source configuration file, why do some live HEVC streams not work?
For an RTSP source (source type=4), deepstream-app assumes that the stream is H.264 encoded, so HEVC streams won’t work. Use source type 2 (URI) in that scenario.
Alternatively, you can modify the file sources/apps/apps-common/src/deepstream_source_bin.c to change rtph264depay to rtph265depay and compile deepstream-app again.
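One way to make that edit is with sed. The demonstration below runs on a scratch file containing a made-up source line, since the real target is the deepstream_source_bin.c file cited above:

```shell
# Demonstrate the substitution on a scratch copy; in practice, run the same
# sed command on the deepstream_source_bin.c file cited above, then rebuild
# deepstream-app.
tmp=$(mktemp)
echo 'depay = gst_element_factory_make ("rtph264depay", "depay");' > "$tmp"
sed -i 's/rtph264depay/rtph265depay/g' "$tmp"
grep -c rtph265depay "$tmp"   # prints 1
rm -f "$tmp"
```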
Platform and OS Compatibility
The following table provides information about platform and operating system compatibility in the current and earlier versions of DeepStream.
NVIDIA® Jetson™ Platforms

DeepStream release  1.0               1.5               2.0            3.0                     4.0.1 (Unified)             4.0.2 (Unified)
Jetson platforms    TX2, TX1          TX2, TX1          Not supported  AGX Xavier              Nano, AGX Xavier, TX2, TX1  Nano, AGX Xavier, TX2, TX1
OS                  L4T Ubuntu 16.04  L4T Ubuntu 16.04  Not supported  L4T Ubuntu 18.04/16.04  L4T Ubuntu 18.04            L4T Ubuntu 18.04
JetPack release     3.1               3.2               Not supported  4.1.1                   4.2.1                       4.3
L4T release         28.1              28.2              Not supported  31.1                    32.2                        32.3.1
CUDA release        8.0               9.0               Not supported  10.0                    10.0                        10.0
cuDNN release       6.0               7.0.5             Not supported  7.3                     7.5.1                       7.6.3
TRT release         2.1               3.0               Not supported  5.0                     5.1.6                       6.0.1
OpenCV release      2.4.13            3.3.1             Not supported  3.3.1                   3.3.1                       4.1
OFSDK release       Not available     Not available     Not available  Not available           1.0.0                       1.0.0
VisionWorks         1.6               1.6               Not supported  1.6                     1.6                         1.6
GStreamer           1.8.3             1.8.3             Not supported  1.8.3                   1.14.1                      1.14.1
Docker image        Not available     Not available     Not available  Not available           deepstream-l4t:4.0          deepstream-l4t:4.0.2
 
dGPU Platforms

DeepStream release  1.0            1.5            2.0            3.0                4.0.1 (Unified)  4.0.2 (Unified)
GPU platforms       P4, P40        P4, P40        P4, P40        P4, P40, V100, T4  P4, T4, V100     P4, T4, V100
OS                  Ubuntu 16.04   Ubuntu 16.04   Ubuntu 16.04   Ubuntu 16.04       Ubuntu 18.04     Ubuntu 18.04
GCC                 5.4            5.4            5.4            5.4                7.3.0            7.3.0
CUDA release        8.0            9.0            9.2            10.0               10.1             10.1
cuDNN release       6.0            7.0            7.1            7.3                7.5.0+           7.6.5+
TRT release         2.1            3.0            4.0            5.0                5.1.5            6.0.1
Display Driver      R375           R384           R396+          R410+              R418+            R418+
VideoSDK release    7.1            7.9            7.9            8.2                9.0              9.0
OFSDK release       Not available  Not available  Not available  Not available      1.0.10           1.0.10
GStreamer release   Not available  1.8.3          1.8.3          1.8.3              1.14.1           1.14.1
OpenCV release      Not available  2.4.13         3.4.x          3.4.x              3.3.1            3.3.1
Docker image        Not available  Not available  Not available  deepstream:3.0     deepstream:4.0   deepstream:4.0.2