C/C++ Sample Apps Source Details¶
The DeepStream SDK package includes archives containing plugins, libraries, applications, and source code.
The sources directory is located at
/opt/nvidia/deepstream/deepstream-6.3/sources for both Debian installation (on Jetson or dGPU) and SDK Manager installation. For tar packages the source files are in the extracted deepstream package.
DeepStream Python bindings and sample applications are available as separate packages. For more information, see https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.
Reference test application
Path inside sources directory
Sample test application 1
Sample of how to use DeepStream elements for a single H.264 stream: filesrc → decode → nvstreammux → nvinfer or nvinferserver (primary detector) → nvdsosd → renderer. This app uses resnet10.caffemodel for detection.
Sample test application 2
Sample of how to use DeepStream elements for a single H.264 stream: filesrc → decode → nvstreammux → nvinfer or nvinferserver (primary detector) → nvtracker → nvinfer or nvinferserver (secondary classifier) → nvdsosd → renderer. This app uses resnet10.caffemodel for detection and 3 classifier models (i.e., Car Color, Make and Model).
Sample test application 3
Builds on deepstream-test1 (simple test application 1) to demonstrate how to:
Use multiple sources in the pipeline.
Use a uridecodebin to accept any type of input (e.g. RTSP/File), any GStreamer supported container format, and any codec.
Configure Gst-nvstreammux to generate a batch of frames and infer on it for better resource utilization.
Extract the stream metadata, which contains useful information about the frames in the batched buffer.
This app uses resnet10.caffemodel for detection.
Sample test application 4
Builds on deepstream-test1 for a single H.264 stream: filesrc, decode, nvstreammux, nvinfer or nvinferserver, nvdsosd, renderer to demonstrate how to:
Use the Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline.
Create NVDS_META_EVENT_MSG type metadata and attach it to the buffer.
Use NVDS_META_EVENT_MSG for different types of objects, e.g. vehicle and person.
Implement “copy” and “free” functions for use if metadata is extended through the extMsg field.
This app uses resnet10.caffemodel for detection.
Sample test application 5
Builds on top of deepstream-app. Demonstrates:
Use of Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline for multistream.
How to configure Gst-nvmsgbroker plugin from the config file as a sink plugin (for KAFKA, Azure, etc.).
How to handle the RTCP sender reports from RTSP servers or cameras and translate the Gst Buffer PTS to a UTC timestamp.
For more details, refer to the RTCP Sender Report callback function test5_rtcp_sender_report_callback(), its registration with the rtpmanager element’s “handle-sync” signal, and its usage in deepstream_test5_app_main.c.
This app uses resnet10.caffemodel for detection.
AMQP protocol test application
Application to test the AMQP protocol. This app uses resnet10.caffemodel for detection.
Azure MQTT test application
Test application to show Azure IoT device2edge messaging and device2cloud messaging using MQTT. This app uses resnet10.caffemodel for detection.
DeepStream reference application
Source code for the DeepStream reference application. This app uses resnet10.caffemodel for detection and 3 classifier models (i.e., Car Color, Make and Model).
UFF SSD detector
Configuration files and custom library implementation for the SSD detector model.
Faster RCNN detector
Configuration files and custom library implementation for the FasterRCNN model.
Yolo detector
Configuration files and custom library implementation for the Yolo models; currently Yolo v2, v2 tiny, v3, and v3 tiny are supported.
Dewarper example
Demonstrates dewarper functionality for single or multiple 360-degree camera streams. Reads camera calibration parameters from a CSV file and renders aisle and spot surfaces on the display.
Optical flow example
Demonstrates optical flow functionality for single or multiple streams. This example uses two GStreamer plugins (Gst-nvof and Gst-nvofvisual). The Gst-nvof element generates the MV (motion vector) data and attaches it as user metadata. The Gst-nvofvisual element visualizes the MV data using a predefined color wheel matrix.
Custom meta data example
Demonstrates how to add custom or user-specific metadata to any component of DeepStream. The test code attaches a 16-byte array filled with user data to the chosen component. The data is retrieved in another component. This app uses resnet10.caffemodel for detection.
MJPEG and JPEG decoder and inferencing example
Builds on deepstream-test3 to demonstrate image decoding instead of video. This example uses a custom decode bin so the MJPEG codec can be used as input. This app uses resnet10.caffemodel for detection.
Image/Video segmentation example
Demonstrates segmentation of multi-stream video or images using a semantic or industrial neural network and rendering the output to a display. This app uses unet_output_graph.uff for industrial and unetres18_v4_pruned0.65_800_data.uff for semantic use cases.
Handling metadata before Gst-nvstreammux
Demonstrates how to set metadata before the Gst-nvstreammux plugin in the DeepStream pipeline, and how to access it after Gst-nvstreammux. This app uses resnet10.caffemodel for detection.
Gst-nvinfer tensor meta flow example
Demonstrates how to flow and access nvinfer tensor output as metadata. NOTE: This binary is not packaged due to OpenCV deprecation. This app needs to be compiled by the user. This app uses resnet10.caffemodel for detection and 3 classifier models (i.e., Car Color, Make and Model).
Preprocess example
Demonstrates inference on preprocessed ROIs configured for the streams. This app uses resnet10.caffemodel for detection.
3D action recognition Reference app
Demonstrates a sequence-batching based 3D or 2D model inference pipeline for action recognition. It also includes a sequence-based preprocess custom lib for NCSHW temporal batching. Refer to the Prerequisites section in the README before running the application. This app uses resnet18_2d_rgb_hmdb5_32.etlt for 2D and resnet18_3d_rgb_hmdb5_32.etlt for 3D action recognition.
Analytics example
Demonstrates batched analytics such as ROI filtering, line crossing, direction detection, and overcrowding detection. This app uses resnet10.caffemodel for detection.
OpenCV example
Demonstrates the use of OpenCV in the dsexample plugin. dsexample must be compiled with the flag WITH_OPENCV=1. This app uses resnet10.caffemodel for detection.
Image as Metadata example
apps/sample_apps/deepstream-image-meta-test
Demonstrates how to attach an encoded image as metadata and save the images in JPEG format. This app uses resnet10.caffemodel for detection.
Appsrc and Appsink example
Demonstrates appsrc and appsink usage for feeding data from non-DeepStream code into a DeepStream pipeline and for extracting pipeline data back out, respectively. This app uses resnet10.caffemodel for detection.
Cuda Appsrc and Appsink example
Demonstrates how CUDA frames acquired from outside DeepStream can be fed to a DeepStream pipeline.
Transfer learning example
Demonstrates a mechanism to save images of objects detected with low confidence; these images can then be used for further training. This app uses resnet10.caffemodel for detection.
Instance segmentation example
Demonstrates instance segmentation using the Mask-RCNN model. NOTE: This binary is not packaged due to OpenCV deprecation. This app needs to be compiled by the user.
DeepStream Audio Reference Application
Source code for the DeepStream reference application demonstrating an audio analytics pipeline. This app uses the SONYC audio model to classify labels.
Smart Record example
Demonstrates event based smart record functionality. This app uses resnet10.caffemodel for detection.
Automatic Speech Recognition
Demonstrates Automatic Speech Recognition functionality. Note: This application requires Riva ASR services to be available. Refer to the Prerequisites section in the README before running the application. The default model for this application is Jasper; other options are CitriNet and QuartzNet.
Text To Speech Conversion (Alpha)
Demonstrates Text To Speech conversion functionality along with Automatic Speech Recognition. Note: This application requires Riva TTS and ASR services to be available. Refer to the Prerequisites section in the README before running the application. This application uses the CitriNet model for ASR and the FastPitch and HiFi-GAN models for TTS.
Audio+video+Text Synchronization (Alpha)
Demonstrates synchronization of audio, video, and text output from nvdsasr in a DeepStream pipeline. Note: This application requires Riva ASR services to be available. Refer to the Prerequisites section in the README before running the application. This app uses Jasper models for speech recognition.
DeepStream NMOS Application
This application demonstrates how to create a DeepStream app as an NMOS Node. It uses a library (NvDsNmos) which provides the APIs to create, destroy and internally manage the NMOS Node. The NMOS Node can automatically discover and register with an NMOS Registry on the network using the AMWA IS-04 Registration API.
It also shows how to create various Video and Audio pipelines, run them simultaneously and reconfigure them based on NMOS events such as AMWA IS-05 Connection API requests from an NMOS Controller.
DeepStream UCX test 1
Demonstrates how to use the communication plugin gst-nvdsucx to send and receive video data over RDMA without any special metadata.
DeepStream UCX test 2
Demonstrates how to use the communication plugin gst-nvdsucx to send and receive video and metadata over RDMA, with custom serialization and deserialization through the libnvds_video_metadata_serialization.so library.
DeepStream UCX test 3
Demonstrates how to use the communication plugin gst-nvdsucx to send and receive audio and metadata over RDMA, using custom audio serialization and deserialization through the libnvds_audio_metadata_serialization.so library.
DeepStream 3D Depth Camera Reference App
Demonstrates how to set up depth capture, depth render, 3D point cloud processing, and 3D points render pipelines over DS3D interfaces and the custom libs ds3d::dataloader, ds3d::datafilter, and ds3d::datarender. See more details in DeepStream 3D Depth Camera App.
DeepStream Lidar Data Inferencing Reference App
Demonstrates how to read in point cloud data, run inference on it with the PointPillars 3D object detection model through Triton, and render the point cloud data and 3D objects with GLES. The whole application is based on DS3D interfaces and the custom libs ds3d::dataloader, ds3d::datafilter, and ds3d::datarender. See more details in DeepStream Lidar Inference App (Alpha).
Triton Onnx YOLO-v3
Configuration files and custom library implementation for the ONNX YOLO-v3 model. Demonstrates how to use DS-Triton to run models with dynamically sized output tensors, how to implement a custom lib to run ONNX YOLO-v3 models with multiple input tensors, and how to postprocess mixed-batch tensor data and attach the results as NvDs metadata.
DeepStream Server Application
Demonstrates REST API support to control the DeepStream pipeline on the fly.
DeepStream Can Orientation Sample App
Demonstrates can orientation detection with the CV-based VPI template matching algorithm. VPI template matching is implemented with the DeepStream video template plugin. See more details in apps/sample_apps/deepstream-can-orientation-app/README.
Triton Ensemble Model Example
Configuration files, a Triton custom C++ backend implementation, and a custom library implementation for the Triton ensemble model example. Demonstrates the use of Triton ensemble models with the gst-nvinferserver plugin and how to implement a custom Triton C++ backend to access DeepStream metadata, such as stream IDs, using multi-input tensors.
Uses the gst-nvdsxfer plugin to run pipelines on an NVLINK-enabled multi-GPU setup to achieve better performance. The “position” param of the nvxfer config section in the dsmultigpu_config.yml file can be used to simulate the various multi-GPU use-case pipelines supported by the gst-nvdsxfer plugin.
Apps which write output files (example: deepstream-transfer-learning-app) should be run with sudo permission.
Plugin and Library Source Details¶
The following table describes the contents of the sources directory except for the reference test applications:
Plugin or library
Path inside sources directory
DsExample GStreamer plugin
Template plugin for integrating custom algorithms into the DeepStream SDK graph.
GStreamer Gst-nvmsgconv plugin
Source code for the GStreamer Gst-nvmsgconv plugin to convert metadata to schema format.
GStreamer Gst-nvmsgbroker plugin
Source code for the GStreamer Gst-nvmsgbroker plugin to send data to the server.
GStreamer Gst-nvdspreprocess plugin
Source code for the GStreamer Gst-nvdspreprocess plugin for preprocessing on the predefined ROIs.
GStreamer Gst-nvinfer plugin
Source code for the GStreamer Gst-nvinfer plugin for inference.
GStreamer Gst-nvinferserver plugin
Source code for the GStreamer Gst-nvinferserver plugin for inference using Triton Inference Server.
GStreamer Gst-nvdsosd plugin
Source code for the GStreamer Gst-nvdsosd plugin to draw bboxes, text and other objects.
GStreamer Gst-nvdewarper plugin
Source code for the GStreamer Gst-nvdewarper plugin to dewarp frames.
NvDsInfer library
Source code for the NvDsInfer library, used by the Gst-nvinfer GStreamer plugin.
NvDsInferServer library
Source code for the NvDsInferServer library, used by the Gst-nvinferserver GStreamer plugin.
NvDsNmos library
Source code for the NvDsNmos library, demonstrated by the DeepStream NMOS Application.
NvMsgConv library
Source code for the NvMsgConv library, required by the Gst-nvmsgconv GStreamer plugin.
Kafka protocol adapter
Protocol adapter for Kafka.
Source code for the REST server.
Source code for the nvdsmultiurisrcbin helper and the custom gst-events, gst-messages, and common configs required for the REST server.
Custom model output parsing example for detectors and classifiers.
Source code for the v4l2 codecs (see the note below [1]).
GStreamer gst-nvdsvideotemplate plugin
Source code for the template plugin used to implement custom video algorithms (non-GStreamer based).
NvDsVideoTemplate custom library
Source code for the custom library used to implement custom video algorithms.
GStreamer gst-nvdsaudiotemplate plugin
Source code for the template plugin used to implement custom audio algorithms (non-GStreamer based).
NvDsAudioTemplate custom library
Source code for the custom library used to implement custom audio algorithms.
Source code for the GStreamer Gst-nvdsmetainsert and Gst-nvdsmetaextract plugins to process metadata.
NvDsMetaUtils SEI serialization library
Source code for custom meta serialization/deserialization embedded in the encoded bitstream as SEI data, required by the Gst-nvdsmetautils plugins.
NvDsMetaUtils Audio serialization library
Source code for audio NvDsFrameMeta serialization/deserialization, required by the Gst-nvdsmetautils plugins.
NvDsMetaUtils Video serialization library
Source code for video NvDsFrameMeta and NvDsObjectMeta serialization/deserialization, required by the Gst-nvdsmetautils plugins.
GStreamer gst-nvvideotestsrc plugin
Source code to generate video test data in a variety of formats and patterns, written directly to GPU output buffers.
GStreamer gst-nvdsspeech plugin
Interface for a custom low-level Automatic Speech Recognition (ASR) library that can be loaded by the Gst-nvdsasr plugin.
GStreamer gst-nvdstexttospeech plugin
Interface for a custom low-level Text To Speech (TTS) library that can be loaded by the Gst-nvds_text_to_speech plugin.
GStreamer gst-nvdspostprocess plugin
Source code for the plugin and low-level lib that provide a custom library interface for post-processing the tensor output of the inference plugins (nvinfer/nvinferserver).
GStreamer gst-nvtracker plugin
Source code for the plugin to track detected objects with persistent (possibly unique) IDs over time.
GStreamer gst-nvdsanalytics plugin
Interface for performing analytics on metadata attached by nvinfer (primary detector) and nvtracker.
GStreamer gst-nvstreammux New plugin
Source code for the plugin to form a batch of frames from multiple input sources.
Gst-v4l2 sources are not present in the DeepStream package. To download them, follow these steps:
1. In the Search filter field, enter
2. Select the appropriate item for the L4T Release: L4T Driver Package (BSP) Sources.
3. Download the file and un-tar it to get the
Gst-v4l2 source files are in gst-nvvideo4linux2_src.tbz2. libnvv4l2 sources are present in