C/C++ Sample Apps Source Details

The DeepStream SDK package includes archives containing plugins, libraries, applications, and source code. The sources directory is located at /opt/nvidia/deepstream/deepstream-6.4/sources for both Debian installation (on Jetson or dGPU) and SDK Manager installation. For tar packages, the source files are in the extracted deepstream package. DeepStream Python bindings and sample applications are available as separate packages. For more information, see https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.

DeepStream graphs created using the Graph Composer are listed under the Reference graphs section. For more information, see the Graph Composer Introduction.

Sample source details

Each entry below lists the reference test application, its path inside the sources directory, and a description.

Sample test application 1

apps/sample_apps/deepstream-test1

Sample of how to use DeepStream elements for a single H.264 stream: filesrc → decode → nvstreammux → nvinfer or nvinferserver (primary detector) → nvdsosd → renderer. This app uses resnet18_trafficcamnet.etlt for detection.
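
The same pipeline can be sketched in a few lines of C with gst_parse_launch(); this is a minimal, illustrative sketch, assuming the default sample stream under /opt/nvidia/deepstream/deepstream/samples/streams, the sample's dstest1_pgie_config.txt nvinfer configuration, and nveglglessink standing in for the renderer (use nv3dsink on Jetson).

    /* Minimal sketch of a deepstream-test1 style pipeline built with
     * gst_parse_launch(). Paths, the nvinfer config file name, and the sink
     * element are assumptions; adjust them for your installation/platform. */
    #include <gst/gst.h>

    int main (int argc, char *argv[])
    {
      GError *err = NULL;

      gst_init (&argc, &argv);

      GstElement *pipeline = gst_parse_launch (
          "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! "
          "h264parse ! nvv4l2decoder ! m.sink_0 "
          "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
          "nvinfer config-file-path=dstest1_pgie_config.txt ! "
          "nvvideoconvert ! nvdsosd ! nveglglessink", &err);
      if (!pipeline) {
        g_printerr ("Failed to create pipeline: %s\n", err->message);
        g_clear_error (&err);
        return -1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      /* Block until EOS or an error is posted on the bus. */
      GstBus *bus = gst_element_get_bus (pipeline);
      GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
      if (msg)
        gst_message_unref (msg);
      gst_object_unref (bus);

      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }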

Sample test application 2

apps/sample_apps/deepstream-test2

Sample of how to use DeepStream elements for a single H.264 stream: filesrc → decode → nvstreammux → nvinfer or nvinferserver (primary detector) → nvtracker → nvinfer or nvinferserver (secondary classifier) → nvdsosd → renderer. This app uses resnet18_trafficcamnet.etlt for detection and two classifier models (resnet18_vehiclemakenet.etlt and resnet18_vehicletypenet.etlt).

Sample test application 3

apps/sample_apps/deepstream-test3

Builds on deepstream-test1 (simple test application 1) to demonstrate how to:

  • Use multiple sources in the pipeline.

  • Use uridecodebin to accept any type of input (e.g., RTSP or file), any GStreamer-supported container format, and any codec.

  • Configure Gst-nvstreammux to generate a batch of frames and infer on it for better resource utilization.

  • Extract the stream metadata, which contains useful information about the frames in the batched buffer (see the probe sketch after this entry).

This app uses resnet18_trafficcamnet.etlt for detection.
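
As a reference for the metadata-extraction point above, the sketch below shows the shape of a buffer pad probe that walks the batched metadata (for example on the sink pad of nvdsosd); the structure and accessor names follow nvdsmeta.h/gstnvdsmeta.h, and attaching the probe with gst_pad_add_probe() is left to the application.

    /* Sketch: a buffer probe that walks NvDsBatchMeta to read per-frame and
     * per-object metadata from the batched buffer. */
    #include <gst/gst.h>
    #include "gstnvdsmeta.h"

    static GstPadProbeReturn
    osd_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta)
        return GST_PAD_PROBE_OK;

      for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
        guint num_objects = 0;

        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
          NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
          num_objects++;
          g_print ("  class %d (%s), confidence %.2f\n",
              obj_meta->class_id, obj_meta->obj_label, obj_meta->confidence);
        }
        g_print ("source %u, frame %d: %u objects\n",
            frame_meta->source_id, frame_meta->frame_num, num_objects);
      }
      return GST_PAD_PROBE_OK;
    }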

Sample test application 4

apps/sample_apps/deepstream-test4

Builds on deepstream-test1 for a single H.264 stream (filesrc → decode → nvstreammux → nvinfer or nvinferserver → nvdsosd → renderer) to demonstrate how to:

  • Use the Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline.

  • Create NVDS_EVENT_MSG_META type metadata and attach it to the buffer.

  • Use NVDS_EVENT_MSG_META for different types of objects, e.g. vehicle and person.

  • Implement “copy” and “free” functions for use if metadata is extended through the extMsg field (a condensed sketch follows this entry).

This app uses resnet18_trafficcamnet.etlt for detection.
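
A condensed sketch of attaching such message metadata from a buffer probe follows; the field, enum, and helper names (NvDsEventMsgMeta, NVDS_EVENT_MSG_META, nvds_acquire_user_meta_from_pool, nvds_add_user_meta_to_frame) are taken from nvdsmeta.h / nvdsmeta_schema.h, but exact schema fields can vary between DeepStream releases, so treat this as illustrative rather than a drop-in replacement for the sample code.

    /* Sketch: build an NvDsEventMsgMeta for a detected object and attach it to
     * the frame as user metadata so Gst-nvmsgconv can convert it downstream.
     * Verify field names against nvdsmeta_schema.h of your DeepStream release. */
    #include <glib.h>
    #include "nvdsmeta.h"
    #include "nvdsmeta_schema.h"

    static gpointer msg_meta_copy (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      /* Shallow copy; deep-copy any string or extMsg fields if they are set. */
      return g_memdup2 (user_meta->user_meta_data, sizeof (NvDsEventMsgMeta));
    }

    static void msg_meta_free (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      g_free (user_meta->user_meta_data);
      user_meta->user_meta_data = NULL;
    }

    static void
    attach_event_msg_meta (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta,
        NvDsObjectMeta *obj_meta)
    {
      NvDsEventMsgMeta *msg = g_malloc0 (sizeof (NvDsEventMsgMeta));
      msg->type = NVDS_EVENT_MOVING;
      msg->objType = NVDS_OBJECT_TYPE_VEHICLE;   /* or NVDS_OBJECT_TYPE_PERSON, ... */
      msg->objClassId = obj_meta->class_id;
      msg->trackingId = obj_meta->object_id;
      msg->bbox.left = obj_meta->rect_params.left;
      msg->bbox.top = obj_meta->rect_params.top;
      msg->bbox.width = obj_meta->rect_params.width;
      msg->bbox.height = obj_meta->rect_params.height;

      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      user_meta->user_meta_data = msg;
      user_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
      user_meta->base_meta.copy_func = msg_meta_copy;
      user_meta->base_meta.release_func = msg_meta_free;
      nvds_add_user_meta_to_frame (frame_meta, user_meta);
    }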

Sample test application 5

apps/sample_apps/deepstream-test5

Builds on top of deepstream-app. Demonstrates:

  • Use of Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline for multistream.

  • How to configure the Gst-nvmsgbroker plugin from the config file as a sink plugin (for Kafka, Azure, etc.).

  • How to handle the RTCP sender reports from RTSP servers or cameras and translate the Gst Buffer PTS to a UTC timestamp.

For more details, refer to the registration and usage of the RTCP sender report callback function test5_rtcp_sender_report_callback() in deepstream_test5_app_main.c. The GStreamer callback registration with the rtpmanager element’s “handle-sync” signal is documented in apps-common/src/deepstream_source_bin.c.

This app uses resnet18_trafficcamnet.etlt for detection.

AMQP protocol test application

libs/amqp_protocol_adaptor

Application to test the AMQP protocol. This app uses resnet18_trafficcamnet.etlt for detection.

Azure MQTT test application

libs/azure_protocol_adaptor

Test application to show Azure IoT device2edge messaging and device2cloud messaging using MQTT. This app uses resnet18_trafficcamnet.etlt for detection.

DeepStream reference application

apps/sample_apps/deepstream-app

Source code for the DeepStream reference application. This app uses resnet18_trafficcamnet.etlt for detection and two classifier models (resnet18_vehiclemakenet.etlt and resnet18_vehicletypenet.etlt).

UFF SSD detector

sources/objectDetector_SSD

Configuration files and custom library implementation for the SSD detector model.

Yolo detector

sources/objectDetector_Yolo

Configuration files and custom library implementation for the Yolo models, currently Yolo v2, v2 tiny, v3, and v3 tiny.

Dewarper example

apps/sample_apps/deepstream-dewarper-test

Demonstrates dewarper functionality for single or multiple 360-degree camera streams. Reads camera calibration parameters from a CSV file and renders aisle and spot surfaces on the display.

Optical flow example

apps/sample_apps/deepstream-nvof-test

Demonstrates optical flow functionality for single or multiple streams. This example uses two GStreamer plugins (Gst-nvof and Gst-nvofvisual). The Gst-nvof element generates the MV (motion vector) data and attaches it as user metadata. The Gst-nvofvisual element visualizes the MV data using a predefined color wheel matrix.

Custom meta data example

apps/sample_apps/deepstream-user-metadata-test

Demonstrates how to add custom or user-specific metadata to any component of DeepStream. The test code attaches a 16-byte array filled with user data to the chosen component. The data is retrieved in another component. This app uses resnet18_trafficcamnet.etlt for detection.
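
A sketch of the attachment side follows, assuming frame-level attachment inside a buffer probe; the 16-byte array mirrors the sample, while the meta type string EXAMPLE.APP.USER_META is an arbitrary illustrative name (the retrieving component must look up the same string).

    /* Sketch: attach a 16-byte user-defined array to a frame as user metadata.
     * The meta type string below is an illustrative, app-chosen identifier. */
    #include <glib.h>
    #include "nvdsmeta.h"

    #define USER_ARRAY_SIZE 16

    static gpointer user_meta_copy (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      return g_memdup2 (user_meta->user_meta_data, USER_ARRAY_SIZE);
    }

    static void user_meta_release (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      g_free (user_meta->user_meta_data);
      user_meta->user_meta_data = NULL;
    }

    static void
    attach_custom_user_meta (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta)
    {
      guint8 *array = g_malloc0 (USER_ARRAY_SIZE);
      for (guint i = 0; i < USER_ARRAY_SIZE; i++)
        array[i] = i;                                 /* application-specific payload */

      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      user_meta->user_meta_data = array;
      user_meta->base_meta.meta_type =
          nvds_get_user_meta_type ("EXAMPLE.APP.USER_META");
      user_meta->base_meta.copy_func = user_meta_copy;
      user_meta->base_meta.release_func = user_meta_release;
      nvds_add_user_meta_to_frame (frame_meta, user_meta);
    }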

MJPEG and JPEG decoder and inferencing example

apps/sample_apps/deepstream-image-decode-test

Builds on deepstream-test3 to demonstrate image decoding instead of video. This example uses a custom decode bin so the MJPEG codec can be used as input. This app uses resnet18_trafficcamnet.etlt for detection.

Image/Video segmentation example

apps/sample_apps/deepstream-segmentation-test

Demonstrates segmentation of multi-stream video or images using a semantic or industrial neural network and rendering the output to a display. This app uses unet_output_graph.uff for industrial and unetres18_v4_pruned0.65_800_data.uff for semantic use cases.

Handling metadata before Gst-nvstreammux

apps/sample_apps/deepstream-gst-metadata-test

Demonstrates how to set metadata before the Gst-nvstreammux plugin in the DeepStream pipeline, and how to access it after Gst-nvstreammux. This app uses resnet18_trafficcamnet.etlt for detection.
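
The sketch below illustrates the attachment side, assuming a probe on a pad upstream of Gst-nvstreammux; gst_buffer_add_nvds_meta() and NVDS_GST_CUSTOM_META come from gstnvdsmeta.h / nvdsmeta.h, while ExampleSensorMeta and the derived meta type value are illustrative. After Gst-nvstreammux, the same data is expected to appear in the frame's user metadata list, following the pattern of this sample.

    /* Sketch: attach app-defined metadata to a GstBuffer before Gst-nvstreammux.
     * ExampleSensorMeta and EXAMPLE_SENSOR_GST_META are illustrative names; the
     * copy/release callbacks follow the convention used by this sample, where
     * they receive the NvDsUserMeta created for the frame after the mux. */
    #include <gst/gst.h>
    #include "nvdsmeta.h"
    #include "gstnvdsmeta.h"

    /* App-defined meta type; values above NVDS_GST_CUSTOM_META are meant for apps. */
    #define EXAMPLE_SENSOR_GST_META (NVDS_GST_CUSTOM_META + 1)

    typedef struct {
      gint  sensor_id;
      gchar sensor_name[32];
    } ExampleSensorMeta;

    static gpointer sensor_meta_copy (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      return g_memdup2 (user_meta->user_meta_data, sizeof (ExampleSensorMeta));
    }

    static void sensor_meta_release (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      g_free (user_meta->user_meta_data);
      user_meta->user_meta_data = NULL;
    }

    static void
    attach_sensor_meta (GstBuffer *buf, gint sensor_id)
    {
      ExampleSensorMeta *sensor = g_malloc0 (sizeof (ExampleSensorMeta));
      sensor->sensor_id = sensor_id;
      g_snprintf (sensor->sensor_name, sizeof (sensor->sensor_name),
          "sensor-%d", sensor_id);

      NvDsMeta *meta = gst_buffer_add_nvds_meta (buf, sensor, NULL,
          sensor_meta_copy, sensor_meta_release);
      meta->meta_type = EXAMPLE_SENSOR_GST_META;
    }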

Gst-nvinfer tensor meta flow example

apps/sample_apps/deepstream-infer-tensor-meta-app

Demonstrates how to flow and access nvinfer tensor output as metadata. NOTE: This binary is not packaged due to OpenCV deprecation. This app needs to be compiled by the user. This app uses resnet18_trafficcamnet.etlt for detection and 2 classifier models (i.e., resnet18_vehiclemakenet.etlt, resnet18_vehicletypenet.etlt).
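
The access pattern looks roughly like the sketch below, assuming nvinfer runs with its output-tensor-meta property enabled so that it attaches NvDsInferTensorMeta as frame-level user metadata; the names follow gstnvdsmeta.h / gstnvdsinfer.h, and only a simple layer walk is shown.

    /* Sketch: read raw tensor output attached by nvinfer (output-tensor-meta=1)
     * from the frame-level user metadata list. */
    #include <gst/gst.h>
    #include "gstnvdsmeta.h"
    #include "gstnvdsinfer.h"

    static void
    print_tensor_meta (NvDsFrameMeta *frame_meta)
    {
      for (NvDsMetaList *l = frame_meta->frame_user_meta_list; l; l = l->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tensor_meta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;
        for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
          NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
          /* Host copies of the output buffers are in out_buf_ptrs_host[i]. */
          g_print ("layer %u: %s, %u elements\n",
              i, layer->layerName, layer->inferDims.numElements);
        }
      }
    }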

Preprocess example

apps/sample_apps/deepstream-preprocess-test

Demonstrates inference on preprocessed ROIs configured for the streams. This app uses resnet18_trafficcamnet.etlt for detection.

3D action recognition Reference app

apps/sample_apps/deepstream-3d-action-recognition

Demonstrates a sequence-batching-based 3D or 2D model inference pipeline for action recognition. It also includes a sequence-based preprocess custom lib for NCSHW temporal batching. Refer to the Prerequisites section in the README before running the application. This app uses resnet18_2d_rgb_hmdb5_32.etlt for 2D and resnet18_3d_rgb_hmdb5_32.etlt for 3D action recognition.

Analytics example

apps/sample_apps/deepstream-nvdsanalytics-test

Demonstrates batched analytics such as ROI filtering, line crossing, direction detection, and overcrowding detection. This app uses resnet18_trafficcamnet.etlt for detection.

OpenCV example

apps/sample_apps/deepstream-opencv-test

Demonstrates the use of OpenCV in the dsexample plugin. The dsexample plugin must be compiled with the flag WITH_OPENCV=1. This app uses resnet18_trafficcamnet.etlt for detection.

Image as Metadata example

apps/sample_apps/deepstream-image-meta-test

Demonstrates how to attach an encoded image as metadata and save the images in JPEG format. This app uses resnet18_trafficcamnet.etlt for detection.

Appsrc and Appsink example

apps/sample_apps/deepstream-appsrc-test

Demonstrates appsrc and appsink usage for feeding data from non-DeepStream code into a DeepStream pipeline and for retrieving data back from it. This app uses resnet18_trafficcamnet.etlt for detection.

Cuda Appsrc and Appsink example

apps/sample_apps/deepstream-appsrc-cuda-test

Demonstrates how CUDA frames acquired from outside DeepStream can be fed to a DeepStream pipeline.

Transfer learning example

apps/sample_apps/deepstream-transfer-learning-app

Demonstrates a mechanism to save images of objects detected with low confidence, which can then be used for further training. This app uses resnet18_trafficcamnet.etlt for detection.

Mask-RCNN example

apps/sample_apps/deepstream-mrcnn-test

Demonstrates instance segmentation using the Mask-RCNN model. NOTE: This binary is not packaged due to OpenCV deprecation. This app needs to be compiled by the user.

DeepStream Audio Reference Application

apps/sample_apps/deepstream-audio

Source code for the DeepStream reference application demonstrating an audio analytics pipeline. This app uses the SONYC audio model to classify audio labels.

Smart Record example

apps/sample_apps/deepstream-testsr

Demonstrates event-based smart record functionality. This app uses resnet18_trafficcamnet.etlt for detection.

Automatic Speech Recognition

apps/audio_apps/deepstream-asr-app

Demonstrates Automatic Speech Recognition functionality. Note: This application requires Riva ASR services to be available. Refer to the Prerequisites section in the README before running the application. The default model for this application is Jasper; other options are CitriNet and QuartzNet.

Text To Speech Conversion (Alpha)

apps/audio_apps/deepstream-asr-tts-app

Demonstrates Text To Speech conversion functionality along with Automatic Speech Recognition. Note: This application requires Riva TTS and ASR services to be available. Refer to the prerequisites in the README before running the application. This application uses the CitriNet model for ASR and the FastPitch and HiFi-GAN models for TTS.

Audio+video+Text Synchronization (Alpha)

apps/sample_apps/deepstream-avsync-app

Demonstrates synchronization of audio, video, and text output from nvdsasr in the DeepStream pipeline. Note: This application requires Riva ASR services to be available. Refer to the prerequisites in the README before running the application. This app uses Jasper models for speech recognition.

DeepStream NMOS Application

apps/sample_apps/deepstream-nmos

This application demonstrates how to create a DeepStream app as an NMOS Node. It uses a library (NvDsNmos) which provides the APIs to create, destroy and internally manage the NMOS Node. The NMOS Node can automatically discover and register with an NMOS Registry on the network using the AMWA IS-04 Registration API.

It also shows how to create various Video and Audio pipelines, run them simultaneously and reconfigure them based on NMOS events such as AMWA IS-05 Connection API requests from an NMOS Controller.

DeepStream UCX test

apps/sample_apps/deepstream-ucx-test

Demonstrates how to use the communication plugin gst-nvdsucx with DeepStream SDK. The application has been validated with kernel-5.15.

DeepStream 3D Depth Camera Reference App

apps/sample_apps/deepstream-3d-depth-camera

Demonstrates how to set up depth capture, depth render, 3D point-cloud processing, and 3D-points render pipelines over DS3D interfaces and the custom libs ds3d::dataloader, ds3d::datafilter, and ds3d::datarender. For more details, see DeepStream 3D Depth Camera App.

DeepStream Lidar Data Inferencing Reference App

apps/sample_apps/deepstream-lidar-inference-app

Demonstrates how to read in point cloud data, run inference on the point cloud data with the PointPillars 3D object detection model through Triton, and render the point cloud data and 3D objects with GLES. The whole application is based on DS3D interfaces and the custom libs ds3d::dataloader, ds3d::datafilter, and ds3d::datarender. For more details, see DeepStream Lidar Inference App (Alpha).

Triton Onnx YOLO-v3

sources/TritonOnnxYolo

Configuration files and custom library implementation for the ONNX YOLO-v3 model. Demonstrates how to use DS-Triton to run models with dynamic-sized output tensors, how to implement a custom lib to run ONNX YOLO-v3 models with multi-input tensors, and how to postprocess mixed-batch tensor data and attach it to nvds metadata.

Deepstream Server Application

apps/sample_apps/deepstream-server

Demonstrates REST API support to control the DeepStream pipeline on the fly.

DeepStream Can Orientation Sample App

apps/sample_apps/deepstream-can-orientation-app

Demonstrates can orientation detection with a CV-based VPI template matching algorithm. VPI template matching is implemented with the DeepStream video template plugin. For more details, see apps/sample_apps/deepstream-can-orientation-app/README.

Triton Ensemble Model Example

sources/TritonBackendEnsemble

Configuration files, Triton custom C++ backend implementation, and custom library implementation for the Triton ensemble model example. Demonstrates the use of Triton ensemble models with the gst-nvinferserver plugin and how to implement a custom Triton C++ backend to access DeepStream metadata, such as the stream ID, using multi-input tensors.

deepstream-multigpu-nvlink-test

apps/sample_apps/deepstream-multigpu-nvlink-test

Uses the gst-nvdsxfer plugin to simulate pipelines with an NVLink-enabled multi-GPU setup to achieve better performance. Users can use the “position” param of the nvxfer config section in the dsmultigpu_config.yml file to simulate the various multi-GPU use-case pipelines supported by the gst-nvdsxfer plugin.

Note

Apps which write output files (for example, deepstream-image-meta-test, deepstream-testsr, and deepstream-transfer-learning-app) should be run with sudo permissions.

Plugin and Library Source Details

The following table describes the contents of the sources directory except for the reference test applications:

Plugin and Library source details

Each entry below lists the plugin or library, its path inside the sources directory, and a description.

DsExample GStreamer plugin

gst-plugins/gst-dsexample

Template plugin for integrating custom algorithms into the DeepStream SDK graph.

GStreamer Gst-nvmsgconv plugin

gst-plugins/gst-nvmsgconv

Source code for the GStreamer Gst-nvmsgconv plugin to convert metadata to schema format.

GStreamer Gst-nvmsgbroker plugin

gst-plugins/gst-nvmsgbroker

Source code for the GStreamer Gst-nvmsgbroker plugin to send data to the server.

GStreamer Gst-nvdspreprocess plugin

gst-plugins/gst-nvdspreprocess

Source code for the GStreamer Gst-nvdspreprocess plugin for preprocessing on the predefined ROIs.

GStreamer Gst-nvinfer plugin

gst-plugins/gst-nvinfer

Source code for the GStreamer Gst-nvinfer plugin for inference.

GStreamer Gst-nvinferserver plugin

gst-plugins/gst-nvinferserver

Source code for the GStreamer Gst-nvinferserver plugin for inference using Triton Inference Server.

GStreamer Gst-nvdsosd plugin

gst-plugins/gst-nvdsosd

Source code for the GStreamer Gst-nvdsosd plugin to draw bboxes, text and other objects.

GStreamer Gst-nvdewarper plugin

gst-plugins/gst-nvdewarper

Source code for the GStreamer Gst-nvdewarper plugin to dewarp frames.

NvDsInfer library

libs/nvdsinfer

Source code for the NvDsInfer library, used by the Gst-nvinfer GStreamer plugin.

NvDsInferServer library

libs/nvdsinferserver

Source code for the NvDsInferServer library, used by the Gst-nvinferserver GStreamer plugin.

NvDsNmos library

libs/nvdsnmos

Source code for the NvDsNmos library, demonstrated by the DeepStream NMOS Application.

NvMsgConv library

libs/nvmsgconv

Source code for the NvMsgConv library, required by the Gst-nvmsgconv GStreamer plugin.

Kafka protocol adapter

libs/kafka_protocol_adapter

Protocol adapter for Kafka.

nvds_rest_server library

libs/nvds_rest_server

Source code for the REST server.

nvds_customhelper

libs/gstnvdscustomhelper

Source code for the nvdsmultiurisrcbin helper and the custom gst-events, gst-messages, and common configs required for the REST server.

nvdsinfer_customparser

libs/nvdsinfer_customparser

Custom model output parsing example for detectors and classifiers.

Gst-v4l2

See footnote 1 below.

Source code for v4l2 codecs.

GStreamer gst-nvdsvideotemplate plugin

gst-plugins/gst-nvdsvideotemplate

Source code for the template plugin to implement custom video algorithms (non-GStreamer based).

NvDsVideoTemplate custom library

gst-plugins/gst-nvdsvideotemplate/customlib_impl

Source code for the custom library to implement custom video algorithms.

GStreamer gst-nvdsaudiotemplate plugin

gst-plugins/gst-nvdsaudiotemplate

Source code for the template plugin to implement custom audio algorithms (non-GStreamer based).

NvDsAudioTemplate custom library

gst-plugins/gst-nvdsaudiotemplate/customlib_impl

Source code for the custom library to implement custom audio algorithms.

GStreamer gst-nvdsmetautils

gst-plugins/gst-nvdsmetautils

Source code for the GStreamer Gst-nvdsmetainsert and Gst-nvdsmetaextract plugins to process metadata.

NvDsMetaUtils SEI serialization library

gst-plugins/gst-nvdsmetautils/sei_serialization

Source code for custom metadata de/serialization to embed in the encoded bitstream as SEI data, required by the Gst-nvdsmetautils plugins.

NvDsMetaUtils Audio serialization library

gst-plugins/gst-nvdsmetautils/audio_metadata_serialization

Source code for audio NvDsFrameMeta de/serialization, required by the Gst-nvdsmetautils plugins.

NvDsMetaUtils Video serialization library

gst-plugins/gst-nvdsmetautils/video_metadata_serialization

Source code for video NvDsFrameMeta and NvDsObjectMeta de/serialization, required by the Gst-nvdsmetautils plugins.

GStreamer gst-nvvideotestsrc plugin

gst-plugins/gst-nvvideotestsrc

Source code to generate video test data in a variety of formats and patterns, written directly to GPU output buffers.

GStreamer gst-nvdsspeech plugin

gst-plugins/gst-nvdsspeech

Interface for a custom low-level Automatic Speech Recognition (ASR) library that can be loaded by the Gst-nvdsasr plugin.

GStreamer gst-nvdstexttospeech plugin

gst-plugins/gst-nvdstexttospeech

Interface for a custom low-level Text To Speech (TTS) library that can be loaded by the Gst-nvds_text_to_speech plugin.

GStreamer gst-nvdspostprocess plugin

gst-plugins/gst-nvdspostprocess

Source code for the plugin and the low-level lib that provide a custom library interface for post-processing of the tensor output of the inference plugins (nvinfer/nvinferserver).

GStreamer gst-nvtracker plugin

gst-plugins/gst-nvtracker

Source code for the plugin to track the detected objects with persistent (possibly unique) IDs over time.

GStreamer gst-nvdsanalytics plugin

gst-plugins/gst-nvdsanalytics

Interface for performing analytics on metadata attached by nvinfer (primary detector) and nvtracker.

GStreamer gst-nvstreammux (new) plugin

gst-plugins/gst-nvmultistream2

Source code for the plugin to form a batch of frames from multiple input sources.

Footnotes

1

Gst-v4l2 sources are not present in the DeepStream package. To download them, follow these steps:

  1. Go to: https://developer.nvidia.com/embedded/downloads.

  2. In the Search filter field, enter L4T.

  3. Select the appropriate item for L4T Release 36.2.

  4. Search for L4T Driver Package (BSP) Sources.

  5. Download the file and untar it to get the .tbz2 file.

  6. Expand the .tbz2 file. Gst-v4l2 source files are in gst-nvvideo4linux2_src.tbz2. libnvv4l2 sources are present in v4l2_libs_src.tbz2.