Gst-nvdspostprocess in DeepStream

The Gst-nvdspostprocess plugin was released in DeepStream 6.1. The plugin supports parsing the output of various inference models in the DeepStream SDK. It parses the output-layer tensors attached to the buffer by the Gst-nvinfer and Gst-nvinferserver plugins. The aim of this document is to provide guidance on how to use the Gst-nvdspostprocess plugin with various inference models.

The document is divided into four parts: detector models, a primary classification model, the Mask RCNN model, and custom parsing functions.

Detector models

To use the Yolo V3 detector, follow the prerequisite steps mentioned in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/README.
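
For convenience, those prerequisites typically boil down to the sketch below. The download URLs and the CUDA_VER value are assumptions; the README remains the authoritative reference.

# Sketch of the prerequisites described in the objectDetector_Yolo README
# (URLs and CUDA_VER are assumptions; follow the README for the exact steps).
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo
# Download the Yolo V3 network description and weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
wget https://pjreddie.com/media/files/yolov3.weights
# Build the custom parser/engine-builder library used by nvinfer
export CUDA_VER=11.6   # assumption: set to the installed CUDA version
make -C nvdsinfer_custom_impl_Yolo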

  1. Check that the setup is configured correctly by running the test pipelines below from the /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/ folder.

#For dGPU
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin !  \
m.sink_0 nvstreammux name=m batch-size=1 width=1920  height=1080 ! nvinfer config-file-path=config_infer_primary_yoloV3.txt ! \
nvvideoconvert ! nvdsosd ! nveglglessink sync=0

#For Jetson
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin !  \
m.sink_0 nvstreammux name=m batch-size=1 width=1920  height=1080 ! nvinfer config-file-path=config_infer_primary_yoloV3.txt ! \
nvvideoconvert ! nvdsosd ! nv3dsink sync=0
  2. To update the above pipeline to use the post-processing plugin for parsing, modify the /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/config_infer_primary_yoloV3.txt file as follows:

     1. Change network-type=0 to network-type=100. This disables output post processing in the nvinfer plugin.

     2. Set output-tensor-meta=1 so that the nvinfer plugin attaches the output tensor meta to the buffer (see the excerpt below).
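
For reference, a minimal illustrative excerpt of the resulting configuration is shown below; only the two properties discussed above change, and every other property from the original config_infer_primary_yoloV3.txt stays as-is.

# Illustrative excerpt of config_infer_primary_yoloV3_modified.txt
[property]
# 100 disables output post processing inside nvinfer
network-type=100
# Attach the output tensors to the buffer so nvdspostprocess can parse them
output-tensor-meta=1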

  3. Save the modified file as config_infer_primary_yoloV3_modified.txt. Create the post-processing plugin configuration file in YAML format as shown below.

property:
 gpu-id: 0 # Set the GPU id
 process-mode: 1 # Set the mode as primary inference
 num-detected-classes: 80 # Change according to the model's output
 gie-unique-id: 1  # This should match the one set in the inference config
 ## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
 cluster-mode: 2  # Set the appropriate clustering algorithm
 network-type: 0  # Set the network type as detector
 labelfile-path: labels.txt # Path of the label file, relative to this config file
 parse-bbox-func-name: NvDsPostProcessParseCustomYoloV3 # Set the custom parsing function

class-attrs-all: # Set as done in the original infer configuration
 nms-iou-threshold: 0.5
 pre-cluster-threshold: 0.7
  4. Save the above config as config_detector.yml and run the pipeline given below.

#For dGPU
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=config_infer_primary_yoloV3_modified.txt ! \
nvdspostprocess postprocesslib-config-file=config_detector.yml \
postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so ! nvvideoconvert ! nvdsosd ! nveglglessink sync=0

#For Jetson
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=config_infer_primary_yoloV3_modified.txt ! \
nvdspostprocess postprocesslib-config-file=config_detector.yml \
postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so ! nvvideoconvert ! nvdsosd ! \
nv3dsink sync=0

Note

The NvDsPostProcessParseCustomYoloV3 function is defined in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspostprocess/postprocesslib_impl/post_processor_custom_impl.cpp
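
If you need to modify one of these parsing functions or add your own, the post-processing library can be rebuilt from the same sources. Below is a minimal sketch assuming the usual DeepStream Makefile convention of exporting CUDA_VER; consult the README shipped with the gst-nvdspostprocess sources for the exact build steps.

# Sketch: rebuild libpostprocess_impl.so after editing post_processor_custom_impl.cpp.
# CUDA_VER is an assumption; set it to the CUDA version installed on the target.
cd /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspostprocess/postprocesslib_impl
export CUDA_VER=11.6
make
# Point the nvdspostprocess element at the rebuilt library, for example:
#   nvdspostprocess postprocesslib-name=$(pwd)/libpostprocess_impl.so \
#     postprocesslib-config-file=config_detector.yml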

A process similar to the above can be followed to demonstrate the use of the Faster RCNN network (/opt/nvidia/deepstream/deepstream/sources/objectDetector_FasterRCNN/README) with the nvdspostprocess plugin, using the config_detector.yml given below.

property:
  gpu-id: 0 # Set the GPU id
  process-mode: 1 # Set the mode as primary inference
  num-detected-classes: 21 # Change according to the model's output
  gie-unique-id: 1  # This should match the one set in the inference config
  ## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
  cluster-mode: 2  # Set the appropriate clustering algorithm
  network-type: 0  # Set the network type as detector
  labelfile-path: labels.txt # Path of the label file, relative to this config file
  parse-bbox-func-name: NvDsPostProcessParseCustomFasterRCNN # Set the custom parsing function for FRCNN

class-attrs-all: # Set as done in the original infer configuration
  topk: 20
  nms-iou-threshold: 0.4
  pre-cluster-threshold: 0.5

class-attrs-0:
  pre-cluster-threshold: 1.1

The pipeline for running the Faster RCNN network with the modified nvinfer config and the post-processing plugin is given below.

#For dGPU
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin !  \
m.sink_0 nvstreammux name=m batch-size=1 width=1920  height=1080 ! nvinfer config-file-path=config_infer_primary_fasterRCNN_modified.txt ! \
nvdspostprocess postprocesslib-config-file=config_detector.yml postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so ! \
nvvideoconvert ! nvdsosd ! nveglglessink sync=0

#For Jetson
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin !  \
m.sink_0 nvstreammux name=m batch-size=1 width=1920  height=1080 ! nvinfer config-file-path=config_infer_primary_fasterRCNN_modified.txt ! \
nvdspostprocess postprocesslib-config-file=config_detector.yml postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so ! \
nvvideoconvert ! nvdsosd ! nv3dsink sync=0

Primary Classification model

The primary classification model is demonstrated using the DeepStream Triton Docker containers on dGPU. Once the container is running, the model repository and a classification test video have to be created.
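
A typical way to start the DeepStream Triton container is sketched below; the image tag is an assumption and should be adjusted to the installed DeepStream release.

# Sketch: start the DeepStream Triton container on dGPU (image tag is an
# assumption; use the -triton tag matching your DeepStream release).
docker run --gpus all -it --rm --net=host \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream:6.1-triton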

  1. Execute the following commands to download the model repository and create a sample classification video.

cd /opt/nvidia/deepstream/deepstream/samples
./prepare_ds_triton_model_repo.sh
apt install ffmpeg
./prepare_classification_test_video.sh
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton
  2. Verify the setup by running the following sample classification pipeline:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/classification_test_video.mp4  ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920  height=1080 ! \
nvinferserver config-file-path=config_infer_primary_classifier_densenet_onnx.txt  \
! nvvideoconvert ! nvdsosd ! nveglglessink sync=1

Note

To use nveglglessink inside the Docker container, ensure xhost + has been run on the host, and set the appropriate DISPLAY environment variable inside the container.
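
A minimal sketch of that display setup, assuming the container was started with the X11 socket mounted as shown earlier (the display number is an assumption):

# On the host, before attaching to the container:
xhost +
# Inside the container (adjust the display number to your setup):
export DISPLAY=:0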

  3. Now update config_infer_primary_classifier_densenet_onnx.txt to disable post processing and to attach the output tensor meta in nvinferserver. This is done by setting infer_config { postprocess { other {} } } and output_control { output_tensor_meta: true } in the configuration file, as shown below.

infer_config {
 unique_id: 5
 gpu_ids: [0]
 max_batch_size: 1
 backend {
   triton {
     model_name: "densenet_onnx"
     version: -1
     model_repo {
       root: "../../triton_model_repo"
       strict_model_config: true
       tf_gpu_memory_fraction: 0.0
       tf_disable_soft_placement: 0
     }
   }
 }
 preprocess {
   network_format: IMAGE_FORMAT_RGB
   tensor_order: TENSOR_ORDER_LINEAR
   maintain_aspect_ratio: 0
   frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
   frame_scaling_filter: 1
   normalize {
     scale_factor: 0.0078125
     channel_offsets: [128, 128, 128]
   }
 }
 #Disable post processing in nvinferserver
 postprocess {
   other {
   }
 }
 extra {
   copy_input_to_host_buffers: false
   output_buffer_pool_size: 2
 }
}
input_control {
 process_mode: PROCESS_MODE_FULL_FRAME
 interval: 0
}
#Enable attaching output tensor meta in nvinferserver
output_control {
 output_tensor_meta: true
}
  4. Save the above config as config_infer_primary_classifier_densenet_onnx_modified.txt. Create config_classifier.yml as given below.

property:
 gpu-id: 0
 network-type: 1 # Type of network, i.e. classifier
 process-mode: 1 # Operate in primary mode, i.e. on the full frame
 classifier-threshold: 0.2 # Set the classifier threshold
 gie-unique-id: 5 # Set the unique id matching the one in the inference config
 classifier-type: ObjectClassifier # Type of classifier
 labelfile-path: /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/densenet_onnx/densenet_labels.txt # Path of the labels file
  5. The following pipeline with the nvdspostprocess plugin can now be executed to view the classification results:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/classification_test_video.mp4 ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinferserver \
config-file-path=config_infer_primary_classifier_densenet_onnx_modified.txt ! \
nvdspostprocess postprocesslib-config-file=config_classifier.yml \
postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so ! nvvideoconvert ! nvdsosd ! nveglglessink sync=1

Mask RCNN Model

To use the instance segmentation model, follow the README at /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README.md to obtain the TAO toolkit config files and the PeopleSegNet model.

  1. Once the setup is done, execute the following pipeline to validate the model.

cd /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=config_infer_primary_peopleSegNet.txt ! \
nvvideoconvert ! nvdsosd display-mask=1 process-mode=0 ! nveglglessink sync=0

Note

For correct operation, ensure the TensorRT OSS plugin is compiled and replaced as mentioned in the TAO README.

  2. As mentioned in the earlier sections, update the nvinfer configuration file to disable post processing and to enable attaching the output tensor meta. This is done by setting network-type=100 and output-tensor-meta=1.

  3. Save the file as config_infer_primary_peopleSegNet_modified.txt. Create config_mrcnn.yml as given below.

property:
 gpu-id: 0
 process-mode: 1 # Process on the full frame
 num-detected-classes: 2 # Total detected classes
 gie-unique-id: 1  # Match the gie-unique-id of the inference config
 ## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
 cluster-mode: 4 # Disable clustering
 network-type: 3 # Network is instance segmentation
 labelfile-path: peopleSegNet_labels.txt
 parse-bbox-instance-mask-func-name: NvDsPostProcessParseCustomMrcnnTLTV2

class-attrs-all:
 pre-cluster-threshold: 0.8
  4. The following pipeline can be used to test the nvdspostprocess plugin with the MRCNN network, using the above configuration files.

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
nvinfer config-file-path=config_infer_primary_peopleSegNet_modified.txt ! \
nvdspostprocess postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so \
postprocesslib-config-file=config_mrcnn.yml ! nvvideoconvert ! nvdsosd display-mask=1 process-mode=0 ! nveglglessink sync=0

Custom Parsing functions

This section lists the parsing functions provided by the post-processing library for the supported network architectures.

Custom Parsing functions supported

Custom Parsing Function                         Description
NvDsPostProcessParseCustomResnet                Parsing of the Resnet 10 model packaged in DeepStream
NvDsPostProcessParseCustomTfSSD                 Parsing of the Tensorflow/Onnx SSD detector
NvDsPostProcessParseCustomNMSTLT                Parsing of the TAO Toolkit Open Architecture models SSD, FRCNN, DSSD, RetinaNet
NvDsPostProcessParseCustomBatchedNMSTLT         Parsing of the TAO Toolkit Open Architecture models Yolo V3, Yolo V4
NvDsPostProcessParseCustomMrcnnTLTV2            Parsing of the TAO Toolkit Open Architecture model MaskRCNN
NvDsPostProcessParseCustomFasterRCNN            Parsing of the Faster R-CNN network
NvDsPostProcessClassiferParseCustomSoftmax      Parsing of the Resnet 18 vehicle type classifier model packaged in DeepStream
NvDsPostProcessParseCustomSSD                   Parsing of the SSD network
NvDsPostProcessParseCustomYoloV3                Parsing of the Yolo V3 network
NvDsPostProcessParseCustomYoloV3Tiny            Parsing of the Yolo V3 Tiny network
NvDsPostProcessParseCustomYoloV2                Parsing of the Yolo V2 network
NvDsPostProcessParseCustomYoloV2Tiny            Parsing of the Yolo V2 Tiny network