Sample Configurations and Streams

Contents of the package

This section describes the sample configuration files, streams, and models included in the package.

  • samples: Directory containing sample configuration files, streams, and models to run the sample applications.

  • samples/configs/deepstream-app: Configuration files for the reference application (see the example snippet after this list):

    • source30_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates 30 stream decodes with primary inferencing. (For dGPU and Jetson AGX Xavier platforms only.)

    • source4_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers. (For dGPU and Jetson AGX Xavier platforms only.)

    • source4_1080p_resnet_dec_infer_tracker_sgie_tiled_display_int8_gpu1.txt: Demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers on GPU 1 (for systems that have multiple GPU cards). For dGPU platforms only.

    • config_infer_primary.txt: Configures an nvinfer element as the primary detector.

    • config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, config_infer_secondary_vehicletypes.txt: Configure nvinfer elements as secondary classifiers.

    • iou_config.txt: Configures a low-level IOU (Intersection over Union) tracker.

    • tracker_config.yml: Configures the NvDCF tracker.

    • source1_usb_dec_infer_resnet_int8.txt: Demonstrates one USB camera as input.

    • source1_csi_dec_infer_resnet_int8.txt: Demonstrates one CSI camera as input; for Jetson only.

    • source2_csi_usb_dec_infer_resnet_int8.txt: Demonstrates one CSI camera and one USB camera as inputs; for Jetson only.

    • source6_csi_dec_infer_resnet_int8.txt: Demonstrates six CSI cameras as inputs; for Jetson only.

    • source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt: Demonstrates 8 Decode + Infer + Tracker; for Jetson Nano only.

    • source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt: Demonstrates 8 Decode + Infer + Tracker; for Jetson TX1 only.

    • source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt: Demonstrates 12 Decode + Infer + Tracker; for Jetson TX2 only.
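
The source*.txt application configurations above tie the nvinfer and tracker configuration files together through the deepstream-app groups. The excerpt below is a minimal sketch of that wiring for a source4-style configuration; group and key names follow the deepstream-app configuration file format, and the low-level tracker library path is an assumption for a default installation that should be adjusted to your DeepStream version.

    [primary-gie]
    enable=1
    # Primary detector: points the nvinfer element at config_infer_primary.txt
    config-file=config_infer_primary.txt

    [secondary-gie0]
    enable=1
    # One of the three secondary classifiers listed above
    config-file=config_infer_secondary_carcolor.txt

    [tracker]
    enable=1
    # ll-config-file selects iou_config.txt (IOU tracker) or tracker_config.yml
    # (NvDCF tracker); ll-lib-file must point at the matching low-level library
    ll-config-file=iou_config.txt
    ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_mot_iou.so

The shipped source4_1080p_resnet_dec_infer_tiled_display_int8.txt contains the complete set of groups ([application], [tiled-display], [source0], [streammux], [sink0], and so on); the snippet only highlights how the files in this list are referenced.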

  • samples/configs/deepstream-app-trtis: Configuration files for the reference application for inference using the Triton Inference Server:

    • source30_1080p_dec_infer-resnet_tiled_display_int8.txt (30 Decode + Infer)

    • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt (4 Decode + Infer + SGIE + Tracker)

    • source1_primary_classifier.txt (Single source + full frame classification)

Note

Other classification models can be used by changing the nvinferserver configuration file referenced in the [*-gie] group of the application configuration file (see the example snippet after the nvinferserver configuration files listed below).

    • source1_primary_detector.txt (Single source + object detection using SSD)

  • Configuration files for the nvinferserver element in configs/deepstream-app-trtis/:

    • config_infer_plan_engine_primary.txt (Primary Object Detector)

    • config_infer_secondary_plan_engine_carcolor.txt (Secondary Car Color Classifier)

    • config_infer_secondary_plan_engine_carmake.txt (Secondary Car Make Classifier)

    • config_infer_secondary_plan_engine_vehicletypes.txt (Secondary Vehicle Type Classifier)

    • config_infer_primary_classifier_densenet_onnx.txt (DenseNet-121 v1.2 classifier)

    • config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt (TensorFlow Inception v3 classifier - post-processing in Triton)

    • config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt (TensorFlow Inception v3 classifier - post-processing in DeepStream)

    • config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt (TensorFlow SSD Inception V2 Object Detector)
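
As the note above mentions, the model used by source1_primary_classifier.txt is selected through the [primary-gie] group of the application configuration. The sketch below shows only that group; the key names follow the deepstream-app configuration format used by the Triton samples and should be verified against the shipped source1_primary_classifier.txt.

    [primary-gie]
    enable=1
    # (0): nvinfer (TensorRT)  (1): nvinferserver (Triton Inference Server)
    plugin-type=1
    # Swap this file for another classifier from the list above, for example
    # config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt
    config-file=config_infer_primary_classifier_densenet_onnx.txt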

  • samples/configs/tlt_pretrained_models: Reference application configuration files for the pretrained models provided by the NVIDIA Transfer Learning Toolkit (TLT):

    • deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt (Demonstrates object detection using DashCamNet model with VehicleMakeNet and VehicleTypeNet as secondary classification models on one source)

    • deepstream_app_source1_faceirnet.txt (Demonstrates face detection for IR camera using FaceDetectIR object detection model on one source)

    • deepstream_app_source1_peoplenet.txt (Demonstrates object detection using PeopleNet object detection model on one source)

    • deepstream_app_source1_trafficcamnet.txt (Demonstrates object detection using TrafficCamNet object detection model on one source)

    • deepstream_app_source1_detection_models.txt (Demonstrates object detection using multiple TLT exported models located at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps. Models can be switched by changing the nvinfer configuration file.)

  • nvinfer element configuration files and label files in configs/tlt_pretrained_models (see the excerpt after this list):

    • config_infer_primary_dashcamnet.txt, labels_dashcamnet.txt (DashCamNet – Resnet18 based object detection model for Vehicle, Bicycle, Person, Roadsign)

    • config_infer_secondary_vehiclemakenet.txt, labels_vehiclemakenet.txt (VehicleMakeNet – Resnet18 based classification model for make of the vehicle)

    • config_infer_secondary_vehicletypenet.txt, labels_vehicletypenet.txt (VehicleTypeNet – Resnet18 based classification model for type of the vehicle)

    • config_infer_primary_faceirnet.txt, labels_faceirnet.txt (FaceIRNet – Resnet18 based face detection model for IR images)

    • config_infer_primary_peoplenet.txt, labels_peoplenet.txt (PeopleNet – Resnet18 based object detection model for Person, Bag, Face)

    • config_infer_primary_trafficcamnet.txt, labels_trafficnet.txt (TrafficCamNet – Resnet18 based object detection model for Vehicle, Bicycle, Person, Roadsign for traffic camera viewpoint)

    • config_infer_primary_detectnet_v2.txt, detectnet_v2_labels.txt (DetectNetv2 – Object detection model for Bicycle, Car, Person, Roadsign)

    • config_infer_primary_dssd.txt, dssd_labels.txt (DSSD – Object detection model for Bicycle, Car, Person, Roadsign)

    • config_infer_primary_frcnn.txt, frcnn_labels.txt (FasterRCNN – Object detection model for Bicycle, Car, Person, Roadsign, Background)

    • config_infer_primary_retinanet.txt, retinanet_labels.txt (RetinaNet – Object detection model for Bicycle, Car, Person, Roadsign)

    • config_infer_primary_ssd.txt, ssd_labels.txt (SSD – Object detection model for Bicycle, Car, Person, Roadsign)

    • config_infer_primary_yolov3.txt, yolov3_labels.txt (YoloV3 – Object detection model for Bicycle, Car, Person, Roadsign)

    • config_infer_primary_mrcnn.txt, mrcnn_labels.txt (MaskRCNN – Instance segmentation model for Background and Car)
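
Each nvinfer configuration file listed above follows the same [property] layout. The excerpt below is a sketch for the PeopleNet case; the .etlt path and model key are placeholders that must be taken from the shipped config_infer_primary_peoplenet.txt and the corresponding NGC model card.

    [property]
    gpu-id=0
    # Placeholder values: copy the real path and key from the shipped
    # config_infer_primary_peoplenet.txt and the NGC model card
    tlt-encoded-model=<path to the PeopleNet .etlt file>
    tlt-model-key=<model key from NGC>
    labelfile-path=labels_peoplenet.txt
    # PeopleNet detects Person, Bag, Face (see the list above)
    num-detected-classes=3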

  • samples/streams: The following streams are provided with the DeepStream SDK:

    Streams                  Type of Stream
    -----------------------  ------------------------------------------------
    sample_1080p_h264.mp4    H264 containerized stream
    sample_1080p_h265.mp4    H265 containerized stream
    sample_720p.h264         H264 elementary stream
    sample_720p.jpg          JPEG image
    sample_720p.mjpeg        MJPEG stream
    sample_cam6.mp4          H264 containerized stream (360D camera stream)
    sample_industrial.jpg    JPEG image
    yoga.jpg                 Image for perspective projection in Dewarper
    sample_qHD.mp4           Used for MaskRCNN
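
Any of these streams can be fed to the reference application through a [source*] group in the application configuration. The sketch below assumes the configuration file lives under samples/configs/deepstream-app, so the relative URI resolves to samples/streams.

    [source0]
    enable=1
    # Type 3 = multi-URI file source; num-sources controls how many copies
    # of the URI are decoded (the sample configs use this to scale stream count)
    type=3
    uri=file://../../streams/sample_1080p_h264.mp4
    num-sources=1
    gpu-id=0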

  • samples/models: The following sample models are provided with the SDK:

    DeepStream reference application:

    Model                              Model Type    No. of Classes    Resolution
    ---------------------------------  ------------  ----------------  -----------
    Primary Detector                   Resnet10      4                 640 × 368
    Secondary Car Color Classifier     Resnet18      12                224 × 224
    Secondary Car Make Classifier      Resnet18      20                224 × 224
    Secondary Vehicle Type Classifier  Resnet18      6                 224 × 224

    Segmentation example:

    Model       Model Type           No. of Classes    Resolution
    ----------  -------------------  ----------------  -----------
    Industrial  Resnet18 + UNet      1                 512 × 512
    Semantic    Resnet18 + UNet      4                 512 × 512
    Instance    Resnet50 + MaskRCNN  2                 1344 × 832
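
The table values map directly onto the nvinfer configuration for each model. The excerpt below is a sketch for the Resnet10 primary detector; the file names under samples/models/Primary_Detector follow the standard package layout but should be verified against config_infer_primary.txt in your installation.

    [property]
    gpu-id=0
    # Resnet10 four-class detector from the table above
    model-file=../../models/Primary_Detector/resnet10.caffemodel
    proto-file=../../models/Primary_Detector/resnet10.prototxt
    labelfile-path=../../models/Primary_Detector/labels.txt
    int8-calib-file=../../models/Primary_Detector/cal_trt.bin
    # network-mode 1 = INT8, matching the *_int8.txt application configurations
    network-mode=1
    num-detected-classes=4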

Scripts included with the package

The following scripts are included with the sample applications package: