Deploying to DeepStream

The deep learning and computer vision models that you trained can be deployed on edge devices, such as a Jetson Xavier, Jetson Nano, or Tesla GPU, or in the cloud with NVIDIA GPUs. TLT has been designed to integrate with the DeepStream SDK, so models trained with TLT will work out of the box with the DeepStream SDK.

The DeepStream SDK is a streaming analytics toolkit for accelerating the development of AI-based video analytics applications. DeepStream supports direct integration of exported Classification and DetectNet_v2 models into the DeepStream sample app. The documentation for the DeepStream SDK is provided here. For other models, such as YOLOv3, FasterRCNN, SSD, DSSD, RetinaNet, and MaskRCNN, there are a few extra steps required, which are covered in this chapter.

To deploy a model trained by TLT to DeepStream, you have two options:

  • Option 1: Integrate the model (.etlt) with the encrypted key directly in the DeepStream app. The model file is generated by tlt-export.

  • Option 2: Generate a device specific optimized TensorRT engine, using tlt-converter. The TensorRT engine file can also be ingested by DeepStream.

Machine-specific optimizations are performed as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the inference environment's TensorRT or CUDA libraries are updated (including minor version updates), or if a new model is generated, a new engine needs to be built. Running an engine that was generated with a different version of TensorRT or CUDA is not supported; it can cause unknown behavior affecting inference speed, accuracy, and stability, or it may fail to run altogether.
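Before deploying an engine, it can help to confirm that the target machine's TensorRT and CUDA versions match the ones used to generate it. A minimal check, assuming a Debian-based target with the TensorRT packages and CUDA toolkit installed:

# List the installed TensorRT (nvinfer) packages and their versions
dpkg -l | grep nvinfer

# Show the CUDA toolkit version on the target
nvcc --version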

This image shows the DeepStream deployment method for all the models, plus the two deployment options. Option 1 is very straightforward: the .etlt file and calibration cache are used directly by DeepStream, which automatically generates the TensorRT engine file and then runs inference. TensorRT engine generation can take some time, depending on the size of the model and the type of hardware. With Option 2, engine generation can be done ahead of time: use tlt-converter to convert the .etlt file to a TensorRT engine, and then provide the engine file directly to DeepStream.

dstream_deploy_options.png

Running TLT models in DeepStream for DetectNet_v2-based detection and image classification, shown in the top half of the figure, is very straightforward. All that is required is the encrypted TLT model (.etlt), an optional INT8 calibration cache, and a DeepStream config file. Go to Integrating a DetectNet_v2 model to see the DeepStream config file.

For other detection models such as FasterRCNN, YOLOv3, RetinaNet, SSD, and DSSD, and for the instance segmentation model MaskRCNN, there are extra steps that need to be completed before the models will work with DeepStream. Here are the steps, with detailed instructions in the following sections.

  • Step 1: Build TensorRT Open source software (OSS). This is required because several TensorRT plugins that are required by these models are only available in TensorRT open source repo and not in the general TensorRT release. For more information and instructions, see the TensorRT Open Source Software section.

  • Step 2: Build custom parsers for DeepStream. The parsers are required to convert the raw tensor data from the inference into (x, y) locations of bounding boxes around the detected objects. This post-processing algorithm varies based on the detection architecture. For DetectNet_v2, custom parsers are not required because the parsers are built into the DeepStream SDK. For the other detectors, DeepStream provides the flexibility to add your own custom bounding box parser, and that is what is used for these models.

TensorRT Open Source Software (OSS)

A TensorRT OSS build is required for FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, and MaskRCNN models. This is required because several TensorRT plugins that are required by these models are only available in the TensorRT open source repo and not in the general TensorRT release. The table below shows the plugins that are required by each network.

Network      Plugins required
SSD          batchTilePlugin, NMSPlugin
FasterRCNN   cropAndResizePlugin, proposalPlugin
YOLOv3       batchTilePlugin, resizeNearestPlugin, batchedNMSPlugin
DSSD         batchTilePlugin, NMSPlugin
RetinaNet    batchTilePlugin, NMSPlugin
MaskRCNN     generateDetectionPlugin, multilevelProposeROI, multilevelCropAndResizePlugin, resizeNearestPlugin

If the deployment platform is x86 with an NVIDIA GPU, follow the instructions for x86; if your deployment is on the NVIDIA Jetson platform, follow the instructions for Jetson.

TensorRT OSS on x86

Building TensorRT OSS on x86:

  1. Install CMake (>= 3.13).

    Note

    TensorRT OSS requires CMake >= v3.13, so install CMake 3.13 if your CMake version is lower than 3.13.

    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


  2. Get GPU Arch. The GPU_ARCHS value can be retrieved by the deviceQuery CUDA sample:

    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery

    If /usr/local/cuda/samples doesn't exist on your system, you can download deviceQuery.cpp from this repo, then compile and run deviceQuery:

    nvcc deviceQuery.cpp -o deviceQuery
    ./deviceQuery

    This command will output something like the following, which indicates that GPU_ARCHS is 75, based on the CUDA Capability major/minor version:

    Detected 2 CUDA Capable device(s)

    Device 0: "Tesla T4"
      CUDA Driver Version / Runtime Version          10.2 / 10.2
      CUDA Capability Major/Minor version number:    7.5

  3. Build TensorRT OSS:

    git clone -b release/7.0 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build

    Note

    Make sure your GPU_ARCHS from step 2 is in TensorRT OSS CMakeLists.txt. If GPU_ARCHS is not in TensorRT OSS CMakeLists.txt, add -DGPU_ARCHS=<VER> as below, where <VER> represents GPU_ARCHS from step 2.

    /usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)

    After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

  4. Replace the original “libnvinfer_plugin.so*”:

    # Back up the original libnvinfer_plugin.so.7.x.y
    sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak
    # Copy the newly built plugin library in its place
    sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y
    sudo ldconfig
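As an optional sanity check after step 4, you can confirm that the dynamic linker now resolves libnvinfer_plugin to the replaced library; this is a suggested verification, not a required step:

# The resolved path should point at the library you just copied
ldconfig -p | grep libnvinfer_plugin
ls -l /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7*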

TensorRT OSS on Jetson (ARM64)

  1. Install CMake (>= 3.13).

    Note

    TensorRT OSS requires CMake >= v3.13, while the default CMake on Jetson (Ubuntu 18.04) is version 3.10.2.

    Upgrade CMake using the following commands:

    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

  2. Get the GPU Arch based on your platform. The GPU_ARCHS values for different Jetson platforms are given in the following table.

    Jetson Platform         GPU_ARCHS

    Nano/TX1                53
    TX2                     62
    AGX Xavier/Xavier NX    72

  3. Build TensorRT OSS:

    git clone -b release/7.0 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build

    Note

    The -DGPU_ARCHS=72 below is for Xavier or Xavier NX; for other Jetson platforms, change "72" to the GPU_ARCHS value from step 2.

    /usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)

    After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

  4. Replace libnvinfer_plugin.so* with the newly generated library:

    # Back up the original libnvinfer_plugin.so.7.x.y
    sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak
    # Copy the newly built plugin library in its place
    sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
    sudo ldconfig
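On Jetson you can likewise verify the replacement and check which TensorRT version JetPack installed; a quick, optional check:

# Confirm the replaced plugin library is in place
ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7*

# TensorRT (nvinfer) packages shipped with JetPack
dpkg -l | grep nvinfer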

Generating an Engine Using tlt-converter

This is part of Option 2 from the DeepStream deployment options above. The tlt-converter is a tool provided with the Transfer Learning Toolkit to facilitate the deployment of TLT-trained models on TensorRT and/or DeepStream. For deployment platforms with an x86-based CPU and discrete GPUs, the tlt-converter is distributed within the TLT docker, so it is suggested to use the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TLT docker includes TensorRT version 5.1 for JetPack 4.2.2 and TensorRT version 6.0.1 for JetPack 4.2.3 / 4.3. To use the engine with a different minor version of TensorRT, copy the converter from /opt/nvidia/tools/tlt-converter to the target machine and follow the instructions for x86 to run it and generate a TensorRT engine.

Instructions for x86

  1. Copy /opt/nvidia/tools/tlt-converter to the target machine.

  2. Install TensorRT 7.0+ for the respective target machine.

  3. If you are deploying FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, or MaskRCNN model, you need to build TensorRT Open source software on the machine. If you are using DetectNet_v2 or image classification, you can skip this step. Instructions to build TensorRT OSS on x86 can be found in TensorRT OSS on x86 section above or in this GitHub repo.

  4. Run tlt-converter using the sample command below and generate the engine.

Instructions for Jetson

For the Jetson platform, the tlt-converter is available to download in the dev zone. Once the tlt-converter is downloaded, please follow the instructions below to generate a TensorRT engine.

  1. Unzip tlt-converter-trt7.1.zip on the target machine.

  2. Install the OpenSSL package using the following command:

    sudo apt-get install libssl-dev

  3. Export the following environment variables:

export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

  4. For Jetson devices, TensorRT 7.1 comes pre-installed with JetPack. If you are using an older JetPack, upgrade to JetPack 4.4.

  5. If you are deploying a FasterRCNN, SSD, DSSD, YOLOv3, or RetinaNet model, you need to build TensorRT Open source software on the machine. If you are using DetectNet_v2 or image classification, you can skip this step. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.

  6. Run the tlt-converter using the sample command below and generate the engine.

Note

Make sure to follow the output node names as mentioned in Exporting the Model.


Using the tlt-converter

tlt-converter [-h] -k <encryption_key>
                   -d <input_dimensions>
                   -o <comma separated output nodes>
                   [-c <path to calibration cache file>]
                   [-e <path to output engine>]
                   [-b <calibration batch size>]
                   [-m <maximum batch size of the TRT engine>]
                   [-t <engine datatype>]
                   [-w <maximum workspace size of the TRT Engine>]
                   [-i <input dimension ordering>]
                   input_file

Required Arguments

  • input_file: Path to the model exported using tlt-export.

  • -k: The API key used to configure the NGC CLI to download the models.

  • -d: Comma-separated list of input dimensions that should match the dimensions used for tlt-export. Unlike tlt-export this cannot be inferred from calibration data.

  • -o: Comma-separated list of output blob names that should match the output configuration used for tlt-export.

    • For classification: predictions/Softmax

    • For DetectNet_v2: output_bbox/BiasAdd,output_cov/Sigmoid

    • For FasterRCNN: dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal

    • For SSD, DSSD, RetinaNet: NMS

    • For YOLOv3: BatchedNMS

    • For MaskRCNN: generate_detections,mask_head/mask_fcn_logits/BiasAdd

Optional Arguments

  • -e: Path to save the engine to. (default: ./saved.engine)

  • -t: Desired engine data type; generates a calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.

  • -w: Maximum workspace size for the TensorRT engine. The default value is 1<<30.

  • -i: Input dimension ordering; all other TLT commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}.

INT8 Mode Arguments

  • -c: Path to calibration cache file, only used in INT8 mode. The default value is ./cal.bin.

  • -b: Batch size used during the tlt-export step for INT8 calibration cache generation. (default: 8).

  • -m: Maximum batch size of TensorRT engine. The default value is 16.
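As an illustration, here is a hypothetical INT8 conversion of an SSD model using the arguments above; the key, file names, and input dimensions are placeholders and should be replaced with your own values:

tlt-converter -k $KEY \
              -d 3,384,1248 \
              -o NMS \
              -c ssd_cal.bin \
              -e ssd_int8.engine \
              -b 8 \
              -m 16 \
              -t int8 \
              ssd_model.etlt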

Sample Output Log

Here's a sample log for converting a resnet10 DetectNet_v2 model:

export API_KEY=<NGC API key used to download the original model>
export OUTPUT_NODES=output_bbox/BiasAdd,output_cov/Sigmoid
export INPUT_DIMS=3,384,124
export D_TYPE=fp32
export ENGINE_PATH=resnet10_kitti_multiclass_v1.engine
export MODEL_PATH=resnet10_kitti_multiclass_v1.etlt

tlt-converter -k $API_KEY \
              -o $OUTPUT_NODES \
              -d $INPUT_DIMS \
              -e $ENGINE_PATH \
              $MODEL_PATH

[INFO] UFFParser: parsing input_1
[INFO] UFFParser: parsing conv1/kernel
[INFO] UFFParser: parsing conv1/convolution
[INFO] UFFParser: parsing conv1/bias
[INFO] UFFParser: parsing conv1/BiasAdd
[INFO] UFFParser: parsing bn_conv1/moving_variance
..
..
..
[INFO] Tactic 4 scratch requested: 1908801536, available: 16
[INFO] Tactic 5 scratch requested: 55567168, available: 16
[INFO] --------------- Chose 1 (0)
[INFO] Formats and tactics selection completed in 5.0141 seconds.
[INFO] After reformat layers: 16 layers
[INFO] Block size 490733568
[INFO] Block size 122683392
[INFO] Block size 122683392
[INFO] Block size 30670848
[INFO] Block size 16
[INFO] Total Activation Memory: 766771216
[INFO] Data initialization and engine generation completed in 0.0412826 seconds
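If trtexec (shipped with TensorRT) is available on the target, you can optionally smoke-test the generated engine before using it in DeepStream; the path below is the typical install location and may differ on your system:

/usr/src/tensorrt/bin/trtexec --loadEngine=resnet10_kitti_multiclass_v1.engine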

Integrating the Model with DeepStream

There are two options to integrate models from TLT with DeepStream:

  • Option 1: Integrate the model (.etlt) with the encrypted key directly in the DeepStream app. The model file is generated by tlt-export.

  • Option 2: Generate a device specific optimized TensorRT engine, using tlt-converter. The TensorRT engine file can also be ingested by DeepStream.

dstream_deploy_options.png

As shown in the lower half of the figure, for models such as YOLOv3, FasterRCNN, SSD, DSSD, RetinaNet, and MaskRCNN, you will need to build the TensorRT open source plugins and custom bounding box parsers. The instructions are provided in the TensorRT OSS section above, and the required code can be found in this GitHub repo.

In order to integrate the models with DeepStream, you need the following:

  1. Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide.

  2. An exported .etlt model file and optional calibration cache for INT8 precision.

  3. TensorRT 7+ OSS Plugins (Required for FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, MaskRCNN).

  4. A labels.txt file containing the labels for the classes, in the order in which the network produces outputs.

  5. A sample config_infer_*.txt file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream.

DeepStream SDK ships with an end-to-end reference application which is fully configurable. Users can configure input sources, inference model and output sinks. The app requires a primary object detection model, followed by an optional secondary classification model. The reference application is installed as deepstream-app. The graphic below shows the architecture of the reference application.

arch_ref_appl.png

There are typically 2 or more configuration files that are used with this app. In the install directory, the config files are located in samples/configs/deepstream-app or samples/configs/tlt_pretrained_models. The main config file configures all the high-level parameters in the pipeline above: the input source and resolution, number of inferences, tracker, and output sinks. The other supporting config files are for each individual inference engine. The inference-specific config files are used to specify models, inference resolution, batch size, number of classes, and other customization. The main config file will call all the supporting config files. Here are some config files in samples/configs/deepstream-app for your reference.

  • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: Main config file

  • config_infer_primary.txt: Supporting config file for primary detector in the pipeline above

  • config_infer_secondary_*.txt: Supporting config file for secondary classifier in the pipeline above

The deepstream-app is launched with only the main config file. This file will most likely remain the same for all models and can be used directly from the DeepStream SDK with little to no change. Users will only have to modify or create config_infer_primary.txt and config_infer_secondary_*.txt.
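For example, the reference application can be launched with the sample main config file listed above; this assumes the default DeepStream 5.0 install path:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt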

Integrating a Classification model

See Exporting the model for more details on how to export a TLT model. Once the model has been generated two extra files are required:

  1. Label file

  2. DeepStream configuration file

Label File

The label file is a text file containing the names of the classes that the TLT model is trained to classify against. The order in which the classes are listed must match the order in which the model predicts the output. This order may be deduced from the classmap.json file that is generated by TLT. This file is a simple dictionary mapping 'class_name' to 'index'. For example, in the classification sample notebook included with the TLT docker, the classmap.json file generated for PASCAL VOC looks like this:

{"sheep": 16,"horse": 12,"bicycle": 1, "aeroplane": 0, "cow": 9, "sofa": 17, "bus": 5, "dog": 11, "cat": 7, "person": 14, "train": 18, "diningtable": 10, "bottle": 4, "car": 6, "pottedplant": 15, "tvmonitor": 19, "chair": 8, "bird": 2, "boat": 3, "motorbike": 13}

The 0th index corresponds to aeroplane, the 1st index corresponds to bicycle, etc. up to 19 which corresponds to tvmonitor. Here is a sample label.txt file, classification_labels.txt, arranged in the order of index.

aeroplane
bicycle
bird
boat
bottle
bus
..
..
tvmonitor
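Since classmap.json is plain JSON, the label file can also be generated programmatically; a minimal sketch using jq, assuming jq is installed and the file is named classmap.json:

# Sort the class names by their index and write one name per line
jq -r 'to_entries | sort_by(.value) | .[].key' classmap.json > classification_labels.txt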


DeepStream Configuration File

A typical use case for video analytics is to first do object detection and then crop the detected object and send it for further classification. This is supported by deepstream-app, and the app architecture can be seen above. For example, to classify the models of cars on the road, you first need to detect all the cars in a frame. Once detection is done, you perform classification on the cropped image of the car. So, in the sample DeepStream app, the classifier is configured as a secondary inference engine after the primary detection. If configured appropriately, deepstream-app will automatically crop the detected object and send the frame to the secondary classifier. The config_infer_secondary_*.txt file is used to configure the classification model.

dstream_deploy_options2.png

Option 1: Integrate the model (.etlt) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. Detailed instructions are provided in the Generating an Engine Using tlt-converter section.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the 2 approaches. Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

[property]
gpu-id=0
# preprocessing parameters: these are the same for all classification models generated by TLT.
net-scale-factor=1.0
offsets=123.67;116.28;103.53
model-color-format=1
batch-size=30

# Model specific paths. These need to be updated for every classification model.
int8-calib-file=<Path to optional INT8 calibration cache>
labelfile-path=<Path to classification_labels.txt>
tlt-encoded-model=<Path to Classification TLT model>
tlt-model-key=<Key to decrypt model>

# where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
input-dims=c;h;w;0
uff-input-blob-name=input_1
# output node name for classification
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2
interval=0
# network-type=1 defines that the model is a classifier.
network-type=1
gie-unique-id=1
classifier-threshold=0.2

Integrating a DetectNet_v2 model

See Exporting the Model for more details on how to export a TLT model. Once the model has been generated two extra files are required:

  1. Label file

  2. DS configuration file

Label File

The label file is a text file containing the names of the classes that the DetectNet_v2 model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. This order is derived from the order in which the objects are instantiated in the cost_function_config field of the DetectNet_v2 experiment config file. For example, in the DetectNet_v2 sample notebook included with the TLT docker, the cost_function_config parameter looks like this:

cost_function_config {
  target_classes {
    name: "sheep"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  target_classes {
    name: "bottle"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  target_classes {
    name: "horse"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  ..
  ..
  target_classes {
    name: "boat"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  target_classes {
    name: "car"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  enable_autoweighting: False
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
}

Here's an example of the corresponding detectnet_v2_labels.txt file. The order in labels.txt should match the order in the cost_function_config:

sheep
bottle
horse
..
..
boat
car
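One way to derive the label file directly from the experiment spec is to list the target_classes names in order while skipping the cov and bbox objective names; a rough sketch, assuming the spec file is named detectnet_v2_spec.txt:

grep 'name:' detectnet_v2_spec.txt \
  | grep -v '"cov"' \
  | grep -v '"bbox"' \
  | sed 's/.*name: *"\(.*\)".*/\1/' > detectnet_v2_labels.txt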


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model.

dstream_deploy_options2.png

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. Detailed instructions are provided in the Generating an Engine Using tlt-converter section above.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

[property]
gpu-id=0
# preprocessing parameters.
net-scale-factor=0.0039215697906911373
model-color-format=0

# model paths.
int8-calib-file=<Path to optional INT8 calibration cache>
labelfile-path=<Path to detectNet_v2_labels.txt>
tlt-encoded-model=<Path to DetectNet_v2 TLT model>
tlt-model-key=<Key to decrypt the model>
# where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
input-dims=c;h;w;0
uff-input-blob-name=input_1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#enable_dbscan=0

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Integrating an SSD Model

To run an SSD model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and SSD bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained SSD models from TLT is provided on github here.

Prerequisites for SSD Model

SSD requires the batchTilePlugin and NMSPlugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

SSD requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for SSD is available here. The following instructions can be used to build bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):

Note

git-lfs is needed to support downloading model files >5MB.

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with HTTPS:

git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build:

Copy
Copied!
            

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0 // or Path for DS installation export CUDA_VER=10.2 // CUDA version, e.g. 10.2 cd nvdsinfer_customparser_ssd_tlt make

This generates libnvds_infercustomparser_ssd_tlt.so in the directory.

Label File

The label file is a text file containing the names of the classes that the SSD model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT SSD converts all class names to lowercase and sorts them in alphabetical order. For example, if the dataset_config is:

dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
    image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "person"
    value: "person"
  }
  target_class_mapping {
    key: "bicycle"
    value: "bicycle"
  }
  validation_fold: 0
}

Then the corresponding ssd_labels.txt file would look like this:

bicycle
car
person
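Because TLT SSD lower-cases the mapped class names and sorts them alphabetically, the label file can be generated from the spec; a sketch assuming the spec file is named ssd_spec.txt and that the class names come from the value fields of target_class_mapping:

grep 'value:' ssd_spec.txt \
  | sed 's/.*value: *"\(.*\)".*/\1/' \
  | tr '[:upper:]' '[:lower:]' \
  | sort -u > ssd_labels.txt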


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

dstream_deploy_options3.jpg

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. See the Generating an Engine Using tlt-converter section for detailed instructions.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream:

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to ssd_labels.txt>
tlt-encoded-model=<Path to SSD TLT model>
tlt-model-key=<Key to decrypt model>
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=<Path to libnvds_infercustomparser_ssd_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Integrating a FasterRCNN Model

To run a FasterRCNN model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and FasterRCNN bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained FasterRCNN models from TLT is provided on github here.

Prerequisite for FasterRCNN Model

  1. FasterRCNN requires the cropAndResizePlugin and the proposalPlugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

  2. FasterRCNN requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for FasterRCNN is available at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps. The following instructions can be used to build the bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with SSH or HTTPS

git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0  # or the path to your DeepStream installation
export CUDA_VER=10.2  # CUDA version, e.g. 10.2
cd nvdsinfer_customparser_frcnn_tlt
make

This generates libnvds_infercustomparser_frcnn_tlt.so in the directory.

Label File

The label file is a text file containing the names of the classes that the FasterRCNN model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. This order is derived from the order in which the objects are instantiated in the target_class_mapping field of the FasterRCNN experiment specification file. During training, TLT FasterRCNN converts all class names to lowercase and sorts them in alphabetical order. For example, if the target_class_mapping is:

target_class_mapping {
  key: "car"
  value: "car"
}
target_class_mapping {
  key: "person"
  value: "person"
}
target_class_mapping {
  key: "bicycle"
  value: "bicycle"
}

The actual class name list is bicycle, car, person. An example of the corresponding label_file_frcnn.txt file is:

bicycle
car
person


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

dstream_deploy_options3.jpg

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. See the Generating an engine using tlt-converter section above for detailed instructions.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the 2 approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file:

parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=<PATH to libnvds_infercustomparser_frcnn_tlt.so>

Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

Here’s a sample config file, config_infer_primary.txt:

Copy
Copied!
            

[property] gpu-id=0 net-scale-factor=1.0 offsets=<image mean values as in the training spec file> # e.g.: 103.939;116.779;123.68 model-color-format=1 labelfile-path=<Path to frcnn_labels.txt> tlt-encoded-model=<Path to FasterRCNN model> tlt-model-key=<Key to decrypt the model> uff-input-dims=<c;h;w;0> # 3;272;480;0. Where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format uff-input-blob-name=<input_blob_name> # e.g.: input_image batch-size=<batch size> e.g.: 1 ## 0=FP32, 1=INT8, 2=FP16 mode network-mode=0 num-detected-classes=<number of classes to detect(including background)> # e.g.: 5 interval=0 gie-unique-id=1 is-classifier=0 #network-type=0 output-blob-names=<output_blob_names> e.g.: dense_class_td/Softmax,dense_regress_td/BiasAdd, proposal parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT custom-lib-path=<PATH to libnvds_infercustomparser_frcnn_tlt.so> [class-attrs-all] roi-top-offset=0 roi-bottom-offset=0 detected-min-w=0 detected-min-h=0 detected-max-w=0 detected-max-h=0

Integrating a YOLOv3 Model

To run a YOLOv3 model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and YOLOv3 bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained YOLOv3 models from TLT is provided here.

Prerequisite for YOLOv3 model

  1. YOLOv3 requires the batchTilePlugin, resizeNearestPlugin, and batchedNMSPlugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

  2. YOLOv3 requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for YOLOv3 is available here. The following instructions can be used to build bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):

Note

git-lfs is needed to support downloading model files >5MB.

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with HTTPS:

git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build

Copy
Copied!
            

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0 // or Path for DS installation export CUDA_VER=10.2 // CUDA version, e.g. 10.2 cd nvdsinfer_customparser_yolov3_tlt make

This will generate libnvds_infercustomparser_yolov3_tlt.so in the directory.

Label File

The label file is a text file containing the names of the classes that the YOLOv3 model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT YOLOv3 converts all class names to lowercase and sorts them in alphabetical order. For example, if the dataset_config is:

dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
    image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "person"
    value: "person"
  }
  target_class_mapping {
    key: "bicycle"
    value: "bicycle"
  }
  validation_fold: 0
}

Then the corresponding yolov3_labels.txt would look like this:

bicycle
car
person


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

dstream_deploy_options3.jpg

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. See the Generating an engine using tlt-converter section above for detailed instructions.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the 2 approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file:

parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=<PATH to libnvds_infercustomparser_yolov3_tlt.so>

Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

Here’s a sample config file, pgie_yolov3_config.txt:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to yolov3_labels.txt>
tlt-encoded-model=<Path to YOLOv3 etlt model>
tlt-model-key=<Key to decrypt model>
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=<Path to libnvds_infercustomparser_yolov3_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Integrating a DSSD Model

To run a DSSD model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and DSSD bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained DSSD models from TLT is provided here.

Prerequisite for DSSD model

DSSD requires the batchTilePlugin and NMS_TRT plugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

DSSD requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for DSSD is here. The following instructions can be used to build bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):

Note

git-lfs is needed to support downloading model files >5MB.

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with HTTPS

git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0  # or the path to your DeepStream installation
export CUDA_VER=10.2  # CUDA version, e.g. 10.2
cd nvdsinfer_customparser_dssd_tlt
make

This will generate libnvds_infercustomparser_dssd_tlt.so in the directory.

Label File

The label file is a text file containing the names of the classes that the DSSD model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT DSSD converts all class names to lowercase and sorts them in alphabetical order. For example, if the dataset_config is:

dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
    image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "person"
    value: "person"
  }
  target_class_mapping {
    key: "bicycle"
    value: "bicycle"
  }
  validation_fold: 0
}

Here’s an example of the corresponding dssd_labels.txt file:

bicycle
car
person


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

dstream_deploy_options3.jpg

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. See the Generating an engine using tlt-converter section above for detailed instructions.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the 2 approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file:

parse-bbox-func-name=NvDsInferParseCustomDSSDTLT
custom-lib-path=<PATH to libnvds_infercustomparser_dssd_tlt.so>

Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to DeepStream Development Guide.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to dssd_labels.txt>
tlt-encoded-model=<Path to DSSD TLT model>
tlt-model-key=<Key to decrypt model>
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=<Path to libnvds_infercustomparser_dssd_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Integrating a RetinaNet Model

To run a RetinaNet model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and RetinaNet bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained RetinaNet models from TLT is provided here.

Prerequisite for RetinaNet Model

RetinaNet requires the batchTilePlugin and NMS_TRT plugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

RetinaNet requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for RetinaNet is available here. The following instructions can be used to build the bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):

Note

git-lfs is needed to support downloading model files >5MB.

curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with HTTPS:

git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build:

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0  # or the path to your DeepStream installation
export CUDA_VER=10.2  # CUDA version, e.g. 10.2
cd nvdsinfer_customparser_retinanet_tlt
make

This will generate libnvds_infercustomparser_retinanet_tlt.so in the directory.

Label File

The label file is a text file containing the names of the classes that the RetinaNet model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT RetinaNet converts all class names to lowercase and sorts them in alphabetical order. For example, if the dataset_config is:

dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
    image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "person"
    value: "person"
  }
  target_class_mapping {
    key: "bicycle"
    value: "bicycle"
  }
  validation_fold: 0
}

Then the corresponding retinanet_labels.txt file would look like this:

bicycle
car
person


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

dstream_deploy_options3.jpg

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tlt-converter. See the Generating an engine using tlt-converter section above for detailed instructions.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the 2 approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file:

parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=<PATH to libnvds_infercustomparser_retinanet_tlt.so>

Add the label file generated above using:

labelfile-path=<Classification labels>

For all the options, see the configuration file below. To learn about what all the parameters are used for, refer to the DeepStream Development Guide.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to retinanet_labels.txt>
tlt-encoded-model=<Path to RetinaNet TLT model>
tlt-model-key=<Key to decrypt model>
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=<Path to libnvds_infercustomparser_retinanet_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Integrating Purpose-Built Models

Integrating purpose-built models is very straightforward in DeepStream. The configuration file and label file for these models are provided in the SDK. These files can be used with the provided pruned model as well as your own trained model. For the provided pruned models, the config and label file should work out of the box. For your custom model, minor modification might be required.

Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide. The config files for the purpose-built models are located in:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

/opt/nvidia/deepstream is the default DeepStream installation directory. This path will be different if you are installing in a different directory.
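To confirm the config and label files are in place, you can simply list the directory (assuming the default install path):

ls /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/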

There are two sets of config files: main config files and inference config files. The main config file can call one or multiple inference config files, depending on the number of inferences. The models deployed by each set of config files are listed below.

  • TrafficCamNet

    Main DeepStream configuration: deepstream_app_source1_trafficcamnet.txt
    Inference configuration: config_infer_primary_trafficcamnet.txt
    Label file: labels_trafficnet.txt

  • PeopleNet

    Main DeepStream configuration: deepstream_app_source1_peoplenet.txt
    Inference configuration: config_infer_primary_peoplenet.txt
    Label file: labels_peoplenet.txt

  • DashCamNet, VehicleMakeNet, VehicleTypeNet

    Main DeepStream configuration: deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
    Inference configurations: config_infer_primary_dashcamnet.txt, config_infer_secondary_vehiclemakenet.txt, config_infer_secondary_vehicletypenet.txt
    Label files: labels_dashcamnet.txt, labels_vehiclemakenet.txt, labels_vehicletypenet.txt

  • FaceDetect-IR

    Main DeepStream configuration: deepstream_app_source1_faceirnet.txt
    Inference configuration: config_infer_primary_faceirnet.txt
    Label file: labels_faceirnet.txt

The main configuration file is used with deepstream-app, the DeepStream reference application. In deepstream-app, the primary detector detects objects and sends the cropped frames to the secondary classifiers. For more information, refer to the DeepStream Development Guide.

The deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt configures three models: DashCamNet as primary detector, and VehicleMakeNet and VehicleTypeNet as secondary classifiers. The classifier models are typically used after initial object detection. The other configuration files use single detection models.

Key Parameters in config_infer_*.txt:

tlt-model-key=<tlt_encode or TLT Key used during model export>
tlt-encoded-model=<Path to TLT model>
labelfile-path=<Path to label file>
int8-calib-file=<Path to optional INT8 calibration cache>
input-dims=<Inference resolution if different than provided>
num-detected-classes=<# of classes if different than default>

Run deepstream-app:

deepstream-app -c <DS config file>
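For example, to run DashCamNet with the two secondary classifiers, assuming the default install path and that the models have been downloaded as described in the README in that directory:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models
deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt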


Integrating a MaskRCNN Model

Integrating a MaskRCNN model is very straightforward in DeepStream, since DeepStream 5.0 supports the instance segmentation network type out of the box. The configuration file and label file for the model are provided in the SDK. These files can be used with the provided model as well as your own trained model. For the provided MaskRCNN model, the config and label file should work out of the box. For your custom model, minor modifications might be required.

Download and install DeepStream SDK. The installation instructions for DeepStream are provided in DeepStream Development Guide. You need to follow the README under /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models to download the model and int8 calibration file. The config files for the Mask RCNN model are located in:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

/opt/nvidia/deepstream is the default DeepStream installation directory. This path will be different if you are installing in a different directory.

deepstream-app Config File

The deepstream-app config file is used by deepstream-app; see the DeepStream Configuration Guide for more details. You need to enable display-mask under the [osd] group to see the mask overlay:

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
display-mask=1
display-bbox=0
display-text=0

nvinfer Config File

The nvinfer config file is used by the nvinfer plugin; see the DeepStream Plugin Manual for more details. The following are the key parameters for running the MaskRCNN model:

tlt-model-key=<tlt_encode or TLT Key used during model export>
tlt-encoded-model=<Path to TLT model>
parse-bbox-instance-mask-func-name=<post process parser name>
custom-lib-path=<path to post process parser lib>
network-type=3 ## 3 is for instance segmentation network
output-instance-mask=1
labelfile-path=<Path to label file>
int8-calib-file=<Path to optional INT8 calibration cache>
infer-dims=<Inference resolution if different than provided>
num-detected-classes=<# of classes if different than default>

Here’s an example:

[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
tlt-model-key=<tlt_encode or TLT Key used during model export>
tlt-encoded-model=<Path to TLT model>
parse-bbox-instance-mask-func-name=<post process parser name>
custom-lib-path=<path to post process parser lib>
network-type=3 ## 3 is for instance segmentation network
labelfile-path=<Path to label file>
int8-calib-file=<Path to optional INT8 calibration cache>
infer-dims=<Inference resolution if different than provided>
num-detected-classes=<# of classes if different than default>
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
gie-unique-id=1
#no cluster
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
## MRCNN supports only cluster-mode=4; clustering is done by the model itself
cluster-mode=4
output-instance-mask=1

[class-attrs-all]
pre-cluster-threshold=0.8


Label File

If the COCO annotation file has the following in “categories”:

[{'supercategory': 'person', 'id': 1, 'name': 'person'}, {'supercategory': 'car', 'id': 2, 'name': 'car'}]

Then, the corresponding maskrcnn_labels.txt file is:

BG
person
car
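If the label file needs to be generated from a COCO-style annotation file, the categories can be sorted by id with jq; a sketch assuming jq is installed and the annotation file is named annotations.json:

# BG (background) must be the first entry for MaskRCNN
echo "BG" > maskrcnn_labels.txt
jq -r '.categories | sort_by(.id) | .[].name' annotations.json >> maskrcnn_labels.txt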

Run deepstream-app:

deepstream-app -c <deepstream-app config file>

You can also use deepstream-mrcnn-test to run the Mask RCNN model; see the README under $DS_TOP/source/apps/sample_apps/deepstream-mrcnn-test/.
