.. _deploying_to_deepstream:

Deploying to DeepStream
=======================

The deep learning and computer vision models that you trained can be deployed on edge devices, such as the Jetson Xavier, Jetson Nano, or a Tesla GPU, or in the cloud with NVIDIA GPUs. TLT has been designed to integrate with DeepStream SDK, so models trained with TLT will work out of the box with the `DeepStream SDK`_.

.. _DeepStream SDK: https://developer.nvidia.com/deepstream-sdk

DeepStream SDK is a streaming analytics toolkit that accelerates the building of AI-based video analytics applications. DeepStream supports direct integration of exported Classification and DetectNet_v2 models into the DeepStream sample app. The documentation for the DeepStream SDK is provided `here`_. For other models, such as YOLOv3, FasterRCNN, SSD, DSSD, RetinaNet, and MaskRCNN, there are a few extra steps required, which are covered in this chapter.

.. _here: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html

To deploy a model trained by TLT to DeepStream, you have two options:

* **Option 1**: Integrate the model (:code:`.etlt`) with the encryption key directly in the DeepStream app. The model file is generated by :code:`tlt-export`.

* **Option 2**: Generate a device-specific optimized TensorRT engine using :code:`tlt-converter`. The TensorRT engine file can also be ingested by DeepStream.

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the inference environment's TensorRT or CUDA libraries are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.

The image below shows the DeepStream deployment methods for all the models plus the two deployment options. Option 1 is very straightforward: the :code:`.etlt` file and calibration cache are used directly by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run inference. TensorRT engine generation can take some time depending on the size of the model and the type of hardware. Engine generation can be done ahead of time with Option 2: use :code:`tlt-converter` to convert the :code:`.etlt` file to a TensorRT engine and then provide the engine file directly to DeepStream.

.. image:: ../content/dstream_deploy_options.png

Running TLT models on DeepStream for DetectNet_v2 based detection and image classification, shown in the top half of the image, is very straightforward. All that is required is the encrypted TLT model (:code:`.etlt`), an optional INT8 calibration cache, and a DeepStream config file. Go to :ref:`Integrating a DetectNet_v2 model <integrating_a_detectnet_v2_model>` to see the DeepStream config file. For other detection models, such as FasterRCNN, YOLOv3, RetinaNet, SSD, and DSSD, and for segmentation models such as MaskRCNN, there are extra steps that need to be completed before the models will work with DeepStream. Here are the steps, with detailed instructions in the following sections:

* **Step 1**: Build TensorRT Open Source Software (OSS). This is required because several TensorRT plugins that are required by these models are only available in the TensorRT open source repo and not in the general TensorRT release. For more information and instructions, see the TensorRT Open Source Software section.
* **Step 2**: Build custom parsers for DeepStream. The parsers are required to convert the raw tensor data from the inference into the (x, y) locations of bounding boxes around the detected objects. This post-processing algorithm will vary based on the detection architecture. For DetectNet_v2, custom parsers are not required because the parsers are built into the DeepStream SDK. For the other detectors, DeepStream provides the flexibility to add your own custom bounding box parser, and that is what is used for these five models.

TensorRT Open Source Software (OSS)
-----------------------------------

A TensorRT OSS build is required for the FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, and MaskRCNN models, because several TensorRT plugins that are required by these models are only available in the TensorRT open source repo and not in the general TensorRT release. The table below shows the plugins that are required by each network.

+-------------+-----------------------------------------------------------+
| **Network** | **Plugins required**                                      |
+-------------+-----------------------------------------------------------+
| SSD         | batchTilePlugin and NMSPlugin                             |
+-------------+-----------------------------------------------------------+
| FasterRCNN  | cropAndResizePlugin and proposalPlugin                    |
+-------------+-----------------------------------------------------------+
| YOLOV3      | batchTilePlugin, resizeNearestPlugin and batchedNMSPlugin |
+-------------+-----------------------------------------------------------+
| DSSD        | batchTilePlugin and NMSPlugin                             |
+-------------+-----------------------------------------------------------+
| RetinaNet   | batchTilePlugin and NMSPlugin                             |
+-------------+-----------------------------------------------------------+
| MaskRCNN    | generateDetectionPlugin, multilevelProposeROI,            |
|             | multilevelCropAndResizePlugin, resizeNearestPlugin        |
+-------------+-----------------------------------------------------------+

If the deployment platform is x86 with an NVIDIA GPU, follow the instructions for x86; if your deployment is on an NVIDIA Jetson platform, follow the instructions for Jetson.

.. _tensorrt_oss_on_x86:

TensorRT OSS on x86
^^^^^^^^^^^^^^^^^^^

Building TensorRT OSS on x86:

1. Install cmake (>=3.13).

   .. Note::
      TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.

   .. code::

      sudo apt remove --purge --auto-remove cmake
      wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
      tar xvf cmake-3.13.5.tar.gz
      cd cmake-3.13.5/
      ./configure
      make -j$(nproc)
      sudo make install
      sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:

   .. code::

      cd /usr/local/cuda/samples/1_Utilities/deviceQuery
      sudo make
      ./deviceQuery

   If :code:`/usr/local/cuda/samples` doesn't exist on your system, you can download :code:`deviceQuery.cpp` from the CUDA samples repository, then compile and run it:

   .. code::

      nvcc deviceQuery.cpp -o deviceQuery
      ./deviceQuery

   This command will output something like the following, which indicates that GPU_ARCHS is 75, based on the CUDA Capability major/minor version:

   .. code::

      Detected 2 CUDA Capable device(s)
      Device 0: "Tesla T4"
        CUDA Driver Version / Runtime Version          10.2 / 10.2
        CUDA Capability Major/Minor version number:    7.5
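   If you only need the compute capability line, you can filter the deviceQuery output; concatenating the major and minor digits gives the GPU_ARCHS value. This is just a convenience, not a required step:

   .. code::

      ./deviceQuery | grep "CUDA Capability"
      # e.g. "CUDA Capability Major/Minor version number:    7.5"  ->  GPU_ARCHS=75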
3. Build TensorRT OSS:

   .. code::

      git clone -b release/7.0 https://github.com/nvidia/TensorRT
      cd TensorRT/
      git submodule update --init --recursive
      export TRT_SOURCE=`pwd`
      cd $TRT_SOURCE
      mkdir -p build && cd build

   .. Note::
      Make sure your GPU_ARCHS from step 2 is in the TensorRT OSS :code:`CMakeLists.txt`. If GPU_ARCHS is not in the TensorRT OSS :code:`CMakeLists.txt`, add :code:`-DGPU_ARCHS=xy` as shown below, where :code:`xy` is the GPU_ARCHS value from step 2.

   .. code::

      /usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
      make nvinfer_plugin -j$(nproc)

   After the build completes successfully, :code:`libnvinfer_plugin.so*` will be generated under :code:`build/out/`.

4. Replace the original :code:`libnvinfer_plugin.so*`:

   .. code::

      # backup the original libnvinfer_plugin.so.7.x.y
      sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak
      sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y
      sudo ldconfig

.. _tensorrt_oss_on_jetson_arm64:

TensorRT OSS on Jetson (ARM64)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Install cmake (>=3.13).

   .. Note::
      TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2. Upgrade cmake using:

   .. code::

      sudo apt remove --purge --auto-remove cmake
      wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
      tar xvf cmake-3.13.5.tar.gz
      cd cmake-3.13.5/
      ./configure
      make -j$(nproc)
      sudo make install
      sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

2. Get the GPU architecture based on your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table.

   +----------------------+---------------+
   | **Jetson Platform**  | **GPU_ARCHS** |
   +----------------------+---------------+
   | Nano/Tx1             | 53            |
   +----------------------+---------------+
   | Tx2                  | 62            |
   +----------------------+---------------+
   | AGX Xavier/Xavier NX | 72            |
   +----------------------+---------------+

3. Build TensorRT OSS:

   .. code::

      git clone -b release/7.0 https://github.com/nvidia/TensorRT
      cd TensorRT/
      git submodule update --init --recursive
      export TRT_SOURCE=`pwd`
      cd $TRT_SOURCE
      mkdir -p build && cd build

   .. Note::
      The :code:`-DGPU_ARCHS=72` below is for Xavier or NX. For other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.

   .. code::

      /usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
      make nvinfer_plugin -j$(nproc)

   After the build completes successfully, :code:`libnvinfer_plugin.so*` will be generated under :code:`build/out/`.

4. Replace the original :code:`libnvinfer_plugin.so*` with the newly generated library:

   .. code::

      # backup the original libnvinfer_plugin.so.7.x.y
      sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak
      sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
      sudo ldconfig
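On either platform, you can verify that the dynamic loader now resolves :code:`libnvinfer_plugin` to the library you just copied in before moving on. This is only a sanity check:

.. code::

    ldconfig -p | grep libnvinfer_plugin
    # the reported path should be the libnvinfer_plugin.so.7.x.y you replaced above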
.. _generating_an_engine_using_tlt-converter:

Generating an Engine Using tlt-converter
----------------------------------------

This is part of Option 2 from the DeepStream deployment table above. The :code:`tlt-converter` is a tool that is provided with the Transfer Learning Toolkit to facilitate the deployment of TLT trained models on TensorRT and/or DeepStream.

For deployment platforms with an x86-based CPU and discrete GPUs, the :code:`tlt-converter` is distributed within the TLT docker. Therefore, it is suggested to use the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TLT docker includes TensorRT version 5.1 for JetPack 4.2.2 and TensorRT version 6.0.1 for JetPack 4.2.3 / 4.3. In order to use the engine with a different minor version of TensorRT, copy the converter from :code:`/opt/nvidia/tools/tlt-converter` to the target machine and follow the instructions for x86 to run it and generate a TensorRT engine.

Instructions for x86
^^^^^^^^^^^^^^^^^^^^

1. Copy :code:`/opt/nvidia/tools/tlt-converter` to the target machine.

2. Install `TensorRT 7.0+`_ for the respective target machine.

3. If you are deploying a FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, or MaskRCNN model, you need to build `TensorRT Open source software`_ on the machine. If you are using DetectNet_v2 or image classification, you can skip this step. Instructions to build TensorRT OSS on x86 can be found in the :ref:`TensorRT OSS on x86 <tensorrt_oss_on_x86>` section above or in this `GitHub repo`_.

4. Run :code:`tlt-converter` using the sample command below and generate the engine.

.. _TensorRT 7.0+: https://developer.nvidia.com/tensorrt
.. _TensorRT Open source software: https://github.com/NVIDIA/TensorRT
.. _GitHub repo: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Instructions for Jetson
^^^^^^^^^^^^^^^^^^^^^^^

For the Jetson platform, the :code:`tlt-converter` is available to download in the `dev zone`_. Once the :code:`tlt-converter` is downloaded, follow the instructions below to generate a TensorRT engine.

.. _dev zone: https://developer.nvidia.com/tlt-converter-trt71

1. Unzip :code:`tlt-converter-trt7.1.zip` on the target machine.

2. Install the OpenSSL package using the command:

   .. code::

      sudo apt-get install libssl-dev

3. Export the following environment variables:

   .. code::

      $ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
      $ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

4. For Jetson devices, TensorRT 7.1 comes pre-installed with `Jetpack`_. If you are using an older JetPack, upgrade to JetPack 4.4.

5. If you are deploying a FasterRCNN, SSD, DSSD, YOLOv3, or RetinaNet model, you need to build `TensorRT Open source software`_ on the machine. If you are using DetectNet_v2 or image classification, you can skip this step. Instructions to build TensorRT OSS on Jetson can be found in the :ref:`TensorRT OSS on Jetson (ARM64) <tensorrt_oss_on_jetson_arm64>` section above or in this `GitHub repo`_.

6. Run the :code:`tlt-converter` using the sample command below and generate the engine.

.. Note::
   Make sure to follow the output node names as mentioned in :ref:`Exporting the Model`.

.. _Jetpack: https://developer.nvidia.com/embedded/jetpack
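Before running the converter, it can help to confirm which TensorRT version is actually installed on the target, since the generated engine must be run with the same TensorRT and CUDA versions. A quick check on Ubuntu-based x86 or Jetson systems (package names may vary between releases):

.. code::

    dpkg -l | grep nvinfer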
Using the tlt-converter
^^^^^^^^^^^^^^^^^^^^^^^

.. code::

    tlt-converter [-h] -k <encryption_key>
                  -d <input_dimensions>
                  -o <comma separated output nodes>
                  [-c <path to calibration cache file>]
                  [-e <path to output engine>]
                  [-b <calibration batch size>]
                  [-m <maximum batch size of the TRT engine>]
                  [-t <engine datatype>]
                  [-w <maximum workspace size of the TRT engine>]
                  [-i <input dimension ordering>]
                  input_file

Required Arguments
******************

* :code:`input_file`: Path to the model exported using :code:`tlt-export`.
* :code:`-k`: The key used to encode the TLT model when it was exported with :code:`tlt-export`.
* :code:`-d`: Comma-separated list of input dimensions that should match the dimensions used for :code:`tlt-export`. Unlike :code:`tlt-export`, this cannot be inferred from calibration data.
* :code:`-o`: Comma-separated list of output blob names that should match the output configuration used for :code:`tlt-export`:

  * For classification: predictions/Softmax
  * For DetectNet_v2: output_bbox/BiasAdd,output_cov/Sigmoid
  * For FasterRCNN: dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal
  * For SSD, DSSD, RetinaNet: NMS
  * For YOLOv3: BatchedNMS
  * For MaskRCNN: generate_detections,mask_head/mask_fcn_logits/BiasAdd

Optional Arguments
******************

* :code:`-e`: Path to save the engine to. The default value is :code:`./saved.engine`.
* :code:`-t`: Desired engine data type; generates a calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.
* :code:`-w`: Maximum workspace size for the TensorRT engine. The default value is :code:`1<<30`.
* :code:`-i`: Input dimension ordering; all other TLT commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}.

INT8 Mode Arguments
*******************

* :code:`-c`: Path to the calibration cache file; only used in INT8 mode. The default value is :code:`./cal.bin`.
* :code:`-b`: Batch size used during the :code:`tlt-export` step for INT8 calibration cache generation. The default value is 8.
* :code:`-m`: Maximum batch size of the TensorRT engine. The default value is 16.

Sample Output Log
*****************

Here is a sample log for converting a resnet10 DetectNet_v2 model:

.. code::

    export API_KEY=<key used to export the model>
    export OUTPUT_NODES=output_bbox/BiasAdd,output_cov/Sigmoid
    export INPUT_DIMS=3,384,1248
    export D_TYPE=fp32
    export ENGINE_PATH=resnet10_kitti_multiclass_v1.engine
    export MODEL_PATH=resnet10_kitti_multiclass_v1.etlt

    tlt-converter -k $API_KEY \
                  -o $OUTPUT_NODES \
                  -d $INPUT_DIMS \
                  -e $ENGINE_PATH \
                  $MODEL_PATH

    [INFO] UFFParser: parsing input_1
    [INFO] UFFParser: parsing conv1/kernel
    [INFO] UFFParser: parsing conv1/convolution
    [INFO] UFFParser: parsing conv1/bias
    [INFO] UFFParser: parsing conv1/BiasAdd
    [INFO] UFFParser: parsing bn_conv1/moving_variance
    ..
    ..
    [INFO] Tactic 4 scratch requested: 1908801536, available: 16
    [INFO] Tactic 5 scratch requested: 55567168, available: 16
    [INFO] --------------- Chose 1 (0)
    [INFO] Formats and tactics selection completed in 5.0141 seconds.
    [INFO] After reformat layers: 16 layers
    [INFO] Block size 490733568
    [INFO] Block size 122683392
    [INFO] Block size 122683392
    [INFO] Block size 30670848
    [INFO] Block size 16
    [INFO] Total Activation Memory: 766771216
    [INFO] Data initialization and engine generation completed in 0.0412826 seconds
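For an INT8 deployment, the same conversion can be run with the INT8 mode arguments described above. The command below is only a sketch: the calibration cache, engine name, and input dimensions are placeholders, and it assumes the model was exported in INT8 mode so that a calibration cache is available:

.. code::

    tlt-converter -k $API_KEY \
                  -o output_bbox/BiasAdd,output_cov/Sigmoid \
                  -d 3,384,1248 \
                  -t int8 \
                  -c calibration.bin \
                  -b 8 \
                  -m 16 \
                  -e resnet10_kitti_multiclass_v1.int8.engine \
                  resnet10_kitti_multiclass_v1.etlt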
Integrating the model to DeepStream
-----------------------------------

There are two options to integrate models from TLT with DeepStream:

* **Option 1**: Integrate the model (:code:`.etlt`) with the encryption key directly in the DeepStream app. The model file is generated by :code:`tlt-export`.
* **Option 2**: Generate a device-specific optimized TensorRT engine using :code:`tlt-converter`. The TensorRT engine file can also be ingested by DeepStream.

.. image:: ../content/dstream_deploy_options.png

As shown in the lower half of the table, for models such as YOLOv3, FasterRCNN, SSD, DSSD, RetinaNet, and MaskRCNN, you will need to build the TensorRT Open Source plugins and a custom bounding box parser. The instructions are provided in the TensorRT OSS section above, and the required code can be found in this `GitHub repo`_.

.. _GitHub repo: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

In order to integrate the models with DeepStream, you need the following:

1. `Download`_ and install DeepStream SDK. The installation instructions for DeepStream are provided in the `DeepStream Development Guide`_.
2. An exported :code:`.etlt` model file and optional calibration cache for INT8 precision.
3. `TensorRT 7+ OSS Plugins`_ (required for FasterRCNN, SSD, DSSD, YOLOv3, RetinaNet, and MaskRCNN).
4. A :code:`labels.txt` file containing the labels for the classes in the order in which the network produces outputs.
5. A sample :code:`config_infer_*.txt` file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream.

.. _Download: https://developer.nvidia.com/deepstream-download
.. _DeepStream Development Guide: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html
.. _TensorRT 7+ OSS Plugins: https://github.com/NVIDIA/TensorRT/tree/release/7.0

DeepStream SDK ships with an end-to-end reference application that is fully configurable. Users can configure the input sources, inference model, and output sinks. The app requires a primary object detection model, followed by an optional secondary classification model. The reference application is installed as :code:`deepstream-app`. The graphic below shows the architecture of the reference application.

.. image:: ../content/arch_ref_appl.png

There are typically two or more configuration files that are used with this app. In the install directory, the config files are located in :code:`samples/configs/deepstream-app` or :code:`samples/configs/tlt_pretrained_models`. The main config file configures all the high-level parameters in the pipeline above: it sets the input source and resolution, the number of inferences, the tracker, and the output sinks. The other supporting config files are for each individual inference engine. The inference-specific config files are used to specify the model, inference resolution, batch size, number of classes, and other customizations. The main config file calls all the supporting config files. Here are some config files in :code:`samples/configs/deepstream-app` for your reference:

* :code:`source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt`: Main config file
* :code:`config_infer_primary.txt`: Supporting config file for the primary detector in the pipeline above
* :code:`config_infer_secondary_*.txt`: Supporting config files for the secondary classifiers in the pipeline above

The :code:`deepstream-app` only works with the main config file. This file will most likely remain the same for all models and can be used directly from the DeepStream SDK with little to no change. Users only need to modify or create :code:`config_infer_primary.txt` and :code:`config_infer_secondary_*.txt`.
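For example, assuming DeepStream 5.0 is installed in its default location, the reference pipeline can be launched with the main config file like this (adjust the path to match your installation):

.. code::

    cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app
    deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt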
Integrating a Classification model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See Exporting the Model for more details on how to export a TLT model. Once the model has been generated, two extra files are required:

1. Label file
2. DeepStream configuration file

Label File
**********

The label file is a text file containing the names of the classes that the TLT model is trained to classify against. The order in which the classes are listed must match the order in which the model predicts the output. This order may be deduced from the :code:`classmap.json` file that is generated by TLT. This file is a simple dictionary mapping class names to indices. For example, in the classification sample notebook included with the TLT docker, the :code:`classmap.json` file generated for Pascal VOC would look like this:

.. code::

    {"sheep": 16, "horse": 12, "bicycle": 1, "aeroplane": 0, "cow": 9,
     "sofa": 17, "bus": 5, "dog": 11, "cat": 7, "person": 14, "train": 18,
     "diningtable": 10, "bottle": 4, "car": 6, "pottedplant": 15,
     "tvmonitor": 19, "chair": 8, "bird": 2, "boat": 3, "motorbike": 13}

The 0th index corresponds to :code:`aeroplane`, the 1st index corresponds to :code:`bicycle`, and so on up to 19, which corresponds to :code:`tvmonitor`. Here is a sample label file, :code:`classification_labels.txt`, arranged in order of index:

.. code::

    aeroplane
    bicycle
    bird
    boat
    bottle
    bus
    ..
    ..
    tvmonitor

DeepStream Configuration File
*****************************

A typical use case for video analytics is to first do object detection and then crop the detected object and send it further for classification. This is supported by :code:`deepstream-app`, and the app architecture can be seen above. For example, to classify models of cars on the road, you first detect all the cars in a frame, and then run classification on the cropped image of each car. So, in the sample DeepStream app, the classifier is configured as a secondary inference engine after the primary detection. If configured appropriately, :code:`deepstream-app` will automatically crop the detected object and send the frame to the secondary classifier. The :code:`config_infer_secondary_*.txt` file is used to configure the classification model.

.. image:: ../content/dstream_deploy_options2.png

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. Detailed instructions are provided in the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. Add the label file generated above using:

.. code::

    labelfile-path=<Path to classification_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_.

.. code::

    [property]
    gpu-id=0
    # preprocessing parameters: these are the same for all classification models generated by TLT.
    net-scale-factor=1.0
    offsets=123.67;116.28;103.53
    model-color-format=1
    batch-size=30

    # Model specific paths. These need to be updated for every classification model.
    int8-calib-file=<Path to optional INT8 calibration cache>
    labelfile-path=<Path to classification_labels.txt>
    tlt-encoded-model=<Path to Classification .etlt model>
    tlt-model-key=<Key used to export the model>

    input-dims=c;h;w;0 # where c = number of channels, h = height of the model input, w = width of the model input; 0 implies CHW format.
    uff-input-blob-name=input_1
    output-blob-names=predictions/Softmax # output node name for classification
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    # process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
    process-mode=2
    interval=0
    network-type=1 # defines that the model is a classifier.
    gie-unique-id=1
    classifier-threshold=0.2
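To attach this classifier to the pipeline, the main :code:`deepstream-app` config file references the inference config from a secondary GIE group. The snippet below is a minimal sketch; the group and key names follow the DeepStream 5.0 reference configs and the file name is a placeholder, so check the samples shipped with your DeepStream version:

.. code::

    [secondary-gie0]
    enable=1
    gpu-id=0
    # a unique id for this classifier, different from the primary detector
    gie-unique-id=2
    # run on objects produced by the primary detector (assuming it uses gie-unique-id=1)
    operate-on-gie-id=1
    config-file=config_infer_secondary_myclassifier.txt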
.. _integrating_a_detectnet_v2_model:

Integrating a DetectNet_v2 model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :ref:`Exporting the Model` for more details on how to export a TLT model. Once the model has been generated, two extra files are required:

1. Label file
2. DeepStream configuration file

Label File
**********

The label file is a text file containing the names of the classes that the DetectNet_v2 model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. This order is derived from the order in which the objects are instantiated in the :code:`cost_function_config` field of the DetectNet_v2 experiment config file. For example, in the DetectNet_v2 sample notebook included with the TLT docker, the :code:`cost_function_config` parameter looks like this:

.. code::

    cost_function_config {
      target_classes {
        name: "sheep"
        class_weight: 1.0
        coverage_foreground_weight: 0.05
        objectives {
          name: "cov"
          initial_weight: 1.0
          weight_target: 1.0
        }
        objectives {
          name: "bbox"
          initial_weight: 10.0
          weight_target: 1.0
        }
      }
      target_classes {
        name: "bottle"
        class_weight: 1.0
        coverage_foreground_weight: 0.05
        objectives {
          name: "cov"
          initial_weight: 1.0
          weight_target: 1.0
        }
        objectives {
          name: "bbox"
          initial_weight: 10.0
          weight_target: 1.0
        }
      }
      target_classes {
        name: "horse"
        class_weight: 1.0
        coverage_foreground_weight: 0.05
        objectives {
          name: "cov"
          initial_weight: 1.0
          weight_target: 1.0
        }
        objectives {
          name: "bbox"
          initial_weight: 10.0
          weight_target: 1.0
        }
      }
      ..
      ..
      target_classes {
        name: "boat"
        class_weight: 1.0
        coverage_foreground_weight: 0.05
        objectives {
          name: "cov"
          initial_weight: 1.0
          weight_target: 1.0
        }
        objectives {
          name: "bbox"
          initial_weight: 10.0
          weight_target: 1.0
        }
      }
      target_classes {
        name: "car"
        class_weight: 1.0
        coverage_foreground_weight: 0.05
        objectives {
          name: "cov"
          initial_weight: 1.0
          weight_target: 1.0
        }
        objectives {
          name: "bbox"
          initial_weight: 10.0
          weight_target: 1.0
        }
      }
      enable_autoweighting: False
      max_objective_weight: 0.9999
      min_objective_weight: 0.0001
    }

Here is an example of the corresponding :code:`detectnet_v2_labels.txt`. The order in the :code:`labels.txt` should match the order in the :code:`cost_function_config`:

.. code::

    sheep
    bottle
    horse
    ..
    ..
    boat
    car

DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model.

.. image:: ../content/dstream_deploy_options2.png

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. Detailed instructions are provided in the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section above.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. Add the label file generated above using:

.. code::

    labelfile-path=<Path to detectnet_v2_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_.
.. code::

    [property]
    gpu-id=0
    # preprocessing parameters.
    net-scale-factor=0.0039215697906911373
    model-color-format=0

    # model paths.
    int8-calib-file=<Path to optional INT8 calibration cache>
    labelfile-path=<Path to detectnet_v2_labels.txt>
    tlt-encoded-model=<Path to DetectNet_v2 .etlt model>
    tlt-model-key=<Key used to export the model>

    input-dims=c;h;w;0 # where c = number of channels, h = height of the model input, w = width of the model input; 0 implies CHW format.
    uff-input-blob-name=input_1
    batch-size=4
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=3
    interval=0
    gie-unique-id=1
    is-classifier=0
    output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
    #enable_dbscan=0

    [class-attrs-all]
    threshold=0.2
    group-threshold=1
    ## Set eps=0.7 and minBoxes for enable-dbscan=1
    eps=0.2
    #minBoxes=3
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0

Integrating an SSD Model
^^^^^^^^^^^^^^^^^^^^^^^^

To run an SSD model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open Source Software and the SSD bounding box parser for DeepStream. A DeepStream sample with documentation on how to run inference using the trained SSD models from TLT is provided on GitHub here_.

Prerequisites for SSD Model
***************************

SSD requires the batchTilePlugin. This plugin is available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in `TensorRT Open Source Software (OSS)`_.

.. _TensorRT Open Source Software (OSS): https://github.com/NVIDIA/TensorRT

SSD requires a custom bounding box parser that is not built into the DeepStream SDK. The source code to build the custom bounding box parser for SSD is available here_. The following instructions can be used to build the bounding box parser.

**Step 1**: Install git-lfs_ (git >= 1.8.2):

.. _git-lfs: https://github.com/git-lfs/git-lfs/wiki/Installation

.. Note::
   git-lfs is needed to support downloading model files larger than 5MB.

.. code::

    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    sudo apt-get install git-lfs
    git lfs install

**Step 2**: Download the source code with HTTPS:

.. code::

    git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

**Step 3**: Build:

.. code::

    export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # or the path to your DeepStream installation
    export CUDA_VER=10.2                                       # CUDA version, e.g. 10.2
    cd nvdsinfer_customparser_ssd_tlt
    make

This generates :code:`libnvds_infercustomparser_ssd_tlt.so` in the directory.

Label File
**********

The label file is a text file containing the names of the classes that the SSD model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT SSD specifies all class names in lower case and sorts them in alphabetical order. For example, if the :code:`dataset_config` is:

.. code::

    dataset_config {
      data_sources: {
        tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
        image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
      }
      image_extension: "jpg"
      target_class_mapping {
        key: "car"
        value: "car"
      }
      target_class_mapping {
        key: "person"
        value: "person"
      }
      target_class_mapping {
        key: "bicycle"
        value: "bicycle"
      }
      validation_fold: 0
    }

Then the corresponding :code:`ssd_labels.txt` file would look like this:

.. code::

    bicycle
    car
    person
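If you plan to use Option 2 below, the TensorRT engine for SSD can be generated with :code:`tlt-converter` using the NMS output node listed in the converter section. This is only a sketch; the key, input dimensions, and file names are placeholders that must match your exported model:

.. code::

    tlt-converter -k <key used to export the model> \
                  -d 3,384,1248 \
                  -o NMS \
                  -e ssd_resnet18.engine \
                  ssd_resnet18.etlt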
DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model as well as the custom parser.

.. image:: ../content/dstream_deploy_options3.jpg

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. See :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` for detailed instructions.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. Add the label file generated above using:

.. code::

    labelfile-path=<Path to ssd_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_.

.. code::

    [property]
    gpu-id=0
    net-scale-factor=1.0
    offsets=103.939;116.779;123.68
    model-color-format=1
    labelfile-path=<Path to ssd_labels.txt>
    tlt-encoded-model=<Path to SSD .etlt model>
    tlt-model-key=<Key used to export the model>
    uff-input-dims=3;384;1248;0
    uff-input-blob-name=Input
    batch-size=1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=3
    interval=0
    gie-unique-id=1
    is-classifier=0
    #network-type=0
    output-blob-names=NMS
    parse-bbox-func-name=NvDsInferParseCustomSSDTLT
    custom-lib-path=<Path to libnvds_infercustomparser_ssd_tlt.so>

    [class-attrs-all]
    threshold=0.3
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0

Integrating a FasterRCNN Model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run a FasterRCNN model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open Source Software and the FasterRCNN bounding box parser for DeepStream. A DeepStream sample with documentation on how to run inference using the trained FasterRCNN models from TLT is provided on GitHub here_.

Prerequisite for FasterRCNN Model
*********************************

1. FasterRCNN requires the cropAndResizePlugin_ and the proposalPlugin_. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in `TensorRT Open Source Software (OSS)`_.

2. FasterRCNN requires a custom bounding box parser that is not built into the DeepStream SDK. The source code to build the custom bounding box parser for FasterRCNN is available at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps. The following instructions can be used to build the bounding box parser.

.. _cropAndResizePlugin: https://github.com/NVIDIA/TensorRT/tree/release/5.1/plugin/cropAndResizePlugin
.. _proposalPlugin: https://github.com/NVIDIA/TensorRT/tree/release/5.1/plugin/proposalPlugin
.. _TensorRT Open Source Software (OSS): https://github.com/NVIDIA/TensorRT

**Step 1**: Install git-lfs_ (git >= 1.8.2):

.. code::

    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    sudo apt-get install git-lfs
    git lfs install

**Step 2**: Download the source code with SSH or HTTPS:
.. code::

    git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

**Step 3**: Build:

.. code::

    export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # or the path to your DeepStream installation
    export CUDA_VER=10.2                                       # CUDA version, e.g. 10.2
    cd nvdsinfer_customparser_frcnn_tlt
    make

This generates :code:`libnvds_infercustomparser_frcnn_tlt.so` in the directory.

Label File
**********

The label file is a text file containing the names of the classes that the FasterRCNN model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. This order is derived from the order in which the objects are instantiated in the :code:`target_class_mapping` field of the FasterRCNN experiment specification file. During training, TLT FasterRCNN makes all class names lower case and sorts them in alphabetical order. For example, if the :code:`target_class_mapping` is:

.. code::

    target_class_mapping {
      key: "car"
      value: "car"
    }
    target_class_mapping {
      key: "person"
      value: "person"
    }
    target_class_mapping {
      key: "bicycle"
      value: "bicycle"
    }

The actual class name list is :code:`bicycle`, :code:`car`, :code:`person`. The corresponding :code:`label_file_frcnn.txt` file is:

.. code::

    bicycle
    car
    person

DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model as well as the custom parser.

.. image:: ../content/dstream_deploy_options3.jpg

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. See the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section above for detailed instructions.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in the [property] section of the primary infer configuration file:

.. code::

    parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
    custom-lib-path=<Path to libnvds_infercustomparser_frcnn_tlt.so>

Add the label file generated above using:

.. code::

    labelfile-path=<Path to label_file_frcnn.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_. Here is a sample config file, :code:`config_infer_primary.txt`:

.. code::

    [property]
    gpu-id=0
    net-scale-factor=1.0
    offsets=  # e.g.: 103.939;116.779;123.68
    model-color-format=1
    labelfile-path=<Path to label_file_frcnn.txt>
    tlt-encoded-model=<Path to FasterRCNN .etlt model>
    tlt-model-key=<Key used to export the model>
    uff-input-dims=  # e.g.: 3;272;480;0 (c;h;w;0, where c = number of channels, h = height of the model input, w = width of the model input; 0 implies CHW format)
    uff-input-blob-name=  # e.g.: input_image
    batch-size=  # e.g.: 1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=  # e.g.: 5
    interval=0
    gie-unique-id=1
    is-classifier=0
    #network-type=0
    output-blob-names=  # e.g.: dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal
    parse-bbox-func-name=NvDsInferParseCustomFrcnnTLT
    custom-lib-path=<Path to libnvds_infercustomparser_frcnn_tlt.so>

    [class-attrs-all]
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0

Integrating a YOLOv3 Model
^^^^^^^^^^^^^^^^^^^^^^^^^^

To run a YOLOv3 model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open Source Software and the YOLOv3 bounding box parser for DeepStream. A DeepStream sample with documentation on how to run inference using the trained YOLOv3 models from TLT is provided here_.

Prerequisite for YOLOv3 Model
*****************************

1. YOLOv3 requires the batchTilePlugin, resizeNearestPlugin, and batchedNMSPlugin. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in `TensorRT Open Source Software (OSS)`_.

2. YOLOv3 requires a custom bounding box parser that is not built into the DeepStream SDK. The source code to build the custom bounding box parser for YOLOv3 is available here_. The following instructions can be used to build the bounding box parser.

**Step 1**: Install `git-lfs`_ (git >= 1.8.2):

.. Note::
   git-lfs is needed to support downloading model files larger than 5MB.

.. code::

    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    sudo apt-get install git-lfs
    git lfs install

**Step 2**: Download the source code with HTTPS:

.. code::

    git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

**Step 3**: Build:

.. code::

    export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # or the path to your DeepStream installation
    export CUDA_VER=10.2                                       # CUDA version, e.g. 10.2
    cd nvdsinfer_customparser_yolov3_tlt
    make

This will generate :code:`libnvds_infercustomparser_yolov3_tlt.so` in the directory.

Label File
**********

The label file is a text file containing the names of the classes that the YOLOv3 model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT YOLOv3 specifies all class names in lower case and sorts them in alphabetical order. For example, if the :code:`dataset_config` is:

.. code::

    dataset_config {
      data_sources: {
        tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
        image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
      }
      image_extension: "jpg"
      target_class_mapping {
        key: "car"
        value: "car"
      }
      target_class_mapping {
        key: "person"
        value: "person"
      }
      target_class_mapping {
        key: "bicycle"
        value: "bicycle"
      }
      validation_fold: 0
    }

Then the corresponding :code:`yolov3_labels.txt` would look like this:

.. code::

    bicycle
    car
    person
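If you will use Option 2 in the next section, the YOLOv3 engine can be generated with :code:`tlt-converter` using the BatchedNMS output node listed in the converter section; :code:`-t fp16` is included here only to illustrate selecting an FP16 engine. The key, input dimensions, and file names below are placeholders:

.. code::

    tlt-converter -k <key used to export the model> \
                  -d 3,384,1248 \
                  -o BatchedNMS \
                  -t fp16 \
                  -e yolov3_resnet18.engine \
                  yolov3_resnet18.etlt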
DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model as well as the custom parser.

.. image:: ../content/dstream_deploy_options3.jpg

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. See the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section above for detailed instructions.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in the [property] section of the primary infer configuration file:

.. code::

    parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
    custom-lib-path=<Path to libnvds_infercustomparser_yolov3_tlt.so>

Add the label file generated above using:

.. code::

    labelfile-path=<Path to yolov3_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_. Here is a sample config file, :code:`pgie_yolov3_config.txt`:

.. code::

    [property]
    gpu-id=0
    net-scale-factor=1.0
    offsets=103.939;116.779;123.68
    model-color-format=1
    labelfile-path=<Path to yolov3_labels.txt>
    tlt-encoded-model=<Path to YOLOv3 .etlt model>
    tlt-model-key=<Key used to export the model>
    uff-input-dims=3;384;1248;0
    uff-input-blob-name=Input
    batch-size=1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=3
    interval=0
    gie-unique-id=1
    is-classifier=0
    #network-type=0
    output-blob-names=BatchedNMS
    parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
    custom-lib-path=<Path to libnvds_infercustomparser_yolov3_tlt.so>

    [class-attrs-all]
    threshold=0.3
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0

Integrating a DSSD Model
^^^^^^^^^^^^^^^^^^^^^^^^

To run a DSSD model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open Source Software and the DSSD bounding box parser for DeepStream. A DeepStream sample with documentation on how to run inference using the trained DSSD models from TLT is provided here_.

Prerequisite for DSSD Model
***************************

DSSD requires the batchTilePlugin and NMS_TRT plugins. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in `TensorRT Open Source Software (OSS)`_.

DSSD requires a custom bounding box parser that is not built into the DeepStream SDK. The source code to build the custom bounding box parser for DSSD is here_. The following instructions can be used to build the bounding box parser.

**Step 1**: Install git-lfs (git >= 1.8.2):

.. Note::
   git-lfs is needed to support downloading model files larger than 5MB.

.. code::

    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    sudo apt-get install git-lfs
    git lfs install

**Step 2**: Download the source code with HTTPS:

.. code::

    git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

**Step 3**: Build:

.. code::

    export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # or the path to your DeepStream installation
    export CUDA_VER=10.2                                       # CUDA version, e.g. 10.2
    cd nvdsinfer_customparser_dssd_tlt
    make

This will generate :code:`libnvds_infercustomparser_dssd_tlt.so` in the directory.

Label File
**********

The label file is a text file containing the names of the classes that the DSSD model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT DSSD makes all class names lower case and sorts them in alphabetical order. For example, if the :code:`dataset_config` is:

.. code::

    dataset_config {
      data_sources: {
        tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
        image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
      }
      image_extension: "jpg"
      target_class_mapping {
        key: "car"
        value: "car"
      }
      target_class_mapping {
        key: "person"
        value: "person"
      }
      target_class_mapping {
        key: "bicycle"
        value: "bicycle"
      }
      validation_fold: 0
    }

Here is an example of the corresponding :code:`dssd_labels.txt` file:

.. code::

    bicycle
    car
    person

DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model as well as the custom parser.

.. image:: ../content/dstream_deploy_options3.jpg

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. See the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section above for detailed instructions.

**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in the [property] section of the primary infer configuration file:

.. code::

    parse-bbox-func-name=NvDsInferParseCustomDSSDTLT
    custom-lib-path=<Path to libnvds_infercustomparser_dssd_tlt.so>

Add the label file generated above using:

.. code::

    labelfile-path=<Path to dssd_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_.

.. code::

    [property]
    gpu-id=0
    net-scale-factor=1.0
    offsets=103.939;116.779;123.68
    model-color-format=1
    labelfile-path=<Path to dssd_labels.txt>
    tlt-encoded-model=<Path to DSSD .etlt model>
    tlt-model-key=<Key used to export the model>
    uff-input-dims=3;384;1248;0
    uff-input-blob-name=Input
    batch-size=1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=3
    interval=0
    gie-unique-id=1
    is-classifier=0
    #network-type=0
    output-blob-names=NMS
    parse-bbox-func-name=NvDsInferParseCustomSSDTLT
    custom-lib-path=<Path to libnvds_infercustomparser_dssd_tlt.so>

    [class-attrs-all]
    threshold=0.3
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0
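If you are unsure which function name to set in :code:`parse-bbox-func-name`, you can list the parser symbols exported by the library you just built. This assumes binutils (:code:`nm`) is available, and the library name below is the DSSD one from this section:

.. code::

    nm -D libnvds_infercustomparser_dssd_tlt.so | grep NvDsInferParseCustom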
Integrating a RetinaNet Model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run a RetinaNet model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open Source Software and the RetinaNet bounding box parser for DeepStream. A DeepStream sample with documentation on how to run inference using the trained RetinaNet models from TLT is provided here_.

Prerequisite for RetinaNet Model
********************************

RetinaNet requires the batchTilePlugin and NMS_TRT plugins. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in `TensorRT Open Source Software (OSS)`_.

RetinaNet requires a custom bounding box parser that is not built into the DeepStream SDK. The source code to build the custom bounding box parser for RetinaNet is available here_. The following instructions can be used to build the bounding box parser.

**Step 1**: Install `git-lfs`_ (git >= 1.8.2):

.. Note::
   git-lfs is needed to support downloading model files larger than 5MB.

.. code::

    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    sudo apt-get install git-lfs
    git lfs install

**Step 2**: Download the source code with HTTPS:

.. code::

    git clone -b release/tlt2.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

**Step 3**: Build:

.. code::

    export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # or the path to your DeepStream installation
    export CUDA_VER=10.2                                       # CUDA version, e.g. 10.2
    cd nvdsinfer_customparser_retinanet_tlt
    make

This will generate :code:`libnvds_infercustomparser_retinanet_tlt.so` in the directory.

Label File
**********

The label file is a text file containing the names of the classes that the RetinaNet model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During training, TLT RetinaNet specifies all class names in lower case and sorts them in alphabetical order. For example, if the :code:`dataset_config` is:

.. code::

    dataset_config {
      data_sources: {
        tfrecords_path: "/workspace/tlt-experiments/tfrecords/pascal_voc/pascal_voc*"
        image_directory_path: "/workspace/tlt-experiments/data/VOCdevkit/VOC2012"
      }
      image_extension: "jpg"
      target_class_mapping {
        key: "car"
        value: "car"
      }
      target_class_mapping {
        key: "person"
        value: "person"
      }
      target_class_mapping {
        key: "bicycle"
        value: "bicycle"
      }
      validation_fold: 0
    }

Then the corresponding :code:`retinanet_labels.txt` file would look like this:

.. code::

    bicycle
    car
    person

DeepStream Configuration File
*****************************

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample :code:`deepstream-app`, you must modify the existing :code:`config_infer_primary.txt` file to point to this model as well as the custom parser.

.. image:: ../content/dstream_deploy_options3.jpg

**Option 1**: Integrate the model (:code:`.etlt`) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The :code:`int8-calib-file` is only required for INT8 precision.

.. code::

    tlt-encoded-model=<TLT exported .etlt>
    tlt-model-key=<Model export key>
    int8-calib-file=<Calibration cache file>

The :code:`tlt-encoded-model` parameter points to the exported model (:code:`.etlt`) from TLT. The :code:`tlt-model-key` is the encryption key used during model export.

**Option 2**: Integrate the TensorRT engine file with the DeepStream app.

**Step 1**: Generate the TensorRT engine using tlt-converter. See the :ref:`Generating an Engine Using tlt-converter <generating_an_engine_using_tlt-converter>` section above for detailed instructions.
**Step 2**: Once the engine file is generated successfully, modify the following parameter to use this engine with DeepStream:

.. code::

    model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in the [property] section of the primary infer configuration file:

.. code::

    parse-bbox-func-name=NvDsInferParseCustomSSDTLT
    custom-lib-path=<Path to libnvds_infercustomparser_retinanet_tlt.so>

Add the label file generated above using:

.. code::

    labelfile-path=<Path to retinanet_labels.txt>

For all the options, see the configuration file below. To learn more about what all the parameters are used for, refer to the `DeepStream Development Guide`_.

.. code::

    [property]
    gpu-id=0
    net-scale-factor=1.0
    offsets=103.939;116.779;123.68
    model-color-format=1
    labelfile-path=<Path to retinanet_labels.txt>
    tlt-encoded-model=<Path to RetinaNet .etlt model>
    tlt-model-key=<Key used to export the model>
    uff-input-dims=3;384;1248;0
    uff-input-blob-name=Input
    batch-size=1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=0
    num-detected-classes=3
    interval=0
    gie-unique-id=1
    is-classifier=0
    #network-type=0
    output-blob-names=NMS
    parse-bbox-func-name=NvDsInferParseCustomSSDTLT
    custom-lib-path=<Path to libnvds_infercustomparser_retinanet_tlt.so>

    [class-attrs-all]
    threshold=0.3
    roi-top-offset=0
    roi-bottom-offset=0
    detected-min-w=0
    detected-min-h=0
    detected-max-w=0
    detected-max-h=0

Integrating Purpose-Built Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Integrating purpose-built models is very straightforward in DeepStream. The configuration file and label file for these models are provided in the SDK. These files can be used with the provided pruned models as well as with your own trained models. For the provided pruned models, the config and label files should work out of the box. For your custom models, minor modifications might be required.

`Download`_ and install the DeepStream SDK. The installation instructions for DeepStream are provided in the `DeepStream Development Guide`_. The config files for the purpose-built models are located in:

.. code::

    /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

:code:`/opt/nvidia/deepstream` is the default DeepStream installation directory. This path will be different if you installed to a different directory. There are two sets of config files: main config files and inference config files. A main config file can call one or multiple inference config files depending on the number of inferences. The table below shows the models being deployed by each config file.
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+
| **Model(s)**                               | **Main DeepStream Configuration**                                   | **Inference Configuration(s)**            | **Label File(s)**         |
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+
| TrafficCamNet                              | deepstream_app_source1_trafficcamnet.txt                            | config_infer_primary_trafficcamnet.txt    | labels_trafficnet.txt     |
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+
| PeopleNet                                  | deepstream_app_source1_peoplenet.txt                                | config_infer_primary_peoplenet.txt        | labels_peoplenet.txt      |
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+
| DashCamNet, VehicleMakeNet, VehicleTypeNet | deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt | config_infer_primary_dashcamnet.txt       | labels_dashcamnet.txt     |
|                                            |                                                                     | config_infer_secondary_vehiclemakenet.txt | labels_vehiclemakenet.txt |
|                                            |                                                                     | config_infer_secondary_vehicletypenet.txt | labels_vehicletypenet.txt |
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+
| FaceDetect-IR                              | deepstream_app_source1_faceirnet.txt                                | config_infer_primary_faceirnet.txt        | labels_faceirnet.txt      |
+--------------------------------------------+---------------------------------------------------------------------+-------------------------------------------+---------------------------+

The main configuration files are to be used with :code:`deepstream-app`, the DeepStream reference application. In the :code:`deepstream-app`, the primary detector detects the objects and sends the cropped frames to the secondary classifiers. For more information, refer to the `DeepStream Development Guide`_.

The :code:`deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt` file configures three models: DashCamNet as the primary detector, and VehicleMakeNet and VehicleTypeNet as secondary classifiers. The classifier models are typically used after initial object detection. The other configuration files use single detection models.

Key parameters in :code:`config_infer_*.txt`:

.. code::

    tlt-model-key=<tlt encode key>
    tlt-encoded-model=<Path to TLT model>
    labelfile-path=<Path to label file>
    int8-calib-file=<Path to optional INT8 calibration cache>
    input-dims=<Inference resolution if different than provided>
    num-detected-classes=<# of classes if different than default>

Run deepstream-app:

.. code::

    deepstream-app -c <path to main config file>
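For example, assuming DeepStream 5.0 is installed in its default location and the corresponding pruned models have been downloaded as described in the README in the config directory, TrafficCamNet can be run like this:

.. code::

    cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models
    deepstream-app -c deepstream_app_source1_trafficcamnet.txt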
Integrating a MaskRCNN Model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Integrating a MaskRCNN model is very straightforward in DeepStream, since DeepStream 5.0 supports the instance segmentation network type out of the box. The configuration file and label file for the model are provided in the SDK. These files can be used with the provided model as well as with your own trained model. For the provided MaskRCNN model, the config and label files should work out of the box. For your custom model, minor modifications might be required.

`Download`_ and install the DeepStream SDK. The installation instructions for DeepStream are provided in the `DeepStream Development Guide`_. Follow the README under :code:`/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models` to download the model and the INT8 calibration file. The config files for the MaskRCNN model are located in:

.. code::

    /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

:code:`/opt/nvidia/deepstream` is the default DeepStream installation directory. This path will be different if you installed to a different directory.

deepstream-app Config File
**************************

The deepstream-app config file is used by :code:`deepstream-app`; see the `Deepstream Configuration Guide`_ for more details. You need to enable :code:`display-mask` under the :code:`[osd]` group to see the mask visualization:

.. _Deepstream Configuration Guide: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_app_config.3.1.html%23

.. code::

    [osd]
    enable=1
    gpu-id=0
    border-width=3
    text-size=15
    text-color=1;1;1;1;
    text-bg-color=0.3;0.3;0.3;1
    font=Serif
    display-mask=1
    display-bbox=0
    display-text=0

Nvinfer Config File
*******************

The nvinfer config file is used by the nvinfer plugin; see the `Deepstream plugin manual`_ for more details. The following are the key parameters for running the MaskRCNN model:

.. _Deepstream plugin manual: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html

.. code::

    tlt-model-key=<tlt encode key>
    tlt-encoded-model=<Path to TLT MaskRCNN model>
    parse-bbox-instance-mask-func-name=<instance mask parse function name>
    custom-lib-path=<Path to the custom parser library>
    network-type=3  ## 3 is for instance segmentation network
    output-instance-mask=1
    labelfile-path=<Path to label file>
    int8-calib-file=<Path to optional INT8 calibration cache>
    infer-dims=<Inference resolution if different than provided>
    num-detected-classes=<# of classes if different than default>

Here's an example:

.. code::

    [property]
    gpu-id=0
    net-scale-factor=0.017507
    offsets=123.675;116.280;103.53
    model-color-format=0
    tlt-model-key=<tlt encode key>
    tlt-encoded-model=<Path to TLT MaskRCNN model>
    parse-bbox-instance-mask-func-name=<instance mask parse function name>
    custom-lib-path=<Path to the custom parser library>
    network-type=3  ## 3 is for instance segmentation network
    labelfile-path=<Path to label file>
    int8-calib-file=<Path to optional INT8 calibration cache>
    infer-dims=<Inference resolution if different than provided>
    num-detected-classes=<# of classes if different than default>
    uff-input-blob-name=Input
    batch-size=1
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=2
    interval=0
    gie-unique-id=1
    #no cluster
    ## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
    ## MRCNN supports only cluster-mode=4; clustering is done by the model itself
    cluster-mode=4
    output-instance-mask=1

    [class-attrs-all]
    pre-cluster-threshold=0.8

Label File
**********

If the COCO annotation file has the following in "categories":

.. code::

    [{'supercategory': 'person', 'id': 1, 'name': 'person'},
     {'supercategory': 'car', 'id': 2, 'name': 'car'}]

Then the corresponding :code:`maskrcnn_labels.txt` file is:

.. code::

    BG
    person
    car

Run deepstream-app:

.. code::

    deepstream-app -c <path to main config file>

You can also use :code:`deepstream-mrcnn-test` to run the MaskRCNN model; see the README under :code:`$DS_TOP/sources/apps/sample_apps/deepstream-mrcnn-test/`.
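If you prefer an Option 2 style deployment here as well, a MaskRCNN TensorRT engine can be generated with :code:`tlt-converter` using the two MaskRCNN output nodes listed in the converter section. This is a sketch only: the key, input dimensions, and file names are placeholders, and the dimensions must match your MaskRCNN training resolution:

.. code::

    tlt-converter -k <key used to export the model> \
                  -d 3,832,1344 \
                  -o generate_detections,mask_head/mask_fcn_logits/BiasAdd \
                  -e mask_rcnn_resnet50.engine \
                  mask_rcnn_resnet50.etlt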