• train

• evaluate

• inference

• export

These tasks may be invoked from the TAO Toolkit Launcher using the following convention on the command line:


tao mask_rcnn <sub_task> <args_per_subtask>


where args_per_subtask are the command-line arguments required for a given subtask. Each of these subtasks is explained in detail below.
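For example, to print the arguments accepted by the train subtask (assuming the TAO Toolkit Launcher is installed and configured), you can run:

tao mask_rcnn train --help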

Creating a Configuration File

A MaskRCNN spec file has three major components: the top-level experiment configuration, data_config, and maskrcnn_config, each explained in detail below. The format of the spec file is a protobuf text (prototxt) message, and each of its fields can be either a basic data type or a nested message. The top-level structure of the spec file is summarized below.

Here’s a sample of the MaskRCNN spec file:


seed: 123
use_amp: False
warmup_steps: 0
learning_rate_steps: "[60000, 80000, 100000]"
learning_rate_decay_levels: "[0.1, 0.02, 0.002]"
total_steps: 120000
train_batch_size: 2
eval_batch_size: 4
num_steps_per_eval: 10000
momentum: 0.9
l2_weight_decay: 0.0001
l1_weight_decay: 0.0
warmup_learning_rate: 0.0001
init_learning_rate: 0.02

data_config{
image_size: "(832, 1344)"
augment_input_data: True
eval_samples: 500
training_file_pattern: "/workspace/tao-experiments/data/train*.tfrecord"
validation_file_pattern: "/workspace/tao-experiments/data/val*.tfrecord"
val_json_file: "/workspace/tao-experiments/data/annotations/instances_val2017.json"

# dataset specific parameters
num_classes: 91
skip_crowd_during_training: True
max_num_instances: 200
}

maskrcnn_config {
nlayers: 50
arch: "resnet"
freeze_bn: True
freeze_blocks: "[0,1]"

# Region Proposal Network
rpn_positive_overlap: 0.7
rpn_negative_overlap: 0.3
rpn_batch_size_per_im: 256
rpn_fg_fraction: 0.5
rpn_min_size: 0.

# Proposal layer.
batch_size_per_im: 512
fg_fraction: 0.25
fg_thresh: 0.5
bg_thresh_hi: 0.5
bg_thresh_lo: 0.

bbox_reg_weights: "(10., 10., 5., 5.)"

mrcnn_resolution: 28

# training
train_rpn_pre_nms_topn: 2000
train_rpn_post_nms_topn: 1000
train_rpn_nms_threshold: 0.7

# evaluation
test_detections_per_image: 100
test_nms: 0.5
test_rpn_pre_nms_topn: 1000
test_rpn_post_nms_topn: 1000
test_rpn_nms_thresh: 0.7

# model architecture
min_level: 2
max_level: 6
num_scales: 1
aspect_ratios: "[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]"
anchor_scale: 8

# localization loss
rpn_box_loss_weight: 1.0
fast_rcnn_box_loss_weight: 1.0
}


• seed: The random seed for the experiment. (Unsigned int; typical value: 123)

• warmup_steps: The number of steps taken for the learning rate to ramp up to init_learning_rate. (Unsigned int)

• warmup_learning_rate: The initial learning rate during the warmup phase. (float)

• learning_rate_steps: A list of steps at which the learning rate decays by the factor specified in learning_rate_decay_levels. (string)

• learning_rate_decay_levels: A list of decay factors. The length should match the length of learning_rate_steps. (string)

• total_steps: The total number of training iterations. (Unsigned int)

• train_batch_size: The batch size during training. (Unsigned int; typical value: 4)

• eval_batch_size: The batch size during validation or evaluation. (Unsigned int; typical value: 8)

• num_steps_per_eval: Save a checkpoint and run evaluation every N steps. (Unsigned int)

• momentum: The momentum of the SGD optimizer. (float; typical value: 0.9)

• l1_weight_decay: The L1 weight decay. (float; typical value: 0.0001)

• l2_weight_decay: The L2 weight decay. (float; typical value: 0.0001)

• use_amp: Specifies whether to use Automatic Mixed Precision training. (boolean; typical value: False)

• checkpoint: The path to a pretrained model. (string)

• maskrcnn_config: The architecture of the model. (message)

• data_config: The input data configuration. (message)

• skip_checkpoint_variables: If specified, the weights of layers with matching regular expressions will not be loaded. This is especially helpful for transfer learning. (string)

• pruned_model_path: The path to a pruned MaskRCNN model. (string)
Note

When using skip_checkpoint_variables, you can first find the model structure in the training log (part of the MaskRCNN+ResNet50 model structure is shown below). If, for example, you want to retrain all prediction heads, you can set skip_checkpoint_variables to “head”. TAO Toolkit uses the Python re library to check whether “head” matches any layer name, i.e. re.search($skip_checkpoint_variables, $layer_name). A short sketch of this matching logic follows the log excerpt below.


[MaskRCNN] INFO    : ================ TRAINABLE VARIABLES ==================
[MaskRCNN] INFO    : [#0001] conv1/kernel:0                                               => (7, 7, 3, 64)
[MaskRCNN] INFO    : [#0002] bn_conv1/gamma:0                                             => (64,)
[MaskRCNN] INFO    : [#0003] bn_conv1/beta:0                                              => (64,)
[MaskRCNN] INFO    : [#0004] block_1a_conv_1/kernel:0                                     => (1, 1, 64, 64)
[MaskRCNN] INFO    : [#0005] block_1a_bn_1/gamma:0                                        => (64,)
[MaskRCNN] INFO    : [#0006] block_1a_bn_1/beta:0                                         => (64,)
[MaskRCNN] INFO    : [#0007] block_1a_conv_2/kernel:0                                     => (3, 3, 64, 64)
[MaskRCNN] INFO    : [#0008] block_1a_bn_2/gamma:0                                        => (64,)
[MaskRCNN] INFO    : [#0009] block_1a_bn_2/beta:0                                         => (64,)
[MaskRCNN] INFO    : [#0010] block_1a_conv_3/kernel:0                                     => (1, 1, 64, 256)
[MaskRCNN] INFO    : [#0011] block_1a_bn_3/gamma:0                                        => (256,)
[MaskRCNN] INFO    : [#0012] block_1a_bn_3/beta:0                                         => (256,)
[MaskRCNN] INFO    : [#0110] block_3d_bn_3/gamma:0                                        => (1024,)
[MaskRCNN] INFO    : [#0111] block_3d_bn_3/beta:0                                         => (1024,)
[MaskRCNN] INFO    : [#0112] block_3e_conv_1/kernel:0                                     => (1, 1, 1024, ...
...
[MaskRCNN] INFO    : [#0144] block_4b_bn_1/beta:0                                         => (512,)
...
[MaskRCNN] INFO    : [#0174] post_hoc_d5/kernel:0                                     => (3, 3, 256, 256)
[MaskRCNN] INFO    : [#0175] post_hoc_d5/bias:0                                       => (256,)
[MaskRCNN] INFO    : [#0176] rpn/kernel:0                                        => (3, 3, 256, 256)
[MaskRCNN] INFO    : [#0177] rpn/bias:0                                          => (256,)
[MaskRCNN] INFO    : [#0178] rpn-class/kernel:0                                  => (1, 1, 256, 3)
[MaskRCNN] INFO    : [#0179] rpn-class/bias:0                                    => (3,)
[MaskRCNN] INFO    : [#0180] rpn-box/kernel:0                                    => (1, 1, 256, 12)
[MaskRCNN] INFO    : [#0181] rpn-box/bias:0                                      => (12,)
[MaskRCNN] INFO    : [#0182] fc6/kernel:0                                        => (12544, 1024)
[MaskRCNN] INFO    : [#0183] fc6/bias:0                                          => (1024,)
[MaskRCNN] INFO    : [#0184] fc7/kernel:0                                        => (1024, 1024)
[MaskRCNN] INFO    : [#0185] fc7/bias:0                                          => (1024,)
[MaskRCNN] INFO    : [#0186] class-predict/kernel:0                              => (1024, 91)
[MaskRCNN] INFO    : [#0187] class-predict/bias:0                                => (91,)
[MaskRCNN] INFO    : [#0188] box-predict/kernel:0                                => (1024, 364)
[MaskRCNN] INFO    : [#0189] box-predict/bias:0                                  => (364,)
[MaskRCNN] INFO    : [#0201] mask_fcn_logits/bias:0                             => (91,)


The MaskRCNN configuration (maskrcnn_config) defines the model structure. This model is used for training, evaluation, and inference. A detailed description is included in the table below. Currently, MaskRCNN only supports ResNet10/18/34/50/101 as its backbone.

Note

The min_level, max_level, num_scales, aspect_ratios, and anchor_scale parameters determine anchor generation for MaskRCNN. anchor_scale is the base anchor scale, while min_level and max_level set the range of scales across the feature maps. For example, the actual anchor scale for the feature map at min_level is anchor_scale * 2^min_level, and the actual anchor scale for the feature map at max_level is anchor_scale * 2^max_level. Anchors with different aspect_ratios are then generated from each actual anchor scale.
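To make the arithmetic concrete, here is a minimal Python sketch (plain Python, not TAO Toolkit code) that derives the per-level anchor scales from the sample spec values above:

anchor_scale = 8
min_level, max_level = 2, 6

for level in range(min_level, max_level + 1):
    # Formula from the note above: actual anchor scale = anchor_scale * 2^level
    print(f"level {level}: actual anchor scale = {anchor_scale * 2 ** level}")

# Output: 32, 64, 128, 256, 512. Anchors with each configured aspect ratio are
# then generated from these per-level scales.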

Data Config

The data configuration (data_config) specifies the input data source and format. This is used for training, evaluation, and inference. A detailed description of each field is summarized below.

• image_size: The image dimension as a tuple within quote marks. “(height, width)” indicates the dimension of the resized and padded input. (string; typical value: “(832, 1344)”)

• augment_input_data: Specifies whether to augment the data. (boolean; typical value: True)

• eval_samples: The number of samples for evaluation. (Unsigned int)

• training_file_pattern: The TFRecord path for training. (string)

• validation_file_pattern: The TFRecord path for validation. (string)

• val_json_file: The annotation file path for validation. (string)

• num_classes: The number of classes. If there are N categories in the annotation, num_classes should be N+1 to account for the background class. (Unsigned int)

• skip_crowd_during_training: Specifies whether to skip crowd annotations during training. (boolean; typical value: True)

• prefetch_buffer_size: The prefetch buffer size used by tf.data.Dataset (default: AUTOTUNE). (Unsigned int)

• shuffle_buffer_size: The shuffle buffer size used by tf.data.Dataset (default: 4096). (Unsigned int; typical value: 4096)

• n_workers: The number of workers to parse and preprocess data (default: 16). (Unsigned int; typical value: 16)

• max_num_instances: The maximum number of object instances to parse (default: 200). (Unsigned int; typical value: 200)
Note

If an out-of-memory error occurs during training, try to set a smaller image_size or batch_size first. If the error persists, try reducing the n_workers, shuffle_buffer_size, and prefetch_buffer_size values. Lastly, if the original images have a very large resolution, resize the images offline and create new tfrecords to avoid loading large images to GPU memory.
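As an illustration, a memory-constrained data_config might look like the following sketch. The field names are the ones documented above; the specific values are placeholders to tune for your dataset and GPU (batch sizes are top-level fields and are reduced separately):

data_config{
image_size: "(576, 960)"       # smaller resized input than "(832, 1344)"; H and W must stay multiples of 2^max_level
augment_input_data: True
eval_samples: 500
training_file_pattern: "/workspace/tao-experiments/data/train*.tfrecord"
validation_file_pattern: "/workspace/tao-experiments/data/val*.tfrecord"
val_json_file: "/workspace/tao-experiments/data/annotations/instances_val2017.json"
num_classes: 91
n_workers: 8                   # fewer data-loading workers
shuffle_buffer_size: 1024      # smaller shuffle buffer
prefetch_buffer_size: 2        # smaller prefetch buffer
max_num_instances: 200
}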

Training the Model

Train the MaskRCNN model using this command:


tao mask_rcnn train [-h] -e <experiment_spec>
-d <output_dir>
-k <key>
[--gpus <num_gpus>]
[--gpu_index <gpu_index>]
[--log_file <log_file_path>]


Required Arguments

• -d, --model_dir: The path to the folder where the experiment output is written.

• -k, --key: The encryption key to decrypt the model.

• -e, --experiment_spec_file: The experiment specification file to set up the training experiment.

Optional Arguments

• --gpus num_gpus: The number of GPUs to use and processes to launch for training. The default value is 1.

• --gpu_index: The index of the (discrete) GPU to use for training if the machine has multiple GPUs installed.

• --log_file: The path to the log file. The default path is stdout.

• -h, --help: Show this help message and exit.

Input Requirement

• Input size: C * W * H (where C = 3, W >= 128, H >= 128, and W, H are multiples of 2^max_level; see the quick check after this list)

• Image format: JPG

• Label format: COCO detection
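As a quick check of the divisibility requirement, here is a minimal Python sketch using the sample spec's image_size and max_level:

max_level = 6
height, width = 832, 1344              # from image_size "(832, 1344)" in the sample spec

stride = 2 ** max_level                # 64
assert height % stride == 0 and width % stride == 0, "H and W must be multiples of 2^max_level"
print(height // stride, width // stride)   # -> 13 21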

Sample Usage

Here’s an example of using the train command on a MaskRCNN model:


tao mask_rcnn train --gpus 2 -e /path/to/spec.txt -d /path/to/result -k $KEY


Evaluating the Model

To run evaluation for a MaskRCNN model, use this command:

tao mask_rcnn evaluate [-h] -e <experiment_spec_file>
                            -m <model_file>
                            -k <key>
                            [--gpu_index <gpu_index>]
                            [--log_file <log_file_path>]

Required Arguments

• -e, --experiment_spec_file: The experiment spec file to set up the evaluation experiment. This should be the same as the training spec file.

• -m, --model: The path to the model file to use for evaluation.

• -k, --key: The key to load the model. This argument is not required if -m is followed by a TensorRT engine.

Optional Arguments

• --gpu_index: The index of the (discrete) GPU to run evaluation on if the machine has multiple GPUs installed. Note that evaluation can only run on a single GPU.

• --log_file: The path to the log file. The default path is stdout.

• -h, --help: Show this help message and exit.

Pruning the Model

Pruning removes parameters from the model to reduce the model size. Retraining is necessary to regain the performance of the unpruned model. The prune command includes these parameters:

tao mask_rcnn prune [-h] -m <pretrained_model>
                         -o <output_dir>
                         -k <key>
                         [-n <normalizer>]
                         [-eq <equalization_criterion>]
                         [-pg <pruning_granularity>]
                         [-pth <pruning threshold>]
                         [-nf <min_num_filters>]
                         [-el <excluded_list>]
                         [--gpu_index <gpu_index>]
                         [--log_file <log_file>]

Required Arguments

• -m, --pretrained_model: The path to the pretrained model.

• -o, --output_dir: The output directory that will contain the pruned model, named model.tlt.

• -k, --key: The key to load a .tlt model.

Optional Arguments

• -h, --help: Show this help message and exit.

• -n, --normalizer: max to normalize by dividing each norm by the maximum norm within a layer; L2 to normalize by dividing by the L2 norm of the vector comprising all kernel norms. (default: max)

• -eq, --equalization_criterion: The criterion to equalize the stats of inputs to an element-wise op layer or depth-wise convolutional layer. This parameter is useful for ResNets and MobileNets. Options are arithmetic_mean, geometric_mean, union, and intersection. (default: union)

• -pg, --pruning_granularity: The number of filters to remove at a time. (default: 8)

• -pth: The threshold to compare the normalized norm against. (default: 0.1)

• -nf, --min_num_filters: The minimum number of filters to keep per layer. (default: 16)

• -el, --excluded_layers: A list of excluded layers (e.g. -el item1 item2). (default: [])

• --gpu_index: The index of the GPU to run pruning on (useful when the machine has multiple GPUs installed). Note that pruning can only run on a single GPU.

• --log_file: The path to the log file. Defaults to stdout.

Here's an example of using the prune command:

tao mask_rcnn prune -m /workspace/model.step-100.tlt -o /workspace/output -eq union -pth 0.7 -k $KEY


After pruning, the model must be retrained before it can be used for inference or evaluation.

Re-training the Pruned Model

Once the model has been pruned, there might be a decrease in accuracy. This happens because some previously useful weights may have been removed. To regain accuracy, NVIDIA recommends that you retrain this pruned model over the same dataset. To do this, run the tao mask_rcnn train command with an updated spec file that points to the newly pruned model by setting pruned_model_path.

Users are advised to turn off the regularizer during retraining. You may do this by setting the regularizer weights to 0 for both l1_weight_decay and l2_weight_decay. The other parameters may be retained in the spec file from the previous training. train_batch_size and eval_batch_size must be kept unchanged.
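For example, the relevant fields of a retraining spec might look like the following sketch (the pruned-model path is a placeholder, and the batch sizes are kept identical to the original training spec):

# Retraining a pruned model: point to the pruned model and disable regularization
pruned_model_path: "/workspace/output/model.tlt"
l1_weight_decay: 0.0
l2_weight_decay: 0.0
train_batch_size: 2
eval_batch_size: 4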

Running Inference on the Model

The inference tool for MaskRCNN networks can be used to visualize bboxes or generate frame-by-frame COCO-format labels on a directory of images. Here’s an example of using this tool:


tao mask_rcnn inference [-h] -i <input directory>
-o <output annotated image directory>
-e <experiment spec file>
-m <model file>
-k <key>
[-l <label file>]
[-t <bbox confidence threshold>]
[--gpu_index <gpu_index>]
[--log_file <log_file_path>]


Required Arguments

• -m, --model_path: The path to the trained MaskRCNN model (either a .tlt model or a converted TensorRT engine).

• -i, --image_dir: The directory of input images for inference. Supported image formats include PNG, JPG and JPEG.

• -e, --config_path: The path to an experiment spec file for training.

• -o, --out_image_path: The directory path to output annotated images.

• -k, --key: The key to load a .tlt model (not needed if TensorRT engine is used).

Optional Arguments

• -t, --threshold: The threshold for drawing a bbox (default: 0.6)

• -l, --out_label_path: The directory of predicted labels in COCO format (https://cocodataset.org/#format-results). This argument is only supported with the TensorRT engine.

• --include_mask: Specifies whether to draw masks on the annotated output.

• --gpu_index: The index of the (discrete) GPU to run inference on if the machine has multiple GPUs installed. Note that inference can only run on a single GPU.

• --log_file: The path to the log file. The default path is stdout.

• -h, --help: Show this help message and exit.

Exporting the Model

Exporting the model decouples the training process from inference and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment. The exported model may be used universally across training and deployment hardware.

The exported model format is referred to as .etlt. Like the .tlt model format, .etlt is an encrypted model format, and it uses the same key as the .tlt model that it is exported from. This key is required when deploying this model.

INT8 Mode Overview

TensorRT engines can be generated in INT8 mode to improve performance, but they require a calibration cache at engine creation time. The calibration cache is generated using a calibration tensor file if export is run with the --data_type flag set to int8. Pre-generating the calibration information and caching it removes the need for calibrating the model on the inference machine. Moving the calibration cache is usually much more convenient than moving the calibration tensorfile, since it is a much smaller file and can be moved with the exported model. Using the calibration cache also speeds up engine creation, as building the cache can take several minutes depending on the size of the tensorfile and the model itself.

The export tool can generate the INT8 calibration cache by ingesting training data using one of these options:

• Option 1: Use the training data loader to load the training images for INT8 calibration. This option is now the recommended approach to support multiple image directories by leveraging the training dataset loader. This also ensures two important aspects of data during calibration:

• Data pre-processing in the INT8 calibration step is the same as in the training process.

• The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.

• Option 2: Point the tool to a directory of images that you want to use to calibrate the model. For this option, make sure to create a sub-sampled directory of random images that best represent your training dataset.

FP16/FP32 Model

The calibration.bin is only required if you need to run inference at INT8 precision. For FP16/FP32-based inference, the export step is much simpler: All you need to do is provide a .tlt model from the training/retraining step to be converted into .etlt format.
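For instance, an FP16 export might look like the following sketch (placeholder paths; the flags are the ones documented below):

tao mask_rcnn export -m /workspace/model.step-25000.tlt \
                     -k $KEY \
                     -e /workspace/spec.txt \
                     --data_type fp16 \
                     -o /workspace/model.step-25000.etlt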

Here’s an example of the command line arguments of the tao mask_rcnn export command:


tao mask_rcnn export [-h] -m <path to the .tlt model file generated by tao train>
-k <key>
[-o <path to output file>]
[--cal_data_file <path to tensor file>]
[--cal_image_dir <path to the directory of images to calibrate the model>]
[--cal_cache_file <path to output calibration file>]
[--data_type <Data type for the TensorRT backend during export>]
[--batches <Number of batches to calibrate over>]
[--max_batch_size <maximum trt batch size>]
[--max_workspace_size <maximum workspace size>]
[--batch_size <batch size to TensorRT engine>]
[--experiment_spec <path to experiment spec file>]
[--engine_file <path to the TensorRT engine file>]
[--gen_ds_config <Flag to generate ds config and label file>]
[--verbose <Verbosity of the logger>]
[--force_ptq <Flag to force PTQ>]
[--strict_type_constraints <Flag to apply strict type constraints>]
[--gpu_index <gpu_index>]
[--log_file <log_file_path>]


Required Arguments

• -m, --model: The path to the .tlt model file to be exported using export.

• -k, --key: The key used to save the .tlt model file.

• -e, --experiment_spec: The path to the spec file.

Optional Arguments

• -o, --output_file: The path to save the exported model to. The default path is ./<input_file>.etlt.

• --data_type: The desired engine data type. The options are fp32, fp16, and int8. The default value is fp32. A calibration cache will be generated in INT8 mode. If using INT8, the following INT8 arguments are required.

• -s, --strict_type_constraints: A Boolean flag indicating whether to apply the TensorRT strict type constraints when building the TensorRT engine.

• --gen_ds_config: A Boolean flag indicating whether to generate the template DeepStream related configuration (“nvinfer_config.txt”) as well as a label file (“labels.txt”) in the same directory as the output_file. Note that the config file is NOT a complete configuration file and requires the user to update the sample config files in DeepStream with the parameters generated.

• --gpu_index: The index of the (discrete) GPU for exporting the model if the machine has multiple GPUs installed. Note that export can only run on a single GPU.

• --log_file: The path to the log file. The default path is stdout.

• -h, --help: Show this help message and exit.

INT8 Export Mode Required Arguments

• --cal_data_file: The tensorfile generated for calibrating the engine. This can also be an output file if used with --cal_image_dir.

• --cal_image_dir: A directory of images to use for calibration.

Note

The --cal_image_dir parameter applies the necessary preprocessing to generate a tensorfile at the path mentioned in the --cal_data_file parameter, which is in turn used for calibration. The number of batches in the tensorfile generated is obtained from the value set to the --batches parameter, and the batch_size is obtained from the value set to the --batch_size parameter. Ensure that the directory mentioned in --cal_image_dir has at least batch_size * batches number of images in it. The valid image extensions are .jpg, .jpeg, and .png. In this case, the input_dimensions of the calibration tensors are derived from the input layer of the .tlt model.

INT8 Export Optional Arguments

• --cal_cache_file: The path to save the calibration cache file to. The default value is ./cal.bin.

• --batches: The number of batches to use for calibration and inference testing. The default value is 10.

• --batch_size: The batch size to use for calibration. The default value is 8.

• --max_batch_size: The maximum batch size of the TensorRT engine. The default value is 16.

• --max_workspace_size: Maximum workspace size of the TensorRT engine. The default value is 1073741824 = 1<<30

• --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs. It is useful to quickly test your model accuracy using TensorRT on the host. As the TensorRT engine file is hardware specific, you cannot use this engine file for deployment unless the deployment GPU is identical to the training GPU.

• --force_ptq: A Boolean flag to force post training quantization on the exported etlt model

Sample Usage

Here’s a sample command to export a MaskRCNN model in INT8 mode:


tao mask_rcnn export -m /ws/model.step-25000.tlt \
-k nvidia_tlt \
--batch_size 1 \
--data_type int8 \
--cal_image_dir /raw-data/val2017 \
--batches 10 \
--cal_data_file /export/maskrcnn.tensorfile


Deploying to DeepStream

The deep learning and computer vision models that you’ve trained can be deployed on edge devices, such as a Jetson Xavier or Jetson Nano, a discrete GPU, or in the cloud with NVIDIA GPUs. TAO Toolkit has been designed to integrate with DeepStream SDK, so models trained with TAO Toolkit will work out of the box with DeepStream SDK.

DeepStream SDK is a streaming analytic toolkit to accelerate building AI-based video analytic applications. This section will describe how to deploy your trained model to DeepStream SDK.

To deploy a model trained by TAO Toolkit to DeepStream we have two options:

• Option 1: Integrate the .etlt model directly in the DeepStream app. The model file is generated by export.

• Option 2: Generate a device specific optimized TensorRT engine using tao-converter. The generated TensorRT engine file can also be ingested by DeepStream.

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.

Option 1 is very straightforward. The .etlt file and calibration cache are used directly by DeepStream, which automatically generates the TensorRT engine file and then runs inference. TensorRT engine generation can take some time depending on the size of the model and the type of hardware. With Option 2, engine generation can be done ahead of time: the tao-converter is used to convert the .etlt file to a TensorRT engine, which is then provided directly to DeepStream.

See the Exporting the Model section for more details on how to export a TAO model.

TensorRT Open Source Software (OSS)

For MaskRCNN, we need the generateDetectionPlugin, multilevelCropAndResizePlugin, resizeNearestPlugin and multilevelProposeROI plugins from the TensorRT OSS build.

If the deployment platform is x86 with an NVIDIA GPU, follow the TensorRT OSS on x86 instructions. On the other hand, if your deployment is on NVIDIA Jetson platform, follow the TensorRT OSS on Jetson (ARM64) instructions.

TensorRT OSS on x86

Building TensorRT OSS on x86:

1. Install Cmake (>=3.13).

Note

TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.


sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery

If /usr/local/cuda/samples doesn't exist on your system, you can download deviceQuery.cpp from this GitHub repo. Compile and run deviceQuery:

nvcc deviceQuery.cpp -o deviceQuery
./deviceQuery

This command will output something like the following, which indicates that GPU_ARCHS is 75 based on the CUDA Capability major/minor version:

Detected 2 CUDA Capable device(s)
Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5

3. Build TensorRT OSS:

git clone -b 21.08 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build


Note

Make sure your GPU_ARCHS from step 2 is in TensorRT OSS CMakeLists.txt. If GPU_ARCHS is not in TensorRT OSS CMakeLists.txt, add -DGPU_ARCHS=<VER> as below, where <VER> represents GPU_ARCHS from step 2.


/usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so*:

sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.x.y
sudo cp $TRT_SOURCE/`pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y
sudo ldconfig

TensorRT OSS on Jetson (ARM64)

1. Install Cmake (>= 3.13)

Note

TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2. Upgrade cmake using:

sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


2. Get the GPU architecture based on your platform. The GPU_ARCHS values for different Jetson platforms are given below.

• Nano/Tx1: GPU_ARCHS 53

• Tx2: GPU_ARCHS 62

• AGX Xavier/Xavier NX: GPU_ARCHS 72
3. Build TensorRT OSS:


git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build

Note

The -DGPU_ARCHS=72 below is for Xavier or NX; for other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.

/usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)


After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so* with the newly generated one, as in the sketch below.
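The commands for this step are not shown here; the following is a minimal sketch that mirrors the x86 step above, assuming the default Jetson library path /usr/lib/aarch64-linux-gnu/ and TensorRT 8.x library names (adjust the version suffixes to match your system):

sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original plugin library
sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y     # install the newly built plugin
sudo ldconfig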



DeepStream Configuration File

The configuration file is used by deepstream-app (see the DeepStream Configuration Guide for more details). You must enable display-mask under the [osd] group to see the mask visualization:


[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
display-bbox=0
display-text=0
display-mask=1

Nvinfer Configuration File


The Nvinfer configuration file is used in the nvinfer plugin; see the Deepstream plugin manual for more details. The following are key parameters for running the MaskRCNN model:


tlt-model-key=<tlt_encode or TLT Key used during model export>
tlt-encoded-model=<Path to TLT model>
custom-lib-path=<path to post process parser lib>
network-type=3 ## 3 is for instance segmentation network
labelfile-path=<Path to label file>
int8-calib-file=<Path to optional INT8 calibration cache>
infer-dims=<Inference resolution if different than provided>
num-detected-classes=<# of classes if different than default>


Here’s an example:


[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
tlt-model-key=<tlt_encode or TLT Key used during model export>
tlt-encoded-model=<Path to TLT model>
custom-lib-path=<path to post process parser lib>
network-type=3 ## 3 is for instance segmentation network
int8-calib-file=<Path to optional INT8 calibration cache>
infer-dims=<Inference resolution if different than provided>
num-detected-classes=3
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
gie-unique-id=1
#no cluster
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
## MRCNN supports only cluster-mode=4; Clustering is done by the model itself
cluster-mode=4
pre-cluster-threshold=0.8