# RetinaNet

With RetinaNet, the following tasks are supported:

• dataset_convert

• train

• evaluate

• prune

• inference

• export

These tasks can be invoked from the TAO Toolkit Launcher using the following convention on the command line:


tao retinanet <sub_task> <args_per_subtask>


where args_per_subtask are the command-line arguments required for a given subtask. Each of these subtasks is explained in detail below.
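
For example, to list the arguments accepted by the train subtask, you can pass the --help flag (this simply prints the subtask usage):

tao retinanet train --help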

## Data Input for Object Detection

The object detection apps in TAO Toolkit expect data in KITTI format for training and evaluation.
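
In KITTI format, each image has a corresponding plain-text label file with one object per line: class name, truncation, occlusion, observation angle, the 2D bounding box (left, top, right, bottom), the 3D dimensions, the 3D location, and the rotation. A representative label line (the values are illustrative only) might look like this:

car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59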

## Pre-processing the Dataset

The RetinaNet dataloader supports raw KITTI-formatted data as well as TFRecords.

To use TFRecords for optimized iteration across the data batches, the KITTI-formatted data must first be converted to the TFRecords format. This can be done using the dataset_convert subtask.

The dataset_convert tool requires a configuration file as input. Details of the configuration file and examples are included in the following sections.

### Configuration File for Dataset Converter

The dataset_convert tool provides several configurable parameters. The parameters are encapsulated in a spec file to convert data from the KITTI format to the TFRecords format. This is a prototxt format file with 3 global parameters:

• kitti_config: A nested prototxt configuration with multiple input parameters

• image_directory_path: The path to the dataset root. The image_dir_name is appended to this path to get the input images and must be the same path specified in the experiment spec file.

• target_class_mapping: The prototxt dictionary that maps the class names in the tfrecords to the target class to be trained in the network.

Here are descriptions of the configurable parameters for the kitti_config field:

| Parameter | Datatype | Default | Description | Supported Values |
| --- | --- | --- | --- | --- |
| root_directory_path | string | – | The path to the dataset root directory | – |
| image_dir_name | string | – | The relative path, from root_directory_path, to the directory containing images | – |
| label_dir_name | string | – | The relative path, from root_directory_path, to the directory containing labels | – |
| partition_mode | string | – | The method employed when partitioning the data into multiple folds. Two methods are supported: random partitioning, where the data is divided into two folds, train and val (this mode requires that the val_split parameter be set), and sequence-wise partitioning, where the data is divided into n partitions (defined by the num_partitions parameter) based on the number of sequences available | random, sequence |
| num_partitions | int | 2 (if partition_mode is random) | The number of partitions (folds) into which to split the data. This field is ignored when partition_mode is set to random, as only two partitions, train and val, are generated by default. In sequence mode, the data is split into n folds. The number of partitions should be fewer than the total number of sequences in the kitti_sequence_to_frames_file. | n=2 for random partition; n < number of sequences in the kitti_sequence_to_frames_file |
| image_extension | str | .png | The extension of the images in the image_dir_name directory | .png, .jpg, .jpeg |
| val_split | float | 20 | The percentage of data to be separated for validation. This only works in "random" partition mode. This partition is available in fold 0 of the generated TFRecords. | 0 <= x < 100 |
| kitti_sequence_to_frames_file | str | – | The name of the KITTI sequence-to-frames mapping file. This file must be present within the dataset root as specified in root_directory_path. | – |
| num_shards | int | 10 | The number of shards per fold | 1-20 |

The sample configuration file shown below converts 100% of the KITTI dataset to the training set:


kitti_config {
root_directory_path: "/workspace/tao-experiments/data/"
image_dir_name: "training/image_2"
label_dir_name: "training/label_2"
image_extension: ".png"
partition_mode: "random"
num_partitions: 2
val_split: 0
num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/data/"
target_class_mapping {
key: "car"
value: "car"
}
target_class_mapping {
key: "pedestrian"
value: "pedestrian"
}
target_class_mapping {
key: "cyclist"
value: "cyclist"
}
target_class_mapping {
key: "van"
value: "car"
}
target_class_mapping {
key: "person_sitting"
value: "pedestrian"
}
target_class_mapping {
key: "truck"
value: "car"
}


### Sample Usage of the Dataset Converter Tool

The dataset_convert tool is described below:


tao retinanet dataset_convert [-h] -d DATASET_EXPORT_SPEC
-o OUTPUT_FILENAME
[-v]


You can use the following arguments:

• -h, --help: Show this help message and exit

• -d, --dataset-export-spec: The path to the detection dataset spec containing the config for exporting .tfrecord files

• -o, --output_filename: The output filename

• -v: Enable verbose mode to show debug messages

The following example shows how to use the command with the dataset:


tao retinanet dataset_convert -d /path/to/spec.txt
-o /path/to/tfrecords/train


## Creating a Configuration File

Below is a sample RetinaNet spec file. It has six major components: retinanet_config, training_config, eval_config, nms_config, augmentation_config, and dataset_config. The format of the spec file is a protobuf text (prototxt) message, and each of its fields can be either a basic data type or a nested message. The top-level structure of the spec file is illustrated in the sample below:


random_seed: 42
retinanet_config {
aspect_ratios_global: "[1.0, 2.0, 0.5]"
scales: "[0.045, 0.09, 0.2, 0.4, 0.55, 0.7]"
two_boxes_for_ar1: false
clip_boxes: false
loss_loc_weight: 0.8
focal_loss_alpha: 0.25
focal_loss_gamma: 2.0
variances: "[0.1, 0.1, 0.2, 0.2]"
arch: "resnet"
nlayers: 18
n_kernels: 1
n_anchor_levels: 1
feature_size: 256
freeze_bn: false
freeze_blocks: 0
}
training_config {
enable_qat: False
batch_size_per_gpu: 24
num_epochs: 100
pretrain_model_path: "YOUR_PRETRAINED_MODEL"
optimizer {
sgd {
momentum: 0.9
nesterov: True
}
}
learning_rate {
soft_start_annealing_schedule {
min_learning_rate: 4e-5
max_learning_rate: 1.5e-2
soft_start: 0.15
annealing: 0.5
}
}
regularizer {
type: L1
weight: 2e-5
}
}
eval_config {
validation_period_during_training: 10
average_precision_mode: SAMPLE
batch_size: 24
matching_iou_threshold: 0.5
}
nms_config {
confidence_threshold: 0.01
clustering_iou_threshold: 0.6
top_k: 200
}
augmentation_config {
output_width: 384
output_height: 1248
output_channel: 3
image_mean {
key: 'b'
value: 103.9
}
image_mean {
key: 'g'
value: 116.8
}
image_mean {
key: 'r'
value: 123.7
}
}
dataset_config {
data_sources: {
# option 1
tfrecords_path: "/workspace/tao-experiments/data/tfrecords/kitti_train*"

# option 2
# label_directory_path: "/workspace/tao-experiments/data/training/label_2"
# image_directory_path: "/workspace/tao-experiments/data/training/image_2"
}
target_class_mapping {
key: "car"
value: "car"
}
target_class_mapping {
key: "pedestrian"
value: "pedestrian"
}
target_class_mapping {
key: "cyclist"
value: "cyclist"
}
target_class_mapping {
key: "van"
value: "car"
}
target_class_mapping {
key: "person_sitting"
value: "pedestrian"
}
validation_data_sources: {
label_directory_path: "/workspace/tao-experiments/data/val/label"
image_directory_path: "/workspace/tao-experiments/data/val/image"
}
}


### Training Config

The training configuration (training_config) defines the parameters needed for training, evaluation, and inference. Details are summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| batch_size_per_gpu | The batch size for each GPU; the effective batch size is batch_size_per_gpu * num_gpus | Unsigned int, positive | – |
| num_epochs | The number of epochs to train the network | Unsigned int, positive | – |
| enable_qat | Whether to use quantization-aware training. RetinaNet does not support loading a pruned non-QAT model and retraining it with QAT enabled, or vice versa. To get a pruned QAT model, perform the initial training with QAT enabled (enable_qat=True). | Boolean | – |
| learning_rate | Only soft_start_annealing_schedule with the following nested parameters is supported: min_learning_rate (the minimum learning rate during the entire experiment), max_learning_rate (the maximum learning rate during the entire experiment), soft_start (the time to lapse before warm-up, expressed as a fraction of progress between 0 and 1), and annealing (the time to start annealing the learning rate) | Message type | – |
| regularizer | Configures the regularizer used during training, with the following nested parameters: type (the type of regularizer to use; NVIDIA supports NO_REG, L1, or L2) and weight (the floating-point value for the regularizer weight) | Message type | L1 (NVIDIA suggests using the L1 regularizer when training a network before pruning, as L1 regularization helps make the network weights more prunable.) |
| optimizer | Can be either "adam", "sgd", or "rmsprop". Each type has the following parameters: adam (epsilon, beta1, beta2, amsgrad), sgd (momentum, nesterov), rmsprop (rho, momentum, epsilon, centered). The parameter definitions are the same as in Keras (keras.io/api/optimizers). | Message type | – |
| pretrain_model_path | The path to the pretrained model, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. | String | – |
| resume_model_path | The path to the TAO checkpoint model from which to resume training, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. | String | – |
| pruned_model_path | The path to a TAO pruned model for re-training, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. | String | – |
| checkpoint_interval | The number of training epochs per model checkpoint/validation | Unsigned int, positive | 10 |
| max_queue_size | The number of prefetch batches in data loading | Unsigned int, positive | – |
| n_workers | The number of workers for data loading (set to less than 4 when using TFRecords as data ingestion) | Unsigned int, positive | – |
| use_multiprocessing | Whether to use the multiprocessing mode of the Keras sequence data loader | Boolean | – |

Note

The learning rate is automatically scaled with the number of GPUs used during training; that is, the effective learning rate is learning_rate * n_gpu.
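
For example, to train with the Adam optimizer instead of the SGD optimizer shown in the sample spec, the optimizer block of training_config could be written as follows (the parameter values are illustrative, not recommendations):

optimizer {
  adam {
    epsilon: 1e-7
    beta1: 0.9
    beta2: 0.999
    amsgrad: false
  }
}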

### Evaluation Config

The evaluation configuration (eval_config) defines the parameters needed for the evaluation either during training or standalone. Details are summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| validation_period_during_training | The number of training epochs between validation runs | Unsigned int, positive | 10 |
| average_precision_mode | The Average Precision (AP) calculation mode, either SAMPLE or INTEGRATE. SAMPLE is the VOC metric for VOC 2009 and earlier; INTEGRATE is used for VOC 2010 and later. | ENUM (SAMPLE or INTEGRATE) | SAMPLE |
| matching_iou_threshold | The lowest IoU between a predicted box and a ground-truth box that can be considered a match | float | 0.5 |

### NMS Config

The NMS configuration (nms_config) defines the parameters needed for the NMS postprocessing. NMS config applies to the NMS layer of the model in training, validation, evaluation, inference and export. Details are summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| confidence_threshold | Boxes with a confidence score less than confidence_threshold are discarded before applying NMS | float | 0.01 |
| clustering_iou_threshold | The IoU threshold below which boxes will go through the NMS process | float | 0.6 |
| top_k | top_k boxes are output after the NMS Keras layer. If the number of valid boxes is less than top_k, the returned array is padded with boxes whose confidence score is 0. | Unsigned int | 200 |
| infer_nms_score_bits | The number of bits used to represent the score values in the NMS plugin in TensorRT OSS. The valid range is integers in [1, 10]; setting it to any other value makes it fall back to ordinary NMS. Currently this optimized NMS plugin is only available in FP16, but it should also be selected for the INT8 data type, as there is no INT8 NMS in TensorRT OSS and this fastest FP16 implementation will be selected instead. If falling back to ordinary NMS, the actual data type used when building the engine decides the exact precision (FP16 or FP32) to run at. | int, in the interval [1, 10] | 0 |
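
The infer_nms_score_bits parameter is not set in the sample spec above; as an illustrative sketch, an nms_config that opts into the optimized NMS plugin with 8-bit score representation could look like this:

nms_config {
  confidence_threshold: 0.01
  clustering_iou_threshold: 0.6
  top_k: 200
  infer_nms_score_bits: 8
}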

### Augmentation Config

The augmentation_config parameter defines the image size after preprocessing. The augmentation methods from the SSD paper are performed during training, including random flip, zoom-in, zoom-out, and color jittering, and the augmented images are resized to the output shape defined in augmentation_config. During evaluation, only the resize is performed.

Note

The details of the augmentation methods can be found in sections 2.2 and 3.6 of the SSD paper.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| output_channel | The output image channel of the augmentation pipeline | integer | – |
| output_width | The width of preprocessed images and the network input | integer, multiple of 32 | – |
| output_height | The height of preprocessed images and the network input | integer, multiple of 32 | – |
| random_crop_min_scale | The minimum patch scale of RandomCrop augmentation (default: 0.3) | float <= 1.0 | – |
| random_crop_max_scale | The maximum patch scale of RandomCrop augmentation (default: 1.0) | float >= 1.0 | – |
| random_crop_min_ar | The minimum aspect ratio of RandomCrop augmentation (default: 0.5) | float > 0 | – |
| random_crop_max_ar | The maximum aspect ratio of RandomCrop augmentation (default: 2.0) | float > 0 | – |
| zoom_out_min_scale | The minimum scale of ZoomOut augmentation (default: 1.0) | float >= 1.0 | – |
| zoom_out_max_scale | The maximum scale of ZoomOut augmentation (default: 4.0) | float >= 1.0 | – |
| brightness | The brightness delta in color jittering augmentation (default: 32) | integer >= 0 | – |
| contrast | The contrast delta factor in color jittering augmentation (default: 0.5) | float in [0, 1) | – |
| saturation | The saturation delta factor in color jittering augmentation (default: 0.5) | float in [0, 1) | – |
| hue | The hue delta in color jittering augmentation (default: 18) | integer >= 0 | – |
| random_flip | The probability of performing a random horizontal flip (default: 0.5) | float in [0, 1) | – |
| image_mean | A key/value pair to specify image mean values. If omitted, the ImageNet mean is used for image preprocessing. If set, depending on output_channel, either the 'r'/'g'/'b' or the 'l' key/value pairs must be configured. | dict | – |

Note

If random_crop_min_scale = random_crop_max_scale = 1.0, the RandomCrop augmentation is disabled. Similarly, if zoom_out_min_scale = zoom_out_max_scale = 1, the ZoomOut augmentation is disabled. If all color jitter delta values are set to 0, the color jittering augmentation is disabled.
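
For example, following the note above, an illustrative augmentation_config that disables both RandomCrop and ZoomOut (with the other fields as in the sample spec) would set the scale bounds like this:

augmentation_config {
  output_width: 384
  output_height: 1248
  output_channel: 3
  random_crop_min_scale: 1.0
  random_crop_max_scale: 1.0
  zoom_out_min_scale: 1.0
  zoom_out_max_scale: 1.0
}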

### Dataset Config

The RetinaNet dataloader assumes the data is prepared in KITTI format (images and labels in two separate folders, where each image in the image folder has a .txt label file with the same filename in the label folder, and the label file content follows the KITTI format) and that the training/validation split is already done.

The parameters in dataset_config are defined as follows:

• data_sources: Captures the path to the datasets to train on. If you have multiple data sources for training, you may use multiple data_sources. This field contains three parameters:

  • label_directory_path: The path to the data source label folder

  • image_directory_path: The path to the data source image folder

  • tfrecords_path: The path to the TFRecords

When using raw KITTI-formatted data as input, only label_directory_path and image_directory_path are required. When using TFRecords as data ingestion, only tfrecords_path is required.

• include_difficult_in_training: Whether to include difficult boxes in training. If set to false, difficult boxes are ignored. Difficult boxes are those with occlusion level 2 in the KITTI labels. (This is only applicable with raw KITTI-formatted data.)

• target_class_mapping: This parameter maps the class names in the labels to the target class to be trained in the network. An element is defined for every source class to target class mapping. This field was included with the intention of grouping similar class objects under one umbrella. For example: car, van, heavy_truck etc. may be grouped under automobile. The “key” field is the value of the class name in the tfrecords file, and the “value” field corresponds to the value that the network is expected to learn.

• validation_data_sources: Captures the path to datasets to validate on. If you have multiple data sources for validation, you may use multiple validation_data_sources. This field contains 2 parameters:

• label_directory_path: Path to the data source label folder

• image_directory_path: Path to the data source image folder

Note

The class names key in the target_class_mapping must be identical to the one shown in the KITTI labels so that the correct classes are picked up for training.
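
The snippet below is an illustrative dataset_config with two raw-KITTI data sources (all directory paths are placeholders) and difficult boxes excluded from training:

dataset_config {
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/site_a/training/label_2"
    image_directory_path: "/workspace/tao-experiments/data/site_a/training/image_2"
  }
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/site_b/training/label_2"
    image_directory_path: "/workspace/tao-experiments/data/site_b/training/image_2"
  }
  include_difficult_in_training: false
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "pedestrian"
    value: "pedestrian"
  }
  validation_data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/val/label"
    image_directory_path: "/workspace/tao-experiments/data/val/image"
  }
}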

### RetinaNet Config

The RetinaNet configuration (retinanet_config) defines the parameters needed for building the RetinaNet model. Details are summarized in the table below.

Focal loss is calculated as follows:

FL(p_t) = -α_t * (1 - p_t)^γ * log(p_t)

where γ corresponds to focal_loss_gamma and α_t to focal_loss_alpha in the spec.

Variances: the variances parameter scales the encoded anchor-box regression targets; the center offsets and the log-scale width/height offsets are each divided by the corresponding variance value when encoding (and multiplied back when decoding).

## Training the Model

Train the RetinaNet model using this command:


tao retinanet train [-h] -e <experiment_spec>
-r <output_dir>
-k <key>
[--gpus <num_gpus>]
[--gpu_index <gpu_index>]
[--use_amp]
[--log_file <log_file_path>]


### Required Arguments

• -r, --results_dir: Path to the folder where the experiment output is written.

• -k, --key: Provide the encryption key to decrypt the model.

• -e, --experiment_spec_file: The experiment specification file used to set up the training experiment.

### Optional Arguments

• --gpus: The number of GPUs to be used in the training in a multi-GPU scenario (default: 1).

• --gpu_index: The GPU indices used to run training. Specify this when the machine has multiple GPUs installed and you want to select which GPUs to use.

• --use_amp: A flag to enable AMP training.

• --log_file: Path to the log file. Defaults to stdout.

• -h, --help: Show this help message and exit.

### Input Requirement

• Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128, W, H are multiples of 32)

• Image format: JPG, JPEG, PNG

• Label format: KITTI detection

### Sample Usage

Here’s an example of using the train command on a RetinaNet model:
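
A minimal invocation could look like the following; the spec file path, results directory, encryption key, and GPU count are placeholders to adapt to your setup:

tao retinanet train -e /workspace/tao-experiments/retinanet/specs/retinanet_train_resnet18_kitti.txt \
                    -r /workspace/tao-experiments/retinanet/experiment_dir_unpruned \
                    -k $KEY \
                    --gpus 2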




## Running Inference on a RetinaNet Model

The inference tool for RetinaNet networks can be used to visualize bboxes or generate frame-by-frame KITTI-format labels on a directory of images. Two modes are supported: TAO model mode and TensorRT engine mode. You can execute the TAO model mode using the following command:


tao retinanet inference [-h] -i <input directory>
-o <output annotated image directory>
-e <experiment spec file>
-m <model file>
-k <key>
[-l <output label directory>]
[-t <visualization threshold>]
[--gpu_index <gpu_index>]
[--log_file <log_file_path>]


### Required Arguments

• -m, --model: Path to the pretrained model (supports both the TAO model and TensorRT engine).

• -i, --in_image_dir: The directory of input images for inference.

• -o, --out_image_dir: The directory path to output annotated images.

• -k, --key: Key to load a TAO model (it’s not needed if a TensorRT engine is used).

• -e, --config_path: Path to an experiment spec file for training.

### Optional Arguments

• -t, --threshold: The confidence threshold for drawing a bbox (default: 0.3).

• -l, --out_label_dir: The directory to output KITTI labels.

• --gpu_index: The GPU index to run inference on. Specify this if the machine has multiple GPUs installed. Note that inference can only run on a single GPU.

• --log_file: Path to the log file. Defaults to stdout.

• -h, --help: Show this help message and exit

### Sample Usage

Here’s an example of using the inference command on a RetinaNet model:
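
An illustrative invocation (all paths and the key are placeholders) that writes annotated images and KITTI-format labels could look like this:

tao retinanet inference -i /workspace/tao-experiments/data/test_samples \
                        -o /workspace/tao-experiments/retinanet/inference_annotated \
                        -l /workspace/tao-experiments/retinanet/inference_labels \
                        -e /workspace/tao-experiments/retinanet/specs/retinanet_train_resnet18_kitti.txt \
                        -m /workspace/tao-experiments/retinanet/experiment_dir_unpruned/weights/retinanet_resnet18_epoch_100.tlt \
                        -k $KEY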




## Re-training the Pruned Model

Once the model has been pruned, there might be a slight decrease in accuracy because some previously useful weights may have been removed. To regain the accuracy, NVIDIA recommends retraining the pruned model over the same dataset. To do this, use the tao retinanet train command as documented in the Training the Model section, with an updated spec file that points to the newly pruned model via the pruned_model_path parameter.

Users are advised to turn off the regularizer in the training_config to recover accuracy when retraining a pruned model. You can do this by setting the regularizer type to NO_REG, as described in the Training Config section. All other parameters may be retained from the previous training's spec file.

Note

RetinaNet does not support loading a pruned non-QAT model and retraining it with QAT enabled, or vice versa. For example, to get a pruned QAT model, perform the initial training with QAT enabled or enable_qat=True.
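
As a sketch, the relevant training_config changes for retraining could look like the following (the pruned model path is a placeholder):

training_config {
  # ... other parameters retained from the previous training ...
  pruned_model_path: "/workspace/tao-experiments/retinanet/experiment_dir_pruned/retinanet_resnet18_pruned.tlt"
  regularizer {
    type: NO_REG
    weight: 0.0
  }
}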

## Exporting the Model

Exporting the model decouples the training process from inference and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment, while the exported model may be used universally across training and deployment hardware. The exported model format is referred to as .etlt. Like .tlt, the .etlt model format is an encrypted model format, using the same key as the .tlt model from which it is exported. This key is required when deploying the model.

### INT8 Mode Overview

TensorRT engines can be generated in INT8 mode to improve performance, but they require a calibration cache at engine creation time. The calibration cache is generated using a calibration tensorfile when tao retinanet export is run with the --data_type flag set to int8. Pre-generating the calibration information and caching it removes the need for calibrating the model on the inference machine. Moving the calibration cache is usually much more convenient than moving the calibration tensorfile, since it is a much smaller file and can be moved with the exported model. Using the calibration cache also speeds up engine creation, as building the cache can take several minutes depending on the size of the tensorfile and the model itself.

The export tool can generate INT8 calibration cache by ingesting training data using either of these options:

• Option 1: Using the training data loader to load the training images for INT8 calibration. This option is now the recommended approach to support multiple image directories by leveraging the training dataset loader. This also ensures two important aspects of data during calibration:

• Data pre-processing in the INT8 calibration step is the same as in the training process.

• The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.

• Option 2: Pointing the tool to a directory of images that you want to use to calibrate the model. For this option, make sure to create a sub-sampled directory of random images that best represent your training dataset.

### FP16/FP32 Model

The calibration.bin file is only required if you need to run inference at INT8 precision. For FP16/FP32-based inference, the export step is much simpler: all that is required is to provide a .tlt model from the training/retraining step to be converted into an .etlt file.
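
For example, an FP16 export (the paths mirror the INT8 sample later in this section; the key is a placeholder) only needs the model, spec, key, and data type:

tao retinanet export -m /ws/retinanet_resnet18_epoch_100.tlt \
                     -o /ws/retinanet_resnet18_epoch_100_fp16.etlt \
                     -e /ws/retinanet_retrain_resnet18_kitti.txt \
                     -k $KEY \
                     --data_type fp16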

### Exporting the RetinaNet Model

Here’s an example of the command line arguments of the tao retinanet export command:


tao retinanet export [-h] -m <path to the .tlt model file>
--experiment_spec <path to experiment spec file>
-k <key>
[-o <path to output file>]
[--cal_data_file <path to tensor file>]
[--cal_image_dir <path to the directory of images used to calibrate the model>]
[--cal_cache_file <path to output calibration file>]
[--data_type <Data type for the TensorRT backend during export>]
[--batches <Number of batches to calibrate over>]
[--max_batch_size <maximum trt batch size>]
[--max_workspace_size <maximum workspace size>]
[--batch_size <batch size to TensorRT engine>]
[--engine_file <path to the TensorRT engine file>]
[--gen_ds_config]
[--strict_type_constraints]
[--force_ptq]
[--gpu_index <gpu_index>]
[--log_file <log_file_path>]
[--verbose]


#### Required Arguments

• -m, --model: Path to the .tlt model file to be exported.

• -k, --key: Key used to save the .tlt model file.

• -e, --experiment_spec: Path to the spec file.

#### Optional Arguments

• -o, --output_file: Path to save the exported model to. The default is ./<input_file>.etlt.

• --data_type: The desired engine data type; a calibration cache is generated if in INT8 mode. The options are {fp32, fp16, int8}; the default value is fp32. If using int8, the INT8 arguments listed below are required.

• -s, --strict_type_constraints: A Boolean flag to indicate whether or not to apply the TensorRT strict type constraints when building the TensorRT engine.

• --gen_ds_config: A Boolean flag indicating whether to generate the template DeepStream related configuration (“nvinfer_config.txt”) as well as a label file (“labels.txt”) in the same directory as the output_file. Note that the config file is NOT a complete configuration file and requires the user to update the sample config files in DeepStream with the parameters generated.

• --gpu_index: The index of the (discrete) GPU to use for exporting the model. Specify this if the machine has multiple GPUs installed. Note that export can only run on a single GPU.

• --log_file: Path to the log file. Defaults to stdout.

• -h, --help: Show this help message and exit.

### INT8 Export Mode Required Arguments

• --cal_data_file: tensorfile generated for calibrating the engine. This can also be an output file if used with --cal_image_dir.

• --cal_image_dir: Directory of images to use for calibration.

Note

The export tool pulls images from the directory specified by the --cal_image_dir parameter and applies the necessary preprocessing to generate a tensorfile at the path specified by the --cal_data_file parameter, which is in turn used for calibration. The number of batches in the generated tensorfile is obtained from the value set for the --batches parameter, and the batch size is obtained from the value set for the --batch_size parameter. Be sure that the directory specified in --cal_image_dir contains at least batch_size * batches images. The valid image extensions are .jpg, .jpeg, and .png. In this case, the input dimensions of the calibration tensors are derived from the input layer of the .tlt model.

### INT8 Export Optional Arguments

• --cal_cache_file: Path to save the calibration cache file. The default value is ./cal.bin.

• --batches: Number of batches to use for calibration and inference testing. The default value is 10.

• --batch_size: Batch size to use for calibration. The default value is 8.

• --max_batch_size: Maximum batch size of TensorRT engine. The default value is 16.

• --max_workspace_size: The maximum workspace size of the TensorRT engine. The default value is 1073741824 (1 << 30).

• --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs. It is useful for quickly testing your model accuracy using TensorRT on the host. Because the TensorRT engine file is hardware specific, you cannot use it for deployment unless the deployment GPU is identical to the training GPU.

• --force_ptq: A Boolean flag to force post-training quantization on the exported .etlt model.

Note

When exporting a model trained with QAT enabled, the tensor scale factors used to calibrate the activations are peeled out of the model and serialized to a TensorRT-readable cache file defined by the cal_cache_file argument. However, note that the current version of QAT doesn't natively support DLA INT8 deployment on Jetson. To deploy this model on a Jetson with DLA INT8, use the --force_ptq flag so that TensorRT post-training quantization is used to generate the calibration cache file.

### Sample usage

Here’s a sample command to export a RetinaNet model in INT8 mode.


tao retinanet export -m /ws/retinanet_resnet18_epoch_100.tlt  \
-o /ws/retinanet_resnet18_epoch_100_int8.etlt \
-e /ws/retinanet_retrain_resnet18_kitti.txt \
-k $KEY \
--cal_image_dir /ws/data/training/image_2 \
--data_type int8 \
--batch_size 1 \
--batches 10 \
--cal_cache_file /export/cal.bin \
--cal_data_file /export/cal.tensorfile


## Deploying to DeepStream

The deep learning and computer vision models that you’ve trained can be deployed on edge devices, such as a Jetson Xavier or Jetson Nano, a discrete GPU, or in the cloud with NVIDIA GPUs. TAO Toolkit has been designed to integrate with DeepStream SDK, so models trained with TAO Toolkit will work out of the box with DeepStream SDK. DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. This section describes how to deploy your trained model to DeepStream SDK.

To deploy a model trained by TAO Toolkit to DeepStream, we have two options:

• Option 1: Integrate the .etlt model directly in the DeepStream app. The model file is generated by export.

• Option 2: Generate a device-specific optimized TensorRT engine using tao-converter. The generated TensorRT engine file can also be ingested by DeepStream.

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.

Option 1 is very straightforward. The .etlt file and calibration cache are directly used by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run inference. TensorRT engine generation can take some time depending on the size of the model and the type of hardware. With Option 2, engine generation can be done ahead of time: the tao-converter is used to convert the .etlt file to a TensorRT engine, and this file is then provided directly to DeepStream. See the Exporting the Model section for more details on how to export a TAO model.

### TensorRT Open Source Software (OSS)

A TensorRT OSS build is required for RetinaNet models because several TensorRT plugins that these models require are only available in the TensorRT open source repo and not in the general TensorRT release. Specifically, for RetinaNet, we need the batchTilePlugin and NMSPlugin.

If the deployment platform is x86 with an NVIDIA GPU, follow the instructions for x86; if your deployment is on an NVIDIA Jetson platform, follow the instructions for Jetson.

#### TensorRT OSS on x86

Building TensorRT OSS on x86:

1. Install Cmake (>=3.13).

Note

TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.

sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


2. Get GPU architecture. The GPU_ARCHS value can be retrieved by the deviceQuery CUDA sample:


cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery


If the /usr/local/cuda/samples directory doesn’t exist on your system, you can download deviceQuery.cpp from this GitHub repo. Compile and run deviceQuery:


nvcc deviceQuery.cpp -o deviceQuery
./deviceQuery


This command outputs something like the following, which indicates that GPU_ARCHS is 75, based on the CUDA Capability major/minor version:


Detected 2 CUDA Capable device(s)

Device 0: "Tesla T4"
CUDA Driver Version / Runtime Version          10.2 / 10.2
CUDA Capability Major/Minor version number:    7.5


3. Build TensorRT OSS:


git clone -b 21.08 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build


Note

Make sure your GPU_ARCHS from step 2 is in TensorRT OSS CMakeLists.txt. If GPU_ARCHS is not in TensorRT OSS CMakeLists.txt, add -DGPU_ARCHS=<VER> as below, where <VER> represents GPU_ARCHS from step 2.

/usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)


After building ends successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so*:


sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   // backup original libnvinfer_plugin.so.x.y
sudo cp $TRT_SOURCE/`pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y
sudo ldconfig


#### TensorRT OSS on Jetson (ARM64)

1. Install Cmake (>=3.13)

Note

TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2.


sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


2. Get the GPU architecture based on your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table.

| Jetson Platform | GPU_ARCHS |
| --- | --- |
| Nano/Tx1 | 53 |
| Tx2 | 62 |
| AGX Xavier/Xavier NX | 72 |

3. Build TensorRT OSS:

git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build


Note

The -DGPU_ARCHS=72 below is for Xavier or NX; for other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.


/usr/local/bin/cmake .. -DGPU_ARCHS=72  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)


After building ends successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so* with the newly generated file:

sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   // backup original libnvinfer_plugin.so.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y
sudo ldconfig


### Generating an Engine Using tao-converter

The tao-converter tool is provided with the TAO Toolkit to facilitate the deployment of TAO trained models on TensorRT and/or Deepstream. This section elaborates on how to generate a TensorRT engine using tao-converter.

For deployment platforms with an x86-based CPU and discrete GPUs, the tao-converter is distributed within the TAO docker. Therefore, we suggest using the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TAO docker includes TensorRT version 8.0.

#### Instructions for x86

For an x86 platform with discrete GPUs, the default TAO package includes the tao-converter built for TensorRT 8.0 with CUDA 11.3 and CUDNN 8.2. However, for any other version of CUDA and TensorRT, please refer to the overview section for download. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

1. Unzip the zip file on the target machine.

2. Install the OpenSSL package using the command:


sudo apt-get install libssl-dev


3. Export the following environment variables:


export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"


4. Run the tao-converter using the sample command below and generate the engine.

5. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.

Note

Make sure to follow the output node names as mentioned in Exporting the Model section of the respective model.

#### Instructions for Jetson

For the Jetson platform, the tao-converter is available to download in the NVIDIA developer zone. You may choose the version you wish to download as listed in the overview section. Once the tao-converter is downloaded, please follow the instructions below to generate a TensorRT engine.

1. Unzip the zip file on the target machine.

2. Install the OpenSSL package using the command:


sudo apt-get install libssl-dev


3. Export the following environment variables:


export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"


4. For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older JetPack version, upgrade to JetPack 4.5 or 4.6.

5. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.

6. Run the tao-converter using the sample command below and generate the engine.

Note

Make sure to follow the output node names as mentioned in Exporting the Model section of the respective model.

#### Using the tao-converter


tao-converter [-h] -k <encryption_key>
-d <input_dimensions>
-o <comma separated output nodes>
[-c <path to calibration cache file>]
[-e <path to output engine>]
[-b <calibration batch size>]
[-m <maximum batch size of the TRT engine>]
[-t <engine datatype>]
[-w <maximum workspace size of the TRT Engine>]
[-i <input dimension ordering>]
[-p <optimization_profiles>]
[-s]
[-u <DLA_core>]
input_file


##### Required Arguments
• input_file: Path to the .etlt model exported using tao retinanet export.

• -k: The key used to encode the .tlt model when doing the training.

• -d: Comma-separated list of input dimensions that should match the dimensions used for tao retinanet export.

• -o: Comma-separated list of output blob names that should match the output configuration used for tao retinanet export. For RetinaNet, set this argument to NMS.

##### Optional Arguments
• -e: Path to save the engine to. (default: ./saved.engine)

• -t: Desired engine data type, generates calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.

• -w: Maximum workspace size for the TensorRT engine. The default value is 1073741824(1<<30).

• -i: The input dimension ordering; all other TAO commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}. For RetinaNet, this argument can be omitted (it defaults to nchw).

• -p: Optimization profiles for .etlt models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. It can be specified multiple times if there are multiple input tensors for the model. This is only useful for new models introduced in version 3.0 or later; it is not required for models that already existed in version 2.0.

• -s: TensorRT strict type constraints. A Boolean to apply TensorRT strict type constraints when building the TensorRT engine.

• -u: The DLA core to use. Specifies the DLA core index when building the TensorRT engine on Jetson devices.

##### INT8 Mode Arguments
• -c: Path to calibration cache file, only used in INT8 mode. The default value is ./cal.bin.

• -b: Batch size used during the export step for INT8 calibration cache generation. (default: 8).

• -m: The maximum batch size for the TensorRT engine (default: 16). If you encounter out-of-memory issues, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape (which is only possible for new models introduced since version 3.0).

##### Sample Output Log

Here is a sample command and output log for converting an exported RetinaNet model:


tao-converter -k $KEY  \
-d 3,384,1248 \
-o NMS \
-e /export/trt.fp16.engine \
-t fp16 \
-i nchw \
-m 1 \
/ws/retinanet_resnet18_epoch_100.etlt
..
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.


### Integrating the model to DeepStream

There are 2 options to integrate models from TAO with DeepStream:

• Option 1: Integrate the model (.etlt) with the encrypted key directly in the DeepStream app. The model file is generated by tao retinanet export.

• Option 2: Generate a device specific optimized TensorRT engine, using tao-converter. The TensorRT engine file can also be ingested by DeepStream.

For RetinaNet, we will need to build TensorRT Open source plugins and custom bounding box parser. The instructions are provided below in the TensorRT OSS section above and the required code can be found in this GitHub repo.

In order to integrate the models with DeepStream, you need the following:

1. Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide.

2. An exported .etlt model file and optional calibration cache for INT8 precision.

3. A labels.txt file containing the labels for the classes, in the order in which the network produces outputs.

4. A sample config_infer_*.txt file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream.

DeepStream SDK ships with an end-to-end reference application which is fully configurable. Users can configure input sources, inference model and output sinks. The app requires a primary object detection model, followed by an optional secondary classification model. The reference application is installed as deepstream-app. The graphic below shows the architecture of the reference application.

There are typically two or more configuration files used with this app. In the install directory, the config files are located in samples/configs/deepstream-app or samples/configs/tlt_pretrained_models. The main config file configures all the high-level parameters in the pipeline above: the input source and resolution, the number of inferences, the tracker, and the output sinks. The other supporting config files, one for each individual inference engine, specify the model, inference resolution, batch size, number of classes, and other customizations. The main config file references all the supporting config files. Here are some config files in samples/configs/deepstream-app for your reference.

• source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: Main config file

• config_infer_primary.txt: Supporting config file for primary detector in the pipeline above

• config_infer_secondary_*.txt: Supporting config file for secondary classifier in the pipeline above

The deepstream-app will only work with the main config file. This file will most likely remain the same for all models and can be used directly from the DeepStream SDK with little to no change. Users will only have to modify or create config_infer_primary.txt and config_infer_secondary_*.txt.

#### Integrating a RetinaNet Model

To run a RetinaNet model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ open source software and the RetinaNet bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained RetinaNet models from TAO Toolkit is provided on GitHub here.

##### Prerequisite for RetinaNet Model
1. RetinaNet requires the batchTilePlugin and NMS_TRT plugins. These plugins are available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

2. RetinaNet requires custom bounding box parsers that are not built in to the DeepStream SDK. The source code to build custom bounding box parsers for RetinaNet is available here. The following instructions can be used to build the bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2):


curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install


Step 2: Download the source code:

git clone -b release/tlt3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps


Step 3: Build


// or Path for DS installation
export CUDA_VER=10.2         // CUDA version, e.g. 10.2
make


This generates libnvds_infercustomparser_tlt.so in the directory post_processor.

### Label File

The label file is a text file containing the names of the classes that the RetinaNet model is trained to detect. The order in which the classes are listed must match the order in which the model predicts the outputs. During training, TAO RetinaNet converts all class names to lowercase and sorts them in alphabetical order. For example, if the dataset_config is:


dataset_config {
data_sources: {
label_directory_path: "/workspace/tao-experiments/data/training/label_2"
image_directory_path: "/workspace/tao-experiments/data/training/image_2"
}
target_class_mapping {
key: "car"
value: "car"
}
target_class_mapping {
key: "person"
value: "person"
}
target_class_mapping {
key: "bicycle"
value: "bicycle"
}
validation_data_sources: {
label_directory_path: "/workspace/tao-experiments/data/val/label"
image_directory_path: "/workspace/tao-experiments/data/val/image"
}
}


Then the corresponding retinanet_labels.txt file would be:


background
bicycle
car
person


### DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model.

Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision.


tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>


The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate TensorRT engine using tao-converter. Detailed instructions are provided in the Generating an engine using tao-converter section above.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.


model-engine-file=<PATH to generated TensorRT engine>


All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file:


parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<PATH to libnvds_infercustomparser_tlt.so>


Add the label file generated above using:


labelfile-path=<retinanet labels>


For all the options, see the sample configuration file below. To learn about what all the parameters are used for, refer to the DeepStream Development Guide.


[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to retinanet_labels.txt>
tlt-encoded-model=<Path to RetinaNet etlt model>
tlt-model-key=<Key to decrypt model>
infer-dims=3;384;1248
uff-input-order=0
maintain-aspect-ratio=1
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<Path to libnvds_infercustomparser_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0