# DetectNet_v2

DetectNet_v2 is an NVIDIA-developed object-detection model that is included in the TAO Toolkit. DetectNet_v2 supports the following tasks:

• dataset_convert

• train

• evaluate

• inference

• prune

• calibration_tensorfile

• export

These tasks can be invoked from the TAO Toolkit Launcher using the following convention on the command-line:


```
tao detectnet_v2 <sub_task> <args_per_subtask>
```


where args_per_subtask are the command-line arguments required for a given subtask. Each subtask is explained in detail in the following sections.

NVIDIA recommends following the workflow in the diagram below to generate a trained and optimized DetectNet_v2 model.

## Data Input for Object Detection

The object detection apps in TAO Toolkit expect data in KITTI format for training and evaluation.

## Pre-processing the Dataset

The DetectNet_v2 app requires the raw input data to be converted to TFRecords for optimized iteration across the data batches. This can be done using the dataset_convert subtask under DetectNet_v2. Currently, the KITTI and COCO formats are supported.

The dataset_convert tool requires a configuration file as input. Details of the configuration file and examples are included in the following sections.

### Configuration File for Dataset Converter

The dataset_convert tool provides several configurable parameters, encapsulated in a spec file that drives conversion from the original annotation format to the TFRecords format the trainer ingests. The KITTI and COCO formats are configured using kitti_config or coco_config, respectively; only one of the two may be used in a single spec file. The spec file is a prototxt-format file with the following global parameters:

• kitti_config: A nested prototxt configuration with multiple input parameters

• coco_config: A nested prototxt configuration with multiple input parameters

• image_directory_path: The path to the dataset root. The image_dir_name is appended to this path to get the input images and must be the same path specified in the experiment spec file.

• target_class_mapping: The prototxt dictionary that maps the class names in the tfrecords to the target class to be trained in the network.

#### kitti_config

Here are descriptions of the configurable parameters for the kitti_config field:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| root_directory_path | string | | The path to the dataset root directory | |
| image_dir_name | string | | The relative path from root_directory_path to the directory containing images | |
| label_dir_name | string | | The relative path from root_directory_path to the directory containing labels | |
| partition_mode | string | | The method employed when partitioning the data into multiple folds. Two methods are supported:<br>• Random partitioning: The data is divided into two folds, train and val. This mode requires that the val_split parameter be set.<br>• Sequence-wise partitioning: The data is divided into n partitions (defined by the num_partitions parameter) based on the number of sequences available. | random, sequence |
| num_partitions | int | 2 (if partition_mode is random) | The number of partitions (folds) to split the data into. This field is ignored when partition_mode is set to random, since in that case only two partitions are generated: val and train. In sequence mode, the data is split into n folds. The number of partitions should ideally be fewer than the total number of sequences in the kitti_sequence_to_frames_file. | n=2 for random partitioning; n < number of sequences in the kitti_sequence_to_frames_file |
| image_extension | str | .png | The extension of the images in the image_dir_name directory | .png, .jpg, .jpeg |
| val_split | float | 20 | The percentage of data to set aside for validation. This only applies in random partition mode. This partition is written to fold 0 of the generated TFRecords; set the validation fold to 0 in the dataset_config. | 0-100 |
| kitti_sequence_to_frames_file | str | | The name of the KITTI sequence-to-frames mapping file. This file must be present within the dataset root, as specified in root_directory_path. | |
| num_shards | int | 10 | The number of shards per fold | 1-20 |

The sample configuration file shown below converts 100% of the KITTI dataset to the training set.


```
kitti_config {
  root_directory_path: "/workspace/tao-experiments/data/"
  image_dir_name: "training/image_2"
  label_dir_name: "training/label_2"
  image_extension: ".png"
  partition_mode: "random"
  num_partitions: 2
  val_split: 0
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/data/"
target_class_mapping {
  key: "car"
  value: "car"
}
target_class_mapping {
  key: "pedestrian"
  value: "pedestrian"
}
target_class_mapping {
  key: "cyclist"
  value: "cyclist"
}
target_class_mapping {
  key: "van"
  value: "car"
}
target_class_mapping {
  key: "person_sitting"
  value: "pedestrian"
}
target_class_mapping {
  key: "truck"
  value: "car"
}
```


#### coco_config

Here are descriptions of the configurable parameters for the coco_config field:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| root_directory_path | string | | The path to the dataset root directory | |
| img_dir_names | string (repeated) | | The relative path from root_directory_path to the directory containing images for each partition | |
| annotation_files | string (repeated) | | The relative path from root_directory_path to the annotation JSON file for each partition | |
| num_partitions | int | 2 | The number of partitions in the data. This must match the length of the img_dir_names and annotation_files lists. By default, two partitions are generated: val and train. | n == len(annotation_files) |
| num_shards | int (repeated) | [10] | The number of shards per partition. If only one value is provided, the same number of shards is applied to all partitions. | |

The sample configuration file shown below converts a COCO dataset with training and validation data, using 32 shards for validation and 256 for training.


```
coco_config {
  root_directory_path: "/workspace/tao-experiments/data/coco"
  img_dir_names: ["val2017", "train2017"]
  annotation_files: ["annotations/instances_val2017.json", "annotations/instances_train2017.json"]
  num_partitions: 2
  num_shards: [32, 256]
}
image_directory_path: "/workspace/tao-experiments/data/coco"
```


### Sample Usage of the Dataset Converter Tool

While KITTI is the accepted dataset format for object detection, the DetectNet_v2 trainer requires this data to be converted to TFRecord files for ingestion. The dataset_convert tool is described below:


```
tao detectnet_v2 dataset_convert [-h] -d DATASET_EXPORT_SPEC -o OUTPUT_FILENAME
                                 [-f VALIDATION_FOLD]
```


You can use the following optional arguments:

• -h, --help: Show this help message and exit

• -d, --dataset-export-spec: The path to the detection dataset spec containing the config for exporting .tfrecord files

• -o output_filename: The output filename

• -f, --validation-fold: The validation fold, in 0-based indexing. This argument is required when modifying the training set but is otherwise optional.

The following example shows how to use the command with the dataset:


```
tao detectnet_v2 dataset_convert [-h] -d <path_to_tfrecords_conversion_spec>
                                 -o <path_to_output_tfrecords>
```


The following is the output log from executing tao detectnet_v2 dataset_convert:


```
Using TensorFlow backend.
2019-07-16 01:30:59,073 - iva.detectnet_v2.dataio.build_converter - INFO - Instantiating a kitti converter
2019-07-16 01:30:59,243 - iva.detectnet_v2.dataio.kitti_converter_lib - INFO - Num images in
Train: 10786    Val: 2696
2019-07-16 01:30:59,243 - iva.detectnet_v2.dataio.kitti_converter_lib - INFO - Validation data in partition 0. Hence, while choosing the validation set during training choose validation_fold 0.
2019-07-16 01:30:59,251 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Writing partition 0, shard 0
/usr/local/lib/python2.7/dist-packages/iva/detectnet_v2/dataio/kitti_converter_lib.py:265: VisibleDeprecationWarning: Reading unicode strings without specifying the encoding argument is deprecated. Set the encoding, use None for the system default.
2019-07-16 01:31:01,226 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Writing partition 0, shard 1
. .
sheep: 242
bottle: 205
..
boat: 171
car: 418
2019-07-16 01:31:20,772 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Writing partition 1, shard 0
..
2019-07-16 01:32:40,338 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Writing partition 1, shard 9
2019-07-16 01:32:49,063 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO -
Wrote the following numbers of objects:
sheep: 695
..
car: 1770

2019-07-16 01:32:49,064 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Cumulative object statistics
2019-07-16 01:32:49,064 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO -
Wrote the following numbers of objects:
sheep: 937
..
car: 2188
2019-07-16 01:32:49,064 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Class map.
Label in GT: Label in tfrecords file
sheep: sheep
..

boat: boat
For the dataset_config in the experiment_spec, please use labels in the tfrecords file, while writing the classmap.

2019-07-16 01:32:49,064 - iva.detectnet_v2.dataio.dataset_converter_lib - INFO - Tfrecords generation complete.
```


Note

The dataset_convert tool converts the class names in the KITTI-formatted data files to lowercase characters. Therefore, when configuring a training experiment, ensure that lowercase class names are used in the dataset_config section under target_class_mapping. Using incorrect class names in the dataset_config section can cause invalid training experiments with 0 mAP.

Note

When using the dataset_convert tool to create separate TFRecords for evaluation, which may be defined under dataset_config using the parameter validation_data_source, we recommend setting the partition_mode to random with 2 partitions and an arbitrary val_split (1-100). The dataloader takes care of traversing through all the folds and generating the mAP accordingly.
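Following the note above, a minimal dataset_convert spec for generating evaluation TFRecords might look like the following sketch. The testing/image_2 and testing/label_2 directory names are illustrative placeholders; substitute the paths of your own held-out data.

```
kitti_config {
  root_directory_path: "/workspace/tao-experiments/data/"
  image_dir_name: "testing/image_2"
  label_dir_name: "testing/label_2"
  image_extension: ".png"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/data/"
```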

## Creating a Configuration File

To perform training, evaluation, and inference for DetectNet_v2, you need to configure several components, each with their own parameters. The train and evaluate tasks for a DetectNet_v2 experiment share the same configuration file. The inference task uses a separate configuration file.

The specification file for DetectNet_v2 training configures these components of the training pipeline:

• Model

• BBox ground truth generation

• Post-processing module

• Cost function configuration

• Trainer

• Augmentation model

• Evaluator

### Model Config

The core object-detection model can be configured using the model_config option in the spec file.

The following is a sample model config that instantiates a ResNet-18 model with pretrained weights, freezes blocks 0 and 1, and sets all shortcuts to projection layers.


```
# Sample model config to instantiate a resnet18 model with pretrained weights
# and freeze blocks 0, 1, with all shortcuts having projection layers.
model_config {
  arch: "resnet"
  pretrained_model_file: <path_to_model_file>
  freeze_blocks: 0
  freeze_blocks: 1
  all_projections: True
  num_layers: 18
  use_pooling: False
  use_batch_norm: True
  dropout_rate: 0.0
  objective_set: {
    cov {}
    bbox {
      scale: 35.0
      offset: 0.5
    }
  }
}
```


The following table describes the model_config parameters:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| all_projections | bool | False | For templates with shortcut connections, defines whether all shortcuts are instantiated with 1x1 projection layers, irrespective of whether there is a change in stride between the input and the output. | True or False (only to be used with ResNet templates) |
| arch | string | resnet | The architecture of the backbone feature extractor to be used for training. | resnet, vgg, mobilenet_v1, mobilenet_v2 |
| num_layers | int | 18 | The depth of the feature extractor for scalable templates. | resnet: 10, 18, 34, 50, 101<br>vgg: 16, 19 |
| pretrained_model_file | string | | The path to a pretrained TAO model file. If the load_graph flag is set to False, only the weights of the pretrained model file are used: TAO train constructs the feature extractor graph for the experiment and loads the weights from layers of the pretrained model file with matching names. This supports transfer learning across different resolutions and domains. Layers absent from the pretrained model are initialized with random weights, and the import is skipped for those layers. | Unix path |
| use_pooling | Boolean | False | Chooses between strided convolutions and MaxPooling for downsampling. When True, MaxPooling is used to downsample; however, for the object-detection network, NVIDIA recommends setting this to False and using strided convolutions. | True or False |
| use_batch_norm | Boolean | False | A flag that determines whether to use Batch Normalization layers. | True or False |
| objective_set | Proto Dictionary | | The objectives for training the network. For object-detection networks, set it to learn cov and bbox. These parameters should not be altered for the current training pipeline. | cov {} bbox { scale: 35.0 offset: 0.5 } |
| dropout_rate | Float | 0.0 | The probability for dropout. | 0.0-0.1 |
| load_graph | Boolean | False | A flag that determines whether to load the graph from the pretrained model file or just the weights. For a pruned model, set this parameter to True: pruning modifies the original graph, so both the pruned model graph and the weights need to be imported. | True or False |
| freeze_blocks | float (repeated) | | Defines which blocks of the instantiated feature extractor template may be frozen; the valid block IDs differ per template.<br>• ResNet series: any subset of [0, 1, 2, 3, 4] (inclusive)<br>• VGG series: any subset of [1, 2, 3, 4, 5] (inclusive)<br>• MobileNet V1: any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] (inclusive)<br>• MobileNet V2: any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] (inclusive)<br>• GoogLeNet: any subset of [0, 1, 2, 3, 4, 5, 6, 7] (inclusive) | |
| freeze_bn | Boolean | False | A flag that determines whether to freeze the Batch Normalization layers in the model during training. | True or False |

### BBox Ground Truth Generator

DetectNet_v2 generates two tensors, cov and bbox. The image is divided into 16x16 grid cells. The cov tensor (short for "coverage" tensor) defines the grid cells that are covered by an object. The bbox tensor defines the normalized image coordinates of the object's top-left (x1, y1) and bottom-right (x2, y2) corners with respect to the grid cell. For best results, you can assume the coverage area to be an ellipse within the bbox label, with the maximum confidence assigned to the cells in the center and coverage reducing outwards. Each class has its own coverage and bbox tensor, so the shapes of the tensors are as follows:

• cov: Batch_size, Num_classes, image_height/16, image_width/16

• bbox: Batch_size, Num_classes * 4, image_height/16, image_width/16 (where 4 is the number of coordinates per cell)
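As a concrete check of the shapes above, here is a short sketch that computes the cov and bbox tensor dimensions from the 16x16 grid-cell layout; the batch size and image resolution below are arbitrary example values, not toolkit defaults.

```python
# Compute DetectNet_v2 output tensor shapes from the 16x16 grid-cell layout.
GRID_CELL = 16

def output_shapes(batch_size, num_classes, image_height, image_width):
    """Return (cov_shape, bbox_shape) for the given input configuration."""
    grid_h = image_height // GRID_CELL
    grid_w = image_width // GRID_CELL
    cov_shape = (batch_size, num_classes, grid_h, grid_w)
    # 4 coordinates (x1, y1, x2, y2) per class per grid cell.
    bbox_shape = (batch_size, num_classes * 4, grid_h, grid_w)
    return cov_shape, bbox_shape

# Example: batch of 4, 3 classes, a 384x1248 input (a common KITTI resolution).
cov, bbox = output_shapes(4, 3, 384, 1248)
print(cov)   # (4, 3, 24, 78)
print(bbox)  # (4, 12, 24, 78)
```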

Here is a sample rasterizer config for a 3 class detector:


```
# Sample rasterizer configs to instantiate a 3 class bbox rasterizer
bbox_rasterizer_config {
  target_class_config {
    key: "car"
    value: {
      cov_center_x: 0.5
      cov_center_y: 0.5
    }
  }
  target_class_config {
    key: "cyclist"
    value: {
      cov_center_x: 0.5
      cov_center_y: 0.5
    }
  }
  target_class_config {
    key: "pedestrian"
    value: {
      cov_center_x: 0.5
      cov_center_y: 0.5
    }
  }
}
```


The bbox_rasterizer has the following parameters that are configurable:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| deadzone_radius | float | 0.67 | The area considered dormant (no bbox) around the ellipse of an object. This is particularly useful for overlapping objects, so that foreground objects and background objects are not confused. | 0-1.0 |
| target_class_config | proto dictionary | | A nested configuration field that defines the coverage region for an object of a given class. This field is repeated for each class. The configurable parameters are:<br>• cov_center_x (float): The x-coordinate of the center of the object<br>• cov_center_y (float): The y-coordinate of the center of the object<br>• bbox_min_radius (float): The minimum radius of the coverage region to be drawn for boxes | cov_center_x: 0.0 - 1.0<br>cov_center_y: 0.0 - 1.0 |

### Post-Processor

The post-processor module generates renderable bounding boxes from the raw detection output. The process includes the following:

• Selecting valid detections by thresholding objects using the confidence values in the coverage tensor.

• Clustering the raw filtered predictions using DBSCAN to produce the final rendered bounding boxes.

• Filtering out weaker clusters based on the final confidence threshold derived from the candidate boxes that get grouped into a cluster.
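The first and third steps above amount to simple thresholding. The following plain-Python sketch illustrates the idea; the function name and box representation are illustrative, not the toolkit's internal API.

```python
def filter_candidates(boxes, coverage_threshold, min_bbox_height):
    """Keep boxes whose confidence clears the coverage threshold and
    whose pixel height clears the minimum-height threshold.

    Each box is a tuple (x1, y1, x2, y2, confidence)."""
    kept = []
    for x1, y1, x2, y2, conf in boxes:
        if conf < coverage_threshold:
            continue          # below coverage threshold: discard
        if (y2 - y1) < min_bbox_height:
            continue          # shorter than minimum_bounding_box_height: discard
        kept.append((x1, y1, x2, y2, conf))
    return kept

raw = [(10, 10, 60, 80, 0.9), (5, 5, 40, 20, 0.8), (0, 0, 50, 90, 0.001)]
print(filter_candidates(raw, coverage_threshold=0.005, min_bbox_height=20))
# Keeps only the first box: the second is too short, the third too low-confidence.
```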

Here is an example of the definition of the post-processor for a 3-class network learning for car, cyclist, and pedestrian:


```
postprocessing_config {
  target_class_config {
    key: "car"
    value: {
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.15
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 20
      }
    }
  }
  target_class_config {
    key: "cyclist"
    value: {
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.15
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 20
      }
    }
  }
  target_class_config {
    key: "pedestrian"
    value: {
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.15
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 20
      }
    }
  }
}
```


This section defines parameters that configure the post-processor. For each class that you can train for, the postprocessing_config has a target_class_config element that defines the clustering parameters for this class. The parameters for each target class include the following:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| key | string | | The name of the class for which the post-processor module is being configured | The network object class name, as mentioned in the cost_function_config |
| value | clustering_config proto | | The nested clustering_config proto parameter that configures the post-processor module. The parameters for this module are defined in the next table. | Encapsulated object with parameters defined below |

The clustering_config element configures the clustering block for this class. Here are the parameters for this element:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| coverage_threshold | float | | The minimum threshold on the coverage tensor output for a candidate box to be considered valid for clustering. The four coordinates from the bbox tensor at the corresponding indices are passed for clustering. | 0.0 - 1.0 |
| dbscan_eps | float | | The maximum distance between two samples for one to be considered in the neighborhood of the other. This is not a maximum bound on the distances of points within a cluster. The greater the dbscan_eps value, the more boxes are grouped together. | 0.0 - 1.0 |
| dbscan_min_samples | float | | The total weight in a neighborhood for a point to be considered a core point, including the point itself. | 0.0 - 1.0 |
| minimum_bounding_box_height | int | | The minimum height in pixels for a valid detection after clustering. | 0 - input image height |
| clustering_algorithm | enum | DBSCAN | The post-processing algorithm used to cluster raw detections into the final bbox renders. When using HYBRID mode, ensure both DBSCAN and NMS configuration parameters are defined. | DBSCAN, NMS, HYBRID |
| dbscan_confidence_threshold | float | 0.1 | The confidence threshold used to filter the clustered bounding-box output from DBSCAN. | > 0.0 |
| nms_iou_threshold | float | 0.2 | The Intersection over Union (IoU) threshold used to filter out redundant boxes from raw detections to form the final clustered outputs. | 0.0 - 1.0 |
| nms_confidence_threshold | float | 0.0 | The confidence threshold used to filter out clustered bounding boxes from NMS. | 0.0 - 1.0 |

In TAO Toolkit 3.21.08, DetectNet_v2 supports three methods for clustering raw detections into final rendered bounding boxes:

• DBSCAN: Density-Based Spatial Clustering of Applications with Noise

• NMS: Non-Maximum Suppression

• HYBRID: DBSCAN + NMS

Under HYBRID clustering, DetectNet_v2 post-processing first passes the raw network outputs to the DBSCAN clustering and uses the candidate boxes per cluster from DBSCAN as input to NMS. The NMS clustering generates the final rendered boxes.

Note

For HYBRID clustering, ensure both DBSCAN and NMS related parameters are defined in the post-processing config.
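The NMS stage described above can be sketched as standard greedy IoU suppression. This is a plain-Python illustration of the technique, not the toolkit's implementation; the function names are hypothetical.

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.2, confidence_threshold=0.0):
    """Greedy NMS: keep the highest-scoring box, suppress boxes that
    overlap an already-kept box by more than iou_threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if scores[i] < confidence_threshold:
            continue
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first too much
```

In HYBRID mode, the candidate boxes fed into a function like this would be the per-cluster outputs of the DBSCAN stage rather than the raw network detections.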

### Cost Function

This section describes how to configure the cost function to include the classes you are training for. For each class, add a new target_classes entry to the spec file. For best performance, we recommend keeping the per-class parameters at the values shown below; the other parameters should also remain unchanged.


```
cost_function_config {
  target_classes {
    name: "car"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  target_classes {
    name: "cyclist"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  target_classes {
    name: "pedestrian"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  enable_autoweighting: True
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
}
```


### Trainer

The following is a sample training_config block to configure a DetectNet_v2 trainer:


```
training_config {
  batch_size_per_gpu: 16
  num_epochs: 80
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-6
      max_learning_rate: 5e-4
      soft_start: 0.1
      annealing: 0.7
    }
  }
  regularizer {
    type: L1
    weight: 3e-9
  }
  optimizer {
    epsilon: 1e-08
    beta1: 0.9
    beta2: 0.999
  }
  cost_scaling {
    enabled: False
    initial_exponent: 20.0
    increment: 0.005
    decrement: 1.0
  }
  visualizer {
    enabled: true
    num_images: 3
    scalar_logging_frequency: 10
    infrequent_logging_frequency: 1
    target_class_config {
      key: "car"
      value: {
        coverage_threshold: 0.005
      }
    }
    target_class_config {
      key: "pedestrian"
      value: {
        coverage_threshold: 0.005
      }
    }
  }
}
```


The following table describes the parameters used to configure the trainer:

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| batch_size_per_gpu | int | 32 | The number of images per batch per GPU. | >1 |
| num_epochs | int | 120 | The total number of epochs to run the experiment. | |
| enable_qat | bool | False | Enables model training using Quantization Aware Training (QAT). For more information about QAT, see the Quantization Aware Training section. | True or False |
| learning_rate | learning-rate scheduler proto | soft_start_annealing_schedule | Configures the learning-rate schedule for the trainer. Currently, DetectNet_v2 only supports the soft-start annealing schedule, configured using the following parameters:<br>• soft_start (float): The time to ramp up the learning rate from the minimum to the maximum learning rate<br>• annealing (float): The time to cool down the learning rate from the maximum to the minimum learning rate<br>• min_learning_rate (float): The minimum learning rate in the schedule<br>• max_learning_rate (float): The maximum learning rate in the schedule<br>A sample learning-rate plot is shown in the figure below. | soft_start: 0.0 - 1.0<br>annealing: 0.0 - 1.0, greater than soft_start |
| regularizer | regularizer proto config | | The type and the weight of the regularizer to be used during training. There are two parameters:<br>• type: The type of regularizer<br>• weight: The floating-point weight of the regularizer | The supported values for type are: NO_REG, L1, L2 |
| optimizer | optimizer proto config | | The optimizer to use for training and the parameters to configure it:<br>• epsilon (float): A very small number to prevent any division by zero in the implementation<br>• beta1 (float)<br>• beta2 (float) | |
| cost_scaling | costscaling_config | | Enables cost scaling during training. Leave this parameter untouched for the current DetectNet_v2 training pipeline: cost_scaling { enabled: False initial_exponent: 20.0 increment: 0.005 decrement: 1.0 } | |
| checkpoint_interval | float | 0/10 | The interval (in epochs) at which train saves intermediate models. | 0 to num_epochs |
| visualizer | visualizer proto config | | Configurable elements of the visualizer. DetectNet_v2's visualizer interfaces with TensorBoard; see the Visualizer section below for an explanation of the configurable elements. | |

DetectNet_v2 currently supports the soft-start annealing learning rate schedule. The learning rate, when plotted as a function of the training progress (0.0 to 1.0), results in the following curve:

In this experiment, the soft_start was set as 0.3 and annealing as 0.7, with the minimum learning rate as 5e-6 and maximum learning rate, or base_lr, as 5e-4.
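The shape of the curve described above can be sketched as follows. This sketch assumes an exponential ramp-up and decay (interpolation in log space), which matches the shape of the plotted curve; the exact interpolation used internally by the trainer may differ.

```python
import math

def soft_start_annealing_lr(progress, soft_start, annealing, min_lr, max_lr):
    """Learning rate as a function of training progress in [0, 1].

    Ramps min_lr -> max_lr over [0, soft_start], holds max_lr until
    `annealing`, then decays back down to min_lr over [annealing, 1]."""
    log_ratio = math.log(max_lr / min_lr)
    if progress < soft_start:
        # Exponential ramp-up from min_lr to max_lr.
        return min_lr * math.exp(log_ratio * progress / soft_start)
    if progress < annealing:
        # Plateau at the maximum learning rate.
        return max_lr
    # Exponential decay from max_lr back to min_lr.
    return max_lr * math.exp(-log_ratio * (progress - annealing) / (1.0 - annealing))

# Values from the experiment described above.
for p in (0.0, 0.3, 0.5, 0.7, 1.0):
    print(p, soft_start_annealing_lr(p, 0.3, 0.7, 5e-6, 5e-4))
```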

Note

We suggest using an L1 regularizer when training a network before pruning, as L1 regularization makes pruning the network weights easier. After pruning, when retraining the networks, we recommend turning regularization off by setting the regularization type to NO_REG.

#### Visualizer

DetectNet_v2 supports visualization of important metrics, weight histograms, and intermediate images via TensorBoard. The visualized collaterals are broadly divided into two categories:

1. Frequently plotted collaterals: Scalar values plotted as a function of time. These are plotted more frequently so that you can observe continuous behavior.

2. Infrequently plotted collaterals: Histograms and intermediate images, which consume more resources to plot and are therefore plotted less frequently.

The metrics of the network are plotted as scalar plots, which include:

1. Bounding box loss (mean_cost_${class_bbox}): The cost component that measures the accuracy of the bbox coordinates.

2. Coverage loss (mean_cost_${class_cov}): The cost of the coverage blob that yields the confidence of an object.

3. Task Cost (task_cost): This is computed as (Coverage loss + Bounding Box loss).

4. Regularization cost (regularization_cost): Sum of all the regularizer losses in the model.

5. Total Cost (total_cost): This is computed as the task cost + (regularizer weight * regularization cost).

6. Validation Cost (validation_cost): This is the total cost computed during evaluation.

7. Mean Average Precision (mAP): The mean average precision of the network across all classes, as computed during training.

8. Learning rate (lr): The learning rate applied to the optimizer.

The plotting intervals of the frequent and infrequent collaterals are configurable via the visualizer element of the training config.

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| enabled | bool | false | Flag to enable TensorBoard visualization. | true or false |
| num_images | int | 3 | The number of images to be plotted per step. | 1 < num_images < batch_size |
| scalar_logging_frequency | int | 10 | The number of points plotted per epoch. | 1 to num_steps_per_epoch |
| infrequent_logging_frequency | int | 1 | The interval (in epochs) at which infrequent visualization collaterals are plotted. | 1 to num_epochs |
| target_class_config | proto dictionary | | A nested configuration field that defines the post-processing threshold on the coverage blob for rendering raw bounding boxes before clustering. The configurable parameter is coverage_threshold (float): the raw threshold used to filter candidate bounding boxes before clustering. | coverage_threshold: 0.0 - 1.0 |

The scalar plots of total_cost and validation_cost are a good indication of how the model training is proceeding. If the two plots are converging and decreasing over time, the network is still learning. However, if the validation_cost starts diverging and rising while the total_cost plateaus or decreases, the network may be overfitting to the training dataset.

Under the Images tab, the DetectNet_v2 app renders a number of images. The key images of interest are:

1. images: The input image currently under training.

2. ${class_name}_rectangle_bbox_preds: The raw predictions of the network before NMS/DBSCAN clustering. The outputs seen here result from per-class filtering by the coverage threshold.

3. ${class_name}_cov_norm: The normalized coverage output, a heatmap of the confidence with which the network predicts that an object exists.



## Re-training the Pruned Model

Once the model has been pruned, there might be a slight decrease in accuracy because some previously useful weights may have been removed. To regain the accuracy, we recommend that you retrain this pruned model over the same dataset using the train task, as documented in the Training the model section, with an updated spec file that points to the newly pruned model as the pretrained model file.

You should turn off the regularizer in the training_config to recover accuracy when retraining a pruned model. You can do this by setting the regularizer type to NO_REG, as mentioned here. All other parameters may be retained in the spec file from the previous training.

To load the pretrained model, set the load_graph flag under model_config to true.
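Putting the two changes above together, the relevant fragments of the retraining spec would look like the following sketch; the pruned-model path is a placeholder, and the elided lines (`...`) are the unchanged parameters from the previous training.

```
model_config {
  ...
  pretrained_model_file: "<path_to_pruned_model>"
  load_graph: true
}
training_config {
  ...
  regularizer {
    type: NO_REG
    weight: 3e-9
  }
}
```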

## Exporting the Model

Exporting the model decouples the training process from deployment and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment; an engine may be referred to interchangeably as a .trt or .engine file. In contrast, the same exported TAO model may be used universally across training and deployment hardware. This is referred to as the .etlt file, or encrypted TAO file. During model export, the TAO model is encrypted with a private key, which is required when you deploy the model for inference.

### INT8 Mode Overview

TensorRT engines can be generated in INT8 mode to run with lower precision and thus improve performance. This process requires a cache file containing scale factors for the tensors, which help combat the quantization errors that can arise from low-precision arithmetic. The calibration cache is generated from a calibration tensorfile when export is run with the --data_type flag set to int8. Pre-generating the calibration information and caching it removes the need to calibrate the model on the inference machine. Moving the calibration cache is usually much more convenient than moving the calibration tensorfile, since it is a much smaller file that can be moved with the exported model. Using the calibration cache also speeds up engine creation, since building the cache can take several minutes, depending on the size of the tensorfile and the model itself.

The export tool can generate an INT8 calibration cache by ingesting training data using one of these options:

• Option 1: Providing a calibration tensorfile generated using the calibration_tensorfile task defined in DetectNet_v2. This command uses the data generators in the training pipeline to dump preprocessed batches of input images from the training dataset. This gives you a record of the exact batches of training data used to generate the calibration scale factors in the calibration cache file. However, it is a two-step process for generating an INT8 cache file.

• Option 2: Pointing the tool to a directory of images that you want to use to calibrate the model. For this option, you will need to create a sub-sampled directory of random images that best represent your training dataset.

• Option 3: Using the training data loader directly to load the training images for INT8 calibration. This option is now the recommended approach as it helps to generate multiple random samples. This also ensures two important aspects of the data during calibration:

• Data pre-processing in the INT8 calibration step is the same as in the training process.

• The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.

• Calibration occurs as a one-step process with the data batches being generated on the fly.

NVIDIA plans to eventually deprecate Option 1 and only support Options 2 and 3.
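For illustration, Option 2 can be exercised with a single export invocation by pointing --cal_image_dir at a directory of sampled images. The sketch below uses only flags documented for the export command; the directory path, spec file name, and batch settings are illustrative assumptions that you should adapt to your environment.

```shell
# Sketch of Option 2: INT8 calibration from a directory of sampled images.
# All paths and batch settings below are illustrative assumptions.
tao detectnet_v2 export
                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt
                 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt
                 -k $KEY
                 -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt
                 --data_type int8
                 --cal_image_dir $USER_EXPERIMENT_DIR/data/calibration_samples
                 --cal_cache_file $USER_EXPERIMENT_DIR/export/calibration.bin
                 --batches 10
                 --batch_size 8
```

The images in --cal_image_dir should be a random subsample that is representative of the training dataset, since the calibration scale factors are derived from their activations.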

### FP16/FP32 Model

The calibration cache (calibration.bin) is only required if you need to run inference at INT8 precision. For FP16/FP32-based inference, the export step is much simpler: all that is required is to provide a model from the train step to the export sub-task, which converts it into an encrypted TAO model.
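As a sketch of this simpler case, the following exports a model for FP16 inference; no calibration inputs are needed, and the paths and spec file name are illustrative assumptions.

```shell
# Sketch: FP16 export, no calibration data required.
# All paths below are illustrative assumptions.
tao detectnet_v2 export
                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt
                 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt
                 -k $KEY
                 -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt
                 --data_type fp16
```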

### Generating an INT8 tensorfile Using the calibration_tensorfile Command

The INT8 tensorfile is a binary file that contains the preprocessed training samples, which may be used to calibrate the model. In this release, TAO Toolkit only supports calibration tensorfile generation for SSD, DSSD, DetectNet_v2, and classification models.

The sample usage for the calibration_tensorfile command to generate a calibration tensorfile is defined below:


tao detectnet_v2 calibration_tensorfile [-h] -e <path to training experiment spec file>
-o <path to output tensorfile>
-m <maximum number of batches to serialize>
[--use_validation_set]


#### Required Arguments

• -e, --experiment_spec_file: The path to the training experiment spec file.

• -o, --output_path: The path to the output tensorfile that will be created.

• -m, --max_batches: The number of batches of input data to be serialized.

#### Optional Argument

• --use_validation_set: A flag specifying whether to use the validation dataset instead of the training set.

The following is a sample command to invoke the calibration_tensorfile command for a classification model:


tao detectnet_v2 calibration_tensorfile
                 -e $SPECS_DIR/classification_retrain_spec.cfg
                 -m 10
                 -o $USER_EXPERIMENT_DIR/export/calibration.tensor


### Exporting the DetectNet_v2 Model

The following are command line arguments of the export command:


tao detectnet_v2 export [-h] -m <path to the .tlt model file generated by tao train>
-k <key>
[-o <path to output file>]
[--cal_data_file <path to tensor file>]
[--cal_image_dir <path to a directory of images with which to calibrate the model>]
[--cal_cache_file <path to output calibration file>]
[--data_type <Data type for the TensorRT backend during export>]
[--batches <Number of batches to calibrate over>]
[--max_batch_size <maximum trt batch size>]
[--max_workspace_size <maximum workspace size>]
[--batch_size <batch size to TensorRT engine>]
[--experiment_spec <path to experiment spec file>]
[--engine_file <path to the TensorRT engine file>]
[--verbose Verbosity of the logger]
[--force_ptq Flag to force PTQ]
[--gen_ds_config Generate DeepStream config]


#### Required Arguments

• -m, --model: The path to the .tlt model file to be exported using export.

• -k, --key: The key used to save the .tlt model file.

• -e, --experiment_spec: The path to the spec file. This argument is required for faster_rcnn, ssd, dssd, yolo, and retinanet.

#### Optional Arguments

• -o, --output_file: The path to save the exported model to. The default path is ./<input_file>.etlt.

• --gen_ds_config: A Boolean flag indicating whether to generate the template DeepStream-related configuration (nvinfer_config.txt) as well as a label file (labels.txt) in the same directory as the output_file. Note that the config file is NOT a complete configuration file and requires the user to update the sample config files in DeepStream with the parameters generated.

• --gpu_index: The index of the (discrete) GPU to use for exporting the model. You can specify the GPU index to run export on when the machine has multiple GPUs installed. Note that export can only run on a single GPU.

• --log_file: The path to the log file. The default path is stdout.

• -h, --help: Show this help message and exit.

### QAT Export Mode Required Arguments

• --cal_json_file: The path to the JSON file containing the tensor scales for QAT models. This argument is required if the engine for the QAT model is being generated.

Note

When exporting a model trained with QAT enabled, the tensor scale factors to calibrate the activations are peeled out of the model and serialized to a JSON file defined by the cal_json_file argument.
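To illustrate, engine generation for a QAT-trained model might look like the following at export time. Only documented flags are used; all file paths and the spec file name are illustrative assumptions.

```shell
# Sketch: exporting a QAT-trained DetectNet_v2 model and generating an INT8 engine.
# The tensor scales peeled from the QAT model are serialized to --cal_json_file.
# All paths below are illustrative assumptions.
tao detectnet_v2 export
                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat/weights/resnet18_detector_pruned_qat.tlt
                 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti_qat.txt
                 -k $KEY
                 -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_qat.etlt
                 --data_type int8
                 --cal_json_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration_qat.json
                 --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_qat.trt.int8
```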

### Sample usage for the export sub-task

The following is a sample command to export a DetectNet_v2 model in INT8 mode. This command shows Option 1: using the --cal_data_file option with the calibration.tensor file generated by the calibration_tensorfile sub-task.


tao detectnet_v2 export
                 -e $USER_EXPERIMENT_DIR/experiment_dir_retrain/experiment_spec.txt
                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt
                 -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt
                 -k $KEY
                 --data_type int8
                 --cal_data_file $USER_EXPERIMENT_DIR/export/calibration.tensor
                 --cal_cache_file $USER_EXPERIMENT_DIR/export/calibration.bin


### Generating a Template DeepStream Config File

TAO Toolkit supports serializing a template config file for the nvinfer element of DeepStream to consume this model. This config file contains the network-specific pre-processing parameters and the network graph parameters for parsing the .etlt model file. It also generates a label file containing the names of the classes the model was trained for, in the order in which the outputs are generated. To generate the DeepStream config, run the export command with the --gen_ds_config option.

The following example shows how to generate the DeepStream config:


tao detectnet_v2 export
                 -m $USER_EXPERIMENT_DIR/detectnet_v2/model.tlt
                 -o $USER_EXPERIMENT_DIR/detectnet_v2/model.int8.etlt
                 -e $SPECS_DIR/detectnet_v2_kitti_retrain_spec.txt
--gen_ds_config


The template DeepStream config is generated in the same directory as the output model file as nvinfer_config.txt, while the labels are serialized in labels.txt. Sample contents of nvinfer_config.txt and labels.txt are as follows:

• Sample nvinfer_config.txt


net-scale-factor=0.00392156862745098
offsets=0;0;0
infer-dims=3;544;960
tlt-model-key=tlt_encode
network-type=0
num-detected-classes=3
uff-input-order=0
uff-input-blob-name=input_1
model-color-format=0


• Sample labels.txt


person
bag
face


Note

The nvinfer_config.txt file generated by export is NOT a complete config_infer_*.txt file that can be dropped into a DeepStream pipeline as-is. Instead, find the parameters defined in this file and replace the corresponding parameters in the default config_infer_*.txt file with them.
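For illustration, the [property] section of a config_infer_*.txt file after merging the generated values might look like the following. The gpu-id, tlt-encoded-model, and labelfile-path entries are hypothetical additions you supply yourself; the remaining keys come from the generated nvinfer_config.txt shown above.

```
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
offsets=0;0;0
infer-dims=3;544;960
tlt-model-key=tlt_encode
tlt-encoded-model=resnet18_detector.etlt
labelfile-path=labels.txt
network-type=0
num-detected-classes=3
uff-input-order=0
uff-input-blob-name=input_1
model-color-format=0
```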

## TensorRT Engine Generation, Validation, and INT8 Calibration

For TensorRT engine generation, validation, and INT8 calibration, refer to the TAO Deploy documentation.

## Deploying to DeepStream

Refer to the Integrating a DetectNet_v2 Model page for more information about deploying a DetectNet_v2 model to DeepStream.