YOLOv3

YOLOv3 is an object detection model that is included in the TAO Toolkit. YOLOv3 supports the following tasks:

  • dataset_convert

  • kmeans

  • train

  • evaluate

  • inference

  • prune

  • export

These tasks can be invoked from the TAO Toolkit Launcher using the following convention on the command line:


tao model yolo_v3 <sub_task> <args_per_subtask>

where args_per_subtask are the command line arguments required for a given subtask. Each subtask is explained in detail below.
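
For example, to print the full list of arguments for the train subtask, you can pass -h (every subtask listed above accepts it):

tao model yolo_v3 train -h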

The dataset structure of YOLOv3 is identical to that of DetectNet_v2. The only difference is the command line used to generate the TFRecords from KITTI text labels. To generate TFRecords for YOLOv3 training, use this command:


tao model yolo_v3 dataset_convert [-h] -d <dataset_spec> -o <output_tfrecords_file> [--gpu_index <gpu_index>]

Required Arguments

  • -d, --dataset_spec: The path to the dataset spec file.

  • -o, --output_filename: The path to the output TFRecords file.

Optional Arguments

  • --gpu_index: The index of the GPU to run this command on. You can specify the GPU index when the machine has multiple GPUs installed. Note that this command can only run on a single GPU.
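
For example, a typical conversion run might look like the following sketch (the dataset spec path is a placeholder; the output path matches the tfrecords_path pattern used in the sample spec below):

tao model yolo_v3 dataset_convert -d /workspace/tao-experiments/specs/kitti_tfrecords.txt \
                                  -o /workspace/tao-experiments/data/tfrecords/kitti_trainval/kitti_trainval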

Below is a sample YOLOv3 spec file. It has six major components: yolov3_config, training_config, eval_config, nms_config, augmentation_config, and dataset_config. The format of the spec file is a protobuf text (prototxt) message, and each of its fields can be either a basic data type or a nested message. The top-level structure of the spec file is illustrated in the sample below.


random_seed: 42
yolov3_config {
  big_anchor_shape: "[(114.94, 60.67), (159.06, 114.59), (297.59, 176.38)]"
  mid_anchor_shape: "[(42.99, 31.91), (79.57, 31.75), (56.80, 56.93)]"
  small_anchor_shape: "[(15.60, 13.88), (30.25, 20.25), (20.67, 49.63)]"
  matching_neutral_box_iou: 0.7
  arch: "resnet"
  nlayers: 18
  arch_conv_blocks: 2
  loss_loc_weight: 0.8
  loss_neg_obj_weights: 100.0
  loss_class_weights: 1.0
  freeze_bn: false
  force_relu: false
}
training_config {
  batch_size_per_gpu: 8
  num_epochs: 80
  enable_qat: false
  checkpoint_interval: 10
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 1e-6
      max_learning_rate: 1e-4
      soft_start: 0.1
      annealing: 0.5
    }
  }
  regularizer {
    type: L1
    weight: 3e-5
  }
  optimizer {
    adam {
      epsilon: 1e-7
      beta1: 0.9
      beta2: 0.999
      amsgrad: false
    }
  }
  pretrain_model_path: "EXPERIMENT_DIR/pretrained_resnet18/tlt_pretrained_object_detection_vresnet18/resnet_18.hdf5"
}
eval_config {
  average_precision_mode: SAMPLE
  batch_size: 8
  matching_iou_threshold: 0.5
}
nms_config {
  confidence_threshold: 0.001
  clustering_iou_threshold: 0.5
  top_k: 200
}
augmentation_config {
  hue: 0.1
  saturation: 1.5
  exposure: 1.5
  vertical_flip: 0
  horizontal_flip: 0.5
  jitter: 0.3
  output_width: 1248
  output_height: 384
  output_channel: 3
  randomize_input_shape_period: 0
}
dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tao-experiments/data/tfrecords/kitti_trainval/kitti_trainval*"
    image_directory_path: "/workspace/tao-experiments/data/training"
  }
  include_difficult_in_training: true
  image_extension: "png"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "pedestrian"
    value: "pedestrian"
  }
  target_class_mapping {
    key: "cyclist"
    value: "cyclist"
  }
  target_class_mapping {
    key: "van"
    value: "car"
  }
  target_class_mapping {
    key: "person_sitting"
    value: "pedestrian"
  }
  validation_fold: 0
}

Training Config

The training configuration (training_config) defines the parameters needed for training, evaluation, and inference. Details are summarized below.

  • batch_size_per_gpu: The batch size for each GPU; the effective batch size is batch_size_per_gpu * num_gpus. (Unsigned int, positive)

  • checkpoint_interval: The number of training epochs per model checkpoint/validation run. (Unsigned int, positive; typical value: 10)

  • num_epochs: The number of epochs to train the network. (Unsigned int, positive)

  • enable_qat: Whether to use quantization-aware training. (Boolean) Note that YOLOv3 does not support loading a pruned QAT model and retraining it with QAT disabled, or vice versa; for example, to get a pruned QAT model, perform the initial training with QAT enabled (enable_qat=True).

  • learning_rate: Only soft_start_annealing_schedule is supported, with these nested parameters (Message type):

    1. min_learning_rate: The minimum learning rate during the entire experiment

    2. max_learning_rate: The maximum learning rate during the entire experiment

    3. soft_start: The time to lapse before warm-up ends (expressed as a fraction of progress, between 0 and 1)

    4. annealing: The time at which to start annealing the learning rate (expressed in the same progress units)

  • regularizer: Configures the regularizer to be used while training, with these nested parameters (Message type):

    1. type: The type of regularizer to use; NVIDIA supports NO_REG, L1, or L2

    2. weight: The regularizer weight as a floating-point value

    NVIDIA suggests using the L1 regularizer when training a network before pruning, as L1 regularization helps make the network weights more prunable.

  • optimizer: Can be one of adam, sgd, or rmsprop (Message type). Each type has the following parameters, which have the same meaning as in Keras:

    1. adam: epsilon, beta1, beta2, amsgrad

    2. sgd: momentum, nesterov

    3. rmsprop: rho, momentum, epsilon, centered

  • pretrain_model_path: The path to a pretrained model, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. (String)

  • resume_model_path: The path to a TAO checkpoint model from which to resume training, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. (String)

  • pruned_model_path: The path to a TAO pruned model for re-training, if any. At most one of pretrain_model_path, resume_model_path, and pruned_model_path may be present. (String)

  • max_queue_size: The number of prefetch batches in data loading. (Unsigned int, positive)

  • n_workers: The number of workers for data loading per GPU. (Unsigned int, positive)

  • use_multiprocessing: Whether to use the multiprocessing mode of the Keras sequence data loader. (Boolean; typical value: true. In case of deadlock, restart training with use_multiprocessing set to false.)
Note

The learning rate is automatically scaled with the number of GPUs used during training; that is, the effective learning rate is learning_rate * n_gpu.

Evaluation Config

The evaluation configuration (eval_config) defines the parameters needed for evaluation, either during training or as a standalone procedure. Details are summarized below.

  • average_precision_mode: The Average Precision (AP) calculation mode, either SAMPLE or INTEGRATE. SAMPLE is the VOC metric used for VOC 2009 and earlier; INTEGRATE is used for VOC 2010 and later. (ENUM: SAMPLE or INTEGRATE; typical value: SAMPLE)

  • matching_iou_threshold: The lowest IoU between a predicted box and a ground truth box that can be considered a match. (float; typical value: 0.5)

NMS Config

The NMS configuration (nms_config) defines the parameters needed for NMS postprocessing. The NMS config applies to the NMS layer of the model during training, validation, evaluation, inference, and export. Details are summarized below.

  • confidence_threshold: Boxes with a confidence score less than confidence_threshold are discarded before NMS is applied. (float; typical value: 0.01)

  • clustering_iou_threshold: The IoU threshold below which boxes will go through the NMS process. (float; typical value: 0.6)

  • top_k: top_k boxes will be output after the NMS Keras layer. If the number of valid boxes is less than top_k, the returned array is padded with boxes whose confidence score is 0. (Unsigned int; typical value: 200)

  • infer_nms_score_bits: The number of bits used to represent the score values in the NMS plugin in TensorRT OSS. The valid range is integers in [1, 10]; setting any other value falls back to ordinary NMS. This optimized NMS plugin is currently only available in FP16, but it is also selected for the INT8 data type, since there is no INT8 NMS in TensorRT OSS and the FP16 implementation is the fastest available. When falling back to ordinary NMS, the actual data type used when building the engine decides the exact precision (FP16 or FP32) it runs at (see the sketch below). (int in [1, 10]; default: 0)
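
As an illustration, an nms_config that opts into the optimized NMS plugin might look like the sketch below; the infer_nms_score_bits value of 8 is an arbitrary in-range choice, not a recommendation:

nms_config {
  confidence_threshold: 0.001
  clustering_iou_threshold: 0.5
  top_k: 200
  infer_nms_score_bits: 8
}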

Augmentation Config

The augmentation configuration (augmentation_config) defines the parameters needed for online data augmentation. Details are summarized below.

  • hue: Image hue, varied within [-hue, hue] * 180.0. (float in [0, 1]; typical value: 0.1)

  • saturation: Image saturation, varied within [1.0 / saturation, saturation] times. (float >= 1.0; typical value: 1.5)

  • exposure: Image exposure, varied within [1.0 / exposure, exposure] times. (float >= 1.0; typical value: 1.5)

  • vertical_flip: The probability that an image is vertically flipped. (float in [0, 1]; typical value: 0)

  • horizontal_flip: The probability that an image is horizontally flipped. (float in [0, 1]; typical value: 0.5)

  • jitter: The maximum jitter allowed in augmentation; "jitter" here refers to the jitter augmentation used in YOLO networks. (float in [0, 1]; typical value: 0.3)

  • output_width: The base output image width of the augmentation pipeline. (integer, multiple of 32)

  • output_height: The base output image height of the augmentation pipeline. (integer, multiple of 32)

  • output_channel: The number of output channels of the augmentation pipeline. (1 or 3)

  • randomize_input_shape_period: The batch interval at which to randomly change the output width and height. For a value K, the augmentation pipeline adjusts the output shape every K batches, and the adjusted output width/height is within 0.6 to 1.5 times the base width/height. Note: if K=0, the output width/height always matches the configured base width/height and training is much faster, but the accuracy of the trained network might not be as good. (non-negative integer; typical value: 10)

  • image_mean: A key/value pair to specify image mean values (see the sketch after this list). If omitted, the ImageNet mean is used for image preprocessing. If set, depending on output_channel, either the 'r/g/b' or 'l' key/value pairs must be configured. (dict)
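
As a sketch, a 3-channel image_mean override can be written as repeated key/value entries inside augmentation_config; the per-channel values below are the commonly used ImageNet means, shown purely as an assumed example:

augmentation_config {
  # ... other augmentation fields as in the sample spec ...
  image_mean {
    key: "r"
    value: 123.68
  }
  image_mean {
    key: "g"
    value: 116.78
  }
  image_mean {
    key: "b"
    value: 103.94
  }
}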

Dataset Config

YOLOv3 supports two data formats: the sequence format (an images folder and a raw labels folder in KITTI format) and the TFRecords format (an images folder and TFRecords). Training with a TFRecords dataset is faster than with the sequence format in most cases, so TFRecords is the recommended format. However, in some cases, such as small input resolutions (e.g., 416x416), the sequence format is slightly faster than TFRecords.

The YOLOv3 dataloader assumes the training/validation split has already been done and the data is prepared in KITTI format: images and labels are in two separate folders, each image in the image folder has a .txt label file with the same filename in the label folder, and the label file contents follow the KITTI format. The COCO data format is also supported, but only through TFRecords; prepare the TFRecords using dataset_convert.

Below is an example dataset_config for TFRecord dataset converted from KITTI data format.


dataset_config {
  data_sources: {
    tfrecords_path: "/workspace/tao-experiments/data/tfrecords/kitti_trainval/kitti_trainval*"
    image_directory_path: "/workspace/tao-experiments/data/training"
  }
  include_difficult_in_training: true
  image_extension: "png"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "pedestrian"
    value: "pedestrian"
  }
  target_class_mapping {
    key: "cyclist"
    value: "cyclist"
  }
  target_class_mapping {
    key: "van"
    value: "car"
  }
  target_class_mapping {
    key: "person_sitting"
    value: "pedestrian"
  }
  validation_fold: 0
}

The following is an example dataset_config element if you want to use sequence format:


dataset_config {
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/training/label_2"
    image_directory_path: "/workspace/tao-experiments/data/training/image_2"
  }
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/training/label_3"
    image_directory_path: "/workspace/tao-experiments/data/training/image_3"
  }
  include_difficult_in_training: true
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "pedestrian"
    value: "pedestrian"
  }
  target_class_mapping {
    key: "cyclist"
    value: "cyclist"
  }
  target_class_mapping {
    key: "van"
    value: "car"
  }
  target_class_mapping {
    key: "person_sitting"
    value: "pedestrian"
  }
  validation_data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/val/label_1"
    image_directory_path: "/workspace/tao-experiments/data/val/image_1"
  }
}

The parameters in dataset_config are defined as follows:

  • data_sources: Captures the path to the datasets to train on. If you have multiple data sources for training, you may use multiple data_sources. For the sequence format, this field contains two parameters:

    • label_directory_path: Path to the data source label folder

    • image_directory_path: Path to the data source image folder

    For the TFRecord format, this field contains two parameters:

    • tfrecords_path: The path to the TFRecord files; this can be a pattern that matches multiple TFRecord files.

    • image_directory_path: The path to the data source image folder. Make sure this aligns with the path specified in the dataset_convert command.

  • include_difficult_in_training: Specifies whether to include difficult boxes in training. If set to false, difficult boxes are ignored. Difficult boxes are those with non-zero occlusion levels in the KITTI labels.

  • image_extension: The suffix (extension) of the image files, for example, png or jpg. This parameter is only used with TFRecord datasets.

  • target_class_mapping: This parameter maps the class names in the labels to the target class to be trained in the network. An element is defined for every source class to target class mapping. This field is included with the intention of grouping similar class objects under one umbrella. For example, “car”, “van”, “heavy_truck”, etc. may be grouped under “automobile”. The “key” field is the value of the class name in the tfrecords file, and the “value” field corresponds to the value that the network is expected to learn.

  • validation_data_sources: Captures the path to the datasets to validate on. If you have multiple data sources for validation, you may use multiple validation_data_sources. Like data_sources, this field contains the same two parameters. This parameter is mutually exclusive with validation_fold.

  • validation_fold: When using a TFRecord dataset for training, the validation dataset can be a split (fold) of the training dataset. This parameter is mutually exclusive with validation_data_sources.

Note

The class name keys in target_class_mapping must be identical to those shown in the KITTI labels so that the correct classes are picked up for training.

YOLOv3 Config

The YOLOv3 configuration (yolov3_config) defines the parameters needed for building the YOLOv3 model. Details are summarized below.

  • big_anchor_shape, mid_anchor_shape, and small_anchor_shape: These settings should be 1-D arrays inside quotation marks whose elements are tuples representing pre-defined anchor shapes in (width, height) order. By default, YOLOv3 has nine predefined anchor shapes, divided into three groups corresponding to big, medium, and small objects; the detection outputs for the different groups come from different depths in the network. Run the kmeans command (tao model yolo_v3 kmeans) to determine the best anchor shapes for your own dataset and put those anchor shapes in the spec file. Note that the number of anchor shapes for any field is not limited to 3; you only need to specify at least one anchor shape in each of the three fields. (string; use the tao model yolo_v3 kmeans command to generate these shapes)

  • matching_neutral_box_iou: This field should be a float between 0 and 1. Any inferred bounding box with an IoU higher than this value against any ground truth box will not have its objectiveness loss back-propagated during training. This reduces false negatives. (float; typical value: 0.5)

  • arch_conv_blocks: Controls how many convolutional blocks are present among the detection output layers. Supported values are 0, 1, and 2. Set this value to 2 to reproduce the meta-architecture of the original YOLOv3 model, which comes with DarkNet 53. Note that this setting only controls the size of the YOLO meta-architecture; the size of the feature extractor is unrelated to this field. (0, 1, or 2; typical value: 2)

  • loss_loc_weight, loss_neg_obj_weights, and loss_class_weights: These loss weights can be configured as float numbers. The YOLOv3 loss is a sum of localization loss, negative objectiveness loss, positive objectiveness loss, and classification loss. The weight of the positive objectiveness loss is fixed at 1, while the weights of the other losses are read from the config file. (float; typical values: loss_loc_weight: 5.0, loss_neg_obj_weights: 50.0, loss_class_weights: 1.0)

  • arch: The backbone for feature extraction. Currently, "resnet", "vgg", "darknet", "googlenet", "mobilenet_v1", "mobilenet_v2", and "squeezenet" are supported. (string; typical value: resnet)

  • nlayers: The number of conv layers in the specified arch. For "resnet", 10, 18, 34, 50, and 101 are supported. For "vgg", 16 and 19 are supported. For "darknet", 19 and 53 are supported. All other networks do not have this configuration, and you should simply delete it from the spec file. (Unsigned int)

  • freeze_bn: Whether to freeze all batch normalization layers during training. (boolean; typical value: False)

  • freeze_blocks: The list of block IDs to be frozen in the model during training (see the sketch after this list). You can choose to freeze some of the CNN blocks to make training more stable and/or easier to converge. The definition of a block is heuristic for a specific architecture: for example, by stride or by logical blocks in the model. However, the block IDs number the blocks in the model in sequential order, so you don't have to know their exact locations when training. A general principle to keep in mind: the smaller the block ID, the closer the block is to the model input; the larger the block ID, the closer it is to the model output. You can divide the whole model into several blocks and optionally freeze a subset of them. Note that for FasterRCNN, you can only freeze the blocks before the ROI pooling layer; any layer after the ROI pooling layer will not be frozen in any case. The number of blocks and the valid block IDs differ for each backbone (list of repeated integers):

    • ResNet series: the block IDs valid for freezing are any subset of [0, 1, 2, 3] (inclusive)

    • VGG series: the block IDs valid for freezing are any subset of [1, 2, 3, 4, 5] (inclusive)

    • GoogLeNet: the block IDs valid for freezing are any subset of [0, 1, 2, 3, 4, 5, 6, 7] (inclusive)

    • MobileNet V1: the block IDs valid for freezing are any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] (inclusive)

    • MobileNet V2: the block IDs valid for freezing are any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] (inclusive)

    • DarkNet 19 and DarkNet 53: the block IDs valid for freezing are any subset of [0, 1, 2, 3, 4, 5] (inclusive)

  • force_relu: Whether to replace all activation functions with ReLU. This is useful for training models for NVDLA. (boolean; typical value: False)
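
Because freeze_blocks is a repeated field, each frozen block ID is written on its own line in the spec. Below is a minimal sketch for a ResNet-18 backbone that freezes the two blocks closest to the input; the elided fields are the same as in the sample spec:

yolov3_config {
  # ... anchor shapes and other fields as in the sample spec ...
  arch: "resnet"
  nlayers: 18
  freeze_bn: false
  freeze_blocks: 0
  freeze_blocks: 1
}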

Generating Anchor Shapes

The anchor shapes should match most ground truth boxes in the dataset to help the network learn bounding boxes. You can use the kmeans algorithm to generate the anchor shapes; it is implemented in the TAO Toolkit as the tao model yolo_v3 kmeans command. Use the output of the algorithm as the anchor shapes in the yolov3_config section of the spec file.


tao model yolo_v3 kmeans [-h] -l <label_folders> -i <image_folders> -x <network base input width> -y <network base input height> [-n <num_clusters>] [--max_steps <kmeans max steps>] [--min_x <ignore boxes with width less than this value>] [--min_y <ignore boxes with height less than this value>]

Required Arguments

  • -l: The paths to the training label folders. Multiple folder paths should be separated with spaces.

  • -i: The paths to the corresponding training image folders. The folder count and order must match the label folders.

  • -x: The base network input width, which should be output_width in the augmentation config section of your spec file.

  • -y: The base network input height, which should be output_height in the augmentation config section of your spec file.

Optional Arguments

  • -n: The number of shape clusters. This defines how many shape centers the command will output. The default is 9 (3 groups with 3 shapes per group).

  • --max_steps: The maximum number of steps the kmeans algorithm should run. If the algorithm does not converge by this step, a suboptimal result is returned. The default value is 10000.

  • --min_x: Ignore ground truth boxes whose width in the reshaped image is less than this value (images are first reshaped to the network base shape given by -x and -y).

  • --min_y: Ignore ground truth boxes whose height in the reshaped image is less than this value (images are first reshaped to the network base shape given by -x and -y).

  • -h, --help: Show this help message and exit.
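
Here is an illustrative invocation, reusing the sequence-format label/image folders from the dataset_config example above and the 1248x384 base shape from the sample spec; adjust the paths and shape to your own setup:

tao model yolo_v3 kmeans -l /workspace/tao-experiments/data/training/label_2 \
                         -i /workspace/tao-experiments/data/training/image_2 \
                         -x 1248 \
                         -y 384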

Training the Model

Train the YOLOv3 model using this command:


tao model yolo_v3 train [-h] -e <experiment_spec> -r <output_dir> -k <key> [--gpus <num_gpus>] [--gpu_index <gpu_index>] [--use_amp] [--log_file <log_file_path>]

Required Arguments

  • -r, --results_dir: Path to the folder where the experiment output is written.

  • -k, --key: Provide the encryption key to decrypt the model.

  • -e, --experiment_spec_file: The experiment specification file used to set up the training experiment.

Optional Arguments

  • --gpus: The number of GPUs to be used for training in a multi-GPU scenario (the default value is 1).

  • --gpu_index: The GPU indices used to run training. You can use this argument to specify the GPU(s) to use for training when the machine has multiple GPUs installed.

  • --use_amp: A flag to enable AMP training.

  • --log_file: The path to the log file. The default path is “stdout”.

  • -h, --help: Show this help message and exit.

Input Requirement

  • Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128, W, H are multiples of 32)

  • Image format: JPG, JPEG, PNG

  • Label format: KITTI detection

Sample Usage

Here’s an example of using the train command on a YOLOv3 model:


tao model yolo_v3 train --gpus 2 -e /path/to/spec.txt -r /path/to/result -k $KEY

Evaluating the Model

To run evaluation for a YOLOv3 model, use this command:


tao model yolo_v3 evaluate [-h] -e <experiment_spec_file> -m <model_file> -k <key> [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -e, --experiment_spec_file: Experiment spec file to set up the evaluation experiment. This should be the same as the training specification file.

  • -m, --model: The path to the model file to use for evaluation. This can be either a .tlt model file or a TensorRT engine.

  • -k, --key: Provide the key to load the model (not needed if the model is a TensorRT engine).

Optional Arguments

  • -h, --help: Show this help message and exit.

  • --gpu_index: The GPU index used to run the evaluation. You can specify the GPU index when the machine has multiple GPUs installed. Note that evaluation can only run on a single GPU.

  • --log_file: Path to the log file. Defaults to stdout.
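
Here's a hedged sample invocation; the model path and key are placeholders, and the spec file is the one used for training:

tao model yolo_v3 evaluate -e /workspace/yolov3_retrain_resnet18_kitti.txt \
                           -m /workspace/yolov3_resnet18_epoch_100.tlt \
                           -k $KEY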

Running Inference on a YOLOv3 Model

The inference tool for YOLOv3 networks can be used to visualize bboxes or generate frame-by-frame KITTI-format labels for a single image or a directory of images. An example of the command for this tool is shown here:


tao model yolo_v3 inference [-h] -i <input directory> -o <output annotated image directory> -e <experiment spec file> -m <model file> -k <key> [-l <output label directory>] [-t <visualization threshold>] [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -m, --model: Path to the trained model (TAO model) or TensorRT engine.

  • -i, --in_image_dir: The directory of input images for inference.

  • -o, --out_image_dir: The directory path to output annotated images.

  • -k, --key: Key to load model (not needed if model is a TensorRT engine).

  • -e, --config_path: Path to an experiment spec file for training.

Optional Arguments

  • -t, --draw_conf_thres: The threshold for drawing a bbox. The default value is 0.3.

  • -h, --help: Show this help message and exit.

  • -l, --out_label_dir: The directory to output KITTI labels.

  • --gpu_index: The GPU index used to run inference. You can specify the index of the GPU to run inference on when the machine has multiple GPUs installed. Note that inference can only run on a single GPU.

  • --log_file: The path to the log file. The default path is “stdout”.
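
An illustrative invocation, with all paths as placeholders:

tao model yolo_v3 inference -i /workspace/tao-experiments/data/test_samples \
                            -o /workspace/tao-experiments/yolo_v3/images_annotated \
                            -l /workspace/tao-experiments/yolo_v3/labels \
                            -e /workspace/yolov3_retrain_resnet18_kitti.txt \
                            -m /workspace/yolov3_resnet18_epoch_100.tlt \
                            -k $KEY \
                            -t 0.3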

Pruning the Model

Pruning removes parameters from the model to reduce the model size without compromising its integrity, using the tao model yolo_v3 prune command.

The tao model yolo_v3 prune command includes these parameters:


tao model yolo_v3 prune [-h] -m <pretrained_model> -o <output_file> -k <key> [-n <normalizer>] [-eq <equalization_criterion>] [-pg <pruning_granularity>] [-pth <pruning threshold>] [-nf <min_num_filters>] [-el <excluded_list>]

Required Arguments

  • -m, --model: Path to pretrained YOLOv3 model.

  • -o, --output_file: Path to output checkpoints.

  • -k, --key: Key to load a .tlt model.

Optional Arguments

  • -h, --help: Show this help message and exit.

  • -n, --normalizer: max to normalize by dividing each norm by the maximum norm within a layer; L2 to normalize by dividing by the L2 norm of the vector comprising all kernel norms (default: max).

  • -eq, --equalization_criterion: Criteria to equalize the stats of inputs to an element-wise op layer or a depth-wise convolutional layer. This parameter is useful for ResNets and MobileNets. Options are arithmetic_mean, geometric_mean, union, and intersection (default: union).

  • -pg, --pruning_granularity: The number of filters to remove at a time (default: 8).

  • -pth: The threshold to compare the normalized norm against (default: 0.1).

  • -nf, --min_num_filters: The minimum number of filters to keep per layer (default: 16).

  • -el, --excluded_layers: A list of excluded layers, for example, -el item1 item2 (default: []).

After pruning, the model needs to be retrained. See Re-training the Pruned Model for more details.

Using the Prune Command

Here’s an example of using the tao model yolo_v3 prune command:


tao model yolo_v3 prune -m /workspace/output/weights/resnet_003.tlt \
                        -o /workspace/output/weights/resnet_003_pruned.tlt \
                        -eq union \
                        -pth 0.7 -k $KEY

Re-training the Pruned Model

Once the model has been pruned, there might be a slight decrease in accuracy because some previously useful weights may have been removed. To regain accuracy, NVIDIA recommends that you retrain this pruned model over the same dataset. To do this, use the tao model yolo_v3 train command, as documented in Training the Model, with an updated spec file that points to the newly pruned model as the pruned_model_path.

We recommend turning off the regularizer in the training_config when retraining a pruned model to recover accuracy. To do this, set the regularizer type to NO_REG, as described in the Training Config section, and point pruned_model_path at the pruned model, as sketched below. All other parameters may be retained in the spec file from the previous training.
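
As an illustration, the relevant fragment of the retraining spec could look like the sketch below; the pruned model path matches the prune example above and is otherwise a placeholder:

training_config {
  # ... other training_config fields retained from the previous training ...
  regularizer {
    type: NO_REG
  }
  pruned_model_path: "/workspace/output/weights/resnet_003_pruned.tlt"
}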

Model Export

Exporting the model decouples the training process from inference and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment. The exported model may be used universally across training and deployment hardware.

The exported model format is referred to as .etlt. Like the .tlt model format, .etlt is an encrypted model format, and it uses the same key as the .tlt model that it is exported from. This key is required when deploying this model.

INT8 Mode Overview

TensorRT engines can be generated in INT8 mode to improve performance, but they require a calibration cache at engine creation time. The calibration cache is generated using a calibration tensor file when tao model yolo_v3 export is run with the --data_type flag set to int8. Pre-generating the calibration information and caching it removes the need to calibrate the model on the inference machine. Moving the calibration cache is usually much more convenient than moving the calibration tensorfile, since it is a much smaller file and can be moved along with the exported model. Using the calibration cache also speeds up engine creation, as building the cache can take several minutes depending on the size of the tensorfile and the model itself.

The export tool can generate the INT8 calibration cache by ingesting training data using one of these options:

  • Option 1: Using the training data loader to load the training images for INT8 calibration. This option is now the recommended approach, as it supports multiple image directories by leveraging the training dataset loader. It also ensures two important aspects of the data during calibration:

    • Data pre-processing in the INT8 calibration step is the same as in the training process.

    • The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.

  • Option 2: Pointing the tool to a directory of images that you want to use to calibrate the model. For this option, make sure to create a sub-sampled directory of random images that best represent your training dataset.

FP16/FP32 Model

The calibration.bin file is only required if you need to run inference at INT8 precision. For FP16/FP32-based inference, the export step is much simpler: all you need to do is provide a .tlt model from the training/retraining step to be converted into the .etlt format.

Exporting the Model

Here’s an example of the command line arguments of the tao model yolo_v3 export command:


tao model yolo_v3 export [-h] -m <path to the .tlt model file generated by tao model train> -k <key> [-o <path to output file>] [--cal_json_file <path to calibration json file>] [--experiment_spec <path to experiment spec file>] [--gen_ds_config] [--verbose] [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -m, --model: The path to the .tlt model file to be exported.

  • -k, --key: The key used to save the .tlt model file.

  • -e, --experiment_spec: The path to the spec file.

Optional Arguments

  • -h, --help: Show this help message and exit.

  • -o, --output_file: The path to save the exported model to. The default path is ./<input_file>.etlt.

  • --gen_ds_config: A Boolean flag indicating whether to generate the template DeepStream related configuration (“nvinfer_config.txt”) as well as a label file (“labels.txt”) in the same directory as the output_file. Note that the config file is NOT a complete configuration file and requires the user to update the sample config files in DeepStream with the parameters generated.

  • --gpu_index: The index of (discrete) GPUs used for exporting the model. You can specify the index of the GPU to run export if the machine has multiple GPUs installed. Note that export can only run on a single GPU.

  • --log_file: The path to the log file. The default path is “stdout”.

QAT Export Mode Required Arguments

  • --cal_json_file: The path to the JSON file containing the tensor scales for QAT models. This argument is required when generating an engine for a QAT model.

Note

When exporting a model trained with QAT enabled, the tensor scale factors to calibrate the activations are peeled out of the model and serialized to a JSON file defined by the cal_json_file argument.
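
A hedged example of exporting a QAT-trained model, with the tensor scales serialized to a JSON file; all paths are placeholders:

tao model yolo_v3 export -m /workspace/yolov3_resnet18_epoch_100_qat.tlt \
                         -o /workspace/yolov3_resnet18_epoch_100_qat.etlt \
                         -e /workspace/yolov3_retrain_resnet18_kitti.txt \
                         --cal_json_file /workspace/yolov3_resnet18_epoch_100_qat_cal.json \
                         -k $KEY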

Sample Usage

Here’s a sample command to export a YOLOv3 model:


tao model yolo_v3 export -m /workspace/yolov3_resnet18_epoch_100.tlt \
                         -o /workspace/yolov3_resnet18_epoch_100_int8.etlt \
                         -e /workspace/yolov3_retrain_resnet18_kitti.txt \
                         -k $KEY

For TensorRT engine generation, validation, and INT8 calibration, refer to the TAO Deploy documentation.

For deploying to DeepStream, refer to Deploying to DeepStream for YOLOv3.
