EfficientDet (TF2)#

With EfficientDet, the following tasks are supported:

  • dataset_convert

  • train

  • evaluate

  • prune

  • inference

  • export

These tasks can be invoked from the TAO Launcher using the following convention on the command line:

tao model efficientdet_tf2 <sub_task> <args_per_subtask>

where args_per_subtask are the command-line arguments required for a given subtask. Each of these subtasks is explained in detail below.

Data Input for EfficientDet#

EfficientDet expects directories of images for training or validation and annotation JSON files in COCO format. See the Data Annotation Format page for more information about the data format for EfficientDet.

Pre-processing the Dataset#

The raw image data and the corresponding annotation file must be converted to TFRecords before training and evaluation. The dataset_convert tool performs this conversion and provides insight into potential issues in an annotation file. The following sections detail how to use dataset_convert.

Sample Usage of the Dataset Converter Tool#

The dataset_convert tool is described below:

tao model efficientdet_tf2 dataset_convert [-h] -e <conversion spec file>

Below is a sample for the data conversion spec file. The format of the spec file is YAML, with configuration parameters under dataset_convert.

dataset_convert:
  image_dir: '/workspace/tao-experiments/data/raw-data/train2017/'
  annotations_file: '/workspace/tao-experiments/data/raw-data/annotations/instances_train2017.json'
  output_dir: '/workspace/tao-experiments/data'
  tag: 'train'
  num_shards: 256
  include_masks: True

The details of each parameter are summarized in the table below:

Field

Description

Data Type and Constraints

Recommended/Typical Value

image_dir

The path to the directory where raw images are stored

String

annotations_file

The path to the annotation JSON file

String

output_dir

The output directory where TFRecords are saved

String

tag

The tag used in the names of the converted TFRecord files (for example, 'train' or 'val')

String

'train'

num_shards

The number of shards for the converted TFRecords

Integer

256

include_masks

Whether to include segmentation groundtruth during conversion

Boolean

False

Note

A log file named <tag>_warnings.json is generated in output_dir if the bounding box of an object is out of bounds with respect to the image frame, or if an object mask is out of bounds with respect to its bounding box. The log file records each image_id that has problematic object IDs. For example, {"200365": {"box": [918], "mask": []}} means the bounding box of object 918 is out of bounds in image 200365.

The following example shows how to use the command:

tao model efficientdet_tf2 dataset_convert -e /path/to/convert.yaml
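
To convert the validation split as well, run the same command with a second conversion spec file that points to the validation images and annotations. The paths and shard count below are illustrative placeholders:

dataset_convert:
  image_dir: '/workspace/tao-experiments/data/raw-data/val2017/'
  annotations_file: '/workspace/tao-experiments/data/raw-data/annotations/instances_val2017.json'
  output_dir: '/workspace/tao-experiments/data'
  tag: 'val'
  num_shards: 32
  include_masks: True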

Creating a Configuration File#

Below is a sample spec file for EfficientDet. It has seven major components (dataset, model, train, evaluate, inference, prune, and export), as well as the encryption key (encryption_key) and the results directory (results_dir).

dataset:
  loader:
    prefetch_size: 4
    shuffle_file: False
    shuffle_buffer: 10000
    cycle_length: 32
    block_length: 16
  max_instances_per_image: 100
  skip_crowd_during_training: True
  num_classes: 91
  train_tfrecords:
    - '/datasets/coco/train-*'
  val_tfrecords:
    - '/datasets/coco/val-*'
  val_json_file: '/datasets/coco/annotations/instances_val2017.json'
  augmentation:
    rand_hflip: True
    random_crop_min_scale: 0.1
    random_crop_max_scale: 2
    auto_color_distortion: False
    auto_translate_xy: False
train:
  optimizer:
    name: 'sgd'
    momentum: 0.9
  lr_schedule:
    name: 'cosine'
    warmup_epoch: 5
    warmup_init: 0.0001
    learning_rate: 0.2
  amp: True
  checkpoint: "/weights/efficientnet-b0_500.tlt"
  num_examples_per_epoch: 100
  moving_average_decay: 0.999
  batch_size: 20
  checkpoint_interval: 5
  l2_weight_decay: 0.00004
  l1_weight_decay: 0.0
  clip_gradients_norm: 10.0
  image_preview: True
  qat: False
  random_seed: 42
  pruned_model_path: ''
  num_epochs: 200
model:
  name: 'efficientdet-d0'
  aspect_ratios: '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'
  anchor_scale: 4
  min_level: 3
  max_level: 7
  num_scales: 3
  freeze_bn: False
  freeze_blocks: []
  input_width: 512
  input_height: 512
evaluate:
  batch_size: 8
  num_samples: 5000
  max_detections_per_image: 100
  checkpoint: ''
export:
  batch_size: 8
  dynamic_batch_size: True
  min_score_thresh: 0.4
  checkpoint: ""
  onnx_file: ""
inference:
  checkpoint: ""
  image_dir: ""
  dump_label: False
  batch_size: 1
prune:
  checkpoint: ""
  normalizer: 'max'
  output_path: ""
  equalization_criterion: 'union'
  granularity: 8
  threshold: 0.5
  min_num_filters: 16
  excluded_layers: []
encryption_key: 'nvidia_tlt'
results_dir: '/workspace/results_dir'
num_gpus: 1
gpu_ids: [0]

The format of the spec file is YAML. The top level structure of the spec file is summarized in the table below:

Field

Description

dataset

Configuration related to data sources and dataloader

model

Configuration related to model construction

train

Configuration related to the training process

evaluate

Configuration related to the standalone evaluation process

prune

Configuration for pruning a trained model

inference

Configuration for running model inference

export

Configuration for exporting a trained model

encryption_key

Global encryption key

results_dir

Directory where experiment results and status logging are saved

Training Config#

The training configuration (train) defines the parameters needed for the training process. Details are summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

batch_size

The batch size for each GPU, so the effective batch size is batch_size_per_gpu * num_gpus.

Unsigned int, positive

16

num_epochs

The number of epochs to train the network

Unsigned int, positive

300

num_examples_per_epoch

Total number of images in the training set

Unsigned int, positive

checkpoint

The path to the pretrained model, if any

String

pruned_model_path

The path to a TAO pruned model for re-training, if any

String

checkpoint_interval

The number of training epochs that should run per model checkpoint/validation

Unsigned int, positive

10

amp

Whether to use mixed precision training

Boolean

moving_average_decay

Moving average decay

Float

0.9999

l2_weight_decay

L2 weight decay

Float

l1_weight_decay

L1 weight decay

Float

random_seed

Random seed

Unsigned int, positive

42

clip_gradients_norm

Clip gradients by the norm value

Float

5

qat

Whether to enable quantization-aware training

Boolean

False

optimizer

Optimizer configuration

lr_schedule

Learning rate scheduler configuration

The optimizer configuration (train.optimizer) specifies the type and parameters of the optimizer.

Field

Description

Data Type and Constraints

Recommended/Typical Value

name

The optimizer name (only sgd is supported)

String

'sgd'

momentum

Momentum

Float

0.9

The learning rate scheduler configuration (train.lr_schedule) specifies the type and parameters of a learning rate scheduler.

Field

Description

Data Type and Constraints

Recommended/Typical Value

name

The name of the learning rate scheduler. The available options are cosine and soft_anneal

String

'cosine'

warmup_epoch

The number of warmup epochs in the learning rate schedule

Unsigned int, positive

warmup_init

The initial learning rate in the warmup period

Float

learning_rate

The maximum learning rate

Float

annealing_epoch

The epoch at which annealing to warmup_init starts

Unsigned int, positive
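
For reference, a minimal lr_schedule block using the soft_anneal scheduler might look like the following. All field names come from the table above; the values are illustrative, not tuned recommendations:

train:
  lr_schedule:
    name: 'soft_anneal'
    warmup_epoch: 5
    warmup_init: 0.0001
    learning_rate: 0.2
    annealing_epoch: 10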

Evaluation Config#

The evaluation configuration (evaluate) defines the parameters needed for evaluation, either during training or as a standalone task. Details are summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

checkpoint

The path to the .tlt model to be evaluated

String

max_detections_per_image

The maximum number of detections to visualize

Unsigned int, positive

100

batch_size

The batch size for each GPU, so the effective batch size is batch_size_per_gpu * num_gpus

Unsigned int, positive

16

num_samples

The number of samples for evaluation

Unsigned int

label_map

(Optional) A YAML file that stores the index-to-label-name mapping. If set, the per-class AP metric is calculated

String

start_eval_epoch

Evaluation will not start until this epoch (Default: 1)

Unsigned int

results_dir

The directory where the evaluation result is stored (Optional)

String
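
As an example, a standalone evaluation block that enables per-class AP reporting might look like the following. The checkpoint and label_map paths are placeholders:

evaluate:
  checkpoint: '/workspace/results_dir/train/model.tlt'
  batch_size: 8
  num_samples: 5000
  max_detections_per_image: 100
  start_eval_epoch: 10
  label_map: '/workspace/tao-experiments/specs/coco_labels.yaml'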

Inference Config#

The inference configuration (inference) defines the parameters needed for standalone inference with the trained .tlt model. Details are summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

checkpoint

The path to the .tlt model to run inference with

String

image_dir

The path to the image directory

String

output_dir

The path to the output directory where annotated images will be saved

String

dump_label

Whether to dump label files in KITTI format

Boolean

batch_size

Batch size to run inference with

Unsigned int

min_score_thresh

Minimum confidence threshold to render the predicted bounding boxes

Float

label_map

(Optional) A YAML file that stores the index-to-label-name mapping. If set, annotated images include class labels with the bounding boxes

String

max_boxes_to_draw

The maximum number of bounding boxes that will be rendered in the annotated images

Unsigned int

results_dir

The directory where the inference result is stored (Optional)

String
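
A typical inference block that draws labeled boxes and dumps KITTI labels might look like the following. All fields come from the table above; the paths are placeholders:

inference:
  checkpoint: '/workspace/results_dir/train/model.tlt'
  image_dir: '/workspace/tao-experiments/data/test_images'
  output_dir: '/workspace/results_dir/inference'
  dump_label: True
  batch_size: 1
  min_score_thresh: 0.4
  max_boxes_to_draw: 100
  label_map: '/workspace/tao-experiments/specs/coco_labels.yaml'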

Dataset Config#

The dataset configuration (dataset) specifies the input data source and format. It is used for both training and evaluation. A detailed description is summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

train_tfrecords

The TFRecord path for training

String

val_tfrecords

The TFRecord path for validation

String

val_json_file

The annotation file path for validation

String

num_classes

The number of classes. If there are N categories in the annotation, num_classes should be N+1 (background class)

Unsigned int

max_instances_per_image

The maximum number of object instances to parse (default: 100)

Unsigned int

100

skip_crowd_during_training

Specifies whether to skip crowd during training

Boolean

True

loader

Data loader configuration

augmentation

Data augmentation configuration

The dataloader configuration (dataset.loader) specifies how batches of data are fed into the model.

Field

Description

Data Type and Constraints

Recommended/Typical Value

prefetch_size

The number of batches to prefetch in the input pipeline

Unsigned int

4

shuffle_file

Whether to shuffle the TFRecord files before reading

Boolean

False

shuffle_buffer

The buffer size (in samples) used to shuffle the training data

Unsigned int

10000

cycle_length

The number of TFRecord files read concurrently when interleaving the dataset

Unsigned int

32

block_length

The number of consecutive samples read from each file before moving to the next file

Unsigned int

16

The dataset.augmentation configuration specifies the image augmentation methods used after preprocessing.

Field

Description

Data Type and Constraints

Recommended/Typical Value

rand_hflip

A flag specifying whether to perform random horizontal flip

Boolean

random_crop_min_scale

The minimum scale of RandomCrop augmentation (default: 0.1)

Float

0.1

random_crop_max_scale

The maximum scale of RandomCrop augmentation (default: 2.0)

Float

2.0

auto_color_distortion

A flag to enable automatic color augmentation

Boolean

False

auto_translate_xy

A flag to enable automatic image translation on the X/Y axis

Boolean

False

Model Config#

The model configuration (model) specifies the model structure. A detailed description is summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

name

The EfficientDet model name

String

'efficientdet-d0'

min_level

The minimum level of the output feature pyramid

Unsigned int

3 (only 3 is supported)

max_level

The maximum level of the output feature pyramid

Unsigned int

7 (only 7 is supported)

num_scales

The number of anchor octave scales on each pyramid level (e.g. if set to 3, the anchor scales are [2^0, 2^(1/3), 2^(2/3)])

Unsigned int

3

max_instances_per_image

The maximum number of object instances to parse (default: 100)

Unsigned int

100

aspect_ratios

A list of tuples representing the aspect ratios of anchors on each pyramid level

string

“[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]”

anchor_scale

Scale of the base-anchor size to the feature-pyramid stride

Unsigned int

4

input_width

Input width

Unsigned int

512

input_height

Input height

Unsigned int

512

Pruning Config#

The prune configuration defines the pruning process for a trained model. A detailed description is summarized in the table below.

Field

Description

Data Type and Constraints

Recommended/Typical Value

normalizer

The normalization method. Specify max to normalize by dividing each norm by the maximum norm within a layer, or L2 to normalize by dividing by the L2 norm of the vector comprising all kernel norms

String

max

equalization_criterion

The criteria to equalize the stats of inputs to an element-wise op layer or depth-wise conv layer. The options are arithmetic_mean, geometric_mean, union, and intersection.

String

union

granularity

The number of filters to remove at a time

Integer

8

threshold

Pruning threshold

Float

min_num_filters

The minimum number of filters to keep per layer. Default: 16

Integer

16

excluded_layers

A list of layers to be excluded from pruning

List

checkpoint

The path to the .tlt model file to be pruned

String

Export Config#

The export configuration contains the parameters for exporting a .tlt model to an .onnx model, which can be used for deployment.

Field

Description

Data Type and Constraints

Recommended/Typical Value

batch_size

The maximum batch size of the .onnx model if dynamic_batch_size is set to False

Unsigned int

dynamic_batch_size

A flag specifying whether to use dynamic batch size in the exported .onnx model

Boolean

True

checkpoint

The path to the .tlt model file to be exported

String

onnx_file

The path to save the exported .onnx model

String

min_score_thresh

The confidence threshold in the NMS layer (default: 0.01)

float

Training the Model#

Train the EfficientDet model using this command:

tao model efficientdet_tf2 train [-h] -e <experiment_spec>
                           [results_dir=<global_results_dir>]
                           [model.<model_option>=<model_option_value>]
                           [dataset.<dataset_option>=<dataset_option_value>]
                           [train.<train_option>=<train_option_value>]
                           [num_gpus=<num GPUs>]
                           [gpu_ids=<gpu_index>]

Required Arguments#

  • -e, --experiment_spec: The experiment specification file to set up the training experiment.

Optional Arguments#

  • model.<model_option>: The model options.

  • dataset.<dataset_option>: The dataset options.

  • train.<train_option>: The train options.

  • num_gpus: The number of GPUs to be used for training in a multi-GPU scenario. The default value is 1.

  • gpu_ids: The indices of the GPUs to use for training. This argument can be used when the machine has multiple GPUs installed.

  • -h, --help: Show this help message and exit.

Input Requirement#

  • Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128; W, H are multiples of 32)

  • Image format: JPG

  • Label format: COCO detection

Sample Usage#

Here’s an example of the train command:

tao model efficientdet_tf2 train -e /path/to/spec.yaml num_gpus=2
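
Individual spec values can also be overridden on the command line using the dot notation described above. The following illustrative command trains on two GPUs while overriding a couple of train options:

tao model efficientdet_tf2 train -e /path/to/spec.yaml num_gpus=2 train.num_epochs=100 train.checkpoint_interval=10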

Evaluating the Model#

To run evaluation with an EfficientDet model, use this command:

tao model efficientdet_tf2 evaluate [-h] -e <experiment_spec>
                           evaluate.checkpoint=<model to be evaluated>
                           [evaluate.<evaluate_option>=<evaluate_option_value>]
                           [gpu_ids=<gpu_index>]

Required Arguments#

  • -e, --experiment_spec: The experiment spec file to set up the evaluation experiment. This should be the same as the training specification file.

  • evaluate.checkpoint: The .tlt model to be evaluated.

Optional Arguments#

  • evaluate.<evaluate_option>: The evaluate options.

  • gpu_ids: The index of the GPU to use for evaluation. This argument can be used when the machine has multiple GPUs installed. Note that evaluation can only run on a single GPU.

  • -h, --help: Show this help message and exit.

Sample Usage#

Here’s an example of using the evaluate command:

tao model efficientdet_tf2 evaluate -e /path/to/spec.yaml
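
If evaluate.checkpoint is not set in the spec file, it can be passed on the command line instead (the path shown is a placeholder):

tao model efficientdet_tf2 evaluate -e /path/to/spec.yaml evaluate.checkpoint=/path/to/model.tlt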

Running Inference with an EfficientDet Model#

The inference tool for EfficientDet models can be used to visualize bounding boxes and generate frame-by-frame KITTI-format labels on a directory of images.

tao model efficientdet_tf2 inference [-h] -e <experiment spec file>
                           inference.checkpoint=<model to be inferenced>
                           [inference.<inference_option>=<inference_option_value>]
                           [gpu_ids=<gpu_index>]

Required Arguments#

  • -e, --experiment_spec: The path to an experiment spec file

  • inference.checkpoint: The .tlt model to run inference with.

Optional Arguments#

  • inference.<inference_option>: The inference options.

  • gpu_ids: The index of the GPU to run inference on. This argument can be used when the machine has multiple GPUs installed. Note that inference can only run on a single GPU.

  • -h, --help: Show this help message and exit

Sample Usage#

Here’s an example of using the inference command:

tao model efficientdet_tf2 inference -e /path/to/spec.yaml
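
As with evaluation, the checkpoint can be supplied on the command line if it is not set in the spec file (the path shown is a placeholder):

tao model efficientdet_tf2 inference -e /path/to/spec.yaml inference.checkpoint=/path/to/model.tlt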

Pruning the Model#

Pruning removes parameters from the model to reduce the model size without compromising the integrity of the model itself. Pruning is performed with the tao model efficientdet_tf2 prune command.

The tao model efficientdet_tf2 prune command includes these parameters:

tao model efficientdet_tf2 prune [-h] -e <experiment spec file>
                           prune.checkpoint=<model to be pruned>
                           [prune.<prune_option>=<prune_option_value>]

Required Arguments#

  • -e, --experiment_spec: The path to an experiment spec file

  • prune.checkpoint: The .tlt model to be pruned.

Optional Arguments#

  • prune.<prune_option>: The prune options.

  • gpu_ids: The index of the GPU to run pruning on. This argument can be used when the machine has multiple GPUs installed. Note that pruning can only run on a single GPU.

  • -h, --help: Show this help message and exit.

After pruning, the model needs to be retrained. See Re-training the Pruned Model for more details.

Note

Due to the complexity of larger EfficientDet models, the pruning process will take significantly longer to finish. For example, pruning the EfficientDet-D5 model may take at least 25 minutes on a V100 server.

Using the Prune Command#

Here’s an example of using the tao model efficientdet_tf2 prune command:

tao model efficientdet_tf2 prune -e /path/to/spec.yaml
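
Prune options can also be overridden on the command line, for example to point at a specific checkpoint and try a different pruning threshold (the values shown are placeholders):

tao model efficientdet_tf2 prune -e /path/to/spec.yaml prune.checkpoint=/path/to/model.tlt prune.threshold=0.7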

Re-training the Pruned Model#

Once the model has been pruned, there might be a slight decrease in accuracy because some previously useful weights may have been removed. To regain the accuracy, we recommend that you retrain this pruned model over the same dataset. To do this, use the tao model efficientdet_tf2 train command as documented in Training the model, with an updated spec file that points to the newly pruned model as the pretrained model file.

We recommend turning off the regularizers or reducing the weight decay in the train config to help recover accuracy when retraining a pruned model. To do this, set l1_weight_decay and l2_weight_decay to 0, as described in the Training Config section. All other parameters may be retained in the spec file from the previous training.
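
As a sketch, the relevant train-config changes for retraining a pruned model might look like the following; the pruned model path is a placeholder:

train:
  pruned_model_path: '/workspace/results_dir/prune/model_pruned.tlt'
  l1_weight_decay: 0.0
  l2_weight_decay: 0.0
  num_epochs: 200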

Exporting the Model#

Exporting the model decouples the training process from deployment and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment. The exported model may be used universally across training and deployment hardware.

Exporting the EfficientDet Model#

Here’s an example of the command line arguments of the tao model efficientdet_tf2 export command:

tao model efficientdet_tf2 export [-h] -e <path to experiment spec>
                           export.checkpoint=<model to export>
                           export.onnx_file=<onnx path>
                           [export.<export_option>=<export_option_value>]
                           [gpu_ids=<gpu_index>]

Required Arguments#

  • -e, --experiment_spec: The path to the spec file

  • export.checkpoint: The .tlt model to be exported.

  • export.onnx_file: The path where the exported .onnx model is saved.

Optional Arguments#

  • export.<export_option>: The export options.

  • gpu_ids: The index of the GPU to use for export. This argument can be used when the machine has multiple GPUs installed. Note that export can only run on a single GPU.

  • -h, --help: Show this help message and exit.

Sample usage#

Here’s a sample command to export an EfficientDet model:

tao model efficientdet_tf2 export -e /path/to/spec.yaml
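
The checkpoint and ONNX output path can also be supplied on the command line if they are not set in the spec file (the paths shown are placeholders):

tao model efficientdet_tf2 export -e /path/to/spec.yaml export.checkpoint=/path/to/model.tlt export.onnx_file=/path/to/model.onnx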

TensorRT Engine Generation, Validation, and INT8 Calibration#

For TensorRT engine generation, validation, and INT8 calibration, refer to the TAO Deploy documentation.

Deploying to DeepStream#

Refer to the Integrating an EfficientDet (TF1/TF2) Model page to learn more about deploying an EfficientDet TF2 model to DeepStream.