
EfficientDet (TF1)

With EfficientDet, the following tasks are supported:

  • dataset_convert

  • train

  • evaluate

  • prune

  • inference

  • export

These tasks may be invoked from the TAO Toolkit Launcher using the following convention on the command line:


tao model efficientdet_tf1 <sub_task> <args_per_subtask>

Here, <args_per_subtask> refers to the command-line arguments required for a given subtask. Each of these subtasks is explained in detail below.
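
For example, to list the arguments accepted by the train subtask, pass --help (or -h) as the per-subtask argument:

tao model efficientdet_tf1 train --help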

Data Input for EfficientDet

EfficientDet expects directories of images for training or validation and annotated JSON files in COCO format.
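
For reference, below is a minimal sketch of the COCO-style annotation layout that dataset_convert expects; the file names, IDs, and category entries are placeholders:

{
  "images":      [{"id": 1, "file_name": "000000001.jpg", "height": 480, "width": 640}],
  "annotations": [{"id": 10, "image_id": 1, "category_id": 1,
                   "bbox": [100.0, 120.0, 50.0, 80.0], "area": 4000.0, "iscrowd": 0}],
  "categories":  [{"id": 1, "name": "person"}]
}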

Pre-processing the Dataset

The raw image data and the corresponding annotation file need to be converted to TFRecords before training and evaluation. The dataset_convert tool performs this conversion and reports potential issues in the annotation file. The following sections detail how to use dataset_convert.

Sample Usage of the Dataset Converter Tool

The dataset_convert tool is described below:


tao model efficientdet_tf1 dataset_convert [-h] -i <image_directory> -a <annotation_json_file> -o <tfrecords_output_directory> [-t <tag>] [-s <num_shards>] [--include_mask]

You can use the following arguments:

  • -i, --image_dir: The path to the directory where raw images are stored

  • -a, --annotations_file: The annotations JSON file

  • -o, --output_dir: The output directory where TFRecords are saved

  • -t, --tag: The tag for the converted TFRecords (e.g. “train”). The tag defaults to the name of the annotation file.

  • -s, --num_shards: The number of shards for the converted TFRecords. The default value is 256.

  • --include_mask: Whether to include segmentation ground truth during conversion. The default value is False.

  • -h, --help: Show this help message and exit.

    Note

    A log file named <tag>_warnings.json will be generated in the output_dir if the bounding box of an object is out of bounds with respect to the image frame or if an object mask is out of bounds with respect to its bounding box. The log file records each image_id together with its problematic object IDs. For example, {"200365": {"box": [918], "mask": []}} means the bounding box of object 918 is out of bounds in image 200365.

The following example shows how to use the command with a dataset:


tao model efficientdet_tf1 dataset_convert -i /path/to/image_dir -a /path/to/train.json -o /path/to/output_dir
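
The optional arguments can be added to the same command. For example, the following call tags the converted TFRecords as “train”, writes 32 shards, and also converts segmentation masks (the paths are placeholders):

tao model efficientdet_tf1 dataset_convert -i /path/to/image_dir -a /path/to/train.json -o /path/to/output_dir -t train -s 32 --include_mask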


Creating a Configuration File

Below is a sample EfficientDet spec file. It has five major components: model_config, training_config, eval_config, augmentation_config, and dataset_config. The format of the spec file is a protobuf text (.prototxt) message, and each of its fields can be either a basic data type or a nested message.


training_config {
  train_batch_size: 16
  iterations_per_loop: 10
  checkpoint_period: 10
  num_examples_per_epoch: 14700
  num_epochs: 300
  model_name: 'efficientdet-d0'
  profile_skip_steps: 100
  tf_random_seed: 42
  lr_warmup_epoch: 5
  lr_warmup_init: 0.00005
  learning_rate: 0.1
  amp: True
  moving_average_decay: 0.9999
  l2_weight_decay: 0.00004
  l1_weight_decay: 0.0
  checkpoint: "/path/to/your/pretrained_model"
  # pruned_model_path: "/path/to/your/pruned/model"
}
dataset_config {
  num_classes: 91
  image_size: "512,512"
  training_file_pattern: "/path/to/coco/train-*"
  validation_file_pattern: "/path/to/coco/val-*"
  validation_json_file: "/path/to/coco/annotations/instances_val2017.json"
}
eval_config {
  eval_batch_size: 16
  eval_epoch_cycle: 10
  eval_after_training: True
  eval_samples: 5000
  min_score_thresh: 0.4
  max_detections_per_image: 100
}
model_config {
  model_name: 'efficientdet-d0'
  min_level: 3
  max_level: 7
  num_scales: 3
}
augmentation_config {
  rand_hflip: True
  random_crop_min_scale: 0.1
  random_crop_max_scale: 2.0
}

The top level structure of the spec file is summarized in the following tables:

Training Config

The training configuration (training_config) defines the parameters needed for training, evaluation, and inference. Details are summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| train_batch_size | The batch size for each GPU. The effective batch size is batch_size_per_gpu * num_gpus. | Unsigned int, positive | 16 |
| num_epochs | The number of epochs to train the network | Unsigned int, positive | 300 |
| num_examples_per_epoch | The total number of images in the training set divided by the number of GPUs | Unsigned int, positive | |
| checkpoint | The path to the pretrained model, if any | String | |
| pruned_model_path | The path to the TAO pruned model for re-training, if any | String | |
| checkpoint_period | The number of training epochs that should run per model checkpoint/validation | Unsigned int, positive | 10 |
| amp | A flag specifying whether to use mixed precision training | Boolean | |
| moving_average_decay | The moving average decay | Float | 0.9999 |
| l2_weight_decay | The L2 weight decay | Float | |
| l1_weight_decay | The L1 weight decay | Float | |
| lr_warmup_epoch | The number of warmup epochs in the learning rate schedule | Unsigned int, positive | |
| lr_warmup_init | The initial learning rate in the warmup period | Float | |
| learning_rate | The maximum learning rate | Float | |
| tf_random_seed | The random seed | Unsigned int, positive | 42 |
| clip_gradients_norm | Clips gradients by this norm value | Float | 5 |
| skip_checkpoint_variables | If specified, the weights of the layers with matching regular expressions will not be loaded. This is especially helpful for transfer learning. | String | “-predict*” |
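
Fields from this table that do not appear in the sample spec above can be added to the same training_config block. The following sketch is illustrative only and uses the typical values from the table; the ellipsis stands for the fields already shown in the sample spec:

training_config {
  ...
  clip_gradients_norm: 5.0
  skip_checkpoint_variables: "-predict*"
}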

Evaluation Config

The evaluation configuration (eval_config) defines the parameters needed for evaluation, either during training or standalone. Details are summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| eval_epoch_cycle | The number of training epochs that should run per validation | Unsigned int, positive | 10 |
| max_detections_per_image | The maximum number of detections to visualize | Unsigned int, positive | 100 |
| min_score_thresh | The minimum confidence of the predicted box that can be considered a match | Float | 0.5 |
| eval_batch_size | The batch size for each GPU. The effective batch size is batch_size_per_gpu * num_gpus. | Unsigned int, positive | 16 |
| eval_samples | The number of samples for evaluation | Unsigned int | |

Dataset Config

The dataset configuration (dataset_config) specifies the input data source and format. This is used for training, evaluation, and inference. A detailed description is summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| image_size | The image dimension as a tuple within quote marks: “(height, width)”. This indicates the dimension of the resized and padded input. | String | “(512, 512)” |
| training_file_pattern | The TFRecord path for training | String | |
| validation_file_pattern | The TFRecord path for validation | String | |
| validation_json_file | The annotation file path for validation | String | |
| num_classes | The number of classes. If there are N categories in the annotation, num_classes should be N+1 (one extra for the background class). | Unsigned int | |
| max_instances_per_image | The maximum number of object instances to parse (default: 100) | Unsigned int | 100 |
| skip_crowd_during_training | Specifies whether to skip crowd annotations during training | Boolean | True |
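
As an illustration, the optional dataset_config fields from this table can be set alongside the fields shown in the sample spec; the values below simply restate the defaults listed above:

dataset_config {
  num_classes: 91
  image_size: "512,512"
  training_file_pattern: "/path/to/coco/train-*"
  validation_file_pattern: "/path/to/coco/val-*"
  validation_json_file: "/path/to/coco/annotations/instances_val2017.json"
  max_instances_per_image: 100
  skip_crowd_during_training: True
}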

Model Config

The model configuration (model_config) specifies the model structure. A detailed description is summarized in the table below.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| model_name | The EfficientDet model name | String | “efficientdet-d0” |
| min_level | The minimum level of the output feature pyramid | Unsigned int | 3 (only 3 is supported) |
| max_level | The maximum level of the output feature pyramid | Unsigned int | 7 (only 7 is supported) |
| num_scales | The number of anchor octave scales on each pyramid level (e.g. if set to 3, the anchor scales are [2^0, 2^(1/3), 2^(2/3)]) | Unsigned int | 3 |
| max_instances_per_image | The maximum number of object instances to parse (default: 100) | Unsigned int | 100 |
| aspect_ratios | A list of tuples representing the aspect ratios of anchors on each pyramid level | String | “[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]” |
| anchor_scale | The scale of the base anchor size relative to the feature-pyramid stride | Unsigned int | 4 |
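
For example, aspect_ratios and anchor_scale, which are not present in the sample spec above, can be set in model_config as follows; the values shown are simply the typical values from the table:

model_config {
  model_name: 'efficientdet-d0'
  min_level: 3
  max_level: 7
  num_scales: 3
  anchor_scale: 4
  aspect_ratios: "[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]"
}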

Augmentation Config

The augmentation_config parameter defines image augmentation after preprocessing.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
| --- | --- | --- | --- |
| rand_hflip | A flag specifying whether to perform random horizontal flip | Boolean | |
| random_crop_min_scale | The minimum scale of RandomCrop augmentation. The default value is 0.1. | Float | 0.1 |
| random_crop_max_scale | The maximum scale of RandomCrop augmentation. The default value is 2.0. | Float | 2.0 |

Training the Model

Train the EfficientDet model using this command:


tao model efficientdet_tf1 train [-h] -e <experiment_spec> -d <output_dir> -k <key> [--gpus <num_gpus>] [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -d, --model_dir: The path to the folder where the experiment output is written

  • -k, --key: The encryption key to decrypt the model

  • -e, --experiment_spec_file: The experiment specification file used to set up the training experiment

Optional Arguments

  • --gpus: The number of GPUs to be used for training in a multi-GPU scenario. The default value is 1.

  • --gpu_index: The indices of the GPUs to use for training. This argument can be used when the machine has multiple GPUs installed.

  • --log_file: The path to the log file. The default value is stdout.

  • -h, --help: Show this help message and exit.

Input Requirement

  • Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128; W, H are multiples of 32)

  • Image format: JPG

  • Label format: COCO detection

Sample Usage

Here’s an example of the train command:


tao model efficientdet_tf1 train --gpus 2 -e /path/to/spec.txt -d /path/to/result -k $KEY
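
The optional arguments can be appended to the same command. As an illustrative sketch, the following trains on two GPUs and writes the training log to a file (the paths are placeholders):

tao model efficientdet_tf1 train --gpus 2 -e /path/to/spec.txt -d /path/to/result -k $KEY --log_file /path/to/result/train.log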


Evaluating the Model

To run evaluation with an EfficientDet model, use this command:


tao model efficientdet_tf1 evaluate [-h] -e <experiment_spec_file> -m <model_file> -k <key> [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -e, --experiment_spec_file: The experiment spec file to set up the evaluation experiment. This should be the same as the training specification file.

  • -m, --model_path: The path to the model file to use for evaluation (only TAO models are supported)

  • -k, --key: The key to load the TAO model

Optional Arguments

  • --gpu_index: The index of the GPU to use for evaluation. This argument can be used when the machine has multiple GPUs installed. Note that evaluation can only run on a single GPU.

  • --log_file: The path to the log file. The default value is stdout.

  • -h, --help: Show this help message and exit.

Sample Usage

Here’s an example of using the evaluate command:


tao model efficientdet_tf1 evaluate -e /path/to/spec.txt -m /path/to/model.tlt -k $KEY
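
As a sketch, the optional arguments can be used to pin evaluation to a specific GPU and capture the output in a log file (the paths are placeholders):

tao model efficientdet_tf1 evaluate -e /path/to/spec.txt -m /path/to/model.tlt -k $KEY --gpu_index 1 --log_file /path/to/eval.log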


Running Inference with an EfficientDet Model

The inference tool for EfficientDet models can be used to visualize bounding boxes and generate frame-by-frame KITTI-format labels on a directory of images.


tao model efficientdet_tf1 inference [-h] -i <input directory> -o <output annotated image directory> -e <experiment spec file> -m <model file> -k <key> [-l <output label directory>] [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -m, --model_path: The path to the pretrained model (supports both the TAO model and TensorRT engine)

  • -i, --in_image_path: The directory of input images for inference

  • -o, --out_image_path: The directory path to output annotated images

  • -k, --key: The key to load a TAO model (this argument is not required if a TensorRT engine is used)

  • -e, --experiment_spec_file: The path to an experiment spec file for training

Optional Arguments

  • -l, --out_label_path: The directory to output KITTI labels

  • --label_map: The path to a text file of training labels

  • --gpu_index: The index of the GPU to run inference on. This argument can be used when the machine has multiple GPUs installed. Note that inference can only run on a single GPU.

  • --log_file: The path to the log file. The default value is stdout.

  • -h, --help: Show this help message and exit

Sample Usage

Here’s an example of using the inference command:


tao model efficientdet_tf1 inference -e /path/to/spec.txt -m /path/to/model.tlt -k $KEY -o /path/to/output_dir -i /path/to/input_dir
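
To also write the frame-by-frame KITTI labels alongside the annotated images, add the optional -l argument; the label directory below is a placeholder:

tao model efficientdet_tf1 inference -e /path/to/spec.txt -m /path/to/model.tlt -k $KEY -o /path/to/output_dir -i /path/to/input_dir -l /path/to/label_dir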


Pruning the Model

The tao model efficientdet_tf1 prune command removes parameters from the model to reduce the model size without compromising the integrity of the model itself.

The tao model efficientdet_tf1 prune command includes these parameters:


tao model efficientdet_tf1 prune [-h] -m <efficientdet model> -o <output_dir> -k <key> [-n <normalizer>] [-eq <equalization_criterion>] [-pg <pruning_granularity>] [-pth <pruning_threshold>] [-nf <min_num_filters>] [-el <excluded_list>] [--gpu_index <gpu_index>] [--log_file <log_file_path>]

Required Arguments

  • -m, --model: The path to a pretrained EfficientDet model.

  • -o, --output_dir: The path to output checkpoints.

  • -k, --key: The key to load a .tlt model.

Optional Arguments

  • -n, --normalizer: Specify max to normalize by dividing each norm by the maximum norm within a layer; specify L2 to normalize by dividing by the L2 norm of the vector comprising all kernel norms. The default value is max.

  • -eq, --equalization_criterion: The criteria to equalize the stats of inputs to an element-wise op layer or depth-wise convolutional layer. This parameter is useful for ResNets and MobileNets. Options include arithmetic_mean, geometric_mean, union, and intersection. The default option is union.

  • -pg, --pruning_granularity: The number of filters to remove at a time. The default value is 8.

  • -pth: The threshold to compare the normalized norm against. The default value is 0.1.

    Note

    NVIDIA recommends changing the threshold to keep the number of parameters in the model to within 10-20% of the original unpruned model.

  • -nf, --min_num_filters: The minimum number of filters to keep per layer. The default value is 16.

  • -el, --excluded_layers: A list of excluded layers (e.g. “-el item1 item2”). The default value is [].

  • --gpu_index: The index of the GPU to run pruning on. This argument can be used when the machine has multiple GPUs installed. Note that pruning can only run on a single GPU.

  • --log_file: The path to the log file. The default value is stdout.

  • -h, --help: Show this help message and exit.

After pruning, the model needs to be retrained. See Re-training the Pruned Model for more details.

Note

Due to the complexity of larger EfficientDet models, the pruning process will take significantly longer to finish. For example, pruning the EfficientDet-D5 model may take at least 25 minutes on a V100 server.


Using the Prune Command

Here’s an example of using the tao model efficientdet_tf1 prune command:


tao model efficientdet_tf1 prune -m /path/to/model.step-0.tlt \
                                 -o /path/to/pruned_model/ \
                                 -eq union \
                                 -pth 0.7 -k $KEY


Re-training the Pruned Model

Once the model has been pruned, there might be a slight decrease in accuracy because some previously useful weights may have been removed. To regain accuracy, we recommend retraining the pruned model over the same dataset. To do this, use the tao model efficientdet_tf1 train command, as documented in Training the Model, with an updated spec file that points to the newly pruned model as the pretrained model file.

We recommend turning off the regularizer or reducing the weight decay in the training_config to recover accuracy when retraining a pruned EfficientDet model. To do this, set l1_weight_decay and l2_weight_decay to 0 (or to smaller values), as documented in the Training Config section. The other parameters may be retained in the spec file from the previous training.
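
Below is a minimal sketch of the training_config changes for retraining, assuming the pruned model produced by the prune command above; the pruned model file name is a placeholder, and the ellipsis stands for the remaining fields from the original training spec:

training_config {
  ...
  pruned_model_path: "/path/to/pruned_model/model.tlt"  # point to the newly pruned model
  l1_weight_decay: 0.0                                  # turn off or reduce regularization
  l2_weight_decay: 0.0
}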

Exporting the Model

Exporting the model decouples the training process from deployment and allows conversion to TensorRT engines outside the TAO environment. TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment. The exported model may be used universally across training and deployment hardware. The exported model format is referred to as .etlt. The .etlt format is also encrypted and uses the same key as the .tlt model it was exported from. This key is required when deploying the model.

Exporting the EfficientDet Model

Here’s an example of the command line arguments of the tao model efficientdet_tf1 export command:


tao model efficientdet_tf1 export [-h] -m <path to the .tlt model file> -e <path to experiment spec file> -k <key> [-o <path to output file>] [--gpu_index <gpu_index>] [--log_file <log_file_path>] [--verbose]

Required Arguments

  • -m, --model_path: The path to the .tlt model file to be exported

  • -k, --key: The key used to save the .tlt model file

  • -e, --experiment_spec: The path to the spec file

  • -o, --output_path: The path to save the exported model

Sample Usage

Here’s a sample command to export an EfficientDet model to a .etlt file.


tao model efficientdet_tf1 export -m /path/to/model.step-0.tlt \
                                  -o /path/to/export/model.step-0.etlt \
                                  -e /ws/spec.txt \
                                  -k $KEY


For TensorRT engine generation, validation, and int8 calibration, refer to the TAO Deploy documentation.

For deploying to DeepStream, refer to the Integrating an EfficientDet (TF1/TF2) Model page.
