EfficientDet (TF2) with TAO Deploy

The TF2 EfficientDet .etlt file generated by tao export is taken as input by tao-deploy to generate an optimized TensorRT engine. For more information about training a TF2 EfficientDet model, refer to the TF2 EfficientDet training documentation.

The same spec file used with the tao efficientdet_tf2 export command can be used here.

Export Config

The export configuration contains the parameters for exporting a .etlt model to a TensorRT engine, which can be used for deployment.

Field                Description                                                       Data Type and Constraints   Recommended/Typical Value
-------------------  ----------------------------------------------------------------  --------------------------  -------------------------
output_path          The path to save the exported .etlt model                          string                      False
engine_file          The path where the generated engine will be stored                 string                      False
cal_image_dir        The directory containing images to be used for calibration         string                      False
cal_cache_file       The path to the calibration cache file                             string                      False
batches              The number of batches to iterate over for calibration              unsigned int                10
batch_size           The batch size for each calibration batch                          unsigned int                1
data_type            The precision to be used for the TensorRT engine                   string                      FP32
min_batch_size       The minimum batch size used for the optimization profile shape     unsigned int                1
opt_batch_size       The optimal batch size used for the optimization profile shape     unsigned int                1
max_batch_size       The maximum batch size used for the optimization profile shape     unsigned int                1
max_workspace_size   The maximum workspace size for the TensorRT engine, in GB          unsigned int                2
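The min_batch_size, opt_batch_size, and max_batch_size fields define the dynamic-shape optimization profile that is passed to TensorRT. The following Python sketch is illustrative only, not tao-deploy's actual code; the input name "input" and the 512x512x3 NHWC shape are assumptions, since the real values come from the exported model:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# max_workspace_size is given in GB in the spec file (2 -> 2 GiB here)
config.max_workspace_size = 2 << 30

# One (min, opt, max) shape triple per input tensor; the leading
# dimension is the batch size taken from the spec file
profile = builder.create_optimization_profile()
profile.set_shape("input",
                  (1, 512, 512, 3),   # min_batch_size
                  (1, 512, 512, 3),   # opt_batch_size
                  (1, 512, 512, 3))   # max_batch_size
config.add_optimization_profile(profile)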

Below is a sample spec file for TF2 EfficientDet.

data:
  loader:
    prefetch_size: 4
    shuffle_file: False
    shuffle_buffer: 10000
    cycle_length: 32
    block_length: 16
  max_instances_per_image: 100
  skip_crowd_during_training: True
  image_size: '512x512'
  num_classes: 91
  train_tfrecords:
    - '/workspace/tao-experiments/train-*'
  val_tfrecords:
    - '/workspace/tao-experiments/val-*'
  val_json_file: '/workspace/tao-experiments/raw-data/annotations/instances_val2017.json'
evaluate:
  batch_size: 8
  num_samples: 500
  max_detections_per_image: 100
  label_map: "/workspace/tao-experiments/efficientdet_tf2/specs/coco_labels.yaml"
  model_path: "/workspace/tao-experiments/efficientdet-d0.int8.engine"
export:
  max_batch_size: 1
  dynamic_batch_size: True
  min_score_thresh: 0.4
  output_path: "/workspace/tao-experiments/efficientdet-d0.etlt"
  engine_file: "/workspace/tao-experiments/efficientdet-d0.int8.engine"
  data_type: "int8"
  max_workspace_size: 2  # in GB
  cal_image_dir: "/workspace/tao-experiments/raw-data/val2017"
  cal_cache_file: "/workspace/tao-experiments/efficientdet-d0.cal"
  cal_batch_size: 16
  cal_batches: 10
inference:
  model_path: "/workspace/tao-experiments/efficientdet-d0.int8.engine"
  image_dir: "/workspace/tao-experiments/test_samples"
  output_dir: "/workspace/tao-experiments/annotated_images"
  dump_label: False
  batch_size: 1
  min_score_thresh: 0.4
  label_map: "/workspace/tao-experiments/efficientdet_tf2/specs/coco_labels.yaml"
key: 'nvidia_tao'
results_dir: '/workspace/tao-experiments/'
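With this spec, INT8 calibration typically consumes cal_batch_size * cal_batches = 16 * 10 = 160 images from cal_image_dir. A small sanity check you can run before engine generation; the *.jpg glob pattern is an assumption about what the directory contains:

import glob
import os

cal_image_dir = "/workspace/tao-experiments/raw-data/val2017"
needed = 16 * 10  # cal_batch_size * cal_batches from the spec above
found = len(glob.glob(os.path.join(cal_image_dir, "*.jpg")))
print(f"found {found} candidate calibration images, need at least {needed}")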

Use the following command to run TF2 EfficientDet engine generation:

tao-deploy efficientdet_tf2 gen_trt_engine -e /path/to/spec.yaml \
    key=<key> \
    export.output_path=/path/to/etlt/file \
    export.engine_file=/path/to/engine/file \
    export.data_type=<data_type>


Required Arguments

  • -e, --experiment_spec: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.

  • -k, --key: A user-specific encoding key to load a .etlt model.

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an INT8 TensorRT engine:

tao-deploy efficientdet_tf2 gen_trt_engine -e $DEFAULT_SPEC key=$YOUR_KEY \
    export.output_path=$ETLT_FILE \
    export.engine_file=$ENGINE_FILE \
    export.data_type=int8
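After generation, you can confirm that the engine deserializes and inspect its I/O bindings with the TensorRT Python API. A minimal sketch, assuming TensorRT 8.x (the binding-based calls below are deprecated in later releases) and the engine path from the sample spec:

import tensorrt as trt

engine_path = "/workspace/tao-experiments/efficientdet-d0.int8.engine"
logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine, then print each binding's name and shape
with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))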


Running Evaluation through TensorRT Engine

You can reuse the same spec file as the tao evaluate specification file.

Use the following command to run TF2 EfficientDet engine evaluation:

tao-deploy efficientdet_tf2 evaluate -e /path/to/spec.yaml \
    evaluate.model_path=/path/to/engine/file \
    results_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

  • evaluate.model_path: The engine file to run evaluation.

  • results_dir: The directory where evaluation results will be stored.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:

tao-deploy efficientdet_tf2 evaluate -e $DEFAULT_SPEC \
    evaluate.model_path=$ENGINE_FILE \
    results_dir=$RESULTS_DIR


Running Inference through TensorRT Engine

You can reuse the same spec file as the tao inference specification file.

Use the following command to run TF2 EfficientDet engine inference:

tao-deploy efficientdet_tf2 inference -e /path/to/spec.yaml \
    inference.model_path=/path/to/engine/file \
    results_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference specification file.

  • inference.model_path: The engine file to run inference.

  • results_dir: The directory where inference results will be stored.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

tao-deploy efficientdet_tf2 inference -e $DEFAULT_SPEC \
    inference.model_path=$ENGINE_FILE \
    results_dir=$RESULTS_DIR

The annotated images will be stored under $RESULTS_DIR/images_annotated, and KITTI-format predictions will be stored under $RESULTS_DIR/labels.
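The label files follow the KITTI text layout, so they can be consumed with a few lines of Python. A minimal sketch, assuming the standard 15-column format with an optional 16th confidence column (verify against your actual output files):

from pathlib import Path

def read_kitti_labels(label_file):
    """Parse one KITTI label file into a list of detections."""
    detections = []
    for line in Path(label_file).read_text().splitlines():
        fields = line.split()
        if len(fields) < 15:
            continue  # skip malformed lines
        detections.append({
            "class": fields[0],
            # bbox is (left, top, right, bottom) in pixel coordinates
            "bbox": tuple(float(v) for v in fields[4:8]),
            # confidence column is optional in KITTI-format output
            "score": float(fields[15]) if len(fields) > 15 else None,
        })
    return detections

for label_file in sorted(Path("/path/to/results/labels").glob("*.txt")):
    print(label_file.name, read_kitti_labels(label_file))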
