EfficientDet (TF1) with TAO Deploy
The TF1 EfficientDet .onnx file generated from tao model export is taken as an input to tao deploy to generate an optimized TensorRT engine. For more information about training a TF1 EfficientDet model, refer to the TF1 EfficientDet training documentation.
The same spec file used with the tao model efficientdet_tf1 export command can be used here.
Use the following command to run TF1 EfficientDet engine generation:
tao deploy efficientdet_tf1 gen_trt_engine [-h] [-v]
-m MODEL_PATH
-r RESULTS_DIR
[-k KEY]
[--data_type {fp32,fp16,int8}]
[--engine_file ENGINE_FILE]
[--cal_image_dir CAL_IMAGE_DIR]
[--cal_cache_file CAL_CACHE_FILE]
[--max_batch_size MAX_BATCH_SIZE]
[--min_batch_size MIN_BATCH_SIZE]
[--opt_batch_size OPT_BATCH_SIZE]
[--batch_size BATCH_SIZE]
[--batches BATCHES]
[--max_workspace_size MAX_WORKSPACE_SIZE]
[-s STRICT_TYPE_CONSTRAINTS]
[--force_ptq FORCE_PTQ]
[--gpu_index GPU_INDEX]
[--log_file LOG_FILE]
Required Arguments
-m, --model_path: The .onnx or .etlt model to be converted
-r, --results_dir: The directory where the JSON status-log file will be dumped
Optional Arguments
-h, --help: Show this help message and exit
-k, --key: A user-specific encoding key to load a .etlt model
--data_type: The desired engine data type. The options are fp32, fp16, and int8; the default value is fp32. A calibration cache is generated in INT8 mode, and the INT8 arguments listed below are required when int8 is selected.
--engine_file: The path to the serialized TensorRT engine file. Note that a TensorRT engine is hardware specific and cannot be generalized across GPUs: the engine can only be deployed on a GPU identical to the one it was generated on.
-s, --strict_type_constraints: A Boolean flag indicating whether to apply the TensorRT strict type constraints when building the TensorRT engine
--gpu_index: The index of the (discrete) GPU to use for exporting the model. You can specify the GPU index if the machine has multiple GPUs installed. Note that gen_trt_engine can only run on a single GPU.
--log_file: The path to the log file. The default path is “stdout”.
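For example, a minimal FP16 engine generation might look like the following (a sketch; the model and output paths are placeholders):
tao deploy efficientdet_tf1 gen_trt_engine -m /workspace/model.step-1000.onnx \
                                           -r /export/ \
                                           --data_type fp16 \
                                           --engine_file /export/fp16.engine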
INT8 Engine Generation Required Arguments
--cal_image_dir: The directory of images to use for calibration. The number of batches is taken from the --batches parameter, and the batch size from the --batch_size parameter. For EfficientDet, calibration occurs as a one-step process, with the data batches generated on the fly. Be sure that the directory passed to --cal_image_dir contains at least batch_size * batches images. The valid image extensions are .jpg, .jpeg, and .png. The input dimensions of the calibration tensors are derived from the input layer of the .etlt model.
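Because calibration needs at least batch_size * batches images, it can help to sanity-check the calibration directory up front. A minimal shell sketch (the directory path is a placeholder):
# Count candidate calibration images; expect at least batch_size * batches,
# e.g. 8 * 10 = 80 for the sample INT8 command below.
find /workspace/raw-data/val2017 -type f \
    \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) | wc -l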
INT8 Engine Generation Optional Arguments
--cal_cache_file: The path to save the calibration cache file to. The default value is ./cal.bin.
--batches: The number of batches to use for calibration. The default value is 10.
--batch_size: The batch size to use for calibration. The default value is 1.
--max_batch_size: The maximum batch size of the TensorRT engine. The default value is 1.
--min_batch_size: The minimum batch size of the TensorRT engine. The default value is 1.
--opt_batch_size: The optimal batch size of the TensorRT engine. The default value is 1.
--max_workspace_size: The maximum workspace size of the TensorRT engine, in GB. The default value is 2.
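The min/opt/max batch-size flags define the optimization profile of the generated engine. Assuming the exported .onnx model has a dynamic batch dimension, a sketch (paths and values are illustrative) that builds an FP16 engine covering batch sizes 1 through 8 and tuned for 4:
tao deploy efficientdet_tf1 gen_trt_engine -m /workspace/model.step-1000.onnx \
                                           -r /export/ \
                                           --data_type fp16 \
                                           --min_batch_size 1 \
                                           --opt_batch_size 4 \
                                           --max_batch_size 8 \
                                           --engine_file /export/fp16_dyn.engine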
Sample Usage
Here’s an example of using the gen_trt_engine command to generate an INT8 TensorRT engine:
tao deploy efficientdet_tf1 gen_trt_engine -m /workspace/model.step-1000.onnx \
-r /export/ \
--data_type int8 \
--batch_size 8 \
--batches 10 \
--cal_cache_file /export/cal.bin \
--cal_image_dir /workspace/raw-data/val2017 \
--engine_file /export/int8.engine
Running Evaluation through the TensorRT Engine
The label file is derived from dataset_config.validation_json_file in the spec file. The spec file is the same as the TAO evaluation spec file. Sample spec file:
dataset_config {
num_classes: 91
image_size: "512,512"
training_file_pattern: "/workspace/tao-experiments/data/train*.tfrecord"
validation_file_pattern: "/workspace/tao-experiments/data/val*.tfrecord"
validation_json_file: "/workspace/tao-experiments/data/raw-data/annotations/instances_val2017.json"
max_instances_per_image: 100
skip_crowd_during_training: True
}
eval_config {
eval_batch_size: 16
eval_epoch_cycle: 2
eval_samples: 500
min_score_thresh: 0.4
max_detections_per_image: 100
}
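The class labels are read from the COCO-format annotation file referenced by validation_json_file; the label names live in its categories section. An illustrative, heavily truncated sketch of that file’s structure:
{
  "images":      [{"id": 397133, "file_name": "000000397133.jpg", "width": 640, "height": 427}],
  "annotations": [{"id": 1, "image_id": 397133, "category_id": 1, "bbox": [102.5, 43.1, 58.2, 120.4]}],
  "categories":  [{"id": 1, "name": "person", "supercategory": "person"}]
}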
Use the following command to run TF1 EfficientDet engine evaluation:
tao deploy efficientdet_tf1 evaluate [-h]
-e EXPERIMENT_SPEC
-m MODEL_PATH
-r RESULTS_DIR
[-i IMAGE_DIR]
[--gpu_index GPU_INDEX]
[--log_file LOG_FILE]
Required Arguments
-e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate spec file.
-m, --model_path: The engine file to run evaluation on.
-r, --results_dir: The directory where evaluation results will be stored.
Optional Arguments
-i, --image_dir: The directory where test images are located
Sample Usage
Here’s an example of using the evaluate
command to run evaluation with the TensorRT engine:
tao deploy efficientdet_tf1 evaluate -m /export/int8.engine \
-e /workspace/efficientdet_retrain.txt \
-i /workspace/raw-data/val2017 \
-r /workspace/tao-experiments/inference
Running Inference through the TensorRT Engine
Use the following command to run TF1 EfficientDet engine inference:
tao deploy efficientdet_tf1 inference [-h]
-e EXPERIMENT_SPEC
-m MODEL_PATH
[-i IMAGE_DIR]
[-b BATCH_SIZE]
[-r RESULTS_DIR]
[--gpu_index GPU_INDEX]
[--log_file LOG_FILE]
Required Arguments
-e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao evaluate spec file.
-m, --model_path: The engine file to run inference on.
-r, --results_dir: The directory where inference results will be stored.
Optional Arguments
-i, --image_dir: The directory where test images are located
-b, --batch_size: The batch size used for inference. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.eval_batch_size is used instead.
Sample Usage
Here’s an example of using the inference
command to run inference with the TensorRT engine:
tao deploy efficientdet_tf1 inference -m /export/int8.engine \
-e /workspace/efficientdet_retrain.txt \
-i /workspace/raw-data/val2017 \
-r /workspace/tao-experiments/inference
The visualization will be stored under $RESULTS_DIR/images_annotated, and KITTI-format predictions will be stored under $RESULTS_DIR/labels.
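Each line of a KITTI-format prediction file describes one detection using the standard KITTI label fields (class, truncation, occlusion, alpha, 2D bbox, 3D dimensions, 3D location, rotation) followed by a confidence score; for a 2D detector the 3D fields are typically zero. An illustrative line (values are made up):
person 0.00 0 0.00 634.20 178.50 674.70 412.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.87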