Classification (TF2) with TAO Deploy
The TF2 classification ONNX file generated by `tao export` is taken as input by `tao deploy` to generate an optimized TensorRT engine. For more information about training TF2 classification models, refer to the TF2 Classification training documentation.

The same spec file can be used with the `tao model classification_tf2 export` command.
GenTrtEngine Config
The `gen_trt_engine` configuration contains the parameters for converting an exported `.onnx` model into a TensorRT engine, which can be used for deployment.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
|-------|-------------|---------------------------|---------------------------|
| onnx_file | The path to the exported `.onnx` model | string | |
| trt_engine | The path where the generated engine will be stored | string | |
| results_dir | The directory for the output log. If not specified, the log is saved under the global results directory as `$results_dir/gen_trt_engine` | string | |
| tensorrt | The TensorRT configuration | Dict | |
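Before generating an engine, it can help to confirm that the exported ONNX file is well formed. Below is a minimal sketch, assuming the `onnx` Python package is installed and using the sample path from the spec file later in this page:

```python
import onnx

# Path is illustrative; use the value of gen_trt_engine.onnx_file from your spec.
model = onnx.load("/results/efficientnet-b0.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid

# Print the graph inputs so you can confirm input names and shapes.
print([(i.name, [d.dim_value for d in i.type.tensor_type.shape.dim])
       for i in model.graph.input])
```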
The `tensorrt` configuration contains the specification of the TensorRT engine and the calibration requirements.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
|-------|-------------|---------------------------|---------------------------|
| data_type | The precision to be used for the TensorRT engine | string | FP32 |
| min_batch_size | The minimum batch size used for the optimization profile shape | unsigned int | 1 |
| opt_batch_size | The optimal batch size used for the optimization profile shape | unsigned int | 1 |
| max_batch_size | The maximum batch size used for the optimization profile shape | unsigned int | 1 |
| max_workspace_size | The maximum workspace size, in GB, for the TensorRT engine | unsigned int | 2 |
| calibration | The calibration configuration | Dict | |
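The three batch-size fields map to a TensorRT optimization profile, and the workspace size bounds the builder's scratch memory. Below is a minimal sketch of the corresponding TensorRT Python API calls; the input tensor name and NHWC layout are assumptions for illustration, and TAO Deploy performs the equivalent steps internally:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("/results/efficientnet-b0.onnx", "rb") as f:  # path from the sample spec
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
# max_workspace_size: 2 (GB) from the table above, expressed in bytes
# (TensorRT 8.4+ API; older releases use config.max_workspace_size).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)

# min/opt/max batch sizes become the dynamic-shape optimization profile.
profile = builder.create_optimization_profile()
name = network.get_input(0).name   # actual name/layout depend on the exported model
profile.set_shape(name, (1, 256, 256, 3), (1, 256, 256, 3), (16, 256, 256, 3))
config.add_optimization_profile(profile)

serialized_engine = builder.build_serialized_network(network, config)
```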
The `calibration` configuration specifies the location of the calibration data and where to save the calibration cache file.

| Field | Description | Data Type and Constraints | Recommended/Typical Value |
|-------|-------------|---------------------------|---------------------------|
| cal_image_dir | The directory containing images to be used for calibration | string | |
| cal_cache_file | The path to the calibration cache file | string | |
| cal_batches | The number of batches to iterate over for calibration | unsigned int | 10 |
| cal_batch_size | The batch size for each calibration batch | unsigned int | 1 |
| cal_data_file | The path to the calibration data file | string | |
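For INT8 engines, TensorRT pulls calibration batches through a calibrator object and caches the resulting scales in `cal_cache_file`, so later builds can skip calibration. Below is a minimal sketch of such a calibrator, assuming `pycuda` is installed and the batches are already preprocessed NumPy arrays; it mirrors the idea only, as TAO Deploy ships its own calibrator:

```python
import os
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

class SimpleCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed image batches to TensorRT and caches the scales."""

    def __init__(self, batches, cache_file):
        super().__init__()
        self.batches = batches          # list of float32 arrays, one per batch
        self.cache_file = cache_file    # plays the role of cal_cache_file
        self.index = 0
        self.device_mem = cuda.mem_alloc(batches[0].nbytes)

    def get_batch_size(self):
        return self.batches[0].shape[0]   # plays the role of cal_batch_size

    def get_batch(self, names):
        if self.index >= len(self.batches):   # len(batches) ~ cal_batches
            return None                       # tells TensorRT calibration is done
        cuda.memcpy_htod(self.device_mem,
                         np.ascontiguousarray(self.batches[self.index]))
        self.index += 1
        return [int(self.device_mem)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```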
Below is a sample spec file for TF2 classification:

```yaml
results_dir: '/results'
dataset:
  num_classes: 20
  train_dataset_path: "/workspace/tao-experiments/data/split/train"
  val_dataset_path: "/workspace/tao-experiments/data/split/val"
  preprocess_mode: 'torch'
  augmentation:
    enable_color_augmentation: True
    enable_center_crop: True
train:
  qat: False
  checkpoint: ''
  batch_size_per_gpu: 64
  num_epochs: 80
  optim_config:
    optimizer: 'sgd'
  lr_config:
    scheduler: 'cosine'
    learning_rate: 0.05
    soft_start: 0.05
  reg_config:
    type: 'L2'
    scope: ['conv2d', 'dense']
    weight_decay: 0.00005
model:
  backbone: 'efficientnet-b0'
  input_width: 256
  input_height: 256
  input_channels: 3
  input_image_depth: 8
evaluate:
  dataset_path: '/workspace/tao-experiments/data/split/test'
  checkpoint: ''
  trt_engine: '/results/efficientnet-b0.fp32.engine'
  top_k: 3
  batch_size: 256
  n_workers: 8
inference:
  checkpoint: ''
  trt_engine: '/results/efficientnet-b0.fp32.engine'
  image_dir: '/workspace/tao-experiments/data/split/test/aeroplane'
  classmap: '/results/train/classmap.json'
export:
  checkpoint: ''
  onnx_file: '/results/efficientnet-b0.onnx'
gen_trt_engine:
  onnx_file: '/results/efficientnet-b0.onnx'
  trt_engine: '/results/efficientnet-b0.fp32.engine'
  tensorrt:
    data_type: "fp32"
    max_workspace_size: 4
    max_batch_size: 16
    calibration:
      cal_image_dir: '/workspace/tao-experiments/data/split/test'
      cal_data_file: '/results/calib.tensorfile'
      cal_cache_file: '/results/cal.bin'
      cal_batches: 10
```
Use the following command to run TF2 classification engine generation:

```sh
tao deploy classification_tf2 gen_trt_engine -e /path/to/spec.yaml \
    gen_trt_engine.onnx_file=/path/to/onnx/file \
    gen_trt_engine.trt_engine=/path/to/engine/file \
    gen_trt_engine.tensorrt.data_type=<data_type>
```
Required Arguments

-e, --experiment_spec: The path to the experiment spec file for setting up TensorRT engine generation. This should be the same as the export specification file.

Optional Arguments

-h, --help: Show this help message and exit.

-k, --key: A user-specific encoding key to load a .etlt model.

-r, --results_dir: A global results directory where the experiment outputs and logs are written, under the <task> subdirectory.
Sample Usage
Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:

```sh
tao deploy classification_tf2 gen_trt_engine -e $DEFAULT_SPEC \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=fp16
```
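After generation, a quick way to confirm the engine is valid is to deserialize it with the TensorRT runtime. Below is a minimal sketch, assuming the engine path from the sample spec:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("/results/efficientnet-b0.fp32.engine", "rb") as f, \
        trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List the engine's I/O tensors and shapes (TensorRT 8.5+ API).
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_shape(name))
```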
Running Evaluation through a TensorRT Engine

The same spec file as the TAO evaluation spec file can be used.
Use the following command to run TF2 classification engine evaluation:

```sh
tao deploy classification_tf2 evaluate -e /path/to/spec.yaml \
    evaluate.trt_engine=/path/to/engine/file \
    evaluate.results_dir=/path/to/outputs
```
Required Arguments

-e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

Optional Arguments

-h, --help: Show this help message and exit.

-k, --key: A user-specific encoding key to load a .etlt model.

-r, --results_dir: A global results directory where the experiment outputs and logs are written, under the <task> subdirectory.
Sample Usage
Here’s an example of using the evaluate command to run evaluation with a TensorRT engine:

```sh
tao deploy classification_tf2 evaluate -e $DEFAULT_SPEC \
    evaluate.trt_engine=$ENGINE_FILE \
    results_dir=$RESULTS_DIR
```
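Evaluation reports top-k accuracy (top_k: 3 in the sample spec). As a reference for how that metric is defined, here is a minimal NumPy sketch; it is illustrative only, not TAO Deploy's internal code:

```python
import numpy as np

def top_k_accuracy(probs, labels, k=3):
    """Fraction of samples whose true label is among the k highest scores."""
    top_k = np.argsort(probs, axis=1)[:, -k:]   # indices of the k largest scores
    return float(np.mean([label in row for label, row in zip(labels, top_k)]))

# Toy usage: 3 samples, 4 classes.
probs = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1]])
print(top_k_accuracy(probs, np.array([0, 0, 3]), k=3))  # 0.333...
```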
Running Inference through a TensorRT Engine

The same spec file as the TAO inference spec file can be used.
Use the following command to run TF2 classification engine inference:

```sh
tao deploy classification_tf2 inference -e /path/to/spec.yaml \
    inference.trt_engine=/path/to/engine/file \
    results_dir=/path/to/outputs
```
Required Arguments

-e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference specification file.

Optional Arguments

-h, --help: Show this help message and exit.

-k, --key: A user-specific encoding key to load a .etlt model.

-r, --results_dir: A global results directory where the experiment outputs and logs are written, under the <task> subdirectory.
Sample Usage
Here’s an example of using the inference command to run inference with a TensorRT engine:

```sh
tao deploy classification_tf2 inference -e $DEFAULT_SPEC \
    inference.trt_engine=$ENGINE_FILE \
    inference.results_dir=$RESULTS_DIR
```
The CSV predictions are stored at `$RESULTS_DIR/result.csv`.
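For a quick look at the predictions afterwards, here is a minimal sketch with pandas; the column layout is an assumption for illustration, so inspect your result.csv to confirm:

```python
import pandas as pd

# Column names are illustrative assumptions; adjust after checking the file.
df = pd.read_csv("/path/to/outputs/result.csv", header=None,
                 names=["image_path", "predicted_label", "confidence"])
print(df["predicted_label"].value_counts())   # per-class prediction counts
```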