Mask2former with TAO Deploy
To generate an optimized TensorRT engine, a Mask2former `.onnx` file, which is first generated using `tao model mask2former export`, is taken as an input to `tao deploy mask2former gen_trt_engine`. For more information about training a Mask2former model, refer to the Mask2former training documentation.

To convert the `.onnx` file, you can reuse the spec file from the `tao model mask2former export` command.
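Before building an engine, it can be useful to verify that the exported ONNX graph is valid and that its input shape matches the `input_channel`, `input_width`, and `input_height` values in the spec. The following is a minimal sketch using the `onnx` Python package (not part of the TAO workflow; the file path is a placeholder):

```python
# Sanity-check an exported Mask2former ONNX file before engine generation.
import onnx

model = onnx.load("/path/to/onnx_file")  # placeholder path
onnx.checker.check_model(model)          # raises if the graph is malformed

# Print each graph input and its shape so the spec's input_channel,
# input_width, and input_height can be checked against the actual graph.
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```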
gen_trt_engine
The `gen_trt_engine` parameter defines TensorRT engine generation.
```yaml
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  input_channel: 3
  input_width: 960
  input_height: 544
  tensorrt:
    data_type: int8
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 10
    max_batch_size: 10
    calibration:
      cal_image_dir:
        - /path/to/cal/images
      cal_cache_file: /path/to/cal.bin
      cal_batch_size: 10
      cal_batches: 1000
```
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `onnx_file` | string | | The path to the `.onnx` model to be converted | |
| `trt_engine` | string | | The path where the generated TensorRT engine will be stored | |
| `input_channel` | unsigned int | 3 | The input channel size. Only the value 3 is supported. | 3 |
| `input_width` | unsigned int | 960 | The input width | >0 |
| `input_height` | unsigned int | 544 | The input height | >0 |
| `batch_size` | unsigned int | -1 | The batch size of the ONNX model (`-1` indicates a dynamic batch size) | >=-1 |
tensorrt
The `tensorrt` parameter defines the TensorRT engine build settings.
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `data_type` | string | fp32 | The precision to be used for the TensorRT engine | fp32/fp16/int8 |
| `workspace_size` | unsigned int | 1024 | The maximum workspace size for the TensorRT engine | >1024 |
| `min_batch_size` | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| `opt_batch_size` | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| `max_batch_size` | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
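The three batch sizes map onto a TensorRT optimization profile: the engine accepts any batch size between `min_batch_size` and `max_batch_size`, with kernel tactics tuned for `opt_batch_size`. The sketch below shows the equivalent TensorRT Python API calls; it only mirrors what `gen_trt_engine` configures internally, and the input tensor name `inputs` plus the MiB interpretation of `workspace_size` are assumptions:

```python
# Illustration of how min/opt/max batch sizes and workspace_size translate
# into TensorRT builder settings (TensorRT 8.4+ Python API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Assumption: workspace_size in the spec is in MiB (1024 MiB here).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1024 << 20)

profile = builder.create_optimization_profile()
profile.set_shape(
    "inputs",               # ONNX input tensor name (assumption)
    min=(1, 3, 544, 960),   # min_batch_size x input_channel x height x width
    opt=(10, 3, 544, 960),  # opt_batch_size
    max=(10, 3, 544, 960),  # max_batch_size
)
config.add_optimization_profile(profile)
```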
calibration
The `calibration` parameter defines TensorRT engine generation with PTQ INT8 calibration.
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `cal_image_dir` | string list | | The list of paths that contain images used for calibration | |
| `cal_cache_file` | string | | The path to the calibration cache file to be dumped | |
| `cal_batch_size` | unsigned int | 1 | The batch size per batch during calibration | >0 |
| `cal_batches` | unsigned int | 1 | The number of batches to calibrate | >0 |
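These parameters drive TensorRT's post-training INT8 calibration. TAO Deploy provides its own calibrator, so the sketch below only illustrates the general `IInt8EntropyCalibrator2` pattern these values feed into; `load_next_batch` is a hypothetical helper that loads and preprocesses one batch of images from `cal_image_dir`:

```python
# General shape of a TensorRT INT8 entropy calibrator (illustrative only).
import os

import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, cache_file, batch_size, num_batches):
        super().__init__()
        self.cache_file = cache_file    # cal_cache_file
        self.batch_size = batch_size    # cal_batch_size
        self.num_batches = num_batches  # cal_batches
        self.count = 0
        # Device buffer for one batch of preprocessed images (N, 3, 544, 960).
        self.device_input = cuda.mem_alloc(batch_size * 3 * 544 * 960 * 4)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.count >= self.num_batches:
            return None  # tells TensorRT that calibration data is exhausted
        batch = load_next_batch(self.batch_size)  # hypothetical helper
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        self.count += 1
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reuse an existing cache so calibration images are not required
        # on subsequent engine builds.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```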
Use the following command to run Mask2former engine generation:
```sh
tao deploy mask2former gen_trt_engine -e /path/to/spec.yaml \
    results_dir=/path/to/results \
    gen_trt_engine.onnx_file=/path/to/onnx/file \
    gen_trt_engine.trt_engine=/path/to/engine/file \
    gen_trt_engine.tensorrt.data_type=<data_type>
```
Required Arguments
* `-e, --experiment_spec`: The experiment spec file to set up TensorRT engine generation
Optional Arguments
* `results_dir`: The directory where the JSON status-log file will be dumped
* `gen_trt_engine.onnx_file`: The `.onnx` model to be converted
* `gen_trt_engine.trt_engine`: The path where the generated engine will be stored
* `gen_trt_engine.tensorrt.data_type`: The precision to be exported
Sample Usage
Here’s an example of using the `gen_trt_engine` command to generate an FP16 TensorRT engine:

```sh
tao deploy mask2former gen_trt_engine -e $DEFAULT_SPEC \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=fp16
```
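After generation, the engine can be sanity-checked outside of TAO by deserializing it with the TensorRT Python bindings and listing its I/O tensors. A minimal sketch (assumes TensorRT 8.5+ and a placeholder engine path):

```python
# Deserialize the generated engine and print its input/output tensors.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("/path/to/engine/file", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))
```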
Running Evaluation through a TensorRT Engine
You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:
```yaml
evaluate:
  trt_engine: /path/to/engine/file
data:
  type: 'coco_panoptic'
  val:
    name: "coco_2017_val_panoptic"
    panoptic_json: "/datasets/coco/annotations/panoptic_val2017.json"
    img_dir: "/datasets/coco/val2017"
    panoptic_dir: "/datasets/coco/panoptic_val2017"
  batch_size: 1
  num_workers: 2
```
Use the following command to run Mask2former engine evaluation:
```sh
tao deploy mask2former evaluate -e /path/to/spec.yaml \
    results_dir=/path/to/results \
    evaluate.trt_engine=/path/to/engine/file
```
Required Arguments
* `-e, --experiment_spec`: The experiment spec file for evaluation. This should be the same as the `tao evaluate` spec file.
Optional Arguments
* `results_dir`: The directory where the JSON status-log file and evaluation results will be dumped
* `evaluate.trt_engine`: The engine file for evaluation
Sample Usage
Here’s an example of using the `evaluate` command to run evaluation with a TensorRT engine:

```sh
tao deploy mask2former evaluate -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    evaluate.trt_engine=$ENGINE_FILE
```
Running Inference through a TensorRT Engine
You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:
```yaml
inference:
  trt_engine: /path/to/engine/file
  color_map: /path/to/colors.yaml
  label_map: /path/to/labels.csv
data:
  type: 'coco_panoptic'
  test:
    img_dir: /path/to/test_images/
  batch_size: 1
```
Use the following command to run Mask2former engine inference:
```sh
tao deploy mask2former inference -e /path/to/spec.yaml \
    results_dir=/path/to/results \
    inference.trt_engine=/path/to/engine/file
```
Required Arguments
* `-e, --experiment_spec`: The experiment spec file for inference. This should be the same as the `tao inference` spec file.
Optional Arguments
* `results_dir`: The directory where the JSON status-log file and inference results will be dumped
* `inference.trt_engine`: The engine file for inference
Sample Usage
Here’s an example of using the `inference` command to run inference with a TensorRT engine:

```sh
tao deploy mask2former inference -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    inference.trt_engine=$ENGINE_FILE
```
The visualization will be stored in `$RESULTS_DIR/images_annotated`, and the COCO-format predictions will be stored under `$RESULTS_DIR/labels`.
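The on-disk layout of the label files is not spelled out here, but assuming the COCO-format predictions are dumped as JSON files, a quick hedged sketch for inspecting them:

```python
# Peek at the prediction files dumped by inference (layout is an assumption).
import glob
import json

for path in sorted(glob.glob("/path/to/results/labels/*.json")):
    with open(path) as f:
        preds = json.load(f)
    count = len(preds) if isinstance(preds, (list, dict)) else "?"
    print(f"{path}: {count} entries")
```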