
SiameseOI with TAO Deploy

To generate an optimized TensorRT engine, a SiameseOI .etlt or .onnx file, which is first generated using tao model optical_inspection export, is taken as an input to tao deploy optical_inspection gen_trt_engine. For more information about training a SiameseOI model, refer to the SiameseOI training documentation.
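As a reference point, an export invocation typically looks like the following. This is a sketch only: the override keys export.checkpoint and export.onnx_file are assumptions based on the common TAO export spec layout, so consult your export spec for the exact names.

tao model optical_inspection export -e /path/to/spec.yaml \
    export.checkpoint=/path/to/trained/model.pth \
    export.onnx_file=/path/to/export/oi_model.onnx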

gen_trt_engine

The gen_trt_engine section in the experiment specification file provides options for generating a TensorRT engine from an .etlt or .onnx file. The following is an example configuration:

gen_trt_engine:
  results_dir: "${results_dir}/gen_trt_engine"
  onnx_file: "${results_dir}/export/oi_model.onnx"
  trt_engine: "${results_dir}/gen_trt_engine/oi_model.trt.v100"
  input_channel: 3
  input_width: 400
  input_height: 100
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 1
    max_batch_size: 1

Parameter      Datatype      Default  Description                                              Supported Values
results_dir    string        –        The path to the results directory                        –
onnx_file      string        –        The path to the exported .etlt or .onnx model            –
trt_engine     string        –        The absolute path to the generated TensorRT engine      –
input_channel  unsigned int  3        The input channel size; only a value of 3 is supported  3
input_width    unsigned int  400      The input width                                          >0
input_height   unsigned int  100      The input height                                         >0
batch_size     unsigned int  -1       The batch size of the ONNX model                         >=-1
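Assuming the dotted override convention shown for the other parameters in this section applies to these keys as well, any of them can be set at engine-generation time without editing the spec; for example (paths are illustrative):

tao deploy optical_inspection gen_trt_engine -e /path/to/spec.yaml \
    results_dir=/path/to/results/dir \
    gen_trt_engine.input_width=400 \
    gen_trt_engine.input_height=100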

tensorrt

The tensorrt parameter configures the options used to generate the TensorRT engine.

Parameter       Datatype      Default  Description                                                     Supported Values
data_type       string        fp32     The precision to be used for the TensorRT engine                fp32/fp16/int8
workspace_size  unsigned int  1024     The maximum workspace size for the TensorRT engine              >1024
min_batch_size  unsigned int  1        The minimum batch size used for the optimization profile shape  >0
opt_batch_size  unsigned int  1        The optimal batch size used for the optimization profile shape  >0
max_batch_size  unsigned int  1        The maximum batch size used for the optimization profile shape  >0
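For example, a minimal sketch of a tensorrt block that builds an FP16 engine with a dynamic batch profile from 1 to 8 (the batch and workspace values here are illustrative, not defaults):

tensorrt:
  data_type: fp16
  workspace_size: 2048
  min_batch_size: 1
  opt_batch_size: 4
  max_batch_size: 8

TensorRT tunes its kernels for opt_batch_size, so set it to the batch size you expect most often at inference time.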

Use the following command to run SiameseOI engine generation:

tao deploy optical_inspection gen_trt_engine -e /path/to/spec.yaml \
    results_dir=/path/to/results/dir \
    gen_trt_engine.onnx_file=/path/to/onnx/file \
    gen_trt_engine.trt_engine=/path/to/engine/file \
    gen_trt_engine.tensorrt.data_type=<data_type>

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file

  • results_dir: The global results directory. The engine generation log will be saved in the results_dir.

  • gen_trt_engine.onnx_file: The .onnx model to be converted

  • gen_trt_engine.trt_engine: The path where the generated engine will be stored

  • gen_trt_engine.tensorrt.data_type: The precision to use for the TensorRT engine

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:

tao deploy optical_inspection gen_trt_engine -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=fp16

You can reuse the same spec file that you used for TAO inference. The following is an example inference spec:

inference:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/inference"

Use the following command to run SiameseOI engine inference:

tao deploy optical_inspection inference -e /path/to/spec.yaml \
    results_dir=$RESULTS_DIR

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file

  • results_dir: The global results directory. The inference log will be saved in the results_dir.
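If the engine path is not set in the spec, it can presumably be overridden on the command line using the same dotted notation used elsewhere in this section; this is an assumption based on that convention, not a documented flag:

tao deploy optical_inspection inference -e /path/to/spec.yaml \
    results_dir=$RESULTS_DIR \
    inference.trt_engine=/path/to/engine/file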

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

tao deploy optical_inspection inference -e $DEFAULT_SPEC results_dir=$RESULTS_DIR
