VisualChangeNet-Classification with TAO Deploy

To generate an optimized TensorRT engine:

  1. Generate a VisualChangeNet .onnx file using tao model visual_changenet export.

  2. Specify the .onnx file as the input to tao deploy visual_changenet gen_trt_engine.

For more information about training a VisualChangeNet model, refer to the VisualChangeNet training documentation.
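The two-step workflow above can be sketched as a small shell script. All paths below are illustrative assumptions that mirror the `${results_dir}` convention used in the spec files on this page; the `tao` invocations are shown as comments because they require an environment with the TAO launcher installed.

```shell
#!/bin/sh
# Illustrative path layout (assumed), matching the ${results_dir}
# convention used in the spec files below.
RESULTS_DIR=/workspace/results
ONNX_FILE="$RESULTS_DIR/export/changenet_model.onnx"     # output of step 1
TRT_ENGINE="$RESULTS_DIR/gen_trt_engine/changenet.trt"   # output of step 2

# Step 1: export the trained checkpoint to ONNX (see the training docs):
#   tao model visual_changenet export -e /path/to/export_spec.yaml
# Step 2: build the TensorRT engine from the ONNX file:
#   tao deploy visual_changenet gen_trt_engine -e /path/to/spec.yaml \
#       -r "$RESULTS_DIR" \
#       gen_trt_engine.onnx_file="$ONNX_FILE" \
#       gen_trt_engine.trt_engine="$TRT_ENGINE"
echo "$TRT_ENGINE"
```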

gen_trt_engine

The following is an example configuration file for generating the TensorRT Engine:

```yaml
task: classify
gen_trt_engine:
  results_dir: "${results_dir}/gen_trt_engine"
  onnx_file: "${results_dir}/export/changenet_model.onnx"
  trt_engine: "${results_dir}/gen_trt_engine/changenet.trt"
  input_channel: 3
  input_width: 128
  input_height: 512
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 1
    max_batch_size: 1
```

The task section defines the change detection task for which the .onnx model was generated.

| Parameter | Data Type | Default | Description |
|-----------|-----------|---------|-------------|
| task | str | classify | A flag to indicate the change detection task. Supports two tasks: `segment` and `classify`, for segmentation and classification. |

The gen_trt_engine section in the experiment specification file provides options for generating a TensorRT engine from an .onnx file.

| Parameter | Datatype | Default | Description | Supported Values |
|-----------|----------|---------|-------------|------------------|
| results_dir | string | | The path to the results directory | |
| onnx_file | string | | The path to the exported ETLT or ONNX model | |
| trt_engine | string | | The absolute path to the generated TensorRT engine | |
| input_channel | unsigned int | 3 | The input channel size. Only a value of 3 is supported. | 3 |
| input_width | unsigned int | 128 | The input width | >0 |
| input_height | unsigned int | 512 | The input height | >0 |
| batch_size | unsigned int | -1 | The batch size of the ONNX model | >=-1 |

tensorrt

The tensorrt section provides the options for TensorRT engine generation.

| Parameter | Datatype | Default | Description | Supported Values |
|-----------|----------|---------|-------------|------------------|
| data_type | string | fp32 | The precision to be used for the TensorRT engine | fp32/fp16 |
| workspace_size | unsigned int | 1024 | The maximum workspace size for the TensorRT engine | >=1024 |
| min_batch_size | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| opt_batch_size | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| max_batch_size | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
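For example, to build an fp16 engine that accepts a range of batch sizes, the optimization-profile parameters can be set as in the following fragment. The batch values and engine path are illustrative assumptions; a dynamic batch dimension also requires the ONNX model to have been exported with a dynamic batch axis.

```yaml
gen_trt_engine:
  onnx_file: "${results_dir}/export/changenet_model.onnx"
  trt_engine: "${results_dir}/gen_trt_engine/changenet_fp16.trt"   # illustrative path
  input_channel: 3
  input_width: 128
  input_height: 512
  batch_size: -1          # keep the batch dimension dynamic
  tensorrt:
    data_type: fp16       # reduced precision; verify accuracy after conversion
    workspace_size: 2048
    min_batch_size: 1     # smallest batch the optimization profile covers
    opt_batch_size: 4     # batch size TensorRT tunes kernels for (illustrative)
    max_batch_size: 8     # largest batch the engine will accept (illustrative)
```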

Use the following command to run VisualChangeNet engine generation:

```shell
tao deploy visual_changenet gen_trt_engine -e /path/to/spec.yaml \
    -r /path/to/result_dir \
    gen_trt_engine.onnx_file=/path/to/onnx/file \
    gen_trt_engine.trt_engine=/path/to/engine/file \
    gen_trt_engine.tensorrt.data_type=<data_type>
```

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file.

  • -r, --results_dir: The global results directory. The engine-generation log is saved in this directory.

  • gen_trt_engine.onnx_file: The .onnx model to be converted.

  • gen_trt_engine.trt_engine: The path where the generated engine will be stored.

  • gen_trt_engine.tensorrt.data_type: The precision to be exported.

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an fp32 TensorRT engine:

```shell
tao deploy visual_changenet gen_trt_engine -e $DEFAULT_SPEC \
    -r $RESULTS_DIR \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=fp32
```

You can reuse the spec file that was specified for TAO inference. The following is an example inference spec:

```yaml
task: classify
model:
  classify:
    eval_margin: 0.5
dataset:
  classify:
    infer_dataset:
      csv_path: /path/to/infer.csv
      images_dir: /path/to/img_dir
    image_ext: .jpg
    batch_size: 16
    workers: 2
    num_input: 4
    input_map:
      LowAngleLight: 0
      SolderLight: 1
      UniformLight: 2
      WhiteLight: 3
    concat_type: linear
    grid_map:
      x: 2
      y: 2
    output_shape:
      - 128
      - 128
    augmentation_config:
      rgb_input_mean: [0.485, 0.456, 0.406]
      rgb_input_std: [0.229, 0.224, 0.225]
    num_classes: 2
inference:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/inference"
```

Use the following command to run VisualChangeNet-Classification engine inference:

```shell
tao deploy visual_changenet inference -e /path/to/spec.yaml \
    -r $RESULTS_DIR \
    inference.trt_engine=/path/to/engine/file \
    model.classify.eval_margin=0.5
```

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file. This should be the same as the tao inference spec file.

Optional Arguments

  • -r, --results_dir: The directory where JSON status-log file and inference results will be dumped.

  • inference.trt_engine: The engine file for inference.

  • model.classify.eval_margin: The evaluation threshold for VisualChangeNet-Classification.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

```shell
tao deploy visual_changenet inference -e $DEFAULT_SPEC \
    -r $RESULTS_DIR \
    inference.trt_engine=$ENGINE_FILE \
    model.classify.eval_margin=$EVAL_MARGIN
```

You can reuse the spec file that was specified for TAO evaluation through a TensorRT engine. The following is a sample spec file:

```yaml
task: classify
model:
  classify:
    eval_margin: 0.5
dataset:
  classify:
    infer_dataset:
      csv_path: /path/to/infer.csv
      images_dir: /path/to/img_dir
    image_ext: .jpg
    batch_size: 16
    workers: 2
    num_input: 4
    input_map:
      LowAngleLight: 0
      SolderLight: 1
      UniformLight: 2
      WhiteLight: 3
    concat_type: linear
    grid_map:
      x: 2
      y: 2
    output_shape:
      - 128
      - 128
    augmentation_config:
      rgb_input_mean: [0.485, 0.456, 0.406]
      rgb_input_std: [0.229, 0.224, 0.225]
    num_classes: 2
evaluate:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/evaluate"
```

Use the following command to run VisualChangeNet-Classification engine evaluation:

```shell
tao deploy visual_changenet evaluate -e /path/to/spec.yaml \
    -r $RESULTS_DIR \
    evaluate.trt_engine=/path/to/engine/file \
    model.classify.eval_margin=0.5
```

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate spec file.

Optional Arguments

  • -r, --results_dir: The directory where the JSON status-log file and evaluation results will be dumped.

  • evaluate.trt_engine: The engine file for evaluation.

  • model.classify.eval_margin: The evaluation threshold for VisualChangeNet-Classification.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with a TensorRT engine:

```shell
tao deploy visual_changenet evaluate -e $DEFAULT_SPEC \
    -r $RESULTS_DIR \
    evaluate.trt_engine=$ENGINE_FILE \
    model.classify.eval_margin=$EVAL_MARGIN
```

© Copyright 2024, NVIDIA. Last updated on Mar 22, 2024.