VisualChangeNet-Classification with TAO Deploy#

To generate an optimized TensorRT engine:

  1. Generate a VisualChangeNet .onnx file using tao model visual_changenet export.

  2. Specify the .onnx file as the input to tao deploy visual_changenet gen_trt_engine.

For more information about training a VisualChangeNet model, refer to the VisualChangeNet training documentation.

Converting an ONNX File into a TensorRT Engine#

gen_trt_engine#

First, set the base_experiment:

FILTER_PARAMS='{"network_arch": "visual_changenet"}'

BASE_EXPERIMENTS=$(tao-client visual_changenet list-base-experiments --filter_params "$FILTER_PARAMS")

Retrieve the PTM_ID from $BASE_EXPERIMENTS before setting base_experiment.
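For example, if the client returns a JSON list, the ID can be extracted with jq. This is a minimal sketch only; inspect the actual response first, since the field layout is an assumption:

# Sketch: select the first base experiment's ID from the JSON response.
# The ".id" field name is an assumption; adjust it to the actual layout.
PTM_ID=$(echo "$BASE_EXPERIMENTS" | jq -r '.[0].id')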

PTM_INFORMATION="{\"base_experiment\": [$PTM_ID]}"

tao-client visual_changenet patch-artifact-metadata --id $EXPERIMENT_ID --job_type experiment --update_info $PTM_INFORMATION

Required Arguments

  • --id: The unique identifier of the experiment whose metadata is being updated

See also

For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.

Then retrieve the specifications, using the ID of the completed export job as the parent job:

GEN_TRT_ENGINE_SPECS=$(tao-client visual_changenet get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID)

The following is an example specification, as returned in $GEN_TRT_ENGINE_SPECS, for generating the TensorRT engine. Set task to classify, and override other values as needed:

task: classify
gen_trt_engine:
  results_dir: "${results_dir}/gen_trt_engine"
  onnx_file: "${results_dir}/export/changenet_model.onnx"
  trt_engine: "${results_dir}/gen_trt_engine/changenet.trt"
  input_channel: 3
  input_width: 128
  input_height: 512
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 1
    max_batch_size: 1
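Because the specifications are passed back to the client as a string, individual values can be overridden in place before submitting the job. A minimal sketch using jq, assuming the specs are held as JSON (the field path mirrors the spec above):

# Sketch: switch the engine precision to fp16 before running gen_trt_engine.
# Assumes $GEN_TRT_ENGINE_SPECS holds JSON; adjust if your client returns YAML.
GEN_TRT_ENGINE_SPECS=$(echo "$GEN_TRT_ENGINE_SPECS" | jq '.gen_trt_engine.tensorrt.data_type = "fp16"')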

The task parameter defines the change detection task for which the .onnx model was generated.

| Parameter | Data Type | Default  | Description                                                                                    |
|-----------|-----------|----------|------------------------------------------------------------------------------------------------|
| task      | str       | classify | A flag to indicate the change detection task. Two tasks are supported: segment (segmentation) and classify (classification). |

The gen_trt_engine section in the experiment specification file provides options for generating a TensorRT engine from an .onnx file.

| Parameter     | Datatype     | Default | Description                                              | Supported Values |
|---------------|--------------|---------|----------------------------------------------------------|------------------|
| results_dir   | string       |         | The path to the results directory                        |                  |
| onnx_file     | string       |         | The path to the exported ETLT or ONNX model              |                  |
| trt_engine    | string       |         | The absolute path to the generated TensorRT engine       |                  |
| input_channel | unsigned int | 3       | The input channel size. Only a value of 3 is supported.  | 3                |
| input_width   | unsigned int | 128     | The input width                                          | >0               |
| input_height  | unsigned int | 512     | The input height                                         | >0               |
| batch_size    | unsigned int | -1      | The batch size of the ONNX model                         | >=-1             |

tensorrt#

The tensorrt parameter defines the TensorRT engine generation settings.

| Parameter      | Datatype     | Default | Description                                                     | Supported Values |
|----------------|--------------|---------|-----------------------------------------------------------------|------------------|
| data_type      | string       | fp32    | The precision to be used for the TensorRT engine                | fp32/fp16        |
| workspace_size | unsigned int | 1024    | The maximum workspace size for the TensorRT engine              | >1024            |
| min_batch_size | unsigned int | 1       | The minimum batch size used for the optimization profile shape  | >0               |
| opt_batch_size | unsigned int | 1       | The optimal batch size used for the optimization profile shape  | >0               |
| max_batch_size | unsigned int | 1       | The maximum batch size used for the optimization profile shape  | >0               |
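The three batch-size values define a TensorRT optimization profile: the engine accepts any batch size between min_batch_size and max_batch_size and is tuned for opt_batch_size. As a sketch, a dynamic-batch fp16 engine could be requested with command-line overrides such as the following (the batch sizes here are illustrative, and the ONNX model must have been exported with a dynamic batch dimension):

tao deploy visual_changenet gen_trt_engine -e $DEFAULT_SPEC \
           gen_trt_engine.batch_size=-1 \
           gen_trt_engine.tensorrt.data_type=fp16 \
           gen_trt_engine.tensorrt.min_batch_size=1 \
           gen_trt_engine.tensorrt.opt_batch_size=4 \
           gen_trt_engine.tensorrt.max_batch_size=8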

Use the following command to run VisualChangeNet engine generation through the FTMS client:

GEN_TRT_ENGINE_JOB_ID=$(tao-client visual_changenet experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$GEN_TRT_ENGINE_SPECS")

Alternatively, run it directly with the TAO Deploy CLI:

tao deploy visual_changenet gen_trt_engine -e /path/to/spec.yaml \
           results_dir=/path/to/result_dir \
           gen_trt_engine.onnx_file=/path/to/onnx/file \
           gen_trt_engine.trt_engine=/path/to/engine/file \
           gen_trt_engine.tensorrt.data_type=<data_type>

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file.

  • results_dir: The global results directory. The engine generation log will be saved in the results_dir.

  • gen_trt_engine.onnx_file: The .onnx model to be converted.

  • gen_trt_engine.trt_engine: The path where the generated engine will be stored.

  • gen_trt_engine.tensorrt.data_type: The precision of the generated engine.

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an fp32 TensorRT engine:

tao deploy visual_changenet gen_trt_engine -e $DEFAULT_SPEC \
                            results_dir=$RESULTS_DIR \
                            gen_trt_engine.onnx_file=$ONNX_FILE \
                            gen_trt_engine.trt_engine=$ENGINE_FILE \
                            gen_trt_engine.tensorrt.data_type=fp32
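As an optional sanity check, the generated engine can be loaded with trtexec, which ships with TensorRT. This is a sketch and assumes the tool is available in your environment:

# Sketch: load the engine and report timing; fails fast if the engine is invalid.
trtexec --loadEngine=$ENGINE_FILE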

Running Inference through TensorRT Engine#

You can reuse the spec file that was specified for TAO inference. First, retrieve the specifications:

INFER_SPECS=$(tao-client visual_changenet get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)

You can override values in $INFER_SPECS as needed. The following is an example inference spec:

task: classify
model:
  classify:
    eval_margin: 0.5
dataset:
  classify:
    infer_dataset:
      csv_path: /path/to/infer.csv
      images_dir: /path/to/img_dir
    image_ext: .jpg
    batch_size: 16
    workers: 2
    num_input: 4
    input_map:
      LowAngleLight: 0
      SolderLight: 1
      UniformLight: 2
      WhiteLight: 3
    concat_type: linear
    grid_map:
      x: 2
      y: 2
    output_shape:
      - 128
      - 128
    augmentation_config:
      rgb_input_mean: [0.485, 0.456, 0.406]
      rgb_input_std: [0.229, 0.224, 0.225]
    num_classes: 2
inference:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/inference"

Use the following command to run VisualChangeNet-Classification engine inference through the FTMS client:

INFER_JOB_ID=$(tao-client visual_changenet experiment-run-action --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)

Alternatively, run inference directly with the TAO Deploy CLI:

tao deploy visual_changenet inference -e /path/to/spec.yaml \
                            results_dir=$RESULTS_DIR \
                            inference.trt_engine=/path/to/engine/file \
                            model.classify.eval_margin=0.5

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file. This should be the same as the tao inference spec file.

Optional Arguments

  • results_dir: The directory where the JSON status-log file and inference results are saved (see the status-log sketch after this list).

  • inference.trt_engine: The engine file for inference.

  • model.classify.eval_margin: The evaluation threshold for VisualChangeNet-Classification.
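The status log is written as line-delimited JSON. As a sketch for checking the most recent entry, assuming TAO's default status.json filename and that jq is installed:

# Sketch: print the latest status record from the inference run.
tail -n 1 $RESULTS_DIR/inference/status.json | jq .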

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

tao deploy visual_changenet inference -e $DEFAULT_SPEC \
                            results_dir=$RESULTS_DIR \
                            inference.trt_engine=$ENGINE_FILE \
                            model.classify.eval_margin=$EVAL_MARGIN

Running Evaluation through a TensorRT Engine#

To run evaluation through a TensorRT engine, you can reuse the spec file that was specified for TAO evaluation. First, retrieve the specifications, then run the evaluate action through the FTMS client:

EVAL_SPECS=$(tao-client visual_changenet get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)

EVAL_JOB_ID=$(tao-client visual_changenet experiment-run-action --action evaluate --job_type experiment --id $EXPERIMENT_ID --specs "$EVAL_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)

The following is a sample spec file:

task: classify
model:
  classify:
    eval_margin: 0.5
dataset:
  classify:
    test_dataset:
      csv_path: /path/to/test.csv
      images_dir: /path/to/img_dir
    image_ext: .jpg
    batch_size: 16
    workers: 2
    num_input: 4
    input_map:
      LowAngleLight: 0
      SolderLight: 1
      UniformLight: 2
      WhiteLight: 3
    concat_type: linear
    grid_map:
      x: 2
      y: 2
    output_shape:
      - 128
      - 128
    augmentation_config:
      rgb_input_mean: [0.485, 0.456, 0.406]
      rgb_input_std: [0.229, 0.224, 0.225]
    num_classes: 2
evaluate:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/inference"

Use the following command to run VisualChangeNet-Classification engine evaluation:

tao deploy visual_changenet evaluate -e /path/to/spec.yaml \
                            results_dir=$RESULTS_DIR \
                            evaluate.trt_engine=/path/to/engine/file \
                            model.classify.eval_margin=0.5

Required Arguments

  • -e, --experiment_spec_file: The path to the experiment spec file for evaluation. This should be the same as the tao evaluate spec file.

Optional Arguments

  • results_dir: The directory where the JSON status-log file and evaluation results will be dumped.

  • evaluate.trt_engine: The engine file for evaluation.

  • model.classify.eval_margin: The evaluation threshold for VisualChangeNet-Classification.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with a TensorRT engine:

tao deploy visual_changenet evaluate -e $DEFAULT_SPEC \
                            results_dir=$RESULTS_DIR \
                            evaluate.trt_engine=$ENGINE_FILE \
                            model.classify.eval_margin=$EVAL_MARGIN