VisualChangeNet-Segmentation with TAO Deploy#
To generate an optimized TensorRT engine:

1. Generate a VisualChangeNet .onnx file using tao model visual_changenet export.
2. Specify the .onnx file as the input to tao deploy visual_changenet gen_trt_engine.
For more information about training a VisualChangeNet model, refer to the VisualChangeNet training documentation.
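For example, the two steps can be chained in one shell session. The following is a minimal sketch; the export.checkpoint and export.onnx_file keys, and all paths, are illustrative assumptions rather than values taken from this page:

# Step 1 (assumed keys): export the trained checkpoint to ONNX.
tao model visual_changenet export -e /path/to/train_spec.yaml \
    export.checkpoint=/path/to/changenet_model.pth \
    export.onnx_file=/path/to/changenet_model.onnx

# Step 2: convert the ONNX file into a TensorRT engine.
tao deploy visual_changenet gen_trt_engine -e /path/to/deploy_spec.yaml \
    gen_trt_engine.onnx_file=/path/to/changenet_model.onnx \
    gen_trt_engine.trt_engine=/path/to/changenet.trt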
Converting an ONNX File into TensorRT Engine#
gen_trt_engine#
The following walks through generating the TensorRT engine. When using the FTMS client, first set the base_experiment.
FILTER_PARAMS='{"network_arch": "visual_changenet"}'
BASE_EXPERIMENTS=$(tao-client visual_changenet list-base-experiments --filter_params "$FILTER_PARAMS")
Retrieve the PTM_ID from $BASE_EXPERIMENTS before setting base_experiment.
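For example, if $BASE_EXPERIMENTS is a JSON array whose entries carry an id field (an assumption about the response shape, which this page does not show), the first ID can be extracted with jq:

PTM_ID=$(echo "$BASE_EXPERIMENTS" | jq -r '.[0].id')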
PTM_INFORMATION="{\"base_experiment\": [$PTM_ID]}"
tao-client visual_changenet patch-artifact-metadata --id $EXPERIMENT_ID --job_type experiment --update_info $PTM_INFORMATION
Required Arguments
* --id: The unique identifier of the experiment
See also
For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.
Then retrieve the specifications.
GEN_TRT_ENGINE_SPECS=$(tao-client visual_changenet get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID)
Get the specifications from $GEN_TRT_ENGINE_SPECS. Set task to segment. You can override other values as needed.
task: segment
gen_trt_engine:
  results_dir: "${results_dir}/gen_trt_engine"
  onnx_file: "${results_dir}/export/changenet_model.onnx"
  trt_engine: "${results_dir}/gen_trt_engine/changenet.trt"
  input_channel: 3
  input_width: 128
  input_height: 512
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 1
    max_batch_size: 1
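When driving the flow through the FTMS client, the retrieved specifications are JSON, so individual values can be overridden before the job is submitted. A hedged jq sketch, assuming the JSON keys mirror the YAML structure above; the same pattern applies to the inference and evaluation specs later on this page:

GEN_TRT_ENGINE_SPECS=$(echo "$GEN_TRT_ENGINE_SPECS" | jq '.task = "segment" | .gen_trt_engine.tensorrt.data_type = "fp16"')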
The task section defines the change detection task for which the .onnx model was generated.
| Parameter | Data Type | Default  | Description |
|-----------|-----------|----------|-------------|
| task      | str       | classify | A flag to indicate the change detection task. Currently supports two tasks: ‘segment’ and ‘classify’ for segmentation and classification |
The gen_trt_engine section in the experiment specification file provides options for generating a TensorRT engine from an .onnx file.
| Parameter     | Datatype     | Default | Description                                              | Supported Values |
|---------------|--------------|---------|----------------------------------------------------------|------------------|
| results_dir   | string       | –       | The path to the results directory                        | –                |
| onnx_file     | string       | –       | The path to the exported ETLT or ONNX model              | –                |
| trt_engine    | string       | –       | The absolute path to the generated TensorRT engine       | –                |
| input_channel | unsigned int | 3       | The input channel size. Only a value of 3 is supported.  | 3                |
| input_width   | unsigned int | 256     | The input width                                          | >0               |
| input_height  | unsigned int | 256     | The input height                                         | >0               |
| batch_size    | unsigned int | -1      | The batch size of the ONNX model                         | >=-1             |
tensorrt#
The tensorrt parameter defines the TensorRT engine generation options.
| Parameter      | Datatype     | Default | Description                                                    | Supported Values |
|----------------|--------------|---------|----------------------------------------------------------------|------------------|
| data_type      | string       | fp32    | The precision to be used for the TensorRT engine               | fp32/fp16        |
| workspace_size | unsigned int | 1024    | The maximum workspace size for the TensorRT engine             | >1024            |
| min_batch_size | unsigned int | 1       | The minimum batch size used for the optimization profile shape | >0               |
| opt_batch_size | unsigned int | 1       | The optimal batch size used for the optimization profile shape | >0               |
| max_batch_size | unsigned int | 1       | The maximum batch size used for the optimization profile shape | >0               |
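These values can also be overridden on the command line instead of being edited into the spec file. A minimal sketch, assuming an ONNX model exported with a dynamic batch dimension (batch_size of -1) so that the profile range takes effect; the batch sizes shown are illustrative:

tao deploy visual_changenet gen_trt_engine -e /path/to/spec.yaml \
    gen_trt_engine.tensorrt.min_batch_size=1 \
    gen_trt_engine.tensorrt.opt_batch_size=4 \
    gen_trt_engine.tensorrt.max_batch_size=8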
Use the following command to run VisualChangeNet engine generation:
GEN_TRT_ENGINE_JOB_ID=$(tao-client visual_changenet experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$GEN_TRT_ENGINE_SPECS")
tao deploy visual_changenet gen_trt_engine -e /path/to/spec.yaml \
results_dir=/path/to/result_dir \
gen_trt_engine.onnx_file=/path/to/onnx/file \
gen_trt_engine.trt_engine=/path/to/engine/file \
gen_trt_engine.tensorrt.data_type=<data_type>
Required Arguments
* -e, --experiment_spec_file: The path to the experiment spec file.
* results_dir: The global results directory. The engine generation log is saved in the results_dir.
* gen_trt_engine.onnx_file: The .onnx model to be converted.
* gen_trt_engine.trt_engine: The path where the generated engine will be stored.
* gen_trt_engine.tensorrt.data_type: The precision to be exported.
Sample Usage
Here’s an example of using the gen_trt_engine
command to generate an fp32 TensorRT engine:
tao deploy visual_changenet gen_trt_engine -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=fp32
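Before wiring the engine into inference, you can optionally check that it deserializes and runs. A hedged sketch using the trtexec utility that ships with TensorRT (not part of the TAO commands on this page):

trtexec --loadEngine=$ENGINE_FILE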
Running Inference through TensorRT Engine#
You can reuse the spec file that was specified for TAO inference. The following is an example inference spec:
INFER_SPECS=$(tao-client visual_changenet get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)
Get the specifications from $INFER_SPECS. Set task to segment. You can override other values as needed.
task: segment
dataset:
  segment:
    dataset: "CNDataset"
    root_dir: /path/to/root/dataset/dir/
    data_name: "LEVIR-CD"
    label_transform: "norm"
    batch_size: 16
    workers: 2
    num_classes: 2
    img_size: 256
    image_folder_name: "A"
    change_image_folder_name: "B"
    list_folder_name: 'list'
    annotation_folder_name: "label"
    test_split: "test"
    predict_split: 'predict'
    label_suffix: .png
inference:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/inference"
Use the following command to run VisualChangeNet-Segmentation engine inference:
INFER_JOB_ID=$(tao-client visual_changenet experiment-run-action --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
tao deploy visual_changenet inference -e /path/to/spec.yaml \
results_dir=$RESULTS_DIR \
inference.trt_engine=/path/to/engine/file
Required Arguments
* -e, --experiment_spec_file: The path to the experiment spec file. This should be the same as the tao inference spec file.
Optional Arguments
* results_dir: The directory where the JSON status-log file and inference results will be dumped.
* inference.trt_engine: The engine file for inference.
Sample Usage
Here’s an example of using the inference
command to run inference with the TensorRT engine:
tao deploy visual_changenet inference -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    inference.trt_engine=$ENGINE_FILE
The visualization will be stored in $RESULTS_DIR/trt_inference.
Running Evaluation through a TensorRT Engine#
You can reuse the spec file that was specified for TAO evaluation through a TensorRT engine.
EVAL_SPECS=$(tao-client visual_changenet get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)
Set task to segment.
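As with the earlier specs, a hedged jq sketch for setting the task in the retrieved JSON before submitting the job:

EVAL_SPECS=$(echo "$EVAL_SPECS" | jq '.task = "segment"')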
EVAL_JOB_ID=$(tao-client visual_changenet experiment-run-action --action evaluate --job_type experiment --id $EXPERIMENT_ID --specs "$EVAL_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
The following is a sample spec file:
task: segment
dataset:
  segment:
    dataset: "CNDataset"
    root_dir: /path/to/root/dataset/dir/
    data_name: "LEVIR-CD"
    label_transform: "norm"
    batch_size: 16
    workers: 2
    num_classes: 2
    img_size: 256
    image_folder_name: "A"
    change_image_folder_name: "B"
    list_folder_name: 'list'
    annotation_folder_name: "label"
    test_split: "test"
    predict_split: 'predict'
    label_suffix: .png
evaluate:
  gpu_id: 0
  trt_engine: /path/to/engine/file
  results_dir: "${results_dir}/evaluate"
Use the following command to run VisualChangeNet-Segmentation engine evaluation:
tao deploy visual_changenet evaluate -e /path/to/spec.yaml \
results_dir=$RESULTS_DIR \
evaluate.trt_engine=/path/to/engine/file
Required Arguments
* -e, --experiment_spec: The experiment spec file for evaluation. This must be the same as the tao evaluate spec file.
Optional Arguments
* results_dir: The directory where the JSON status-log file and evaluation results will be dumped.
* evaluate.trt_engine: The engine file for evaluation.
Sample Usage
Here’s an example of using the evaluate
command to run evaluation with a TensorRT engine:
tao deploy visual_changenet evaluate -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    evaluate.trt_engine=$ENGINE_FILE
The visualization will be stored in $RESULTS_DIR/trt_evaluate.
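To quantify the accuracy cost of a lower-precision engine, you can generate one engine per precision and evaluate each against the same dataset. A minimal sketch reusing the $DEFAULT_SPEC, $ONNX_FILE, and $RESULTS_DIR variables from the samples above; the per-precision paths are illustrative:

for DT in fp32 fp16; do
    # Build an engine at this precision.
    tao deploy visual_changenet gen_trt_engine -e $DEFAULT_SPEC \
        results_dir=$RESULTS_DIR/$DT \
        gen_trt_engine.onnx_file=$ONNX_FILE \
        gen_trt_engine.trt_engine=$RESULTS_DIR/$DT/changenet_$DT.trt \
        gen_trt_engine.tensorrt.data_type=$DT
    # Evaluate the engine so the metrics for both precisions can be compared.
    tao deploy visual_changenet evaluate -e $DEFAULT_SPEC \
        results_dir=$RESULTS_DIR/$DT \
        evaluate.trt_engine=$RESULTS_DIR/$DT/changenet_$DT.trt
done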