OCRNet with TAO Deploy

An OCRNet `.etlt` or `.onnx` file generated from `tao export` is taken as input to `tao-deploy` to generate an optimized TensorRT engine. For more information about training OCRNet, refer to the OCRNet training documentation.
gen_trt_engine

The `gen_trt_engine` parameter in the experiment specification file provides options to generate the TensorRT engine from the `.etlt` or `.onnx` file:

gen_trt_engine:
  onnx_file: "??"
  results_dir: "${results_dir}/gen_trt_engine"
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `onnx_file` | String | – | The absolute path to the exported `.etlt` or `.onnx` model | – |
| `trt_engine` | String | – | The absolute path to the generated TensorRT engine | – |
| `gpu_id` | Unsigned int | 0 | The GPU device index | Valid GPU index |
| `input_channel` | Unsigned int | 1 | The input channel of the TensorRT engine | >0 |
| `input_width` | Unsigned int | 100 | The input width of the TensorRT engine | >0 |
| `input_height` | Unsigned int | 32 | The input height of the TensorRT engine | >0 |
| `opset_version` | Unsigned int | 12 | The ONNX opset version | Valid ONNX opset version |
| `batch_size` | Unsigned int | -1 | The batch size of the TensorRT engine. Set it to -1 for a dynamic batch size | -1 or >0 |
| `verbose` | Bool | False | A flag to enable verbose output during TensorRT engine generation | True/False |
| `tensorrt` | Dict config | – | Other options for TensorRT engine generation | – |
| `results_dir` | String | – | The absolute path to the directory where the engine-generation log is saved | – |
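As an illustration, a more complete `gen_trt_engine` section might look like the following sketch; the file paths are placeholders, and the remaining values simply restate the defaults from the table above:

gen_trt_engine:
  onnx_file: "/workspace/results/export/ocrnet.onnx"
  trt_engine: "/workspace/results/gen_trt_engine/ocrnet.engine"
  input_channel: 1
  input_width: 100
  input_height: 32
  opset_version: 12
  batch_size: -1
  results_dir: "${results_dir}/gen_trt_engine"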
tensorrt

The `tensorrt` parameter provides additional options for TensorRT engine generation:

tensorrt:
  data_type: fp16
  workspace_size: 1024
  min_batch_size: 1
  opt_batch_size: 1
  max_batch_size: 1
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `data_type` | String | fp16 | The precision of the generated TensorRT engine | fp16, fp32 |
| `workspace_size` | Unsigned int | 1024 | The workspace size (in MB) of the generated TensorRT engine | >0 |
| `min_batch_size` | Unsigned int | 1 | The minimum batch size of the generated TensorRT engine | >0 |
| `opt_batch_size` | Unsigned int | 1 | The optimal batch size of the generated TensorRT engine | >0 |
| `max_batch_size` | Unsigned int | 1 | The maximum batch size of the generated TensorRT engine | >0 |
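When `batch_size` in `gen_trt_engine` is set to -1, the minimum, optimal, and maximum values above define the dynamic-shape profile that TensorRT optimizes for. A minimal sketch, with illustrative batch values:

gen_trt_engine:
  batch_size: -1
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 8
    max_batch_size: 16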
Use the following command to generate the TensorRT engine:
tao deploy ocrnet gen_trt_engine -e <experiment_spec_file>
results_dir=<global_results_dir>
[gen_trt_engine.<gen_trt_engine_option>=<gen_trt_engine_option_value>]
Required Arguments
`-e, --experiment_spec_file`: The path to the experiment spec file.

`results_dir`: The global results directory. The engine generation log is saved in `results_dir`.
Optional Arguments
You can set optional arguments to override the option values in the experiment spec file:
`gen_trt_engine.<gen_trt_engine_option>`: The TensorRT engine generation options.
Here’s an example of using the OCRNet `gen_trt_engine` command:
tao deploy ocrnet gen_trt_engine -e $DEFAULT_SPEC \
results_dir=$RESULTS_DIR \
gen_trt_engine.onnx_file=$ONNX_TAO_MODEL \
gen_trt_engine.trt_engine=$PATH_TO_SAVED_ENGINE
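The nested `tensorrt` options can be overridden the same way, assuming the dotted override syntax extends to them. For example, to build an FP32 engine instead of the default FP16 (all variables are placeholders):

tao deploy ocrnet gen_trt_engine -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    gen_trt_engine.onnx_file=$ONNX_TAO_MODEL \
    gen_trt_engine.trt_engine=$PATH_TO_SAVED_ENGINE \
    gen_trt_engine.tensorrt.data_type=fp32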
evaluate

The `evaluate` parameter in the experiment specification file provides options to configure evaluation with the TensorRT engine:

evaluate:
  trt_engine: "??"
  test_dataset_dir: "/path/to/test_images_directory"
  test_dataset_gt_file: "/path/to/gt_file_list"
  input_width: 100
  input_height: 32
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `trt_engine` | String | – | The absolute path to the TensorRT engine | – |
| `gpu_id` | Unsigned int | 0 | The GPU device index | Valid GPU index |
| `test_dataset_dir` | String | – | The absolute path to the test images directory | – |
| `test_dataset_gt_file` | String | – | The absolute path to the ground truth file for the test images | – |
| `input_width` | Unsigned int | 100 | The input width of the TensorRT engine | >0 |
| `input_height` | Unsigned int | 32 | The input height of the TensorRT engine | >0 |
| `batch_size` | Unsigned int | 1 | The batch size used for inference | >0 |
| `results_dir` | String | – | The absolute path to the directory where the evaluation log is saved | – |
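The exact layout of the ground truth file is not shown here; a common convention, which this sketch assumes, is one sample per line with the image filename followed by its text label (verify against the format used when preparing your dataset):

image_0001.jpg hello
image_0002.jpg world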
Use the following command to run evaluation with the TensorRT engine:
tao deploy ocrnet evaluate -e <experiment_spec_file>
results_dir=<global_results_dir>
[evaluate.<evaluate_option>=<evaluate_value>]
Required Arguments
`-e, --experiment_spec_file`: The path to the experiment spec file.

`results_dir`: The global results directory. The evaluation log is saved in `results_dir`.
Optional Arguments
You can set optional arguments to override the option values in the experiment spec file:

`evaluate.<evaluate_option>`: The evaluate options.
Here’s an example of using the OCRNet `evaluate` command:
tao deploy ocrnet evaluate -e $DEFAULT_SPEC \
results_dir=$RESULTS_DIR \
evaluate.test_dataset_dir=$EVALUATE_IMG_DIR \
evaluate.test_dataset_gt_file=$EVALUATE_GT_FILE \
evaluate.trt_engine=$PATH_TO_SAVED_ENGINE
inference

The `inference` parameter in the experiment specification file provides options to configure inference with the TensorRT engine:

inference:
  trt_engine: "??"
  inference_dataset_dir: "/path/to/test_images_directory"
  input_width: 100
  input_height: 32
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| `trt_engine` | String | – | The absolute path to the TensorRT engine | – |
| `gpu_id` | Unsigned int | 0 | The GPU device index | Valid GPU index |
| `inference_dataset_dir` | String | – | The absolute path to the inference images directory | – |
| `input_width` | Unsigned int | 100 | The input width of the TensorRT engine | >0 |
| `input_height` | Unsigned int | 32 | The input height of the TensorRT engine | >0 |
| `batch_size` | Unsigned int | 1 | The batch size used for inference | >0 |
| `results_dir` | String | – | The absolute path to the directory where the inference log is saved | – |
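Putting the options together, a filled-in `inference` section might look like the following sketch; the paths are placeholders:

inference:
  trt_engine: "/workspace/results/gen_trt_engine/ocrnet.engine"
  inference_dataset_dir: "/workspace/data/inference_images"
  input_width: 100
  input_height: 32
  batch_size: 1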
Use the following command to run inference with the TensorRT engine:
tao deploy ocrnet inference -e <experiment_spec_file>
results_dir=<global_results_dir>
[inference.<inference_option>=<inference_value>]
Required Arguments
`-e, --experiment_spec_file`: The path to the experiment spec file.

`results_dir`: The global results directory. The inference log is saved in `results_dir`.
Optional Arguments
You can set optional arguments to override the option values in the experiment spec file:

`inference.<inference_option>`: The inference options.
Here’s an example of using the OCRNet `inference` command:
tao deploy ocrnet inference -e $DEFAULT_SPEC \
results_dir=$RESULTS_DIR \
inference.inference_dataset_dir=$INFERENCE_IMAGES_DIR \
inference.trt_engine=$PATH_TO_SAVED_ENGINE
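The three subtasks chain naturally: generate the engine once, then reuse it for evaluation and inference. A sketch of the full flow, with all variables as placeholders:

tao deploy ocrnet gen_trt_engine -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    gen_trt_engine.onnx_file=$ONNX_TAO_MODEL \
    gen_trt_engine.trt_engine=$RESULTS_DIR/ocrnet.engine

tao deploy ocrnet evaluate -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    evaluate.trt_engine=$RESULTS_DIR/ocrnet.engine \
    evaluate.test_dataset_dir=$EVALUATE_IMG_DIR \
    evaluate.test_dataset_gt_file=$EVALUATE_GT_FILE

tao deploy ocrnet inference -e $DEFAULT_SPEC \
    results_dir=$RESULTS_DIR \
    inference.trt_engine=$RESULTS_DIR/ocrnet.engine \
    inference.inference_dataset_dir=$INFERENCE_IMAGES_DIR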