LPRNet with TAO Deploy

The LPRNet .onnx file generated by tao model export is taken as input to tao deploy to generate an optimized TensorRT engine. For more information about training LPRNet, refer to the LPRNet training documentation.

Converting an .onnx File into a TensorRT Engine

The same spec file used with the tao model lprnet export command can be used here.

Use the following command to run LPRNet engine generation:


tao deploy lprnet gen_trt_engine [-h] [-v]
                                 -m MODEL_PATH
                                 -e EXPERIMENT_SPEC
                                 -r RESULTS_DIR
                                 [-k KEY]
                                 [--data_type {fp32,fp16}]
                                 [--engine_file ENGINE_FILE]
                                 [--cal_image_dir CAL_IMAGE_DIR]
                                 [--cal_data_file CAL_DATA_FILE]
                                 [--cal_cache_file CAL_CACHE_FILE]
                                 [--max_batch_size MAX_BATCH_SIZE]
                                 [--min_batch_size MIN_BATCH_SIZE]
                                 [--opt_batch_size OPT_BATCH_SIZE]
                                 [--max_workspace_size MAX_WORKSPACE_SIZE]
                                 [--gpu_index GPU_INDEX]
                                 [--log_file LOG_FILE]

Required Arguments

  • -m, --model_path: The .onnx or .etlt model to be converted

  • -e, --experiment_spec: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.

  • -r, --results_dir: The directory where the JSON status-log file will be dumped

Optional Arguments

  • -h, --help: Show this help message and exit.

  • -k, --key: A user-specific encoding key to load a .etlt model.

  • --data_type: The desired engine data type. The options are fp32 and fp16; the default value is fp32. int8 is not supported for LPRNet.

  • --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs: you can only deploy the engine on a GPU identical to the one it was generated on (see the load-check sketch after this list).

  • --gpu_index: The index of the (discrete) GPU to use for generating the engine. You can specify the GPU index if the machine has multiple GPUs installed. Note that gen_trt_engine can only run on a single GPU.

  • --log_file: The path to the log file. The default path is “stdout”.
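
Because the engine file is hardware specific, a quick sanity check before deployment is to confirm that it deserializes on the target GPU. The following is a minimal sketch using the TensorRT Python API; it assumes the tensorrt Python package is installed on the deployment machine, and the engine path is the hypothetical /export/fp16.engine used in the sample below.

import tensorrt as trt

# Sketch: verify that a serialized engine loads on the current GPU.
# The engine path is a hypothetical example; point it at your engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def engine_loads(engine_path: str) -> bool:
    """Return True if the engine deserializes on this machine's GPU."""
    with open(engine_path, "rb") as f:
        runtime = trt.Runtime(TRT_LOGGER)
        return runtime.deserialize_cuda_engine(f.read()) is not None

print(engine_loads("/export/fp16.engine"))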

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:


tao deploy lprnet gen_trt_engine -m /workspace/lprnet_epoch_100.onnx \
    -e /workspace/default_spec.txt \
    -r /export/ \
    --data_type fp16 \
    --max_batch_size 8 \
    --engine_file /export/fp16.engine

Running Evaluation through the TensorRT Engine

The spec file is the same as the TAO evaluation spec file. A sample spec file is shown below:


random_seed: 42
lpr_config {
  hidden_units: 512
  max_label_length: 8
  arch: "baseline"
  nlayers: 10
}
eval_config {
  batch_size: 1
}
augmentation_config {
  output_width: 96
  output_height: 48
  output_channel: 3
}
dataset_config {
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/datasets/lpr_default_dataset/train/label"
    image_directory_path: "/workspace/tao-experiments/datasets/lpr_default_dataset/train/image"
  }
  characters_list_file: "/workspace/tao-experiments/datasets/lpr_default_dataset/us_lp_characters.txt"
  validation_data_sources: {
    label_directory_path: "/workspace/tao-experiments/datasets/lpr_default_dataset/test/label"
    image_directory_path: "/workspace/tao-experiments/datasets/lpr_default_dataset/test/image"
  }
}
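
The characters_list_file referenced in dataset_config is a plain-text file listing every character that can appear on a plate, one character per line. As a sketch, assuming a US-style alphabet of digits and uppercase letters (the actual set must match the characters in your dataset's labels), the file could be generated like this:

import string

# Sketch: write one character per line, the layout characters_list_file expects.
# Assumption: the plate alphabet is 0-9 plus A-Z; adjust to your dataset.
chars = string.digits + string.ascii_uppercase
with open("us_lp_characters.txt", "w") as f:
    f.write("\n".join(chars) + "\n")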

Use the following command to run LPRNet engine evaluation:


tao deploy lprnet evaluate [-h] -e EXPERIMENT_SPEC
                           -m MODEL_PATH
                           -r RESULTS_DIR
                           [-b BATCH_SIZE]
                           [--gpu_index GPU_INDEX]
                           [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

  • -m, --model_path: The engine file to run evaluation.

  • -r, --results_dir: The directory where evaluation results will be stored.

Optional Arguments

  • -b, --batch_size: The batch size used for evaluation. Note that this value cannot be larger than the --max_batch_size used during engine generation (for example, an engine built with --max_batch_size 8 supports batch sizes up to 8). If not specified, eval_config.batch_size is used instead.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:


tao deploy lprnet evaluate -m /export/fp16.engine \
    -e /workspace/default_spec.txt \
    -r /workspace/tao-experiments/evaluate

Running Inference through the TensorRT Engine

Use the following command to run LPRNet engine inference:

tao deploy lprnet inference [-h] -e EXPERIMENT_SPEC
                            -m MODEL_PATH
                            -r RESULTS_DIR
                            [-b BATCH_SIZE]
                            [--gpu_index GPU_INDEX]
                            [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao evaluate specification file.

  • -m, --model_path: The engine file to run inference.

  • -r, --results_dir: The directory where inference results will be stored.

Optional Arguments

  • -b, --batch_size: The batch size used for inference. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.batch_size is used instead.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:


tao deploy lprnet inference -m /export/fp16.engine \
    -e /workspace/default_spec.txt \
    -r /workspace/tao-experiments/inference

The decoded predictions are printed to STDOUT.
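
For context, this decoding follows the usual CTC greedy convention: collapse consecutive repeated indices, then drop the blank symbol. The sketch below is an illustration of that convention, not the tool's internal code; it assumes the per-time-step output has already been reduced to character indices, with the blank mapped to the index just past the end of the characters list.

# Sketch of CTC-style greedy decoding: merge repeats, then drop blanks.
# Assumptions: `indices` holds one character index per time step, and the
# blank symbol is len(charset).
def greedy_decode(indices, charset):
    blank = len(charset)
    plate, prev = [], blank
    for idx in indices:
        if idx != prev and idx != blank:
            plate.append(charset[idx])
        prev = idx
    return "".join(plate)

charset = ["0", "1", "2", "3", "Q"]                    # toy characters list
print(greedy_decode([3, 3, 5, 4, 4, 5, 2], charset))   # prints "3Q2"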
