
DetectNet_v2 with TAO Deploy

The DetectNet_v2 .onnx or .etlt file generated from tao model export is taken as input by tao deploy to generate an optimized TensorRT engine. For more information about training DetectNet_v2, refer to the DetectNet_v2 training documentation.

DetectNet_v2 uses data from the training set for INT8 calibration. The data batches are sampled randomly across the entire training dataset, which improves the accuracy of the INT8 model. Data pre-processing in the INT8 calibration step is the same as in the training process. Calibration occurs as a one-step process, with the data batches generated on the fly. The same spec file used with the tao model detectnet_v2 export command can be reused here.

Use the following command to run DetectNet_v2 engine generation:


tao deploy detectnet_v2 gen_trt_engine [-h] [-v]
                                       -m MODEL_PATH
                                       -e EXPERIMENT_SPEC
                                       -r RESULTS_DIR
                                       [-k KEY]
                                       [--data_type {fp32,fp16,int8}]
                                       [--engine_file ENGINE_FILE]
                                       [--cal_cache_file CAL_CACHE_FILE]
                                       [--cal_json_file CAL_JSON_FILE]
                                       [--max_batch_size MAX_BATCH_SIZE]
                                       [--min_batch_size MIN_BATCH_SIZE]
                                       [--opt_batch_size OPT_BATCH_SIZE]
                                       [--batch_size BATCH_SIZE]
                                       [--batches BATCHES]
                                       [--max_workspace_size MAX_WORKSPACE_SIZE]
                                       [-s STRICT_TYPE_CONSTRAINTS]
                                       [--force_ptq FORCE_PTQ]
                                       [--gpu_index GPU_INDEX]
                                       [--log_file LOG_FILE]

Required Arguments

  • -m, --model_path: The .onnx or .etlt model to be converted

  • -e, --experiment_spec: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.

  • -r, --results_dir: The directory where the JSON status-log file will be dumped

Optional Arguments

  • -h, --help: Show this help message and exit.

  • -k, --key: A user-specific encoding key to load a .etlt model

  • --data_type: The desired engine data type. The options are fp32, fp16, and int8. The default value is fp32. A calibration cache will be generated in INT8 mode. If int8 is selected, the INT8 arguments listed below are also required.

  • --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs: the engine can only be deployed on a GPU identical to the one it was generated on (see the engine-loading sketch after this list).

  • -s, --strict_type_constraints: A Boolean flag indicating whether to apply the TensorRT strict type constraints when building the TensorRT engine.

  • --gpu_index: The index of the (discrete) GPU to use for exporting the model. You can specify this index to select which GPU runs the export on a machine with multiple GPUs installed. Note that gen_trt_engine can only run on a single GPU.

  • --log_file: The path to the log file. The default path is “stdout”.
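Because the generated engine is tied to the GPU and TensorRT version it was built with, it can be useful to verify that an engine deserializes on the target machine before wiring it into a pipeline. The following is a minimal sketch using the TensorRT Python API; the engine path is a hypothetical placeholder matching the sample usage later in this section.

import tensorrt as trt

# Hypothetical path: an engine produced by gen_trt_engine.
engine_path = "/export/int8.engine"

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open(engine_path, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Deserialization fails if the engine was built on a different GPU
# architecture or with a different TensorRT version than this machine.
if engine is None:
    raise RuntimeError("Engine is not compatible with this GPU/TensorRT runtime")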

INT8 Engine Generation Optional Arguments

  • --cal_cache_file: The path to save the calibration cache file to. The default value is ./cal.bin.

  • --cal_json_file: The path to the JSON file containing the tensor scales for QAT models. This argument is required when generating an engine for a QAT model.

  • --batches: The number of batches to use for calibration. The default value is 10.

  • --batch_size: The batch size to use for calibration. The default value is 1.

  • --max_batch_size: The maximum batch size of the TensorRT engine. The default value is 1.

  • --min_batch_size: The minimum batch size of the TensorRT engine. The default value is 1.

  • --opt_batch_size: The optimal batch size of the TensorRT engine. The default value is 1.

  • --max_workspace_size: The maximum workspace size of the TensorRT engine, in GB. The default value is 2 GB.

  • --force_ptq: A Boolean flag to force post-training quantization on the exported model.
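For illustration, the following is a hedged sketch of an INT8 build that also pins the minimum, optimal, and maximum batch sizes of the engine; all file paths are hypothetical placeholders:

tao deploy detectnet_v2 gen_trt_engine -m /workspace/model.onnx \
                                       -e /workspace/spec.txt \
                                       -r /export/ \
                                       --data_type int8 \
                                       --batches 10 \
                                       --batch_size 8 \
                                       --min_batch_size 1 \
                                       --opt_batch_size 4 \
                                       --max_batch_size 8 \
                                       --cal_cache_file /export/cal.bin \
                                       --engine_file /export/int8.engine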

Note

When generating a TensorRT engine for a model trained with QAT enabled, the tensor scale factors defined by the cal_json_file argument are required. However, note that the current version of QAT doesn’t natively support DLA INT8 deployment on Jetson. To deploy this model on a Jetson with DLA INT8, use the --force_ptq flag so that TensorRT post-training quantization is used to generate the calibration cache file.
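For example, a sketch of engine generation for a QAT-trained model might look like the following; the model, spec, and output paths are hypothetical placeholders:

tao deploy detectnet_v2 gen_trt_engine -m /workspace/model_qat.onnx \
                                       -e /workspace/spec.txt \
                                       -r /export/ \
                                       --data_type int8 \
                                       --cal_json_file /export/cal.json \
                                       --engine_file /export/int8_qat.engine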

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an INT8 TensorRT engine:


tao deploy detectnet_v2 gen_trt_engine -m /workspace/dnv2_resnet18_epoch_100_int8.onnx \
                                       -e /workspace/dnv2_retrain_resnet18_kitti.txt \
                                       -r /export/ \
                                       --data_type int8 \
                                       --batch_size 8 \
                                       --batches 10 \
                                       --cal_cache_file /export/cal.bin \
                                       --engine_file /export/int8.engine

Running Evaluation through a TensorRT Engine

The spec file is the same as the TAO evaluation spec file. A sample spec file is shown below:


dataset_config {
  validation_data_sources: {
    image_directory_path: "/workspace/tao-experiments/data/val/images"
    label_directory_path: "/workspace/tao-experiments/data/val/labels"
  }
  image_extension: "png"
  target_class_mapping {
    key: "car"
    value: "car"
  }
  target_class_mapping {
    key: "pedestrian"
    value: "pedestrian"
  }
  target_class_mapping {
    key: "cyclist"
    value: "cyclist"
  }
  target_class_mapping {
    key: "van"
    value: "car"
  }
  target_class_mapping {
    key: "person_sitting"
    value: "pedestrian"
  }
  validation_fold: 0
}
postprocessing_config {
  target_class_config {
    key: "car"
    value {
      clustering_config {
        clustering_algorithm: DBSCAN
        dbscan_confidence_threshold: 0.9
        coverage_threshold: 0.00499999988824
        dbscan_eps: 0.20000000298
        dbscan_min_samples: 0.0500000007451
        minimum_bounding_box_height: 20
      }
    }
  }
  target_class_config {
    key: "cyclist"
    value {
      clustering_config {
        clustering_algorithm: DBSCAN
        dbscan_confidence_threshold: 0.9
        coverage_threshold: 0.00499999988824
        dbscan_eps: 0.15000000596
        dbscan_min_samples: 0.0500000007451
        minimum_bounding_box_height: 20
      }
    }
  }
  target_class_config {
    key: "pedestrian"
    value {
      clustering_config {
        clustering_algorithm: DBSCAN
        dbscan_confidence_threshold: 0.9
        coverage_threshold: 0.00749999983236
        dbscan_eps: 0.230000004172
        dbscan_min_samples: 0.0500000007451
        minimum_bounding_box_height: 20
      }
    }
  }
}
evaluation_config {
  minimum_detection_ground_truth_overlap {
    key: "car"
    value: 0.699999988079
  }
  minimum_detection_ground_truth_overlap {
    key: "cyclist"
    value: 0.5
  }
  minimum_detection_ground_truth_overlap {
    key: "pedestrian"
    value: 0.5
  }
  evaluation_box_config {
    key: "car"
    value {
      minimum_height: 20
      maximum_height: 9999
      minimum_width: 10
      maximum_width: 9999
    }
  }
  evaluation_box_config {
    key: "cyclist"
    value {
      minimum_height: 20
      maximum_height: 9999
      minimum_width: 10
      maximum_width: 9999
    }
  }
  evaluation_box_config {
    key: "pedestrian"
    value {
      minimum_height: 20
      maximum_height: 9999
      minimum_width: 10
      maximum_width: 9999
    }
  }
}

Use the following command to run DetectNet_v2 engine evaluation:


tao deploy detectnet_v2 evaluate [-h] -e EXPERIMENT_SPEC
                                 -m MODEL_PATH
                                 -r RESULTS_DIR
                                 [-i IMAGE_DIR]
                                 [-l LABEL_DIR]
                                 [-b BATCH_SIZE]
                                 [--gpu_index GPU_INDEX]
                                 [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

  • -m, --model_path: The engine file to run evaluation.

  • -r, --results_dir: The directory where evaluation results will be stored

Optional Arguments

  • -i, --image_dir: The directory where test images are located. If not specified, validation_data_sources.image_directory_path from the spec file will be used.

  • -l, --label_dir: The directory where test annotations are located. If not specified, validation_data_sources.label_directory_path from the spec file will be used.

  • -b, --batch_size: The batch size used for evaluation. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.batch_size is used instead.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:


tao deploy detectnet_v2 evaluate -m /export/int8.engine \
                                 -e /workspace/dnv2_retrain_resnet18_kitti.txt \
                                 -i /workspace/tao-experiments/data/val/images \
                                 -l /workspace/tao-experiments/data/val/labels \
                                 -r /workspace/tao-experiments/evaluate

Running Inference through a TensorRT Engine

Use the following command to run DetectNet_v2 engine inference:

tao deploy detectnet_v2 inference [-h] -e EXPERIMENT_SPEC
                                  -m MODEL_PATH
                                  -r RESULTS_DIR
                                  [-i IMAGE_DIR]
                                  [-b BATCH_SIZE]
                                  [--gpu_index GPU_INDEX]
                                  [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference specification file.

  • -m, --model_path: The engine file to run inference.

  • -r, --results_dir: The directory where inference results will be stored

Optional Arguments

  • -i, --image_dir: The directory where test images are located. If not specified, validation_data_sources.image_directory_path from the spec file will be used.

  • -b, --batch_size: The batch size used for inference. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.batch_size is used instead.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:


tao deploy detectnet_v2 inference -m /export/int8.engine \
                                  -e /workspace/dnv2_retrain_resnet18_kitti.txt \
                                  -i /workspace/tao-experiments/data/val/images \
                                  -r /workspace/tao-experiments/inference

The visualization will be stored under $RESULTS_DIR/images_annotated, and KITTI-format predictions will be stored under $RESULTS_DIR/labels.
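The KITTI-format label files can be post-processed with a few lines of Python. The sketch below assumes the hypothetical results directory from the inference example above, and assumes the detection confidence occupies the final column of each KITTI line, as in the KITTI detection-results convention:

from pathlib import Path

# Hypothetical results directory from the inference example above.
label_dir = Path("/workspace/tao-experiments/inference/labels")

for label_file in sorted(label_dir.glob("*.txt")):
    for line in label_file.read_text().splitlines():
        fields = line.split()
        cls_name = fields[0]
        # KITTI columns 4-7 hold the 2D box: left, top, right, bottom (pixels).
        left, top, right, bottom = map(float, fields[4:8])
        # Assumption: the confidence score is the last column of each line.
        score = float(fields[-1])
        print(f"{label_file.stem}: {cls_name} "
              f"({left:.0f}, {top:.0f}, {right:.0f}, {bottom:.0f}) score={score:.2f}")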
