Multitask Image Classification with TAO Deploy

The multitask classification .etlt file generated by tao export is taken as input to tao-deploy to generate an optimized TensorRT engine. For more information about training Multitask Image Classification models, refer to the Multitask Image Classification training documentation.

The same spec file used with the tao multitask_classification export command can be used here.

Use the following command to run Multitask Image Classification engine generation:

tao-deploy multitask_classification gen_trt_engine [-h] [-v]
    -m MODEL_PATH
    -k KEY
    [--data_type {fp32,fp16,int8}]
    [--engine_file ENGINE_FILE]
    [--cal_image_dir CAL_IMAGE_DIR]
    [--cal_data_file CAL_DATA_FILE]
    [--cal_cache_file CAL_CACHE_FILE]
    [--max_batch_size MAX_BATCH_SIZE]
    [--min_batch_size MIN_BATCH_SIZE]
    [--opt_batch_size OPT_BATCH_SIZE]
    [--batch_size BATCH_SIZE]
    [--batches BATCHES]
    [--max_workspace_size MAX_WORKSPACE_SIZE]
    [-s STRICT_TYPE_CONSTRAINTS]
    [--gpu_index GPU_INDEX]
    [--log_file LOG_FILE]

Required Arguments

  • -m, --model_path: The .etlt model to be converted.

  • -k, --key: A user-specific encoding key to load a .etlt model.

Optional Arguments

  • -h, --help: Show this help message and exit.

  • --data_type: The desired engine data type. The options are fp32, fp16, and int8. The default value is fp32. A calibration cache is generated in INT8 mode. If you use INT8, the INT8 arguments listed below are required; a minimal FP16 sketch follows this argument list.

  • --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs: you cannot use this engine for deployment unless the deployment GPU is identical to the GPU on which the engine was generated.

  • -s, --strict_type_constraints: A Boolean flag indicating whether to apply the TensorRT strict type constraints when building the TensorRT engine.

  • --gpu_index: The index of the (discrete) GPU to use for exporting the model. You can specify this index if the machine has multiple GPUs installed. Note that gen_trt_engine can only run on a single GPU.

  • --log_file: The path to the log file. The default path is “stdout”.
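
For example, the following is a minimal sketch of FP16 engine generation using only the arguments documented above; the model path, key variable, and engine path are illustrative placeholders (they mirror the INT8 sample later on this page):

tao-deploy multitask_classification gen_trt_engine -m /workspace/mcls.etlt \
    -k $KEY \
    --data_type fp16 \
    --engine_file /export/fp16.engine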

INT8 Engine Generation Required Arguments

  • --cal_data_file: The tensorfile generated for calibrating the engine. This can also be an output file if used with --cal_image_dir.

  • --cal_image_dir: The directory of images to use for calibration.

Note

The gen_trt_engine tool pulls images from the directory specified in the --cal_image_dir parameter and applies the necessary preprocessing to generate a tensorfile at the path specified in the --cal_data_file parameter, which is in turn used for calibration. The number of batches in the generated tensorfile is obtained from the value set for the --batches parameter, and the batch size is obtained from the value set for the --batch_size parameter. Be sure that the directory specified in --cal_image_dir contains at least batch_size * batches images. The valid image extensions are .jpg, .jpeg, and .png. In this case, the input_dimensions of the calibration tensors are derived from the input layer of the .etlt model.
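
As a quick sanity check before calibration, you can count the images with valid extensions in the calibration directory; this optional sketch uses the illustrative directory path from the INT8 sample below:

# Count calibration images with valid extensions; the total should be at
# least batch_size * batches (8 * 10 = 80 for the sample usage below).
find /workspace/tao-experiments/data/split/test -type f \
    \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) | wc -l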

INT8 Engine Generation Optional Arguments

  • --cal_cache_file: The path to save the calibration cache file to. The default value is ./cal.bin.

  • --batches: The number of batches to use for calibration. The default value is 10.

  • --batch_size: The batch size to use for calibration. The default value is 1.

  • --max_batch_size: The maximum batch size of the TensorRT engine. The default value is 1.

  • --min_batch_size: The minimum batch size of the TensorRT engine. The default value is 1.

  • --opt_batch_size: The optimal batch size of the TensorRT engine. The default value is 1. If you need an engine that accepts a range of batch sizes, see the sketch after this list.

  • --max_workspace_size: The maximum workspace size of the TensorRT engine in GB. The default value is 2 GB.
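
For instance, an engine built to serve batch sizes from 1 to 16, optimized for a batch size of 8, could combine the batch-size arguments above as follows; all values and paths here are illustrative, not prescriptive:

tao-deploy multitask_classification gen_trt_engine -m /workspace/mcls.etlt \
    -k $KEY \
    --data_type fp16 \
    --min_batch_size 1 \
    --opt_batch_size 8 \
    --max_batch_size 16 \
    --engine_file /export/fp16_dynamic.engine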

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an INT8 TensorRT engine:

tao-deploy multitask_classification gen_trt_engine -m /workspace/mcls.etlt \
    -k $KEY \
    --cal_image_dir /workspace/tao-experiments/data/split/test \
    --data_type int8 \
    --batch_size 8 \
    --batches 10 \
    --cal_cache_file /export/cal.bin \
    --cal_data_file /export/cal.tensorfile \
    --engine_file /export/int8.engine

Running Evaluation through TensorRT Engine

The label file is derived from dataset_config.val_csv_path. You can use the same spec file as the TAO evaluation spec file. Here is a sample spec file:

model_config {
  arch: "resnet",
  n_layers: 18
  use_batch_norm: true
  all_projections: true
  freeze_blocks: 0
  freeze_blocks: 1
  input_image_size: "3,80,60"
}
train_config {
  preprocess_mode: "caffe"
}
eval_config {
  eval_dataset_path: "/workspace/tao-experiments/data/split/test"
  model_path: "/export/int8.engine"
  top_k: 3
  batch_size: 256
  n_workers: 8
  enable_center_crop: True
}

Use the following command to run Multitask Image Classification engine evaluation:

tao-deploy multitask_classification evaluate [-h]
    -e EXPERIMENT_SPEC
    -m MODEL_PATH
    [-i IMAGE_DIR]
    [-l LABEL_DIR]
    [-b BATCH_SIZE]
    [-r RESULTS_DIR]
    [--gpu_index GPU_INDEX]
    [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

  • -m, --model_path: The engine file on which to run evaluation.

Optional Arguments

  • -i, --image_dir: The directory where the test images are located. If not specified, dataset_config.image_directory_path from the spec file will be used.

  • -r, --results_dir: The directory where the evaluation results will be stored.

  • -b, --batch_size: The batch size used for evaluation. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.batch_size will be used instead.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:

tao-deploy multitask_classification evaluate -m /export/int8.engine \
    -e /workspace/default_spec.txt \
    -i /workspace/tao-experiments/data/split/test \
    -r /workspace/tao-experiments/evaluate

Running Inference through TensorRT Engine

Use the following command to run Multitask Image Classification engine inference:

tao-deploy multitask_classification inference [-h]
    -e EXPERIMENT_SPEC
    -m MODEL_PATH
    [-i IMAGE_DIR]
    [-b BATCH_SIZE]
    [-r RESULTS_DIR]
    [--gpu_index GPU_INDEX]
    [--log_file LOG_FILE]

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao evaluate specification file.

  • -m, --model_path: The engine file on which to run inference.

Optional Arguments

  • -i, --image_dir: The directory where the test images are located. If not specified, dataset_config.image_directory_path from the spec file will be used.

  • -r, --results_dir: The directory where the inference results will be stored.

  • -b, --batch_size: The batch size used for inference. Note that this value cannot be larger than the --max_batch_size used during engine generation. If not specified, eval_config.batch_size will be used instead.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

tao-deploy multitask_classification inference -m /export/int8.engine \
    -e /workspace/default_spec.txt \
    -i /workspace/tao-experiments/data/split/test \
    -r /workspace/tao-experiments/inference

The CSV predictions will be stored under $RESULTS_DIR/result.csv.
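
To spot-check the output, you can print the first few rows of the predictions file; this assumes RESULTS_DIR is set to the -r path used in the command above:

# Print the first few prediction rows from the generated CSV.
head -n 5 $RESULTS_DIR/result.csv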
