Classification (PyTorch) with TAO Deploy#

To generate an optimized TensorRT engine, the tao deploy classification_pyt gen_trt_engine command takes as input a classification (PyTorch) .onnx file, which is first produced with tao model classification_pyt export. For more information about training a classification (PyTorch) model, refer to the Classification PyTorch training documentation. As of TAO 5.0.0, INT8 precision is not supported for classification (PyTorch) models.
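At a high level, this is a two-step flow: export the trained model to .onnx, then convert that file to a TensorRT engine. The following sketch illustrates the sequence; the spec-file paths are placeholders, and the full set of options for each command is described in the respective documentation:

# Step 1 (illustrative spec path): export the trained checkpoint to ONNX
tao model classification_pyt export -e /path/to/export_spec.yaml

# Step 2 (illustrative spec path): generate a TensorRT engine from the exported .onnx file
tao deploy classification_pyt gen_trt_engine -e /path/to/gen_trt_engine_spec.yaml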

Converting .onnx File into TensorRT Engine#

Use the following command to get the experiment spec file for engine generation:

SPECS=$(tao-client classification_pyt get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)

See also

For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| onnx_file | string | | The path to the exported .onnx model | |
| trt_engine | string | | The path where the generated TensorRT engine is saved | |
| input_channel | unsigned int | 3 | The input channel size. Only the value 3 is supported. | 3 |
| input_width | unsigned int | 224 | The input width | >0 |
| input_height | unsigned int | 224 | The input height | >0 |
| batch_size | unsigned int | -1 | The batch size of the ONNX model. A value of -1 enables dynamic batch size. | >=-1 |
| verbose | bool | False | Enables verbosity for the TensorRT log | True/False |

tensorrt#

The tensorrt parameter defines the TensorRT engine generation settings.

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| data_type | string | fp32 | The precision to be used for the TensorRT engine (INT8 is not supported for classification PyTorch models) | fp32/fp16 |
| workspace_size | unsigned int | 1024 | The maximum workspace size for the TensorRT engine, in MB | >1024 |
| min_batch_size | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| opt_batch_size | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| max_batch_size | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
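For reference, the following is a minimal sample spec file for engine generation, assembled from the parameters in the tables above. The structure shown is a sketch: all paths are placeholders, and the batch-size values are illustrative only.

gen_trt_engine:
  onnx_file: /path/to/model.onnx      # placeholder path to the exported .onnx file
  trt_engine: /path/to/engine/file    # placeholder path for the generated engine
  input_channel: 3
  input_width: 224
  input_height: 224
  batch_size: -1                      # -1 enables dynamic batch size
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1                 # illustrative optimization profile values
    opt_batch_size: 8
    max_batch_size: 16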

Use the following command to run classification (PyTorch) engine generation:

GEN_TRT_ENGINE_JOB_ID=$(tao-client classification_pyt experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$SPECS")


Running Evaluation through a TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The classes field is only required if you are using custom class names. If this field is not provided, class mapping is based on the alphanumeric order of the image folder names. The following is a sample spec file:

evaluate:
  trt_engine: /path/to/engine/file  # engine produced by the gen_trt_engine action
  topk: 1  # report top-1 accuracy
dataset:
  data:
    samples_per_gpu: 16  # evaluation batch size per GPU
    test:
      data_prefix: /raid/ImageNet2012/ImageNet2012/val
      classes: /raid/ImageNet2012/classnames.txt  # optional; see the note above
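The classes file referenced above is a plain-text list of class names. The following sketch shows one way to create it, assuming one class name per line in the order that maps onto class indices; the file contents here are illustrative only:

# Illustrative only: write one class name per line
cat > /raid/ImageNet2012/classnames.txt <<'EOF'
goldfish
great_white_shark
tiger_shark
EOF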

Use the following command to run classification (PyTorch) engine evaluation:

EVAL_JOB_ID=$(tao-client classification_pyt experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")


Note

Currently, TAO Deploy TensorRT evaluation of classification models trained with LogisticRegressionHead shows an accuracy regression compared to TAO PyTorch evaluation. This will be addressed in the next release.

Running Inference through a TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:

inference:
  trt_engine: /path/to/engine/file
dataset:
  data:
    samples_per_gpu: 16
    test:
      data_prefix: /raid/ImageNet2012/ImageNet2012/val
      classes: /raid/ImageNet2012/classnames.txt

Use the following command to run classification (PyTorch) engine inference:

INFERENCE_JOB_ID=$(tao-client classification_pyt experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")
