NVIDIA TAO v5.5.0

TAO Converter with DSSD

The tao-converter tool is provided with TAO to facilitate the deployment of TAO-trained models on TensorRT and/or DeepStream. This section elaborates on how to generate a TensorRT engine using tao-converter.

For deployment platforms with an x86-based CPU and discrete GPUs, the tao-converter is distributed within the TAO docker. Therefore, we suggest using the docker to generate the engine. However, this requires the user to adhere to the same minor version of TensorRT as is distributed with the docker. The TAO docker includes TensorRT version 8.0.

For an x86 platform with discrete GPUs, the default TAO package includes the tao-converter built for TensorRT 8.2.5.1 with CUDA 11.4 and cuDNN 8.2. For any other version of CUDA and TensorRT, refer to the overview section for download links. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

  1. Unzip the zip file on the target machine.
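    For example, a minimal sketch of this step, assuming the downloaded archive is named tao-converter.zip (the actual filename depends on the version you downloaded):

    unzip tao-converter.zip -d tao-converter
    chmod +x tao-converter/tao-converter  # the binary name inside the archive is an assumption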

  2. Install the OpenSSL package using the command:


    sudo apt-get install libssl-dev

  3. Export the following environment variables:


$ export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"
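As an optional sanity check (a suggestion, not an official step), you can confirm that the TensorRT libraries and headers are present at those paths before running the converter:

$ ls $TRT_LIB_PATH/libnvinfer.so*
$ ls $TRT_INC_PATH/NvInfer.h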

  4. Run the tao-converter using the sample command below and generate the engine.

  5. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.

Note

Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.

For the Jetson platform, the tao-converter is available for download in the NVIDIA Developer Zone. You may choose the version you wish to download, as listed in the overview section. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

  1. Unzip the zip file on the target machine.

  2. Install the OpenSSL package using the command:


    sudo apt-get install libssl-dev

  3. Export the following environment variables:


$ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

  4. For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older version of JetPack, upgrade to JetPack 5.0DP. A quick way to confirm the installed TensorRT version is shown after this list.

  5. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.

  6. Run the tao-converter using the sample command below and generate the engine.
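As an optional check (a suggestion, not an official step), you can confirm which TensorRT version JetPack installed; the package naming may vary across JetPack releases:

$ dpkg -l | grep -i tensorrt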

Note

Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.


tao-converter [-h] -k <encryption_key>
              -d <input_dimensions>
              -o <comma separated output nodes>
              [-c <path to calibration cache file>]
              [-e <path to output engine>]
              [-b <calibration batch size>]
              [-m <maximum batch size of the TRT engine>]
              [-t <engine datatype>]
              [-w <maximum workspace size of the TRT Engine>]
              [-i <input dimension ordering>]
              [-p <optimization_profiles>]
              [-s]
              [-u <DLA_core>]
              input_file

Required Arguments

  • input_file: Path to the .etlt model exported using tao model dssd export.

  • -k: The key used to encode the .tlt model during training.

  • -d: Comma-separated list of input dimensions that should match the dimensions used for tao model dssd export.

  • -o: Comma-separated list of output blob names that should match the output configuration used for tao model dssd export. For DSSD, set this argument to NMS.

Optional Arguments

  • -e: The path to save the engine to (default: ./saved.engine).

  • -t: The desired engine data type. The options are {fp32, fp16, int8}, and the default value is fp32. A calibration cache is generated when int8 mode is selected.

  • -w: The maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).

  • -i: The input dimension ordering. The default value is nchw, and the options are {nchw, nhwc, nc}; all other TAO commands use nchw. For DSSD, this argument can be omitted since it defaults to nchw.

  • -p: Optimization profiles for .etlt models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. It can be specified multiple times if there are multiple input tensors for the model. This argument is only useful for new models introduced in TAO 3.21.08 and is not required for models that already existed in TAO 2.0. An example is sketched after this list.

  • -s: A Boolean flag that applies TensorRT strict type constraints when building the TensorRT engine.

  • -u: The DLA core index to use when building the TensorRT engine on Jetson devices.
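To illustrate the -p format, here is a hypothetical profile for a model with a single input tensor named Input, reusing the 3x384x1248 DSSD input dimensions from the sample below; the tensor name and batch sizes are assumptions for this sketch, not values prescribed by the DSSD model:

-p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248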

INT8 Mode Arguments

  • -c: The path to the calibration cache file (only used in INT8 mode). The default value is ./cal.bin.

  • -b: The batch size used during the export step for INT8 calibration cache generation (default: 8).

  • -m: The maximum batch size for the TensorRT engine (default: 16). If you encounter an out-of-memory issue, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape, which is only possible for new models introduced in TAO 3.21.08. A sample INT8 invocation is sketched after this list.
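Below is a hedged sketch of an INT8 conversion that exercises these arguments; the key, file paths, and calibration cache location are placeholders rather than outputs of an actual run:

tao-converter -k $KEY \
              -d 3,384,1248 \
              -o NMS \
              -c /export/cal.bin \
              -e /export/trt.int8.engine \
              -b 8 \
              -m 16 \
              -t int8 \
              /ws/dssd_resnet18_epoch_100.etlt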

Sample Output Log

Here is a sample command and output log for converting a DSSD model to a TensorRT engine.


tao-converter -k $KEY \
              -d 3,384,1248 \
              -o NMS \
              -e /export/trt.fp16.engine \
              -t fp16 \
              -i nchw \
              -m 1 \
              /ws/dssd_resnet18_epoch_100.etlt
..
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.
