NVIDIA TAO Toolkit v5.2.0

For an x86 platform with discrete GPUs, the default TAO package includes the tao-converter built for TensorRT 8.2.5.1 with CUDA 11.4 and cuDNN 8.2. For any other version of CUDA and TensorRT, refer to the overview section to download the matching tao-converter. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

  1. Unzip the zip file on the target machine.
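
A hypothetical example, assuming the downloaded archive is named tao-converter-x86-tensorrt8.x.zip (the actual file name depends on the TensorRT and CUDA variant you downloaded):

    # Extract the tao-converter binary into a local directory
    unzip tao-converter-x86-tensorrt8.x.zip -d tao-converter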

  2. Install the OpenSSL package using the command:

    sudo apt-get install libssl-dev
    
  3. Export the following environment variables:

    export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
    export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"
  4. Run the tao-converter using the sample command below to generate the engine.
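
As an illustrative sketch only, a typical invocation for a DetectNet_v2 .etlt model might look like the following; the encryption key, file paths, input dimensions, and output node names are placeholders and must match the values used when exporting your model:

    tao-converter -k $KEY \
                  -d 3,544,960 \
                  -o output_cov/Sigmoid,output_bbox/BiasAdd \
                  -e /path/to/saved.engine \
                  -m 16 \
                  -t fp16 \
                  /path/to/model.etlt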

  5. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in the TensorRT OSS GitHub repository.

Note

Make sure to use the output node names as listed in the Exporting the Model section of the respective model.
