NVIDIA TAO v5.5.0

TensorRT OSS on Jetson (ARM64)

  1. Install CMake (>= 3.13)

    Note

    TensorRT OSS requires CMake >= 3.13, but the default CMake on Jetson/Ubuntu 18.04 is 3.10.2.

    Upgrade CMake using:

    sudo apt remove --purge --auto-remove cmake   # remove the stock CMake 3.10.2
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake   # make the new CMake the default
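
    To confirm the upgrade took effect, you can check which CMake is now picked up (a quick sanity check, not part of the original steps):

    cmake --version   # should now report version 3.13.5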
    
  2. Get the GPU architecture for your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table (if your device is not listed, see the query sketch after the table).

    Jetson Platform         GPU_ARCHS
    --------------------    ---------
    Nano/TX1                53
    TX2                     62
    AGX Xavier/Xavier NX    72
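
    For an unlisted device, you can query the compute capability directly. A minimal sketch, assuming the CUDA samples ship with your JetPack release under /usr/local/cuda/samples (the location can vary):

    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery | grep "CUDA Capability"   # e.g. capability 7.2 corresponds to GPU_ARCHS=72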

  3. Build TensorRT OSS:

    git clone -b 21.03 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive   # fetch the bundled submodules
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build
    

    Note

    The -DGPU_ARCHS=72 below is for AGX Xavier or Xavier NX. For other Jetson platforms, replace 72 with the GPU_ARCHS value from step 2.

    /usr/local/bin/cmake .. -DGPU_ARCHS=72 \
        -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
        -DCMAKE_C_COMPILER=/usr/bin/gcc \
        -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)   # build only the nvinfer_plugin target
    

    After the build completes successfully, libnvinfer_plugin.so* is generated under `pwd`/out/.
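
    Before replacing the system library, you can verify that the plugin was actually produced (a quick sanity check, not part of the original steps):

    ls -l `pwd`/out/libnvinfer_plugin.so*   # run from the build directory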

  4. Replace "libnvinfer_plugin.so*" with the newly generated library:

    sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.8.x.y
    sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y
    sudo ldconfig   # refresh the shared-library cache
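
    You can confirm that the dynamic linker now resolves the replacement library (paths and version numbers will vary with your TensorRT install):

    ldconfig -p | grep libnvinfer_plugin   # should list the library under /usr/lib/aarch64-linux-gnu/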
    