Sample Support Guide#

The TensorRT samples demonstrate how to use the TensorRT API for common inference workflows, including model conversion, network building, optimization, and deployment across different platforms.

Note

The TensorRT samples are provided for illustrative purposes only and are not intended to serve as production-quality code.

Quick Start#

New to TensorRT? Choose a sample based on your preferred language:

C++ Samples:

Python Samples:

These samples introduce core TensorRT concepts with clear explanations and step-by-step guidance.
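
Before diving into a Python sample, it can help to confirm that the TensorRT Python package is importable. This check is a suggestion and not part of any sample:

```shell
# Print the installed TensorRT version, or a message if the Python
# package is not importable in this environment.
OUT=$(python3 -c "import tensorrt as trt; print(trt.__version__)" 2>/dev/null \
  || echo "TensorRT Python package not found")
echo "$OUT"
```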

Sample Explorer#

Use the interactive Sample Explorer to find TensorRT samples. Select a filter mode, choose a value from the drop-down, and click Show Samples to view matching results in a table. Click Reset to clear the filter and start over.


Building and Running C++ Samples#

You can find the C++ samples in GitHub: TensorRT C++ Samples. For a complete list of C++ samples, refer to the Sample Explorer above, or start with the Quick Start section for beginner-friendly options.

Every C++ sample includes a README.md file with a detailed description of how the sample works, the relevant code, and step-by-step instructions for running the sample and verifying its output.

Building C++ Samples for All Platforms

To build all the samples, use the following commands:

$ cd <cloned_tensorrt_dir>
$ mkdir build && cd build
$ cmake .. \
    -DTRT_LIB_DIR=$TRT_LIBPATH \
    -DTRT_OUT_DIR=`pwd`/out \
    -DBUILD_SAMPLES=ON \
    -DBUILD_PARSERS=OFF \
    -DBUILD_PLUGINS=OFF
$ cmake --build . --parallel 4

Sample binaries are available under <cloned_tensorrt_dir>/build/out.

Running C++ Samples on Linux

To run a single sample on Linux:

$ cd <cloned_tensorrt_dir>/build/out
$ ./<sample_bin>
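
If a sample binary fails to start because the loader cannot find the TensorRT shared libraries, prepend the library directory to `LD_LIBRARY_PATH` first. The path below is an example; use your actual TensorRT library directory:

```shell
# Prepend the TensorRT library directory (example path) to the
# dynamic loader search path, then run the sample.
export TRT_LIBPATH=/usr/local/tensorrt/lib
export LD_LIBRARY_PATH=$TRT_LIBPATH:$LD_LIBRARY_PATH
# ./<sample_bin>
```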

Running C++ Samples on Windows

To run a single sample on Windows:

> cd <cloned_tensorrt_dir>\build\out
> .\<sample_bin>

Running Python Samples#

You can find the Python samples in GitHub: TensorRT Python Samples. For a complete list of Python samples, refer to the Sample Explorer above, or start with the Quick Start section for beginner-friendly options.

Every Python sample includes a README.md file (see the TensorRT Python Samples README on GitHub) with a detailed description of how the sample works, the relevant code, and step-by-step instructions for running the sample and verifying its output.

Running a Python sample typically involves two steps:

  1. Install the sample requirements:

    python3 -m pip install -r requirements.txt
    
  2. Run the sample code. If the TensorRT sample data is not in the default location, specify the data directory:

    python3 sample.py [-d DATA_DIR]
    

For more information on running samples, refer to the README.md file included with the sample.
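
The `-d DATA_DIR` handling can be sketched as follows: many samples fall back to a default data location when the flag is omitted. The default shown here is an example and may differ for your installation:

```shell
# Resolve the sample data directory: use the first argument if given,
# otherwise fall back to an example default location.
DATA_DIR=${1:-/usr/src/tensorrt/data}
echo "Using data from: $DATA_DIR"
```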

Cross Compiling Samples#

The following sections show how to cross-compile TensorRT samples for AArch64 QNX and Linux platforms on an x86_64 Linux host. This is an advanced topic for users who need to build samples for embedded platforms.

Prerequisites#

Complete the following steps before cross-compiling for any target platform.

  1. Install the CUDA cross-platform toolkit for your target and set the environment variables:

    $ export CUDA_INSTALL_DIR=/usr/local/cuda
    $ export TRT_LIBPATH=/path/to/tensorrt/lib
    

    Note

    If you install TensorRT from the network repository, install the cuda-toolkit-X-Y and cuda-cross-<arch>-X-Y packages first to ensure all CUDA dependencies are available.

  2. Install the TensorRT cross-compilation Debian packages for your target:

    • QNX AArch64: tensorrt-dev-cross-qnx

    • Linux AArch64: tensorrt-dev-cross-aarch64

    • Linux SBSA: tensorrt-dev-cross-sbsa

    Note

    You can skip this step if you use the tar file release, which already includes the cross-compile libraries.

  3. (Linux AArch64 and SBSA only) Install the AArch64 cross-compiler:

    $ sudo apt-get install g++-aarch64-linux-gnu
    
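
After completing the prerequisites, it can help to confirm that the AArch64 cross-compiler is on your `PATH`. This check is a suggestion for the Linux AArch64 and SBSA targets, not part of the official steps:

```shell
# Report whether the AArch64 cross-compiler is available on PATH.
STATUS=$(command -v aarch64-linux-gnu-g++ >/dev/null 2>&1 \
  && echo "found" || echo "missing")
echo "aarch64-linux-gnu-g++: $STATUS"
```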

Building Samples for QNX AArch64#

Download the QNX toolchain, set the environment variable, and build:

$ export QNX_BASE=/path/to/your/qnx/toolchain
$ cd <cloned_tensorrt_dir>
$ mkdir build && cd build
$ cmake .. \
  -DTRT_LIB_DIR=$TRT_LIBPATH \
  -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/cmake_qnx.toolchain \
  -DBUILD_SAMPLES=ON
$ make -j$(nproc)

Building Samples for Linux AArch64#

Build for Linux AArch64 platforms (for example, Jetson devices):

$ cd <cloned_tensorrt_dir>
$ mkdir -p build && cd build
$ cmake .. \
  -DTRT_LIB_DIR=$TRT_LIBPATH \
  -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/cmake_aarch64_cross.toolchain \
  -DBUILD_SAMPLES=ON
$ make -j$(nproc)

Building Samples for Linux SBSA#

Build for Linux SBSA (Server Base System Architecture) platforms:

$ cd <cloned_tensorrt_dir>
$ mkdir -p build && cd build
$ cmake .. \
  -DTRT_LIB_DIR=$TRT_LIBPATH \
  -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/cmake_sbsa_cross.toolchain \
  -DBUILD_SAMPLES=ON
$ make -j$(nproc)
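
After any of the cross-builds above, you can confirm that a produced binary targets the intended architecture with `file`. The binary name below is illustrative; substitute any sample you built:

```shell
# Inspect the ELF header of a built sample to confirm the target
# architecture (expect "ARM aarch64" for the AArch64 builds).
BIN=out/sample_onnx_mnist   # illustrative name; pick any built sample
if [ -e "$BIN" ]; then
  file "$BIN"
else
  echo "$BIN not found; build the samples first"
fi
```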