Method 4: Tar File Installation#
Recommended for: Multiple TensorRT versions, custom installation paths, C++ and Python development on Linux
Advantages:
✓ High flexibility in installation location
✓ No root privileges needed for installation
✓ Multiple versions can coexist
✓ Includes C++ headers
✓ Complete control over environment
Limitations:
✗ Requires manual dependency management
✗ Manual LD_LIBRARY_PATH configuration
✗ No automatic updates
✗ More complex setup than other methods
Platform Support#
Supported Operating Systems:
Linux x86-64: Ubuntu 22.04+, RHEL 8+, Debian 12+, SLES 15+
Linux ARM SBSA and JetPack: Ubuntu 24.04+, Debian 12+
Prerequisites:
CUDA Toolkit installed (tar file or package manager)
Installation Steps#
Step 1: Download the TensorRT tar file
From the TensorRT download page, download the tar file that matches the CPU architecture and CUDA version you are using.
Step 2: Choose installation directory
Choose where you want to install TensorRT. The tar file will install everything into a subdirectory called TensorRT-10.x.x.x, where 10.x.x.x is your TensorRT version.
Step 3: Extract the tar file
version="10.x.x.x"
arch=$(uname -m)
cuda="cuda-x.x"
tar -xzvf TensorRT-${version}.Linux.${arch}-gnu.${cuda}.tar.gz
Where 10.x.x.x is your TensorRT version and cuda-x.x is your CUDA version.
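Before downloading, it can help to compute the exact file name you should be looking for. The sketch below derives it from the naming pattern shown in the extraction command above; the pattern itself is an assumption taken from that command, so adjust it if your download is named differently:

```python
import platform
from typing import Optional

def trt_tarball_name(version: str, cuda: str, arch: Optional[str] = None) -> str:
    """Compose the tar file name following the pattern
    TensorRT-<version>.Linux.<arch>-gnu.<cuda>.tar.gz (assumed from the
    extraction command in Step 3)."""
    if arch is None:
        arch = platform.machine()  # e.g. 'x86_64' or 'aarch64'
    return f"TensorRT-{version}.Linux.{arch}-gnu.{cuda}.tar.gz"

print(trt_tarball_name("10.x.x.x", "cuda-x.x", "x86_64"))
# → TensorRT-10.x.x.x.Linux.x86_64-gnu.cuda-x.x.tar.gz
```

Calling it without the `arch` argument uses the machine you run it on, which matches the `arch=$(uname -m)` line in the extraction snippet.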
Step 4: Set environment variables
Add the absolute path of the TensorRT lib directory to the LD_LIBRARY_PATH environment variable. Assuming you extracted the tar file in the current directory:
export LD_LIBRARY_PATH=$(pwd)/TensorRT-${version}/lib:$LD_LIBRARY_PATH
For permanent configuration, add this line to ~/.bashrc or ~/.profile:
echo "export LD_LIBRARY_PATH=$(pwd)/TensorRT-${version}/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc
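To confirm the variable is set the way the loader expects, you can check whether the TensorRT lib directory appears as an entry in the colon-separated path. This is a minimal sketch; the `/opt/TensorRT-10.x.x.x/lib` path is a hypothetical install location, not a prescribed one:

```python
import os
from typing import Optional

def ld_path_contains(lib_dir: str, ld_path: Optional[str] = None) -> bool:
    """Return True if lib_dir is an entry in the colon-separated LD_LIBRARY_PATH.

    ld_path defaults to the current environment's LD_LIBRARY_PATH.
    """
    if ld_path is None:
        ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    return lib_dir in ld_path.split(":")

# '/opt/TensorRT-10.x.x.x/lib' is a hypothetical install path; substitute yours.
print(ld_path_contains("/opt/TensorRT-10.x.x.x/lib"))
```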
Step 5 (Optional): Install Python wheels
Install the Python TensorRT wheel file. Replace cp3x with the desired Python version (such as cp310 for Python 3.10):
cd TensorRT-${version}/python
python3 -m pip install tensorrt-*-cp3x-none-linux_x86_64.whl
Optionally, install the TensorRT lean and dispatch runtime wheel files:
python3 -m pip install tensorrt_lean-*-cp3x-none-linux_x86_64.whl
python3 -m pip install tensorrt_dispatch-*-cp3x-none-linux_x86_64.whl
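If you are unsure which cp3x tag matches your interpreter, you can derive it directly from the running Python rather than guessing. A small sketch:

```python
import sys

def wheel_cp_tag() -> str:
    """Return the CPython tag of the running interpreter,
    e.g. 'cp310' for Python 3.10 — the tag embedded in the wheel file names."""
    return f"cp{sys.version_info.major}{sys.version_info.minor}"

print(wheel_cp_tag())  # e.g. 'cp310' on Python 3.10
```

Pick the wheel whose file name contains this tag in place of cp3x.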
Verification#
Ensure that the installed files are located in the correct directories. For example, run the tree -d command to confirm that the installed files are present in the bin, lib, include, and other directories.
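The same directory check can be scripted. The sketch below assumes the extracted tree contains bin, lib, include, and python subdirectories (the directories named above plus the python wheel directory used in Step 5); trim the list if your package differs:

```python
from pathlib import Path

def missing_dirs(install_dir, expected=("bin", "lib", "include", "python")):
    """Return the expected subdirectories that are absent under install_dir.

    An empty list means every expected directory is in place.
    """
    root = Path(install_dir)
    return [name for name in expected if not (root / name).is_dir()]

# Hypothetical install path; substitute your TensorRT-10.x.x.x directory.
print(missing_dirs("/opt/TensorRT-10.x.x.x"))
```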
C++ Verification:
Compile and run a sample, such as sampleOnnxMNIST. Samples and sample data are only available from GitHub. The instructions to prepare the sample data can be found within the samples README.md. To build all the samples, use the following commands:
$ export TRT_LIBPATH=<path-to-TensorRT-lib-dir>
$ cd <cloned_tensorrt_dir>
$ mkdir build && cd build
$ cmake .. \
-DTRT_LIB_DIR=$TRT_LIBPATH \
-DTRT_OUT_DIR=`pwd`/out \
-DBUILD_SAMPLES=ON \
-DBUILD_PARSERS=OFF \
-DBUILD_PLUGINS=OFF
$ cmake --build . --parallel 4
$ ./out/sample_onnx_mnist
For information about the samples, refer to the TensorRT Sample Support Guide.
Python Verification:
import tensorrt as trt
print(trt.__version__)
assert trt.Builder(trt.Logger())
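If the import itself fails, it can be useful to distinguish "the wheel is not installed" from "the wheel is installed but its shared libraries cannot load". The sketch below checks whether the tensorrt package can be found at all, without triggering the library load:

```python
import importlib.util

def tensorrt_available() -> bool:
    """Report whether the tensorrt package can be located on sys.path,
    without actually importing it (so no shared libraries are loaded)."""
    return importlib.util.find_spec("tensorrt") is not None

print(tensorrt_available())
```

If this prints False, revisit Step 5 (the wheel is missing); if it prints True but `import tensorrt` still fails with a shared-library error, revisit Step 4 (LD_LIBRARY_PATH).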
Troubleshooting#
Issue: error while loading shared libraries: libnvinfer.so.10
Solution: Ensure LD_LIBRARY_PATH is set correctly. Check:
echo $LD_LIBRARY_PATH
It should include $TENSORRT_INSTALL_DIR/lib.
Issue: Samples fail to compile
Solution: Ensure the CUDA development headers are available (installed with the CUDA Toolkit), and install the build tools:
sudo apt-get install build-essential cmake
Issue: Wrong Python wheel version
Solution: Check your Python version:
python3 --version
Download the matching wheel (cp38 for Python 3.8, cp39 for Python 3.9, and so on).