## Method 2: Debian Package Installation
Recommended for: System-wide installation, C++ and Python development, Ubuntu/Debian users
Advantages:

- ✓ Automatic dependency installation
- ✓ Integrates with the system package manager
- ✓ Includes C++ headers
- ✓ Easy updates with `apt`

Limitations:

- ✗ Requires `sudo` or root privileges
- ✗ Fixed installation location (`/usr`)
- ✗ Only one minor version of TensorRT can be installed at a time
- ✗ Linux only (Ubuntu/Debian)
### Platform Support
Supported Operating Systems:

- Ubuntu 22.04 (x86-64)
- Ubuntu 24.04 (x86-64, ARM SBSA)
- Debian 12 (x86-64, ARM SBSA)
Prerequisites:

- CUDA Toolkit installed using Debian packages
- `sudo` or root access
Tip
Choose your installation method:

- Local Repo (below): For new users or a complete developer installation
- Network Repo (refer to the Network Repo Method): For advanced users, containers, or automation
### Installation Steps (Local Repo Method)
Step 1: Download the TensorRT Debian repository package

From the TensorRT download page, download the Debian repository package for your CUDA version and OS.

Example filename: `nv-tensorrt-local-repo-ubuntu2404-10.16.1-cuda-13.2_1.0-1_amd64.deb`

In the commands below, replace `10.x.x` with your TensorRT version and `cuda-x.x` with your CUDA version. Replace `amd64` with `arm64` for ARM SBSA.
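The filename pattern above can be sketched as a small shell helper; the OS tag, version strings, and architecture below are placeholders, not pinned recommendations:

```shell
# Hypothetical helper: assemble the expected .deb filename from your versions.
os="ubuntu2404"   # or ubuntu2204, debian12
trt="10.x.x"      # your TensorRT version
cuda="x.x"        # your CUDA version
arch="amd64"      # use arm64 for ARM SBSA
deb="nv-tensorrt-local-repo-${os}-${trt}-cuda-${cuda}_1.0-1_${arch}.deb"
echo "$deb"
```

Comparing the printed name against the file you downloaded is a quick way to catch a version/OS mismatch before running `dpkg -i`.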
Step 2: Install the repository package

```shell
sudo dpkg -i nv-tensorrt-local-repo-ubuntu2404-10.x.x-cuda-x.x_1.0-1_amd64.deb
```

Step 3: Copy the keyring

```shell
sudo cp /var/nv-tensorrt-local-repo-ubuntu2404-10.x.x-cuda-x.x/*-keyring.gpg /usr/share/keyrings/
```

Step 4: Update the package index

```shell
sudo apt-get update
```
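Steps 2-4 can be sketched as a single script. The filename and repository path are placeholders, and the `run` wrapper defaults to a dry run that prints the privileged commands instead of executing them (set `DRY_RUN=0` to run them for real):

```shell
# Dry-run sketch of steps 2-4; filename and repo path are placeholders.
deb="nv-tensorrt-local-repo-ubuntu2404-10.x.x-cuda-x.x_1.0-1_amd64.deb"
repo="/var/nv-tensorrt-local-repo-ubuntu2404-10.x.x-cuda-x.x"

# Print commands by default; set DRY_RUN=0 to execute them with sudo.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ sudo $*"; else sudo "$@"; fi
}

run dpkg -i "$deb"                                   # step 2: install repo package
run cp "$repo"/*-keyring.gpg /usr/share/keyrings/    # step 3: copy keyring
run apt-get update                                   # step 4: refresh package index
```

Reviewing the dry-run output first is useful when the repo package name was typed by hand.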
Step 5: Install TensorRT packages
Choose the package type that matches your needs:
| Package Type | Installation Command |
|---|---|
| Full C++ and Python Runtime (runtime, development headers, and Python bindings) | `sudo apt-get install tensorrt` |
| Lean Runtime Only | `sudo apt-get install libnvinfer-lean10 libnvinfer-vc-plugin10` |
| Lean Runtime Python Package | `sudo apt-get install python3-libnvinfer-lean` |
| Dispatch Runtime Only | `sudo apt-get install libnvinfer-dispatch10 libnvinfer-vc-plugin10` |
| Dispatch Runtime Python Package | `sudo apt-get install python3-libnvinfer-dispatch` |
| Windows Builder Resource Library (for cross-platform support) | `sudo apt-get install libnvinfer-win-builder-resource10` |
| All TensorRT Python Packages | `python3 -m pip install numpy && sudo apt-get install python3-libnvinfer-dev` |
| ONNX GraphSurgeon (for samples or projects) | `python3 -m pip install numpy onnx onnx-graphsurgeon` |

Note: If you need the Python bindings for a non-default Python version, install the package that matches that Python version.
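After installing one of the Python packages above, you can check which binding variant is importable. This sketch assumes the lean and dispatch packages expose the `tensorrt_lean` and `tensorrt_dispatch` module names; adjust if your version differs:

```shell
# Report which TensorRT Python binding variants can be imported.
for mod in tensorrt tensorrt_lean tensorrt_dispatch; do
  if python3 -c "import ${mod}" 2>/dev/null; then
    echo "${mod}: available"
  else
    echo "${mod}: not installed"
  fi
done
```

A machine with only the lean runtime installed would report just `tensorrt_lean` as available.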
Note
When installing Python packages using the local repo method, you must manually install TensorRT’s Python dependencies with pip.
### Installation Steps (Network Repo Method)
This method is for advanced users already familiar with TensorRT who want quick setup or automation (such as when using containers). New users should use the local repo method above.
Note
If you are using a CUDA container, the NVIDIA CUDA network repository is already set up. Skip Step 1.
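One way to check whether an NVIDIA repository entry is already configured (for example, inside a CUDA container) is to grep the apt sources; the hostname and paths here are assumptions about a typical setup:

```shell
# Look for an NVIDIA repository entry in the apt sources (paths and
# hostname are assumptions; adjust for your distribution).
if grep -rqs "developer.download.nvidia.com" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null; then
  echo "NVIDIA repository already configured; you can skip Step 1"
else
  echo "no NVIDIA repository found; complete Step 1"
fi
```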
Step 1: Set up the CUDA network repository

Follow the CUDA Toolkit download page instructions:

1. Select the Linux operating system
2. Select your desired architecture
3. Select the Ubuntu or Debian distribution
4. Select your desired Ubuntu or Debian version
5. Select the deb (network) installer type
6. Enter the provided commands into your terminal
Tip
You can omit the final `apt-get install` command if you do not require the entire CUDA Toolkit. When installing TensorRT, `apt` downloads the required CUDA dependencies automatically.
Step 2: Install TensorRT packages
Choose the package type for your needs:
| Package Type | Installation Command |
|---|---|
| Lean Runtime Only | `sudo apt-get install libnvinfer-lean10` |
| Lean Runtime Python Package | `sudo apt-get install python3-libnvinfer-lean` |
| Dispatch Runtime Only | `sudo apt-get install libnvinfer-dispatch10` |
| Dispatch Runtime Python Package | `sudo apt-get install python3-libnvinfer-dispatch` |
| Windows Builder Resource Library | `sudo apt-get install libnvinfer-win-builder-resource10` |
| C++ Applications Runtime Only | `sudo apt-get install tensorrt-libs` |
| C++ Applications with Development | `sudo apt-get install tensorrt-dev` |
| C++ with Lean Runtime Development | `sudo apt-get install libnvinfer-lean-dev` |
| C++ with Dispatch Runtime Development | `sudo apt-get install libnvinfer-dispatch-dev` |
| Standard Runtime Python Package | `python3 -m pip install numpy && sudo apt-get install python3-libnvinfer` |
| Additional Python Modules (onnx-graphsurgeon) | `python3 -m pip install onnx-graphsurgeon` |
Step 3 (Optional): Install specific CUDA version
By default, Ubuntu installs TensorRT for the latest CUDA version when using the CUDA network repository. To install for a specific CUDA version and prevent automatic updates:
```shell
version="10.x.x.x-1+cudax.x"
sudo apt-get install \
  libnvinfer-bin=${version} \
  libnvinfer-dev=${version} \
  libnvinfer-dispatch-dev=${version} \
  libnvinfer-dispatch10=${version} \
  libnvinfer-headers-dev=${version} \
  libnvinfer-headers-plugin-dev=${version} \
  libnvinfer-headers-python-plugin-dev=${version} \
  libnvinfer-lean-dev=${version} \
  libnvinfer-lean10=${version} \
  libnvinfer-plugin-dev=${version} \
  libnvinfer-plugin10=${version} \
  libnvinfer-safe-headers-dev=${version} \
  libnvinfer-vc-plugin-dev=${version} \
  libnvinfer-vc-plugin10=${version} \
  libnvinfer-win-builder-resource10=${version} \
  libnvinfer10=${version} \
  libnvonnxparsers-dev=${version} \
  libnvonnxparsers10=${version} \
  python3-libnvinfer-dev=${version} \
  python3-libnvinfer-dispatch=${version} \
  python3-libnvinfer-lean=${version} \
  python3-libnvinfer=${version} \
  tensorrt-dev=${version} \
  tensorrt-libs=${version} \
  tensorrt=${version}
sudo apt-mark hold \
  libnvinfer-bin \
  libnvinfer-dev \
  libnvinfer-dispatch-dev \
  libnvinfer-dispatch10 \
  libnvinfer-headers-dev \
  libnvinfer-headers-plugin-dev \
  libnvinfer-headers-python-plugin-dev \
  libnvinfer-lean-dev \
  libnvinfer-lean10 \
  libnvinfer-plugin-dev \
  libnvinfer-plugin10 \
  libnvinfer-safe-headers-dev \
  libnvinfer-vc-plugin-dev \
  libnvinfer-vc-plugin10 \
  libnvinfer-win-builder-resource10 \
  libnvinfer10 \
  libnvonnxparsers-dev \
  libnvonnxparsers10 \
  python3-libnvinfer-dev \
  python3-libnvinfer-dispatch \
  python3-libnvinfer-lean \
  python3-libnvinfer \
  tensorrt-dev \
  tensorrt-libs \
  tensorrt
```
To upgrade to the latest version, unhold the packages:

```shell
sudo apt-mark unhold \
  libnvinfer-bin \
  libnvinfer-dev \
  libnvinfer-dispatch-dev \
  libnvinfer-dispatch10 \
  libnvinfer-headers-dev \
  libnvinfer-headers-plugin-dev \
  libnvinfer-headers-python-plugin-dev \
  libnvinfer-lean-dev \
  libnvinfer-lean10 \
  libnvinfer-plugin-dev \
  libnvinfer-plugin10 \
  libnvinfer-safe-headers-dev \
  libnvinfer-vc-plugin-dev \
  libnvinfer-vc-plugin10 \
  libnvinfer-win-builder-resource10 \
  libnvinfer10 \
  libnvonnxparsers-dev \
  libnvonnxparsers10 \
  python3-libnvinfer-dev \
  python3-libnvinfer-dispatch \
  python3-libnvinfer-lean \
  python3-libnvinfer \
  tensorrt-dev \
  tensorrt-libs \
  tensorrt
```
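Because the same package list appears in the install, hold, and unhold commands, a shell array keeps the copies from drifting apart; the list here is deliberately trimmed to a few illustrative names:

```shell
# Sketch: one package list shared by install, hold, and unhold.
# The array is trimmed for illustration; extend it with the full list above.
version="10.x.x.x-1+cudax.x"
trt_pkgs=(libnvinfer10 libnvinfer-dev libnvonnxparsers10 tensorrt)

# name=version pairs, as required by the pinned apt-get install:
printf '%s\n' "${trt_pkgs[@]/%/=${version}}"

# Then:
#   sudo apt-get install "${trt_pkgs[@]/%/=${version}}"
#   sudo apt-mark hold "${trt_pkgs[@]}"
#   sudo apt-mark unhold "${trt_pkgs[@]}"
```

The `${trt_pkgs[@]/%/=${version}}` expansion appends `=${version}` to every array element, so the pin and the hold always cover the same set of packages.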
### Verification
These verification steps apply to both Local Repo and Network Repo installation methods.
Package Verification:
Verify that TensorRT packages are installed:
| Package Type | Command | Expected Output |
|---|---|---|
| Full TensorRT Release | `dpkg-query -W tensorrt` | `tensorrt 10.16.1.x-1+cuda13.2` |
| Lean/Dispatch Runtime | `dpkg-query -W "*nvinfer*"` | Lists all installed `nvinfer` packages |
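If you script this check, the version column can be parsed out of the `dpkg-query` output; the sample line below is a stand-in for real output:

```shell
# Parse the version field out of a dpkg-query line (format: "name version").
sample='tensorrt 10.16.1.x-1+cuda13.2'   # stand-in for: dpkg-query -W tensorrt
ver=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "$ver"
```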
C++ Verification:
Compile and run a sample, such as sampleOnnxMNIST. Samples and sample data are available only from GitHub; the instructions for preparing the sample data are in the samples' README.md. To build all the samples, use the following commands:

```shell
cd <cloned_tensorrt_dir>
mkdir build && cd build
cmake .. \
  -DTRT_LIB_DIR=$TRT_LIBPATH \
  -DTRT_OUT_DIR=`pwd`/out \
  -DBUILD_SAMPLES=ON \
  -DBUILD_PARSERS=OFF \
  -DBUILD_PLUGINS=OFF
cmake --build . --parallel 4
./out/sample_onnx_mnist
```
For detailed information about the samples, refer to TensorRT Sample Support Guide.
Python Verification:
```python
import tensorrt as trt

print(trt.__version__)
assert trt.Builder(trt.Logger())
```
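If a script needs to gate on the version that snippet prints, `sort -V` gives a portable comparison; both version strings below are illustrative:

```shell
# Compare a reported TensorRT version against a minimum using sort -V.
ver="10.16.1"      # e.g. the value printed by trt.__version__
min_ver="10.0.0"   # hypothetical minimum your project requires
if [ "$(printf '%s\n' "$min_ver" "$ver" | sort -V | head -n1)" = "$min_ver" ]; then
  echo "TensorRT ${ver} satisfies minimum ${min_ver}"
else
  echo "TensorRT ${ver} is older than ${min_ver}"
fi
```

The test works because `sort -V | head -n1` returns the smaller version; if that is the minimum, the installed version is at least as new.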
### Troubleshooting
These troubleshooting steps apply to both Local Repo and Network Repo installation methods.
Issue: `E: Unable to locate package tensorrt`

Solution: Ensure you completed the repository setup steps correctly. Check that the repository is enabled:

```shell
apt-cache policy tensorrt
```
Issue: Dependency conflicts with existing CUDA installation
Solution: Ensure the CUDA Toolkit was installed using Debian packages (not a tar file or runfile).
Issue: Samples fail to compile
Solution: Install the build essentials:

```shell
sudo apt-get install build-essential cmake
```