Method 3: RPM Package Installation#
Recommended for: System-wide installation, C++ and Python development, RHEL/CentOS/Fedora users
Advantages:
✓ Automatic dependency installation
✓ Integrates with the system package manager
✓ Includes C++ headers
✓ Easy updates with `dnf`
Limitations:
✗ Requires `sudo` or root privileges
✗ Fixed installation location (`/usr`)
✗ Only one minor version of TensorRT can be installed at a time
✗ Linux only (RHEL/CentOS/Fedora)
Platform Support#
Supported Operating Systems:
RHEL 8, 9 (x86-64)
Rocky Linux 8, 9 (x86-64)
Prerequisites:
CUDA Toolkit installed using RPM packages
`sudo` or root access
Tip
Choose your installation method:
Local Repo (below): For new users or complete developer installation
Network Repo (refer to Network Repo Method): For advanced users, containers, or automation
Installation Steps (Local Repo Method)#
Note
Before issuing commands, replace `rhelx`, `10.x.x`, and `cuda-x.x` with your specific OS, TensorRT, and CUDA versions. When installing Python packages using this method, you must manually install dependencies with `pip`.
Step 1: Download the TensorRT RPM repository package
From the TensorRT download page, download the RPM repository package for your CUDA version and OS.
Example filename: nv-tensorrt-local-repo-rhel8-10.16.1-cuda-13.2-1.0-1.x86_64.rpm
Step 2: Install the repository package
os="rhelx"
tag="10.x.x-cuda-x.x"
sudo rpm -Uvh nv-tensorrt-local-repo-${os}-${tag}-1.0-1.x86_64.rpm
sudo dnf clean expire-cache
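The placeholder substitution above can be sanity-checked before touching rpm; this sketch composes the repository filename from the example versions in Step 1 (rhel8, TensorRT 10.16.1, CUDA 13.2), and you would substitute your own values:

```shell
# Sketch: compose the local-repo RPM filename from the placeholder variables.
# The version values here match the Step 1 example filename; replace them
# with the versions you actually downloaded.
os="rhel8"
tag="10.16.1-cuda-13.2"
rpm_file="nv-tensorrt-local-repo-${os}-${tag}-1.0-1.x86_64.rpm"
echo "${rpm_file}"
# Then, on a real system:
#   sudo rpm -Uvh "${rpm_file}"
#   sudo dnf clean expire-cache
```

If the echoed name does not match the file you downloaded, fix the variables before running rpm.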
Step 3: Install TensorRT packages
Choose the package type that matches your needs:
| Package Type | Installation Command | Notes |
|---|---|---|
| Full C++ and Python Runtime | `sudo dnf install tensorrt` | Installs all TensorRT components, including the runtime, development headers, and Python bindings. |
| Lean Runtime Only | `sudo dnf install libnvinfer-lean10`<br>`sudo dnf install libnvinfer-vc-plugin10` | |
| Lean Runtime Python Package | `sudo dnf install python3-libnvinfer-lean` | |
| Dispatch Runtime Only | `sudo dnf install libnvinfer-dispatch10`<br>`sudo dnf install libnvinfer-vc-plugin10` | |
| Dispatch Runtime Python Package | `sudo dnf install python3-libnvinfer-dispatch` | |
| Windows Builder Resource Library (for cross-platform support) | `sudo dnf install libnvinfer-win-builder-resource10` | |
| All TensorRT Python Packages | `python3 -m pip install numpy`<br>`sudo dnf install python3-libnvinfer-devel` | Installs all TensorRT Python packages. |
| ONNX Graph Surgeon (for samples or projects) | `python3 -m pip install numpy onnx onnx-graphsurgeon` | |
TensorRT Python bindings are only installed for Python 3.12 due to package dependencies. If your default python3 is not version 3.12:
Use `update-alternatives` to switch to Python 3.12 by default
Invoke Python using `python3.12`
For non-default Python versions, install the `.whl` files directly from the tar package
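A quick interpreter check can catch the version mismatch before an install fails. This is a minimal sketch, not part of TensorRT; the 3.12 requirement comes from the note above:

```python
# Sketch: check that the interpreter matches the Python version the RPM
# bindings target. REQUIRED reflects the Python 3.12 note above; it is
# not a value exported by TensorRT.
import sys

REQUIRED = (3, 12)

def bindings_match(version_info=None):
    """True when this interpreter is the version the RPM bindings target."""
    vi = version_info if version_info is not None else sys.version_info
    return (vi[0], vi[1]) == REQUIRED

if __name__ == "__main__":
    if not bindings_match():
        major, minor = sys.version_info[0], sys.version_info[1]
        print(f"python3 is {major}.{minor}; invoke python3.12 instead, "
              "or install the .whl files from the tar package")
```

Run it with the interpreter you intend to use; silence means the versions line up.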
Note
When installing Python packages using the local repo method, you must manually install TensorRT’s Python dependencies with pip.
Installation Steps (Network Repo Method)#
This method is for advanced users already familiar with TensorRT who want quick setup or automation (such as when using containers). New users should use the local repo method above.
Note
If you are using a CUDA container, the NVIDIA CUDA network repository is already set up. Skip Step 1.
Step 1: Set up CUDA network repository
Follow the CUDA Toolkit download page instructions:
Select Linux operating system
Select desired architecture
Select RHEL or Rocky distribution
Select desired RHEL or Rocky version
Select rpm (network) installer type
Enter the provided commands into your terminal
Tip
You can omit the final dnf install command if you do not require the entire CUDA Toolkit. When installing TensorRT, dnf downloads required CUDA dependencies automatically.
Step 2: Install TensorRT packages
Choose the package type for your needs:
| Package Type | Installation Command |
|---|---|
| Lean Runtime Only | `sudo dnf install libnvinfer-lean10` |
| Lean Runtime Python Package | `sudo dnf install python3-libnvinfer-lean` |
| Dispatch Runtime Only | `sudo dnf install libnvinfer-dispatch10` |
| Dispatch Runtime Python Package | `sudo dnf install python3-libnvinfer-dispatch` |
| Windows Builder Resource Library | `sudo dnf install libnvinfer-win-builder-resource10` |
| C++ Applications Runtime Only | `sudo dnf install tensorrt-libs` |
| C++ Applications with Development | `sudo dnf install tensorrt-devel` |
| C++ with Lean Runtime Development | `sudo dnf install libnvinfer-lean-devel` |
| C++ with Dispatch Runtime Development | `sudo dnf install libnvinfer-dispatch-devel` |
| Standard Runtime Python Package | `python3 -m pip install numpy`<br>`sudo dnf install python3-libnvinfer` |
| Additional Python Modules (onnx-graphsurgeon) | Use `pip`: `python3 -m pip install numpy onnx onnx-graphsurgeon` |
Step 3 (Optional): Install specific CUDA version
By default, RHEL installs TensorRT for the latest CUDA version when using the CUDA network repository. To install TensorRT for a specific CUDA version and prevent automatic updates:
version="10.x.x.x-1.cudax.x"
sudo dnf install \
libnvinfer-bin-${version} \
libnvinfer-devel-${version} \
libnvinfer-dispatch-devel-${version} \
libnvinfer-dispatch10-${version} \
libnvinfer-headers-devel-${version} \
libnvinfer-headers-plugin-devel-${version} \
libnvinfer-headers-python-plugin-devel-${version} \
libnvinfer-lean-devel-${version} \
libnvinfer-lean10-${version} \
libnvinfer-plugin-devel-${version} \
libnvinfer-plugin10-${version} \
libnvinfer-safe-headers-devel-${version} \
libnvinfer-vc-plugin-devel-${version} \
libnvinfer-vc-plugin10-${version} \
libnvinfer-win-builder-resource10-${version} \
libnvinfer10-${version} \
libnvonnxparsers-devel-${version} \
libnvonnxparsers10-${version} \
python3-libnvinfer-${version} \
python3-libnvinfer-devel-${version} \
python3-libnvinfer-dispatch-${version} \
python3-libnvinfer-lean-${version} \
tensorrt-${version} \
tensorrt-devel-${version} \
tensorrt-libs-${version}
sudo dnf install dnf-plugin-versionlock
sudo dnf versionlock \
libnvinfer-bin \
libnvinfer-devel \
libnvinfer-dispatch-devel \
libnvinfer-dispatch10 \
libnvinfer-headers-devel \
libnvinfer-headers-plugin-devel \
libnvinfer-headers-python-plugin-devel \
libnvinfer-lean-devel \
libnvinfer-lean10 \
libnvinfer-plugin-devel \
libnvinfer-plugin10 \
libnvinfer-safe-headers-devel \
libnvinfer-vc-plugin-devel \
libnvinfer-vc-plugin10 \
libnvinfer-win-builder-resource10 \
libnvinfer10 \
libnvonnxparsers-devel \
libnvonnxparsers10 \
python3-libnvinfer \
python3-libnvinfer-devel \
python3-libnvinfer-dispatch \
python3-libnvinfer-lean \
tensorrt \
tensorrt-devel \
tensorrt-libs
To upgrade to the latest version, unlock the packages:
sudo dnf versionlock delete \
libnvinfer-bin \
libnvinfer-devel \
libnvinfer-dispatch-devel \
libnvinfer-dispatch10 \
libnvinfer-headers-devel \
libnvinfer-headers-plugin-devel \
libnvinfer-headers-python-plugin-devel \
libnvinfer-lean-devel \
libnvinfer-lean10 \
libnvinfer-plugin-devel \
libnvinfer-plugin10 \
libnvinfer-safe-headers-devel \
libnvinfer-vc-plugin-devel \
libnvinfer-vc-plugin10 \
libnvinfer-win-builder-resource10 \
libnvinfer10 \
libnvonnxparsers-devel \
libnvonnxparsers10 \
python3-libnvinfer \
python3-libnvinfer-devel \
python3-libnvinfer-dispatch \
python3-libnvinfer-lean \
tensorrt \
tensorrt-devel \
tensorrt-libs
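Because the same long package list appears in the install, versionlock, and unlock commands, it is easy for the three copies to drift apart. A sketch like the following (abridged package list; the variable names are illustrative) builds the version-pinned arguments from a single list:

```shell
# Sketch: derive the version-pinned install arguments from one package
# list, so the install and versionlock commands stay in sync.
# Package list abridged; use the full list from the commands above.
version="10.x.x.x-1.cudax.x"
pkgs="libnvinfer10 libnvinfer-devel tensorrt tensorrt-devel tensorrt-libs"
pinned=""
for p in ${pkgs}; do
  pinned="${pinned} ${p}-${version}"
done
echo "${pinned}"
# Then, on a real system:
#   sudo dnf install ${pinned}
#   sudo dnf versionlock ${pkgs}
```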
Verification#
These verification steps apply to both Local Repo and Network Repo installation methods.
Package Verification:
Verify that TensorRT packages are installed:
| Package Type | Command | Expected Output |
|---|---|---|
| Full TensorRT Release | `rpm -q tensorrt` | `tensorrt-10.16.1.x-1.cuda-13.2.x86_64` |
| Lean/Dispatch Runtime | `rpm -qa \| grep nvinfer` | Lists all installed `nvinfer` packages |
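For scripted checks, the `rpm -qa | grep nvinfer` query can be turned into a pass/fail test. In this sketch a sample listing stands in for real `rpm -qa` output, so the version strings shown are illustrative:

```shell
# Sketch: fail fast when no nvinfer packages are installed. The here-string
# below is illustrative sample output; on a real system you would pipe
# `rpm -qa` instead.
installed_sample="libnvinfer10-10.16.1.x-1.cuda-13.2.x86_64
libnvinfer-lean10-10.16.1.x-1.cuda-13.2.x86_64
tensorrt-10.16.1.x-1.cuda-13.2.x86_64"
nvinfer_count=$(printf '%s\n' "${installed_sample}" | grep -c nvinfer)
echo "nvinfer packages: ${nvinfer_count}"
# Real system: nvinfer_count=$(rpm -qa | grep -c nvinfer)
```

A count of zero means the runtime packages were not installed.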
C++ Verification:
Compile and run a sample, such as sampleOnnxMNIST. Samples and sample data are available only from GitHub; the instructions for preparing the sample data are in the samples' README.md. To build all the samples, use the following commands:
$ cd <cloned_tensorrt_dir>
$ mkdir build && cd build
$ cmake .. \
-DTRT_LIB_DIR=$TRT_LIBPATH \
-DTRT_OUT_DIR=`pwd`/out \
-DBUILD_SAMPLES=ON \
-DBUILD_PARSERS=OFF \
-DBUILD_PLUGINS=OFF
$ cmake --build . --parallel 4
$ ./out/sample_onnx_mnist
For information about the samples, refer to TensorRT Sample Support Guide.
Python Verification:
import tensorrt as trt
print(trt.__version__)
assert trt.Builder(trt.Logger())
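If the import itself fails, the assertion above is never reached. A small pre-check such as this sketch (the `has_module` helper is hypothetical, not a TensorRT API) gives a friendlier message when the bindings are missing:

```python
# Sketch: probe for a module without importing it, then fall through to
# the normal import check. `has_module` is a hypothetical helper.
import importlib.util

def has_module(name):
    """Return True if `name` is importable in this interpreter."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    if has_module("tensorrt"):
        import tensorrt as trt
        print(trt.__version__)
    else:
        print("tensorrt bindings not found; check python3 --version "
              "(the RPM bindings target Python 3.12)")
```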
Troubleshooting#
These troubleshooting steps apply to both Local Repo and Network Repo installation methods.
Issue: No package tensorrt available
Solution: Ensure you completed the repository setup steps correctly. Check that the repository is enabled:
dnf repolist
Issue: Dependency conflicts with existing CUDA installation
Solution: Ensure CUDA Toolkit was installed using RPM packages (not tar file or runfile).
Issue: Samples fail to compile
Solution: Install development tools:
sudo dnf groupinstall "Development Tools"