Method 1: Python Package Index (pip)#

Recommended for: Python developers, quick prototyping, virtual environments

Advantages:

✓ Fastest installation method
✓ No manual downloads required
✓ Automatic dependency management
✓ Works in virtual environments
✓ No root access needed

Limitations:

✗ No C++ development headers
✗ Python-only (no standalone C++ apps)
✗ May install redundant libraries if the C++ TensorRT libraries are already installed

Platform Support#

Supported Configurations:

  • Python versions: 3.8, 3.9, 3.10, 3.11, 3.12, 3.13

  • Operating Systems:

    • Linux x86-64: Ubuntu 22.04+, RHEL 8+, Debian 12+, SLES 15+

    • Linux ARM SBSA and JetPack: Ubuntu 24.04+, Debian 12+

    • Windows x64: Windows 10+

Attention

Python 3.10 or later is required to run the TensorRT samples. Python 3.8 and 3.9 support the Python bindings only.
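The version requirement in the note above can be checked programmatically before installing. A minimal sketch; the 3.8 and 3.10 thresholds are taken from the note, and the function names are illustrative:

```python
import sys

# Minimum versions from the note above: 3.8 for the bindings, 3.10 for samples.
BINDINGS_MIN = (3, 8)
SAMPLES_MIN = (3, 10)

def supports_bindings(version_info=sys.version_info):
    """Return True if this interpreter can use the TensorRT Python bindings."""
    return tuple(version_info[:2]) >= BINDINGS_MIN

def supports_samples(version_info=sys.version_info):
    """Return True if this interpreter can also run the TensorRT samples."""
    return tuple(version_info[:2]) >= SAMPLES_MIN

print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"bindings={supports_bindings()}, samples={supports_samples()}")
```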

Installation Steps#

Step 1: Update pip and install wheel

python3 -m pip install --upgrade pip
python3 -m pip install wheel

Step 2: Install TensorRT

Install the complete TensorRT package with builder and runtime:

python3 -m pip install --upgrade tensorrt

This installs:

  • tensorrt-libs: TensorRT core libraries

  • tensorrt-bindings: Python bindings matching your Python version

Install only the lean runtime for running pre-built engines:

python3 -m pip install --upgrade tensorrt-lean

Install the minimal dispatch runtime:

python3 -m pip install --upgrade tensorrt-dispatch
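To see which of these wheel variants ended up in your environment, you can query the installed distributions with `importlib.metadata`. A hedged sketch; the candidate list mirrors the package names used in the commands above and is not exhaustive:

```python
from importlib import metadata

# Wheel names used in the install commands above (plus the -cuNN variants).
CANDIDATES = [
    "tensorrt", "tensorrt-lean", "tensorrt-dispatch",
    "tensorrt-cu12", "tensorrt-cu13",
    "tensorrt-lean-cu12", "tensorrt-dispatch-cu12",
]

def installed_tensorrt_packages():
    """Return {name: version} for any TensorRT wheels found in this environment."""
    found = {}
    for name in CANDIDATES:
        try:
            found[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            pass  # that variant is not installed
    return found

print(installed_tensorrt_packages())  # empty dict if nothing is installed yet
```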

CUDA Version Selection:

By default, TensorRT Python packages install the CUDA 13.x variants (the latest CUDA version supported by TensorRT). If you need a specific CUDA major version, append -cu12 or -cu13 to the package name:

# For CUDA 12.x
python3 -m pip install --upgrade tensorrt-cu12

# For CUDA 13.x (default, explicit)
python3 -m pip install --upgrade tensorrt-cu13

# Lean and Dispatch runtimes with CUDA 12.x
python3 -m pip install --upgrade tensorrt-lean-cu12 tensorrt-dispatch-cu12
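The naming convention above (a base name per runtime flavor, with an optional `-cuNN` suffix pinning the CUDA major version) can be captured in a small helper. This is only an illustration of the convention shown in the commands above, not an official API:

```python
def tensorrt_package_name(cuda_major=None, runtime="full"):
    """Map a CUDA major version and runtime flavor to the pip package name.

    Naming convention taken from the install commands above: lean and
    dispatch flavors get a suffix, and -cuNN pins the CUDA major version.
    """
    base = {"full": "tensorrt",
            "lean": "tensorrt-lean",
            "dispatch": "tensorrt-dispatch"}[runtime]
    if cuda_major is None:
        return base  # default wheel tracks the latest supported CUDA (13.x)
    return f"{base}-cu{cuda_major}"

print(tensorrt_package_name(12))          # tensorrt-cu12
print(tensorrt_package_name(12, "lean"))  # tensorrt-lean-cu12
print(tensorrt_package_name())            # tensorrt
```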

Tip

For upgrading an existing installation, clear the pip cache first:

python3 -m pip cache remove "tensorrt*"
python3 -m pip install --upgrade tensorrt tensorrt-lean tensorrt-dispatch

Verification#

Verify Full Runtime (if installed):

To confirm your installation is working:

  • Import the tensorrt module

  • Check the installed version

  • Create a Builder object to verify CUDA installation

import tensorrt as trt
print(trt.__version__)
assert trt.Builder(trt.Logger())

Expected output:

10.x.x

Verify Lean Runtime (if installed):

import tensorrt_lean as trt
print(trt.__version__)
assert trt.Runtime(trt.Logger())

Verify Dispatch Runtime (if installed):

import tensorrt_dispatch as trt
print(trt.__version__)
assert trt.Runtime(trt.Logger())

Next Steps:

If the verification commands above succeeded, you can now run the TensorRT Python samples to further confirm your installation. For more information about TensorRT samples, refer to the TensorRT Sample Support Guide.
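The three verification snippets above can be combined into one script that reports which runtime flavors are importable, degrading gracefully when a flavor is not installed. A minimal sketch using the module names from the snippets above:

```python
import importlib

# Module names from the verification snippets above.
RUNTIMES = ["tensorrt", "tensorrt_lean", "tensorrt_dispatch"]

def check_runtimes():
    """Return {module: version or None} for each TensorRT runtime flavor."""
    results = {}
    for name in RUNTIMES:
        try:
            mod = importlib.import_module(name)
            results[name] = mod.__version__
        except ImportError:
            results[name] = None  # that flavor is not installed
    return results

for name, version in check_runtimes().items():
    print(f"{name}: {version if version else 'not installed'}")
```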

Troubleshooting#

Issue: ModuleNotFoundError: No module named 'tensorrt'

  • Solution: Ensure you are in the correct Python environment. Activate your virtual environment if using one.

Issue: Need C++ headers or samples

  • Solution: The pip wheels are Python-only and do not include C++ development headers or samples (see Limitations above). Use one of the other installation methods to obtain them.

Issue: User-specific installation (no root)

  • Solution: Use --user flag:

    python3 -m pip install --user tensorrt
    

Issue: Encounter a TypeError while executing pip install

  • Solution: Update the setuptools and packaging Python modules:

    python3 -m pip install --upgrade setuptools packaging
    

Issue: CUDA initialization failure during verification

If the verification command fails with an error similar to:

[TensorRT] ERROR: CUDA initialization failure with error 100. Please check your CUDA installation: ...

  • Solution: This indicates the NVIDIA driver is not installed or not functioning properly. Check the following:

    1. Verify the NVIDIA driver is installed:

      nvidia-smi
      
    2. If running inside a container, ensure you are using a base image with GPU support, such as nvidia/cuda:13.2.1-base-ubuntu24.04.

    3. Reinstall or update the NVIDIA driver if necessary. Refer to the NVIDIA Driver Downloads page.
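The driver check in step 1 can be scripted so that a missing `nvidia-smi` binary or a non-zero exit code is treated as a failing driver. A minimal sketch; the function name is illustrative:

```python
import shutil
import subprocess

def nvidia_driver_ok():
    """Return True if nvidia-smi exists on PATH and exits cleanly, else False."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver utilities not on PATH (common in plain containers)
    try:
        result = subprocess.run(["nvidia-smi"], capture_output=True, timeout=30)
    except (subprocess.TimeoutExpired, OSError):
        return False
    return result.returncode == 0

print("NVIDIA driver OK" if nvidia_driver_ok() else "NVIDIA driver missing or failing")
```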