# TensorRT 10.13.2 Support Matrix

## Platform and Software Support
The following table shows TensorRT component support across different platforms, including supported CUDA versions, cuDNN, Python API, ONNX parser, and control flow features.
**Important: Engine Portability**

- **Platform:** Serialized engines are not portable across platforms (for example, an engine built on Linux cannot be deserialized on Windows).
- **Version compatibility:** Engines built with the version-compatible flag can be run by newer TensorRT versions within the same major version.
- **Hardware compatibility:** Engines built with a hardware compatibility mode can run on multiple GPU architectures, depending on the hardware compatibility level used; without this mode, engines are not portable across GPU architectures.
- **Driver requirements:** Refer to the NVIDIA CUDA Release Notes for the minimum compatible NVIDIA driver versions.
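As an illustration of the two compatibility modes above, the following is a minimal sketch using the TensorRT Python API: it sets the version-compatible builder flag and one of the available hardware compatibility levels (`AMPERE_PLUS`) before serializing the engine. Network construction is elided, and the choice of level is for illustration only.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
# ... populate the network, e.g. with the ONNX parser ...
config = builder.create_builder_config()

# The resulting plan can be deserialized by newer TensorRT releases
# within the same major version.
config.set_flag(trt.BuilderFlag.VERSION_COMPATIBLE)

# The resulting plan can run on Ampere and later GPU architectures,
# at some cost in performance and kernel selection.
config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS

plan = builder.build_serialized_network(network, config)
```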
| Component | Linux x86-64 (10.13.x) | Windows x64 (10.13.x) | Linux SBSA (10.13.x) | NVIDIA JetPack (10.7.x) |
|---|---|---|---|---|
| NVIDIA CUDA | | | | |
| NVIDIA cuDNN (Optional) | N/A | | | |
| TensorRT Python API | Supported | Supported | Supported | Supported |
| ONNX Parser | Supported | Supported | Supported | Supported |
| Control Flow (Loops) | Supported | Supported | Supported | Supported |
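To make the Control Flow (Loops) row above concrete, here is a minimal sketch of a counted loop built with the TensorRT Python API (`network.add_loop()`); the input shape, trip count, tensor names, and the doubling operation are illustrative only, not taken from this document.

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit-batch network

x = network.add_input("x", trt.float32, (1, 8))
# The trip count for a counted loop must be a 0-D INT32 tensor.
trip_count = network.add_constant(shape=(), weights=np.array([4], dtype=np.int32)).get_output(0)

loop = network.add_loop()
loop.add_trip_limit(trip_count, trt.TripLimit.COUNT)
rec = loop.add_recurrence(x)                          # value carried across iterations
doubled = network.add_elementwise(rec.get_output(0), rec.get_output(0),
                                  trt.ElementWiseOperation.SUM)
rec.set_input(1, doubled.get_output(0))               # feed the next iteration's value
result = loop.add_loop_output(rec.get_output(0), trt.LoopOutput.LAST_VALUE)
network.mark_output(result.get_output(0))
```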
## GPU Architecture and Precision Support
The following table lists the GPU architectures supported by TensorRT, identified by CUDA compute capability, with example devices, the supported precision modes (TF32, FP32, FP16, FP8, FP4, BF16, INT8), and Deep Learning Accelerator (DLA) availability for each architecture.
| CUDA Compute Capability | Example Devices | TF32 | FP32 | FP16 | FP8 | FP4 | BF16 | INT8 | DLA |
|---|---|---|---|---|---|---|---|---|---|
| 12.0 | NVIDIA RTX PRO 6000 Blackwell | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 11.0 | NVIDIA Jetson AGX Thor | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 10.0 | NVIDIA B200 | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 9.0 | NVIDIA H100 | Supported | Supported | Supported | Supported | Supported [2] | Supported | Supported | N/A |
| 8.9 | NVIDIA L40S | Supported | Supported | Supported | Supported | Supported [2] | Supported | Supported | N/A |
| 8.7 | NVIDIA DRIVE AGX Orin | Supported | Supported | Supported | N/A | N/A | N/A | Supported | Supported |
| 8.6 | NVIDIA A10 | Supported | Supported | Supported | N/A | N/A | Supported | Supported | N/A |
| 8.0 | NVIDIA A100 | Supported | Supported | Supported | N/A | N/A | Supported | Supported | N/A |
| 7.5 | NVIDIA T4 | N/A | Supported | Supported | N/A | N/A | N/A | Supported | N/A |
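As a rough guide to how the precision columns above are exercised in practice, the following sketch opts a weakly typed network into reduced-precision kernels via builder flags. The network construction is elided, and the comments on INT8/FP8/FP4 reflect general TensorRT behavior (calibration or explicit quantization is also needed), not statements from this matrix.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
# ... populate the network, e.g. with the ONNX parser ...
config = builder.create_builder_config()

# TF32 is enabled by default on GPUs that support it; clear it with
# config.clear_flag(trt.BuilderFlag.TF32) if strict FP32 math is required.
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)   # allow FP16 kernels
# BF16 has an analogous builder flag (trt.BuilderFlag.BF16) on Ampere and newer,
# while INT8, FP8, and FP4 generally also require calibration or explicit
# quantize/dequantize (Q/DQ) nodes in the network.
```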
## Supported Compute Capabilities
The following table shows which GPU compute capabilities are supported on each platform.
## Compiler and Python Requirements
The following table shows the required compiler and Python versions for building and using TensorRT on each supported platform.
| Platform | Compiler Version | Python Version |
|---|---|---|
| Ubuntu 22.04 x86-64 | | |
| Ubuntu 24.04 x86-64 | | |
| Rocky Linux 8.9 x86-64 | | |
| Rocky Linux 9.3 x86-64 | | |
| SLES 15 x86-64 | N/A | |
| | N/A | |
| Ubuntu 24.04 SBSA | | |
| NVIDIA JetPack AArch64 | | |
**Note: Python Version Support**

- **Debian/RPM packages:** support the Python version listed in the table for each platform.
- **Wheel packages (tar/zip):** support Python 3.8, 3.9, 3.10, 3.11, 3.12, and 3.13 across all platforms.
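Assuming the TensorRT Python wheel is installed, a quick sketch to confirm which interpreter and TensorRT version are actually in use:

```python
import sys
import tensorrt as trt

# Print the interpreter version and the installed TensorRT wheel version.
print("Python:", sys.version.split()[0])
print("TensorRT:", trt.__version__)
```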
## ONNX Operator Support

TensorRT supports a broad set of ONNX operators through its ONNX parser. The complete, up-to-date list of supported operators is maintained in the ONNX-TensorRT Operator Support documentation.
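As a brief illustration of how the parser surfaces unsupported or malformed operators, here is a minimal sketch using the TensorRT Python API; `model.onnx` is a placeholder path, not a file referenced by this document.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Each error identifies the failing node or operator.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX model could not be fully parsed")
```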
## Footnotes