TensorRT 10.1.0 Support Matrix#
Platform and Software Support#
The following table shows TensorRT component support across different platforms, including supported CUDA versions, cuDNN, Python API, ONNX parser, and control flow features.
Important
Engine Portability
- Platform: Serialized engines are not portable across platforms (Linux, Windows, etc.).
- Version Compatibility: Engines built with the version-compatible flag can run with newer TensorRT versions within the same major version (see the build-configuration sketch after this note).
- Hardware Compatibility: Engines built with hardware compatibility mode can run on multiple GPU architectures, depending on the hardware compatibility level used. Without this mode, engines are not portable across different GPU architectures.
- Driver Requirements: Refer to the NVIDIA CUDA Release Notes for the minimum compatible NVIDIA Driver versions.
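Both compatibility options are set on the builder configuration. The following is a minimal sketch using the TensorRT Python API; the one-layer placeholder network and the `AMPERE_PLUS` compatibility level are illustrative assumptions, not requirements.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# Placeholder single-layer network; a real model would be built or parsed here.
x = network.add_input("x", trt.float32, (1, 3, 224, 224))
network.mark_output(network.add_identity(x).get_output(0))

config = builder.create_builder_config()
# Allow the serialized engine to run with newer TensorRT 10.x runtimes.
config.set_flag(trt.BuilderFlag.VERSION_COMPATIBLE)
# Allow the engine to run on Ampere and newer GPU architectures.
config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS

engine_bytes = builder.build_serialized_network(network, config)
```

The resulting `engine_bytes` blob would then be saved and later deserialized with `trt.Runtime` on the target machine.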
| Component | Linux x86-64 (10.1.x) | Windows x64 (10.1.x) | Linux ppc64le (8.5.x) | Linux SBSA (10.1.x) | NVIDIA JetPack (10.1.x) |
|---|---|---|---|---|---|
| NVIDIA CUDA | | | | | |
| NVIDIA cuDNN (Optional) | | | | | |
| TensorRT Python API | Supported | Supported | Supported | Supported | Supported |
| UFF Parser | N/A | N/A | Supported | N/A | N/A |
| ONNX Parser | Supported | Supported | Supported | Supported | Supported |
| Control Flow (Loops) | Supported | Supported | Supported | Supported | Supported |
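As a concrete illustration of the Python API and ONNX parser rows above, a minimal import path for an ONNX model might look like the following. The `model.onnx` path is a hypothetical placeholder, and error handling is kept to a sketch.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder; ONNX control-flow operators such as Loop and If
# map onto TensorRT's loop and conditional constructs when supported.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
```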
GPU Architecture and Precision Support#
The following table shows supported precision modes for each NVIDIA GPU architecture. TensorRT supports NVIDIA hardware with compute capability SM 7.5 or higher. The table also indicates Deep Learning Accelerator (DLA) availability.
| CUDA Compute Capability | Example Devices | TF32 | FP32 | FP16 | FP8 | BF16 | INT8 | FP16 Tensor Cores | INT8 Tensor Cores | DLA |
|---|---|---|---|---|---|---|---|---|---|---|
| 9.0 | | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 8.9 | NVIDIA L40S | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 8.7 | NVIDIA DRIVE AGX Orin | Supported | Supported | Supported | N/A | N/A | Supported | Supported | Supported | Supported |
| 8.6 | NVIDIA A10 | Supported | Supported | Supported | N/A | Supported | Supported | Supported | Supported | N/A |
| 8.0 | NVIDIA A100 | Supported | Supported | Supported | N/A | Supported | Supported | Supported | Supported | N/A |
| 7.5 | NVIDIA T4 | N/A | Supported | Supported | N/A | N/A | Supported | Supported | Supported | N/A |
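The precision modes in the table above are opt-in at build time. The following is a minimal sketch using the TensorRT Python API; FP8 is omitted because it is typically driven by quantize/dequantize layers in the network rather than a plain builder flag, and the DLA settings (commented out) are an assumption that only applies on Orin-class devices.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Opt into reduced-precision kernels where the platform reports fast support.
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)
if builder.platform_has_fast_int8:
    config.set_flag(trt.BuilderFlag.INT8)  # requires calibration or Q/DQ layers
config.set_flag(trt.BuilderFlag.BF16)      # BF16 kernels; Ampere-class GPUs and newer

# DLA offload applies to Orin-class devices (SM 8.7 in the table above):
# config.default_device_type = trt.DeviceType.DLA
# config.DLA_core = 0
# config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
```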
Supported Compute Capabilities#
The following table shows which GPU compute capabilities are supported on each platform.
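Independently of the per-platform breakdown, a device's compute capability can be checked at runtime before attempting a build. The sketch below assumes pycuda is installed; TensorRT itself does not require it.

```python
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    ok = (major, minor) >= (7, 5)  # TensorRT 10.1 requires SM 7.5 or higher
    print(f"{dev.name()}: SM {major}.{minor} -> {'supported' if ok else 'unsupported'}")
```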
Compiler and Python Requirements#
The following table shows the required compiler and Python versions for building and using TensorRT on each supported platform.
| Platform | Compiler Version | Python Version |
|---|---|---|
| Ubuntu 20.04 x86-64 | | |
| Ubuntu 22.04 x86-64 | | |
| Rocky Linux 8.9 x86-64 | | |
| Rocky Linux 9.3 x86-64 | | |
| SLES 15 x86-64 | N/A | |
| | N/A | |
| CentOS 8.5 ppc64le | | |
| Ubuntu 22.04 SBSA | | |
| NVIDIA JetPack AArch64 | | |
Note
Python Version Support
- Debian/RPM packages: Support the Python version listed in the table for each platform.
- Wheel packages (tar/zip): Support Python 3.8, 3.9, 3.10, 3.11, and 3.12 across all platforms.
ONNX Operator Support#
TensorRT provides extensive ONNX operator support through the ONNX parser. The complete and up-to-date ONNX operator support list for TensorRT is available in the ONNX-TensorRT Operator Support documentation.
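For a quick local check, the ONNX parser can also be queried about individual operators by name; the operator names below are purely illustrative, and the ONNX-TensorRT documentation remains the authoritative list.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

# Illustrative operator names; consult the ONNX-TensorRT support matrix for details.
for op in ("Conv", "Loop", "If", "ScatterND"):
    print(f"{op}: {'supported' if parser.supports_operator(op) else 'not supported'}")
```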