TensorRT 10.4.0 Support Matrix

Platform and Software Support

The following table lists TensorRT component support for each platform: the supported CUDA and cuDNN versions, and the availability of the Python API, the ONNX parser, and control flow (loops).

Important

Engine Portability

  • Platform: Serialized engines are not portable across platforms (Linux, Windows, etc.).

  • Version Compatibility: Engines built with the version-compatible flag can run with newer TensorRT versions within the same major version.

  • Hardware Compatibility: Engines built with hardware compatibility mode can run on multiple GPU architectures, depending on the hardware compatibility level used. Without this mode, engines are not portable across different GPU architectures. (Both compatibility flags are shown in the builder sketch after this list.)
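
A minimal sketch of requesting both compatibility modes through the TensorRT Python builder API, assuming an already-populated network definition; the helper name `build_portable_engine` is illustrative rather than part of this document:

```python
import tensorrt as trt

def build_portable_engine(builder: trt.Builder, network: trt.INetworkDefinition):
    """Sketch: serialize an engine that stays usable on newer TensorRT 10.x releases
    and on Ampere-or-newer GPUs. It is still not portable across operating systems."""
    config = builder.create_builder_config()

    # Version compatibility: the plan can be deserialized by later releases within
    # the same major version (for example, an engine built with 10.4 running under
    # a newer 10.x runtime).
    config.set_flag(trt.BuilderFlag.VERSION_COMPATIBLE)

    # Hardware compatibility: AMPERE_PLUS allows the engine to run on SM 8.0 and
    # newer architectures, usually at some performance cost. Without a compatibility
    # level, the engine is tied to the architecture it was built on.
    config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS

    return builder.build_serialized_network(network, config)  # trt.IHostMemory
```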

Driver Requirements: Refer to the NVIDIA CUDA Release Notes for minimum compatible NVIDIA Driver versions.

| Component | Linux x86-64 (10.4.x) | Windows x64 (10.4.x) | Linux ppc64le (8.5.x) | Linux SBSA (10.4.x) | NVIDIA JetPack (10.4.x) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA CUDA | 12.6, 11.8 | 12.6, 11.8 | 11.8 | 12.6, 11.8 | 12.6 |
| NVIDIA cuDNN (Optional) | 8.9.7 | 8.9.7 | 8.6.0 | 8.9.7 | 8.9.6 |
| TensorRT Python API | Supported | Supported | Supported | Supported | Supported |
| ONNX Parser | Supported | Supported | Supported | Supported | Supported |
| Control Flow (Loops) | Supported | Supported | Supported | Supported | Supported |
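
The Python API, ONNX parser, and control-flow rows above can be exercised directly from Python; a minimal sketch, assuming the TensorRT 10.4 wheel is installed:

```python
import tensorrt as trt

# The installed TensorRT version: 10.4.x on the platforms listed above
# (8.5.x on Linux ppc64le).
print("TensorRT:", trt.__version__)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# "ONNX Parser" row: the parser ships with the Python API.
parser = trt.OnnxParser(network, logger)

# "Control Flow (Loops)" row: loops are built through the network definition API.
loop = network.add_loop()
print("ONNX parser:", parser is not None, "| ILoop:", loop is not None)
```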

GPU Architecture and Precision Support

The following table shows the supported precision modes for each NVIDIA GPU architecture. TensorRT supports NVIDIA hardware with CUDA compute capability 7.5 or higher. The table also indicates Deep Learning Accelerator (DLA) availability.

| CUDA Compute Capability | Example Devices | TF32 | FP32 | FP16 | FP8 | BF16 | INT8 | FP16 Tensor Cores | INT8 Tensor Cores | DLA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 9.0 | NVIDIA H100, NVIDIA GH200 480 GB | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 8.9 | NVIDIA L40S | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Supported | N/A |
| 8.7 | NVIDIA DRIVE AGX Orin | Supported | Supported | Supported | N/A | N/A | Supported | Supported | Supported | Supported |
| 8.6 | NVIDIA A10 | Supported | Supported | Supported | N/A | Supported | Supported | Supported | Supported | N/A |
| 8.0 | NVIDIA A100 | Supported | Supported | Supported | N/A | Supported | Supported | Supported | Supported | N/A |
| 7.5 | NVIDIA T4 | N/A | Supported | Supported | N/A | N/A | Supported | Supported | Supported | N/A |
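
As a sketch of how the precision columns above map onto builder flags (whether each flag actually takes effect still depends on the GPU architecture per the table; the `enable_dla` path applies only to the SM 8.7 row):

```python
import tensorrt as trt

def configure_precisions(builder: trt.Builder, enable_dla: bool = False) -> trt.IBuilderConfig:
    """Sketch: opt in to reduced-precision kernels. TF32 is enabled by default on
    architectures that support it; the other modes must be requested explicitly."""
    config = builder.create_builder_config()

    config.set_flag(trt.BuilderFlag.FP16)  # "FP16" column: SM 7.5 and newer
    config.set_flag(trt.BuilderFlag.BF16)  # "BF16" column: SM 8.0, 8.6, 8.9, 9.0
    config.set_flag(trt.BuilderFlag.INT8)  # "INT8" column: all rows in the table
    # trt.BuilderFlag.FP8 also exists ("FP8" column: SM 8.9 and 9.0 only) and is
    # normally used with explicitly quantized (Q/DQ) models rather than alongside INT8.

    if enable_dla:
        # "DLA" column: listed only for SM 8.7 (Orin); unsupported layers fall back to the GPU.
        config.default_device_type = trt.DeviceType.DLA
        config.DLA_core = 0
        config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

    return config
```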

Supported Compute Capabilities

The following table shows which GPU compute capabilities are supported on each platform.

| Platform | Compute Capability |
| --- | --- |
| Linux x86-64 | 7.0, 7.5, 8.0, 8.6, 9.0 |
| Windows 10 x64 | 7.0, 7.5, 8.0, 8.6, 9.0 |
| CentOS 8.5 ppc64le | 7.0, 7.5, 8.0, 8.6, 9.0 |
| Ubuntu 22.04 SBSA | 7.0, 7.5, 8.0, 8.6, 9.0 |
| NVIDIA JetPack AArch64 | 8.7 |
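
To check which row of this table a given system falls into, the device's compute capability can be queried at runtime. A sketch using the cuda-python runtime bindings, which are an assumption here rather than something this matrix requires:

```python
from cuda import cudart

# Compute capability of GPU 0, for comparison against the table above.
err, prop = cudart.cudaGetDeviceProperties(0)
assert err == cudart.cudaError_t.cudaSuccess
print(f"Compute capability: {prop.major}.{prop.minor}")  # e.g. 8.6 for an NVIDIA A10
```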

Compiler and Python Requirements

The following table shows the required compiler and Python versions for building and using TensorRT on each supported platform.

| Platform | Compiler Version | Python Version |
| --- | --- | --- |
| Ubuntu 20.04 x86-64 | gcc 8.3.1 | 3.8 |
| Ubuntu 22.04 x86-64 | gcc 8.3.1 | 3.10 |
| Ubuntu 24.04 x86-64 | gcc 8.3.1 | 3.12 |
| Rocky Linux 8.9 x86-64 | gcc 8.3.1 | 3.8 |
| Rocky Linux 9.3 x86-64 | gcc 8.3.1 | 3.8 |
| SLES 15 x86-64 | gcc 8.3.1 | N/A |
| Windows 10 x64, Windows 11 x64, Windows Server 2019 x64, Windows Server 2022 x64 | MSVC 2019 v16.9.2 | N/A |
| CentOS 8.5 ppc64le | Clang 14.0.6 | 3.8 |
| Ubuntu 24.04 SBSA | gcc 8.4 | 3.12 |
| NVIDIA JetPack AArch64 | gcc 11.4 | 3.10 |

Note

Python Version Support

  • Debian/RPM packages: Support the Python version listed in the table for each platform.

  • Wheel packages (tar/zip): Support Python 3.8, 3.9, 3.10, 3.11, and 3.12 across all platforms (see the interpreter check sketch after this list).
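
A small interpreter check that mirrors the wheel-package note, as an illustrative sketch (the 3.8-3.12 bounds come from the note above; everything else is an assumption):

```python
import sys

# TensorRT 10.4 wheel packages cover CPython 3.8 through 3.12.
if not ((3, 8) <= sys.version_info[:2] <= (3, 12)):
    raise RuntimeError(
        f"Python {sys.version_info.major}.{sys.version_info.minor} is outside the "
        "3.8-3.12 range supported by the TensorRT 10.4 wheel packages."
    )

import tensorrt as trt  # imported only after the interpreter check
print(trt.__version__)
```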

ONNX Operator Support

TensorRT supports a broad set of ONNX operators through its ONNX parser. The complete, up-to-date operator support list is available in the ONNX-TensorRT Operator Support documentation.
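
When a model uses an operator that is not on that list, the parser reports it at import time; a minimal sketch ("model.onnx" is a placeholder path):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

# parse_from_file returns False if any node fails to import, for example because
# its ONNX operator is not in the supported-operator list.
if not parser.parse_from_file("model.onnx"):
    for i in range(parser.num_errors):
        print(parser.get_error(i))
```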
