Support Matrix#

This support matrix summarizes TensorRT 10.x compatibility information across all releases from 10.0.0 EA through 10.16.0. Use the three sections below to find information about system requirements, hardware capabilities, and feature support.

Software and System Requirements#

Use this section when you need to know what to install or which versions are required.

Engine Portability#

  • Platform: Serialized engines are not portable across platforms (Linux, Windows, etc.).
  • Version Compatibility: Engines built with the version-compatible flag can run with newer TensorRT versions within the same major version.
  • Hardware Compatibility: Engines built with hardware compatibility mode can run on multiple GPU architectures, depending on the hardware compatibility level used. Without this mode, engines are not portable across different GPU architectures.
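The version-compatibility rule above can be sketched as a small, self-contained check. This is an illustrative helper, not part of the TensorRT API: an engine runs on an exactly matching runtime, or, when built with the version-compatible flag, on a newer runtime within the same major version.

```python
# Hypothetical helper illustrating the version-compatibility rule:
# an engine built with the version-compatible flag can run on a newer
# TensorRT runtime that shares the same major version.

def parse_version(v: str) -> tuple[int, ...]:
    """Split a dotted version string like '10.3.0' into integers."""
    return tuple(int(part) for part in v.split("."))

def engine_runs_on(built_with: str, runtime: str,
                   version_compatible: bool = False) -> bool:
    """Return True if an engine built with `built_with` can load on `runtime`."""
    b, r = parse_version(built_with), parse_version(runtime)
    if b == r:
        return True   # exact version match always works
    if not version_compatible:
        return False  # without the flag, builder and runtime must match
    # With the flag: same major version, runtime not older than the builder.
    return b[0] == r[0] and r >= b

print(engine_runs_on("10.0.0", "10.5.0", version_compatible=True))  # True
print(engine_runs_on("10.5.0", "11.0.0", version_compatible=True))  # False
```

Note that the flag must be set at build time; it cannot make an existing engine forward-compatible after the fact.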

Find out what you need to install to run TensorRT. First select your TensorRT version, then optionally select a platform to view CUDA, cuDNN, and Python version requirements, compiler versions, driver requirements, and framework compatibility. If no platform is selected, all supported platforms for the chosen TensorRT version are shown.


GPU Architecture and Precision Support#

Use this section when you need to know hardware capabilities and precision support.

Find out which GPU architectures and precision modes are supported. Filter by GPU architecture (compute capability) or precision mode to see which hardware supports which precision types. TensorRT supports NVIDIA hardware with compute capability SM 7.5 or higher.
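The stated hardware floor (compute capability SM 7.5 or higher) can be expressed as a quick check. The helper below is a hypothetical sketch, not TensorRT tooling; it only encodes the minimum-capability comparison:

```python
# Sketch of the stated hardware floor: TensorRT supports NVIDIA GPUs with
# compute capability SM 7.5 or higher. `meets_minimum_sm` is illustrative,
# not part of the TensorRT API.

MIN_COMPUTE_CAPABILITY = (7, 5)

def meets_minimum_sm(capability: str) -> bool:
    """Return True if a compute capability like '8.6' is at least SM 7.5."""
    major, minor = (int(part) for part in capability.split("."))
    return (major, minor) >= MIN_COMPUTE_CAPABILITY

print(meets_minimum_sm("7.5"))  # Turing-class parts: supported
print(meets_minimum_sm("7.0"))  # below the SM 7.5 floor
```

Comparing `(major, minor)` tuples avoids the classic string-comparison bug where "10.0" would sort below "7.5".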


Feature and Component Support#

Use this section when you need to know which TensorRT features are available in which versions.

Find out which TensorRT features and components are available in each release. Filter by TensorRT version and component to see availability across releases and to determine whether a specific feature exists in your target version.


Footnotes#

  • [1] Built with CUDA Toolkit 12.9. Compatible with CUDA 12.x versions only.
  • [2] Built with CUDA Toolkit 13.2. Compatible with CUDA 13.x versions only.
  • [3] Supported in hardware emulation mode (hardware does not accelerate FP4 linear operations).
  • [4] Requires CUDA Toolkit 12.8 or newer.
  • [5] Requires CUDA Toolkit 12.9 or newer.
  • [6] Python version support: Debian/RPM packages support the Python version listed in the table for each platform. Wheel packages (tar/zip) support Python 3.8, 3.9, 3.10, 3.11, 3.12, and 3.13 across all platforms.
  • [7] For detailed driver compatibility information, refer to the NVIDIA CUDA Release Notes and NVIDIA Driver Downloads.
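The wheel-package rule in the footnote on Python version support (wheels support Python 3.8 through 3.13 on all platforms) can be captured as a trivial lookup. The helper name and structure are illustrative, not part of any TensorRT tooling:

```python
# Sketch of the wheel-package rule: TensorRT wheel (tar/zip) packages
# support CPython 3.8 through 3.13 on all platforms. Hypothetical helper.

WHEEL_SUPPORTED_PYTHONS = {(3, minor) for minor in range(8, 14)}  # 3.8 .. 3.13

def wheel_supports(major: int, minor: int) -> bool:
    """Return True if a TensorRT wheel exists for this Python version."""
    return (major, minor) in WHEEL_SUPPORTED_PYTHONS

print(wheel_supports(3, 12))  # True: within 3.8-3.13
print(wheel_supports(3, 7))   # False: below the supported range
```

Debian/RPM packages are narrower: they ship for only the single Python version listed per platform, so this check applies to wheels alone.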