Support Matrix
The TensorRT Support Matrix provides comprehensive information about platform compatibility, hardware requirements, and feature availability for each TensorRT release. Each support matrix includes:
Platform and Software Support: Supported operating systems, CUDA versions, cuDNN, the Python API, the ONNX parser, and control flow features
Hardware and Precision Support: GPU architectures with their supported precision modes (TF32, FP32, FP16, FP8, FP4, BF16, INT8) and Deep Learning Accelerator (DLA) availability (see the sketch after this list for a programmatic check)
Compute Capability: GPU compute capabilities supported on each platform
Compiler and Python Versions: Required compiler and Python versions for building and using TensorRT on each platform
ONNX Operator Support: Links to the ONNX operator compatibility documentation
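As a complement to these tables, a quick runtime check can confirm which precision modes the local GPU reports. The following is a minimal sketch using the TensorRT Python API; it assumes the tensorrt package and a CUDA-capable GPU are installed, and the exact set of builder properties may differ between TensorRT releases.

```python
# Minimal sketch: probe the local platform's precision support with the
# TensorRT Python API. Assumes the `tensorrt` package and a CUDA-capable
# GPU are available; property availability may vary by TensorRT release.
import tensorrt as trt

print("TensorRT version:", trt.__version__)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Builder properties reporting whether the current GPU has native,
# fast support for the corresponding precision modes.
print("Fast FP16:", builder.platform_has_fast_fp16)
print("Fast INT8:", builder.platform_has_fast_int8)
print("TF32     :", builder.platform_has_tf32)
```

The output for a given GPU should line up with the corresponding row of the hardware and precision table; compute capability itself is typically queried through the CUDA runtime or nvidia-smi rather than TensorRT.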
Documentation Structure
This documentation is organized into the following sections: