Abstract

This support matrix is for TensorRT 5.0.2. These matrices list the platforms, features, and software versions supported by the TensorRT APIs, parsers, and layers.

For previously released TensorRT documentation, see TensorRT Archives.

1. Features For Platforms And Software

Table 1. List of supported features per platform.
| | Linux x86-64 | Linux AArch64 | QNX AArch64 | Windows x64 |
| --- | --- | --- | --- | --- |
| Supported CUDA versions | 9.0, 10.0 | 10.0 | 10.0 | 10.0 |
| Supported cuDNN versions | 7.3.1 | 7.3.1 | 7.3.1 | 7.3.1 |
| TensorRT Python API | Yes | No | No | No |
| NvUffParser | Yes | Yes | Yes | Yes |
| NvOnnxParser | Yes | Yes | Yes | No |
Note: Serialized engines are not portable across platforms or TensorRT versions.
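
As an illustration of the parser rows above, the following is a minimal sketch of importing an ONNX model and building an engine with the TensorRT 5.0 C++ API. The model path, batch size, and workspace size are placeholder values, and error handling is kept to a minimum.

```cpp
// Minimal sketch: importing an ONNX model with NvOnnxParser and building an
// engine with the TensorRT 5.0 C++ API. "model.onnx" is a placeholder path.
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// TensorRT requires an ILogger implementation.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // Populate the network definition from the ONNX file.
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30); // 1 GiB of scratch space

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    // ... create an execution context and run inference here ...

    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

NvUffParser follows the same pattern: create the parser with nvuffparser::createUffParser(), register the input and output tensor names with registerInput()/registerOutput(), and call parse() with the .uff file and the network definition.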

2. Layers And Features

Table 2. List of supported features per TensorRT layer.
| Layer | Dimensions of input tensor | Dimensions of output tensor | Does the operation apply to only the innermost 3 dimensions? | Supports broadcast¹ | Supports broadcast across batch² |
| --- | --- | --- | --- | --- | --- |
| Activation | 0-7 dimensions | 0-7 dimensions | No | No | No |
| Concatenation | 1-7 dimensions | 1-7 dimensions | No | No | No |
| Constant | 0-7 dimensions | 0-7 dimensions | No | No | Always |
| Convolution | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| Deconvolution | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| ElementWise | 0-7 dimensions | 0-7 dimensions | No | Yes | Yes |
| FullyConnected | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| Gather | Input1: 1-7 dimensions; Input2: 0-7 dimensions | 0-7 dimensions | No | No | Yes |
| Identity | 0-7 dimensions | 0-7 dimensions | No | No | No |
| IPluginV2 | User defined | User defined | User defined | User defined | User defined |
| LRN | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| MatrixMultiply | 2 or more dimensions | 2 or more dimensions | No | Yes | Yes |
| Padding | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| Plugin | User defined | User defined | User defined | User defined | User defined |
| Pooling | 3 or more dimensions | 3 or more dimensions | Yes | Yes | Yes |
| RaggedSoftMax | Input: 2 dimensions; Bounds: 2 dimensions | 2 or more dimensions | No | No | Yes |
| Reduce | 1-7 dimensions | 0-7 dimensions | No | No | No |
| RNN | 3 dimensions | 3 dimensions | No | No | No |
| RNNv2 | Data/Hidden/Cell: 2 or more dimensions; Seqlen: 0 or more dimensions | Data/Hidden/Cell: 2 or more dimensions | No | No | No |
| Scale | 3 or more dimensions | 3 or more dimensions | Yes | No | No |
| Shuffle | 0-7 dimensions | 0-7 dimensions | No | No | No |
| SoftMax | 1-7 dimensions | 1-7 dimensions | No | No | No |
| TopK | 1-7 dimensions | Output1: 1-7 dimensions; Output2: 1-7 dimensions | Yes | No | Yes |
| Unary | 0-7 dimensions | 0-7 dimensions | No | No | No |
For more information about each of the TensorRT layers, see TensorRT Layers.
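
As a sketch of how a few of the layers in Table 2 are composed programmatically, the fragment below adds Convolution, Activation, Pooling, and ElementWise layers to a network with the TensorRT 5.0 C++ network definition API. The tensor name, dimensions, and zero-filled weights are arbitrary placeholders.

```cpp
// Minimal sketch: adding a few of the Table 2 layers to a network with the
// TensorRT 5.0 C++ network definition API. Dimensions and weights are
// placeholders chosen only so the calls are well formed.
#include <vector>
#include "NvInfer.h"
using namespace nvinfer1;

void buildToyNetwork(INetworkDefinition& network)
{
    // Input tensor in CHW layout; the batch dimension is implicit in TensorRT 5.
    ITensor* data = network.addInput("data", DataType::kFLOAT, DimsCHW{3, 224, 224});

    // Zero-filled convolution weights (16 output maps, 3x3 kernel, 3 input channels).
    static std::vector<float> kernel(16 * 3 * 3 * 3, 0.0f);
    static std::vector<float> bias(16, 0.0f);
    Weights kernelWeights{DataType::kFLOAT, kernel.data(), static_cast<int64_t>(kernel.size())};
    Weights biasWeights{DataType::kFLOAT, bias.data(), static_cast<int64_t>(bias.size())};

    // Convolution -> Activation -> Pooling, three of the rows in Table 2.
    IConvolutionLayer* conv = network.addConvolution(*data, 16, DimsHW{3, 3},
                                                     kernelWeights, biasWeights);
    IActivationLayer* relu = network.addActivation(*conv->getOutput(0),
                                                   ActivationType::kRELU);
    IPoolingLayer* pool = network.addPooling(*relu->getOutput(0),
                                             PoolingType::kMAX, DimsHW{2, 2});

    // ElementWise supports broadcast (footnote 1); here the pooled tensor is
    // simply added to itself as a stand-in for a second, broadcastable input.
    IElementWiseLayer* sum = network.addElementWise(*pool->getOutput(0),
                                                    *pool->getOutput(0),
                                                    ElementWiseOperation::kSUM);

    network.markOutput(*sum->getOutput(0));
}
```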

3. Layers And Precision

The following table lists the TensorRT layers and the precision modes that each layer supports. It also lists the ability of each layer to run on the Deep Learning Accelerator (DLA). For more information about additional constraints, see DLA Supported Layers. A short builder-level sketch of selecting these precisions follows the table.

For more information about each of the TensorRT layers, see TensorRT Layers. To view a list of the specific attributes that are supported by each layer, refer to the TensorRT API documentation.

Table 3. List of supported precision modes per TensorRT layer.
| Layer | FP32 | FP16 | INT32 | DLA³ |
| --- | --- | --- | --- | --- |
| Activation | Yes | Yes | No | Yes |
| Concatenation | Yes | Yes | Yes | Yes |
| Constant | Yes | Yes | Yes | No |
| Convolution | Yes | Yes | No | Yes |
| Deconvolution | Yes | Yes | No | Yes |
| ElementWise | Yes | Yes | No | Yes |
| FullyConnected | Yes | Yes | No | Yes |
| Gather | Yes | Yes | Yes | No |
| Identity | Yes | Yes | Yes | No |
| IPluginV2 | Yes | Yes | No | No |
| LRN | Yes | Yes | No | Yes |
| MatrixMultiply | Yes | Yes | No | No |
| Padding | Yes | Yes | No | No |
| Plugin | Yes | Yes | No | No |
| Pooling | Yes | Yes | No | Yes |
| RaggedSoftMax | Yes | No | No | No |
| Reduce | Yes | Yes | No | No |
| RNN | Yes | Yes | No | No |
| RNNv2 | Yes | Yes | No | No |
| Scale | Yes | Yes | No | Yes |
| Shuffle | Yes | Yes | Yes | No |
| SoftMax | Yes | Yes | No | No |
| TopK | Yes | Yes | No | No |
| Unary | Yes | Yes | No | No |
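
The precision and DLA columns above correspond to builder settings. Below is a minimal sketch of how they are typically selected with the TensorRT 5.x C++ builder, assuming a builder and populated network created as in the earlier sketches; the DLA-related method names (getNbDLACores, setDefaultDeviceType, setDLACore, allowGPUFallback) should be checked against the 5.0.2 API reference, as minor versions have shuffled this part of the interface.

```cpp
// Minimal sketch: enabling FP16 and (where available) DLA on the builder.
// Whether a given layer actually runs in reduced precision or on the DLA is
// governed by Table 3 and by the DLA Supported Layers documentation.
#include "NvInfer.h"
using namespace nvinfer1;

ICudaEngine* buildReducedPrecisionEngine(IBuilder* builder, INetworkDefinition* network)
{
    // Enable FP16 kernels where the platform has fast FP16 support.
    if (builder->platformHasFastFp16())
        builder->setFp16Mode(true);

    // Individual layers can also be pinned to a precision, for example:
    // network->getLayer(0)->setPrecision(DataType::kHALF);

    // Offload eligible layers to the DLA (FP16 only, see footnote 3) and let
    // unsupported layers fall back to the GPU.
    if (builder->getNbDLACores() > 0)
    {
        builder->setDefaultDeviceType(DeviceType::kDLA);
        builder->setDLACore(0);
        builder->allowGPUFallback(true);
    }

    builder->setMaxWorkspaceSize(1 << 30);
    return builder->buildCudaEngine(*network);
}
```

Enabling INT8 additionally requires calibration (or explicitly set dynamic ranges), which is omitted here.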

4. Hardware And Precision

The following table lists NVIDIA hardware and the precision modes that each supports. It also lists the availability of the Deep Learning Accelerator (DLA) on this hardware. A short sketch of querying these capabilities at run time follows the table.
Table 4. List of supported precision modes per hardware platform.
| SM Version | Example Device | FP32 | FP16 | INT8 | FP16 Tensor Cores | INT8 Tensor Cores | DLA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 7.5 | Tesla T4 | Yes | Yes | Yes | Yes | Yes | No |
| 7.2 | Jetson AGX Xavier | Yes | Yes | Yes | Yes | Yes | Yes |
| 7.0 | Tesla V100 | Yes | Yes | Yes | Yes | No | No |
| 6.1 | Tesla P4 | Yes | No | Yes | No | No | No |
| 6.0 | Tesla P100 | Yes | Yes | No | No | No | No |
| 5.2 | Tesla M4 | Yes | No | No | No | No | No |
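
The FP16 and INT8 columns can be cross-checked at run time: the CUDA runtime reports the SM version, and the TensorRT builder reports whether fast FP16 and INT8 kernels are available on the current device. A minimal sketch, assuming device 0 and a builder created as in the earlier examples (as before, the DLA-core query should be verified against the 5.0.2 API reference):

```cpp
// Minimal sketch: querying the precision capabilities listed in Table 4
// for the active GPU. Error handling is omitted for brevity.
#include <cstdio>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

void reportPrecisionSupport(nvinfer1::IBuilder* builder)
{
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0); // device 0 assumed
    std::printf("SM version: %d.%d (%s)\n", prop.major, prop.minor, prop.name);

    // These correspond to the FP16 and INT8 columns of Table 4.
    std::printf("Fast FP16: %s\n", builder->platformHasFastFp16() ? "Yes" : "No");
    std::printf("Fast INT8: %s\n", builder->platformHasFastInt8() ? "Yes" : "No");

    // Non-zero only on platforms with a DLA, such as Jetson AGX Xavier.
    std::printf("DLA cores: %d\n", builder->getNbDLACores());
}
```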

5. Software Versions Per Platform

Table 5. List of supported platforms per software version.
| | Ubuntu 14.04 | Ubuntu 16.04 | Ubuntu 18.04 | CentOS 7.5 | Linux AArch64 | QNX | Windows 10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Compiler version | gcc 4.8.4 | gcc 5.4.0 | gcc 7.3.0 | gcc 4.8.5 | gcc 5.3.1 | gcc 5.4.0 | MSVC 2017u5 |
| Python versions | 2.7, 3.4 | 2.7, 3.5 | 2.7, 3.6 | 2.7 | | | |
Note: Serialized engines are not portable across platforms or TensorRT versions.
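
Because a serialized engine (plan) is only valid with the TensorRT version and platform that produced it, the usual workflow is to build and serialize on the deployment platform itself and reload the plan at start-up. A minimal sketch with placeholder file paths:

```cpp
// Minimal sketch: writing an engine to a plan file and loading it again with
// the TensorRT 5.0 C++ API. The plan must be deserialized by the same
// TensorRT version on the same platform that serialized it.
#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

void savePlan(nvinfer1::ICudaEngine* engine, const char* path)
{
    nvinfer1::IHostMemory* plan = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    plan->destroy();
}

nvinfer1::ICudaEngine* loadPlan(nvinfer1::IRuntime* runtime, const char* path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    // The last argument is the plugin factory; nullptr suffices for a network
    // without plugins.
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```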

Notices

Notice

THE INFORMATION IN THIS GUIDE AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS GUIDE IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the product described in this guide shall be limited in accordance with the NVIDIA terms and conditions of sale for the product.

THE NVIDIA PRODUCT DESCRIBED IN THIS GUIDE IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.

NVIDIA makes no representation or warranty that the product described in this guide will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this guide. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this guide, or (ii) customer product designs.

Other than the right for customer to use the information in this guide with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this guide. Reproduction of information in this guide is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, cuDNN, cuFFT, cuSPARSE, DALI, DIGITS, DGX, DGX-1, Jetson, Kepler, NVIDIA Maxwell, NCCL, NVLink, Pascal, Tegra, TensorRT, and Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

¹ Indicates support for broadcast in this layer. This layer allows its two input tensors to be of dimensions [1, 5, 4, 3] and [1, 5, 1, 1], and its output to be [1, 5, 4, 3]. Note: The second input tensor has been broadcast in the innermost 2 dimensions.
² Indicates support for broadcast across the batch dimension.
³ DLA with FP16 precision.