Release Notes
These Release Notes describe the key features, software enhancements and improvements, and known issues for the TensorRT product package.
To review the TensorRT documentation for earlier versions, refer to the TensorRT Archived Documentation.
To review TensorRT documentation for versions 10.8.0 and later, choose a version from the version selector in the bottom-left navigation.
TensorRT 10.9.0
These are the TensorRT 10.9.0 Release Notes, which apply to x86 Linux and Windows users, and Arm-based CPU cores for Server Base System Architecture (SBSA) users on Linux. This release includes several fixes from the previous TensorRT releases and additional changes.
Key Features and Enhancements
This TensorRT release includes the following key features and enhancements.
This release adds support for Python 3.13. At this time, only the Python bindings are supported. Some of the dependencies for the TensorRT Python samples do not have releases compatible with Python 3.13, or need updating to compatible versions that may require API changes; examples include samples that require onnx or numpy 1.x.

Added a new hardware compatibility level, kSAME_COMPUTE_CAPABILITY, to allow the engine to be compatible with GPUs that have the same compute capability as the one on which it was built. For more information, refer to the Same Compute Capability Compatibility Level section. A usage sketch follows this list.

The quickly deployable plugin (QDP) feature has been extended to define ahead-of-time compilable Python plugins (AOT QDPs). By providing a compiled kernel representation, you can embed the Python plugin into the TensorRT engine such that there are no Python or plugin library dependencies at runtime.

A new sample, dds_faster_rcnn, was added to demonstrate how to deal with data-dependent output shapes with native TensorRT. For more information, refer to the DDS Faster R-CNN Object Detection in TensorRT sample.

Enhanced the TensorRT EngineInspector output to provide additional visibility:
Added jump target information for cjmp and jmp instructions.
Added event ID details for signal and wait instructions.
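For illustration, a minimal Python sketch of selecting the new compatibility level at build time. The model path is an assumption, and the sketch assumes the Python enum mirrors the C++ kSAME_COMPUTE_CAPABILITY name:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # assumed model path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
# Make the engine portable to any GPU with the same compute
# capability as the build GPU (new in this release).
config.hardware_compatibility_level = (
    trt.HardwareCompatibilityLevel.SAME_COMPUTE_CAPABILITY
)
engine_bytes = builder.build_serialized_network(network, config)
```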
Breaking ABI Changes
There was an ABI breakage in INetworkDefinition. Applications linked against previous versions of TensorRT 10.x using INetworkDefinition APIs may not have worked correctly with TensorRT 10.8 unless relinked. This issue has been fixed in this release.
Compatibility
TensorRT 10.9.0 has been tested with the following:
PyTorch >= 2.0 (refer to the requirements.txt file for each sample)
This TensorRT release supports NVIDIA CUDA:
This TensorRT release requires at least NVIDIA driver r450 on Linux or r452 on Windows, as required by CUDA 11.0, which is the minimum CUDA version supported by this TensorRT release. For CUDA 12.x, the minimum NVIDIA driver version is r535.
Limitations
There are no optimized FP8 Convolutions for Group Convolutions and Depthwise Convolutions. Therefore, INT8 is still recommended for ConvNets containing these convolution ops.
The FP8 Convolutions only support input/output channel counts that are multiples of 16; otherwise, TensorRT falls back to non-FP8 convolutions.
The FP8 Convolutions do not support kernel sizes larger than 32 (for example, 7x7 convolutions); for such convolutions, FP16 or FP32 fallback kernels are used with suboptimal performance. Therefore, for better performance, do not add FP8 Q/DQ ops before Convolutions with large kernel sizes.
The accumulation dtype for the batched GEMMs in the FP8 MHA must be FP32. This can be achieved by adding Cast (to FP32) ops before the batched GEMM and Cast (to FP16) ops after the batched GEMM.
Alternatively, you can convert your ONNX model using TensorRT Model Optimizer, which adds the Cast ops automatically.
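As an illustration of the manual Cast insertion described above, here is a minimal onnx-graphsurgeon sketch; the model path and the name-based MatMul matching are assumptions, and a real MHA graph will need more careful node selection:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # assumed model path

for node in graph.nodes:
    # Assumed heuristic: pick the batched GEMMs of the MHA by name.
    if node.op == "MatMul" and "mha" in (node.name or ""):
        # Cast both inputs up to FP32 so the GEMM accumulates in FP32.
        for i, inp in enumerate(list(node.inputs)):
            fp32_in = gs.Variable(f"{node.name}/in{i}_fp32", dtype=np.float32)
            graph.nodes.append(gs.Node(op="Cast", inputs=[inp], outputs=[fp32_in],
                                       attrs={"to": onnx.TensorProto.FLOAT}))
            node.inputs[i] = fp32_in
        # Cast the result back down to FP16 for the rest of the graph.
        orig_out = node.outputs[0]
        fp32_out = gs.Variable(f"{node.name}/out_fp32", dtype=np.float32)
        node.outputs[0] = fp32_out
        graph.nodes.append(gs.Node(op="Cast", inputs=[fp32_out], outputs=[orig_out],
                                   attrs={"to": onnx.TensorProto.FLOAT16}))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_fp32_acc.onnx")
```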
There cannot be any pointwise operations between the first batched GEMM and the softmax inside FP8 MHAs, such as having an attention mask. This will be improved in future TensorRT releases.
On QNX, networks that are segmented into a large number of DLA loadables may fail during inference.
The DLA compiler can remove identity transposes but cannot fuse multiple adjacent transpose layers into a single transpose layer (likewise for reshaping). For example, given a TensorRT IShuffleLayer consisting of two non-trivial transposes and an identity reshape in between, the shuffle layer is translated into two consecutive DLA transpose layers unless the user merges the transposes manually in the model definition in advance.

nvinfer1::UnaryOperation::kROUND or nvinfer1::UnaryOperation::kSIGN operations of IUnaryLayer are not supported in the implicit batch mode.

For networks containing normalization layers, particularly if deploying with mixed precision, target the latest ONNX opset containing the corresponding function ops, such as opset 17 for LayerNormalization or opset 18 for GroupNormalization. Numerical accuracy using function ops is superior to the corresponding implementation with primitive ops for normalization layers.
Weight streaming mainly supports GEMM-based networks like Transformers for now. Convolution-based networks may have only a few weights that can be streamed.
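For context, a minimal sketch of opting into weight streaming at build time; it assumes the TensorRT 10.x requirement that the network be strongly typed, and elides network construction:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Weight streaming requires a strongly typed network.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.STRONGLY_TYPED))
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.WEIGHT_STREAMING)
# ... populate the network, then build. Per the limitation above,
# GEMM-heavy (Transformer-style) networks benefit the most.
```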
When two convolutions with INT8-QDQ and residual add share the same weight, constant weight fusion will not occur. Make a copy of the shared weight for better performance.
When building the nonZeroPlugin sample on Windows, you may need to modify the CUDA version specified in the BuildCustomizations paths in the vcxproj file to match the installed version of CUDA.

The scale factor must be a build-time constant if QuantizeLayer is used with the FP4 output data type.
The weights used in INT4 weights-only quantization (WoQ) cannot be refitted.
The high-precision weights used in FP4 double quantization are not refittable.
Python samples do not support Python 3.13. Only the 3.13 Python bindings are currently supported.
When batch stride >= 2^31, the convolution and GEMM layers may fail to find a tactic.
Deprecated API Lifetime
APIs deprecated in TensorRT 10.9 will be retained until 3/2026.
APIs deprecated in TensorRT 10.8 will be retained until 2/2026.
APIs deprecated in TensorRT 10.7 will be retained until 12/2025.
APIs deprecated in TensorRT 10.6 will be retained until 11/2025.
APIs deprecated in TensorRT 10.5 will be retained until 10/2025.
APIs deprecated in TensorRT 10.4 will be retained until 9/2025.
APIs deprecated in TensorRT 10.3 will be retained until 8/2025.
APIs deprecated in TensorRT 10.2 will be retained until 7/2025.
APIs deprecated in TensorRT 10.1 will be retained until 5/2025.
Refer to the API documentation (C++, Python) for instructions on updating your code to remove the use of deprecated features.
Fixed Issues
Fixed an up to 10% inference performance regression for ViT; GEMM tactic selection was not finding the best kernel.
Fixed an up to 37% inference performance regression for the cortanaasr_s128_bunk_e128 network on Hopper GPUs compared to TensorRT 10.7 in the CUDA 12.8 environment.

The sampleEditableTimingCache sample did not compile on SLES 15 when the default GCC version was 7.5.0. This was due to a missing header, <charconv>, required for complete C++17 support. This header is no longer used, which allows GCC 7.5.0 to compile this sample again.

On the NVIDIA Blackwell platform, TensorRT can now handle tensors with data-dependent shapes that are passed into an IShuffleLayer to be reshaped, an ISliceLayer with dynamic axes, an IGatherLayer, or a layer where the data-dependent dimensions of the tensor would be subject to arithmetic or logical operations.

On the NVIDIA Blackwell platform, the format attribute in the PluginTensorDesc parameters of PluginV3's onShapeChange and enqueue functions is now correct.

Exceptions thrown from PluginV3's enqueue on the NVIDIA Blackwell platform are now handled properly by the exception-handling routine.

Dynamic quantization with a non-innermost block axis is now supported.

The engine build process no longer fails when handling ScatterND operations with empty indices.

The thread sanitizer tool no longer reports data races between CPU threads while TensorRT is building the engine.
Convolution/Deconvolution now supports non-zero spatial output dimensions with corresponding zero input dimensions.
The backend compiler now supports sigmoid fusion for GEMMs, reducing GPU device memory usage for graphs with this pattern.
Known Issues
Functional
When running OSS demoBERT FP16 inference on H20 GPUs, different batch sizes may generate different outputs given the same input values. This can be worked around by using a fixed batch size.
There is a known accuracy issue running certain networks on NVIDIA HGX H20.
Inputs to the IRecurrenceLayer must always have the same shape. This means that ONNX models with loops whose recurrence inputs change shape will be rejected.

CUDA compute sanitizer may report racecheck hazards for some legacy kernels. However, the related kernels do not have functional issues at runtime.
The compute sanitizer initcheck tool may flag false positive Uninitialized __global__ memory read errors when running TensorRT applications on NVIDIA Hopper GPUs. These errors can be safely ignored and will be fixed in an upcoming CUDA release.

Multi-head attention fusion might not happen, which can affect performance if the number of heads is small.
An occurrence of use-after-free in NVRTC has been fixed in CUDA 12.1. When using NVRTC from CUDA 12.0 together with the TensorRT static library, you may encounter a crash in certain scenarios. Linking the NVRTC and PTXJIT compiler from CUDA 12.1 or newer will resolve this issue.
There are known issues reported by the Valgrind memory leak check tool when detecting potential memory leaks from TensorRT applications. The recommendation to suppress the issues is to provide a Valgrind suppression file with the following contents when running the Valgrind memory leak check tool. Add the option --keep-debuginfo=yes to the Valgrind command line to suppress these errors.

```
{
   Memory leak errors with dlopen.
   Memcheck:Leak
   match-leak-kinds: definite
   ...
   fun:*dlopen*
   ...
}
{
   Memory leak errors with nvrtc
   Memcheck:Leak
   match-leak-kinds: definite
   fun:malloc
   obj:*libnvrtc.so*
   ...
}
```
SM 7.5 and earlier devices may not have INT8 implementations for all layers with Q/DQ nodes. In this case, you will encounter a could not find any implementation error while building your engine. To resolve this, remove the Q/DQ nodes that quantize the failing layers; a removal sketch appears after the next items.

Installing the cuda-compat-11-4 package may interfere with CUDA-enhanced compatibility and cause TensorRT to fail even when the driver is r465. The workaround is to remove the cuda-compat-11-4 package or upgrade the driver to r470.

For some networks, using a batch size of 4096 may cause accuracy degradation on DLA.
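As an illustration of the Q/DQ removal suggested for the SM 7.5 issue above, a minimal onnx-graphsurgeon sketch that bypasses a Quantize/Dequantize pair. The model path is an assumption, and assuming each QuantizeLinear is immediately followed by its DequantizeLinear is a simplification:

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # assumed model path

for q in [n for n in graph.nodes if n.op == "QuantizeLinear"]:
    dq = q.o()  # assumes the DQ directly consumes the Q output
    if dq.op != "DequantizeLinear":
        continue
    # Rewire every consumer of the DQ output to read the original
    # (unquantized) tensor instead; cleanup() drops the dangling pair.
    for consumer in list(dq.outputs[0].outputs):
        consumer.inputs = [q.inputs[0] if t is dq.outputs[0] else t
                           for t in consumer.inputs]

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_noqdq.onnx")
```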
For broadcasting elementwise layers running on DLA with GPU fallback enabled, with one NxCxHxW input and one Nx1x1x1 input, there is a known accuracy issue if at least one of the inputs is consumed in kDLA_LINEAR format. It is recommended to explicitly set the input formats of such elementwise layers to different tensor formats.

Exclusive padding with kAVERAGE pooling is not supported.

Asynchronous CUDA calls are not supported in the user-defined processDebugTensor function for the debug tensor feature due to a bug in Windows 10.

The inplace_add mini-sample of the quickly_deployable_plugins Python sample may produce incorrect outputs on Windows. This will be fixed in a future release.

When linking with libcudart_static.a using a RedHat gcc-toolset-11 or earlier compiler, you may encounter an issue where exception handling does not work. When a throw or exception happens, the catch is ignored, and an abort is raised, killing the program. This may be related to a linker bug causing the eh_frame_hdr ELF segment to be empty. You can work around this issue by using a newer linker, such as the one from gcc-toolset-13.

TensorRT may exit if inputs with invalid values are provided to the RoiAlign plugin (ROIAlign_TRT), especially if there is an inconsistency between the indices specified in the batch_indices input and the actual batch size used.

The Valgrind Memcheck tool may report memory leaks when TensorRT builds the engine on pre-Blackwell GPUs, especially if the model contains convolution layers.
On the NVIDIA Blackwell platform, the engine build may fail when a tensor that is used to define the data-dependent dimensions of another tensor with a data-dependent shape is calculated only with layer types that can calculate shapes, such as (but not limited to) IReduceLayer with ReduceOperation::kSUM/kPROD/kAVG, IElementWiseLayer, IShapeLayer, IShuffleLayer, ISliceLayer, and IGatherLayer. Users may encounter an error message stating that the tensor is an input shape tensor without an input shape profile.

In the Validate against Ground Truth section of the efficientnet samples, the link to download Caffe's ILSVRC2012 auxiliary package is unstable. Therefore, the download might fail intermittently.
The ONNX specification of the NonMaxSuppression operation requires the iou_threshold parameter to be in the range [0.0, 1.0]. However, TensorRT does not validate the value of this parameter and will accept values outside of this range, in which case the engine will continue executing as if the value had been capped at the nearer end of the range.
Performance
FP8 MHA performance may be lower than BF16/FP16 MHA on SM89 when the sequence length is long (for example, >100k).
Up to 26% performance regression for a particular version of GPT-2 that has a large concatenation at the end of the network.
CPU peak memory usage regression with the roberta_base engine on Ampere GPUs compared to TensorRT 10.7.

Up to 10% performance regression for Megatron networks in FP32 precision compared to TensorRT 10.8 for BS4.
Up to 100 MB context memory size regression compared to TensorRT 8.6 on Hopper GPUs for CRNN (Convolutional Recurrent Neural Network) models. Inference performance is not affected.
Up to 9% inference performance regression for the StableDiffusion v2.0/2.1 VAE network in FP16 precision on Hopper GPUs compared to TensorRT 10.6 in the CUDA 11.8 environment. This issue can be fixed by upgrading CUDA to 12.6.

Up to 60% performance regression compared to TensorRT 8.6 on Ampere GPUs for group convolutions with N channels per group, where N is not a power of 2. This can be worked around by padding N to the next power of 2.
Up to 22% context memory size regression for HiFi-GAN networks in INT8 precision compared to TensorRT 10.5 on Ampere GPUs.
Up to 7% performance regression for Megatron networks in FP16 precision compared to TensorRT 10.6 for BS1 and Seq128 on H100 GPUs.
Up to 10% performance regression for BERT networks exported from TensorFlow2 in FP16 precision compared to TensorRT 10.4 for BS1 and Seq128 on A16 GPUs.
Up to 16% regression in context memory usage for the StableDiffusion XL VAE network in FP8 precision on H100 GPUs compared to TensorRT 10.3 due to a necessary functional fix.

Up to 15% regression in context memory usage for networks containing InstanceNorm and Activation ops compared to TensorRT 10.0.
Up to 15% CPU memory usage regression for mbart-cnn/mamba-370m in FP16 precision and OOTB mode on NVIDIA Ada Lovelace GPUs compared to TensorRT 10.2.
Up to 6% performance regression for BERT/Megatron networks in FP16 precision compared to TensorRT 10.2 for BS1 and Seq128 on H100 GPUs.
Up to 6% performance regression for Bidirectional LSTM in FP16 precision on H100 GPUs compared to TensorRT 10.2.
Up to 25% performance regression when running TensorRT-LLM without the attention plugin. The current recommendation is always to enable the attention plugin when using TensorRT-LLM.
Performance gaps between engines built with REFIT enabled and engines built with REFIT disabled.
Up to 60 MB engine size fluctuations for the BERT-Large INT8-QDQ model on Orin due to unstable tactic selection.
Up to 16% performance regression for BasicUNet, DynUNet, and HighResNet in INT8 precision compared to TensorRT 9.3.
Up to 40-second increase in engine building for BART networks on NVIDIA Hopper GPUs.
Up to 20-second increase in engine building for some large language models (LLMs) on NVIDIA Ampere GPUs.
Up to 2.5x build time increase compared to TensorRT 9.0 for certain Bert-like models due to additional tactics available for evaluation.
Up to 13% performance drop for the CortanaASR model on NVIDIA Ampere GPUs compared to TensorRT 8.5.
Up to 18% performance drop for the ShuffleNet model on A30/A40 compared to TensorRT 8.5.1.
Convolution on a tensor with an implicitly data-dependent shape may run slower than on other tensors of the same size. Refer to the Glossary for the definition of implicitly data-dependent shapes.
Up to 5% performance drop for networks using sparsity in FP16 precision.
Up to 6% performance regression compared to TensorRT 8.5 on OpenRoadNet in FP16 precision on NVIDIA A10 GPUs.
Up to 70% performance regression compared to TensorRT 8.6 on BERT networks in INT8 precision with FP16 disabled on L4 GPUs. Enable FP16 and disable INT8 in the builder config to work around this.
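A minimal sketch of that builder-config workaround (enable FP16, leave INT8 off); the surrounding build setup is elided:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()
# Work around the L4 INT8 regression above: run BERT in FP16 instead.
config.set_flag(trt.BuilderFlag.FP16)
config.clear_flag(trt.BuilderFlag.INT8)
```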
In explicitly quantized networks, a group convolution with a Q/DQ pair before but no Q/DQ pair after runs with INT8-IN-FP32-OUT mixed precision. However, NVIDIA Hopper may fall back to FP32-IN-FP32-OUT if the input channel count is small.
Engines built with kREFIT or kREFIT_IDENTICAL have performance regressions compared with non-refit engines where convolution layers are present within a branch or loop and the precision is FP16/INT8. This issue will be addressed in future releases.