Release Notes

These Release Notes describe the key features, software enhancements and improvements, and known issues for the TensorRT-RTX product package.

TensorRT-RTX 1.3

These are the NVIDIA TensorRT-RTX 1.3 Release Notes.

Key Features and Enhancements

This TensorRT-RTX release includes the following key features and enhancements when compared to NVIDIA TensorRT-RTX 1.2.

  • Enabled thread-safe execution across multiple GPUs with different compute capabilities, with up to one network per thread.

  • Performance has been improved for LLMs and convolution-based models.

  • Supports CUDA contexts created in NVIDIA CUDA in Graphics (CiG) mode on NVIDIA Blackwell devices.

  • Performance has been improved for many FP8 models on Blackwell.

  • Performance has been improved for many 2D convolutions.

Compatibility

This TensorRT-RTX release supports NVIDIA CUDA 12.9 and CUDA 13.1.

TensorRT-RTX supports both Windows and Linux platforms. The Linux build is expected to work on x86-64 architecture with Rocky Linux 8.9, Rocky Linux 9.3, Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04, and SLES 15. However, only platforms listed in the Support Matrix are officially supported in this release.

Limitations

  • Using a timing cache via ITimingCache and related APIs forces the ahead-of-time (AOT) compilation step to query your system for a GPU; that is, it prevents CPU-only AOT compilation. This has been true since the initial release (version 1.0). The timing cache has no effect on the built engine in TensorRT-RTX. The timing cache APIs were deprecated in 1.2 and will be removed in a future update; they remain in place for now to avoid breaking source or binary compatibility. Applications should stop using the timing cache APIs in preparation for their removal.

  • If the cache file grows too large (larger than 100 MB), serializing it to and deserializing it from disk may add noticeable overhead. If this negatively affects performance, delete the cache file and allow a new one to be created.

  • TensorRT-RTX engines are not forward-compatible with other versions of the TensorRT-RTX runtime. Ensure that any TensorRT-RTX engine you produce is run with the runtime from the same TensorRT-RTX version that was used to build it.

  • While TensorRT-RTX supports Turing (CUDA compute capability 7.5), an engine built with the default compute capability settings supports only Ampere and later GPUs, and therefore excludes Turing. To achieve the best performance, build a separate engine specifically for Turing (see the sketch after this list). Building a single engine that supports both Turing and later GPUs leads to less performant inference on Ampere and later GPUs due to technical limitations of the engine format.
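
For illustration, a Turing-only build might be requested through IBuilderConfig::setComputeCapability(), the same API referenced under Performance below. This is only a sketch: the kSM75 enumerator, the setNbComputeCapabilities() call, and the exact overload shown are assumptions, so verify them against the TensorRT-RTX API reference for your version.

    // Sketch only: build an engine that targets Turing (SM 7.5) exclusively,
    // keeping a separate engine for Ampere-and-later GPUs.
    // Assumptions: the kSM75 enumerator, setNbComputeCapabilities(), and the
    // (capability, index) overload of setComputeCapability(); check the
    // TensorRT-RTX API reference for the exact names and signatures.
    #include "NvInfer.h"

    void targetTuringOnly(nvinfer1::IBuilderConfig& config)
    {
        config.setNbComputeCapabilities(1);                                  // one target SM version
        config.setComputeCapability(nvinfer1::ComputeCapability::kSM75, 0);  // Turing
    }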

Deprecated API Lifetime

  • APIs deprecated in TensorRT-RTX 1.3 will be retained until 12/2026.

  • APIs deprecated in TensorRT-RTX 1.2 will be retained until 10/2026.

  • APIs deprecated in TensorRT-RTX 1.1 will be retained until 8/2026.

  • APIs deprecated in TensorRT-RTX 1.0 will be retained until 6/2026.

Fixed Issues

  • TensorRT-RTX can now run with CUDA contexts created in CUDA in Graphics (CiG) mode on Blackwell devices.

  • Concurrent execution with multiple threads on multiple GPUs with different compute capabilities is now supported.

  • Processing throughput for LLMs with short prompt lengths has been improved.

  • INT8 weight-only quantization performance for IMatrixMultiplyLayer layers in LLMs has been improved.

  • More kernel fusion patterns are now supported, leading to better performance on convolution-based models.

  • A subtle data race in CUDA graph capture for dynamic shapes has been fixed.

Known Issues

Functional

  • The TensorRT-RTX Flux.1 demo has an intermittent issue in the standalone Python script. When the --dynamic-shape and --enable-runtime-cache options are used together, errors may occur during runtime cache serialization. This issue is not present in the interactive Jupyter notebook version of the demo.

  • NonMaxSuppression, NonZero, and Multinomial layers are not supported.

  • Only the WDDM driver mode is supported on Windows. The TCC driver mode (refer to Tesla Compute Cluster (TCC)) is unsupported and may fail with the following error.

    [E] Error[1]: [defaultAllocator.cpp::nvinfer1::internal::DefaultAllocator::allocateAsync::48] Error Code 1: Cuda Runtime (operation not supported)
    

    For instructions on changing the driver mode, refer to the Nsight Visual Studio Edition documentation.

  • When using TensorRT-RTX with the PyCUDA library in Python, use import pycuda.autoprimaryctx instead of import pycuda.autoinit in order to avoid device conflicts.

  • Depthwise convolutions/deconvolutions for BF16 precision are not supported.

  • Convolutions and deconvolutions that use both non-unit strides and non-unit dilations are not supported for all precisions. Convolutions and deconvolutions with only non-unit strides, or with only non-unit dilations, are supported.

  • On Windows, the following symbols in tensorrt_rtx_1_3.dll should not be used and will be removed in the future:

    • ?disableInternalBuildFlags@nvinfer1@@YAXAEAVINetworkDefinition@1@_K@Z

    • ?enableInternalBuildFlags@nvinfer1@@YAXAEAVINetworkDefinition@1@_K@Z

    • ?getInternalBuildFlags@nvinfer1@@YA_KAEBVINetworkDefinition@1@@Z

    • ?setDebugOutput@nvinfer1@@YAXAEAVIExecutionContext@1@PEAVIDebugOutput@1@@Z

    • ?setInternalBuildFlags@nvinfer1@@YAXAEAVINetworkDefinition@1@_K@Z

    • nvinfer1DisableInternalBuildFlags

    • nvinfer1EnableInternalBuildFlags

    • nvinfer1GetInternalBuildFlags

    • nvinfer1SetInternalBuildFlags

Performance

  • Use of the CPU-only Ahead-of-Time (AOT) feature can lead to reduced performance for some models, particularly those with multi-head attention (MHA), because CPU-only AOT uses conservative shared memory limits. Affected applications will achieve the best performance if they instead perform AOT compilation on-device, targeted to the specific end-user machine. This can be done with the --useGPU flag for the tensorrt_rtx binary, or, if using the APIs, by setting the compute capabilities to contain only kCURRENT using IBuilderConfig::setComputeCapability() (see the sketch after this list). Measure performance with both approaches to determine which is best for your application. We plan to resolve this performance discrepancy in a future release.

  • We have prioritized optimizing performance for 16-bit floating-point types, and such models will frequently achieve throughput with TensorRT-RTX that is very close to that achieved with TensorRT. Models that heavily use 32-bit floating-point types will still see improvements, but performance will generally not be as strong as with TensorRT. Expect performance across many models and data types to improve in future versions of TensorRT-RTX.

  • Background kernel compilations, triggered as part of the dynamic shapes specialization strategy, are opportunistic and are not guaranteed to complete before network execution finishes. If your dynamic-shapes workload uses a fixed set of shapes, consider using the eager specialization strategy along with the runtime cache to load and store kernels quickly for best performance.

  • When running in CUDA in Graphics (CiG) mode, some models show significantly reduced performance compared to non-CiG mode because the kernels that are compatible with CiG's shared memory limitations are suboptimal.

  • Convolutions and deconvolutions with large filter sizes may have degraded performance. We plan to improve performance on such cases in a future release.
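
As a sketch of the on-device AOT approach described in the first item above, the compute capability list can be restricted to kCURRENT so that compilation targets only the GPU present on the end-user machine. Only IBuilderConfig::setComputeCapability(), the kCURRENT value, and the --useGPU flag are named in these notes; setNbComputeCapabilities() and the exact overload shown are assumptions to be checked against the TensorRT-RTX API reference.

    // Sketch only: perform AOT compilation on-device for the GPU that is
    // present at build time (kCURRENT) instead of CPU-only AOT.
    // The command-line equivalent is passing --useGPU to the tensorrt_rtx binary.
    // Assumptions: setNbComputeCapabilities() and the (capability, index)
    // overload of setComputeCapability(); verify against the API reference.
    #include "NvInfer.h"

    void targetCurrentGpuOnly(nvinfer1::IBuilderConfig& config)
    {
        config.setNbComputeCapabilities(1);                                     // one target: the local GPU
        config.setComputeCapability(nvinfer1::ComputeCapability::kCURRENT, 0);  // GPU detected at build time
    }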