Triton Inference Server Release 25.03

The Triton Inference Server container image, release 25.03, is available on NGC and is open source on GitHub.

Contents of the Triton Inference Server container

The Triton Inference Server Docker image contains the inference server executable and related shared libraries in /opt/tritonserver.

For a complete list of what the container includes, refer to Deep Learning Frameworks Support Matrix.

Driver Requirements

Release 25.03 is based on CUDA 12.8.1.012, which requires NVIDIA Driver release 570 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).

The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, R450, R460, R510, R520, R530, R545, and R555 drivers, which are not forward-compatible with CUDA 12.8. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
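
To verify that the installed driver satisfies this requirement, the driver version can be queried directly; a minimal sketch using standard nvidia-smi query options:

```shell
# Print the installed NVIDIA driver version and compare it against the
# R570+ requirement (or one of the supported data center driver branches).
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```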

GPU Requirements

Release 25.03 supports CUDA compute capability 7.5 and later. This corresponds to GPUs in the NVIDIA Turing™, NVIDIA Ampere, NVIDIA Hopper™, NVIDIA Ada Lovelace, and NVIDIA Blackwell architecture families. For a list of GPUs with these compute capabilities, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
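
A GPU's compute capability can be checked the same way; a minimal sketch (the compute_cap query field is available in recent nvidia-smi releases):

```shell
# List each GPU with its CUDA compute capability; values of 7.5 or
# higher satisfy the requirement for this release.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```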

Key Features and Enhancements

This Inference Server release includes the following key features and enhancements.

  • The TensorFlow backend is deprecated starting in 25.03. The last release of Triton Inference Server to include the TensorFlow backend is 25.02. Users wishing to continue using the TensorFlow backend in 25.03 and later can build it from source.
  • The “XX.YY-tf2-python-py3” container is no longer available starting in 25.03. See the TensorFlow backend deprecation note above.
  • Added generate and generate_stream inference types to the SageMaker server. Customers can choose among the inference types (infer, the default; generate; or generate_stream) by setting the SAGEMAKER_TRITON_INFERENCE_TYPE environment variable at server launch (see the sketch after this list).
  • To allow quick, on-demand metric retrieval for external load balancers such as the Kubernetes Inference Gateway API, Triton, when used with TRT-LLM, can include live KV-cache utilization and capacity metrics in the HTTP response headers of inference responses (see the example after this list).
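
For illustration, a minimal sketch of selecting the inference type at launch, assuming the container's SageMaker serve entrypoint; only the SAGEMAKER_TRITON_INFERENCE_TYPE variable comes from this release note, while the image tag, port, and model path are illustrative:

```shell
# Launch the SageMaker-compatible entrypoint with the streaming generate
# inference type selected. Paths, port, and image tag are illustrative.
docker run --gpus all --rm -p 8080:8080 \
  -v /path/to/model_repository:/opt/ml/model \
  -e SAGEMAKER_TRITON_INFERENCE_TYPE=generate_stream \
  nvcr.io/nvidia/tritonserver:25.03-py3 serve
```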
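
The KV-cache metrics can be observed by inspecting response headers on an inference request; a hedged sketch, where the model name and request payload are illustrative and the exact header names depend on the TRT-LLM integration:

```shell
# Send a generate request and print the response headers; with the feature
# enabled, KV-cache utilization and capacity metrics appear among them.
curl -si -X POST localhost:8000/v2/models/my_trtllm_model/generate \
  -d '{"text_input": "Hello", "max_tokens": 16}' | head -n 20
```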

NVIDIA Triton Inference Server Container Versions

The following table shows what versions of Ubuntu, CUDA, Triton Inference Server, and NVIDIA TensorRT™ are supported in each of the NVIDIA containers for Triton Inference Server. For older container versions, refer to the Frameworks Support Matrix.

| Container Version | Triton Inference Server | Ubuntu | CUDA Toolkit | TensorRT |
|---|---|---|---|---|
| 25.03 | 2.56.0 | 24.04 | NVIDIA CUDA 12.8.1.012 | TensorRT 10.9.0.34 |
| 25.02 | 2.55.0 | 24.04 | NVIDIA CUDA 12.8.0.38 | TensorRT 10.8.0.43 |
| 25.01 | 2.54.0 | 24.04 | NVIDIA CUDA 12.8.0 | TensorRT 10.8.0.43 |
| 24.12 | 2.53.0 | 24.04 | NVIDIA CUDA 12.6.3 | TensorRT 10.6.0.26 |
| 24.11 | 2.52.0 | 24.04 | NVIDIA CUDA 12.6.3 | TensorRT 10.6.0.26 |
| 24.10 | 2.51.0 | 22.04 | NVIDIA CUDA 12.6.2 | TensorRT 10.5.0.18 |
| 24.09 | 2.50.0 | 22.04 | NVIDIA CUDA 12.6.1 | TensorRT 10.4.0.26 |
| 24.08 | 2.49.0 | 22.04 | NVIDIA CUDA 12.6 | TensorRT 10.3.0.26 |
| 24.07 | 2.48.0 | 22.04 | NVIDIA CUDA 12.5.1 | TensorRT 10.2.0.19 |
| 24.06 | 2.47.0 | 22.04 | NVIDIA CUDA 12.5.0.23 | TensorRT 10.1.0.27 |
| 24.05 | 2.46.0 | 22.04 | NVIDIA CUDA 12.4.1 | TensorRT 10.0.1.6 |
| 24.04 | 2.45.0 | 22.04 | NVIDIA CUDA 12.4.1 | TensorRT 8.6.3 |
| 24.03 | 2.44 | 22.04 | NVIDIA CUDA 12.4.0.41 | TensorRT 8.6.3 |
| 24.02 | 2.43 | 22.04 | NVIDIA CUDA 12.3.2 | TensorRT 8.6.3 |
| 24.01 | 2.42 | 22.04 | NVIDIA CUDA 12.3.2 | TensorRT 8.6.1.6 |
| 23.12 | 2.41 | 22.04 | NVIDIA CUDA 12.3.2 | TensorRT 8.6.1.6 |
| 23.11 | 2.40 | 22.04 | NVIDIA CUDA 12.3.0 | TensorRT 8.6.1.6 |
| 23.10 | 2.39.0 | 22.04 | NVIDIA CUDA 12.2.2 | TensorRT 8.6.1.6 |
| 23.09 | 2.38.0 | 22.04 | NVIDIA CUDA 12.2.1 | TensorRT 8.6.1.6 |
| 23.08 | 2.37.0 | 22.04 | NVIDIA CUDA 12.2.1 | TensorRT 8.6.1.6 |
| 23.07 | 2.36.0 | 22.04 | NVIDIA CUDA 12.1.1 | TensorRT 8.6.1.6 |
| 23.06 | 2.35.0 | 22.04 | NVIDIA CUDA 12.1.1 | TensorRT 8.6.1.6 |
| 23.05 | 2.34.0 | 22.04 | NVIDIA CUDA 12.1.1 | TensorRT 8.6.1.2 |
| 23.04 | 2.33.0 | 20.04 | NVIDIA CUDA 12.1.0 | TensorRT 8.6.1 |
| 23.03 | 2.32.0 | 20.04 | NVIDIA CUDA 12.1.0 | TensorRT 8.5.3 |
| 23.02 | 2.31.0 | 20.04 | NVIDIA CUDA 12.0.1 | TensorRT 8.5.3 |
| 23.01 | 2.30.0 | 20.04 | NVIDIA CUDA 12.0.1 | TensorRT 8.5.2.2 |
| 22.12 | 2.29.0 | 20.04 | NVIDIA CUDA 11.8.0 | TensorRT 8.5.1 |
| 22.11 | 2.28.0 | 20.04 | NVIDIA CUDA 11.8.0 | TensorRT 8.5.1 |
| 22.10 | 2.27.0 | 20.04 | NVIDIA CUDA 11.8.0 | TensorRT 8.5 EA |
| 22.09 | 2.26.0 | 20.04 | NVIDIA CUDA 11.8.0 | TensorRT 8.5 EA |
| 22.08 | 2.25.0 | 20.04 | NVIDIA CUDA 11.7.1 | TensorRT 8.4.2.4 |
| 22.07 | 2.24.0 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | TensorRT 8.4.1 |
| 22.06 | 2.23.0 | 20.04 | NVIDIA CUDA 11.7 Update 1 Preview | TensorRT 8.2.5 |
| 22.05 | 2.22.0 | 20.04 | NVIDIA CUDA 11.7.0 | TensorRT 8.2.5 |
| 22.04 | 2.21.0 | 20.04 | NVIDIA CUDA 11.6.2 | TensorRT 8.2.4.2 for x86 Linux and SBSA; TensorRT 8.4.0 for JetPack/Jetson |
| 22.03 | 2.20.0 | 20.04 | NVIDIA CUDA 11.6.1 | TensorRT 8.2.3 for x86 Linux and SBSA; TensorRT 8.4.0 for JetPack/Jetson |
| 22.02 | 2.19.0 | 20.04 | NVIDIA CUDA 11.6.0 | TensorRT 8.2.3 |
| 22.01 | 2.18.0 | 20.04 | NVIDIA CUDA 11.6.0 | TensorRT 8.2.2 |
| 21.12 | 2.17.0 | 20.04 | NVIDIA CUDA 11.5.0 | TensorRT 8.2.1.8 |
| 21.11 | 2.16.0 | 20.04 | NVIDIA CUDA 11.5.0 | TensorRT 8.2.1.8 for x64 Linux; TensorRT 8.0.2.2 for ARM SBSA Linux |
| 21.10 | 2.15.0 | 20.04 | NVIDIA CUDA 11.4.2 with cuBLAS 11.6.5.2 | TensorRT 8.2.1.8 for x64 Linux; TensorRT 8.0.2.2 for ARM SBSA Linux |
| 21.09 | 2.14.0 | 20.04 | NVIDIA CUDA 11.4.2 | TensorRT 8.0.3 |
| 21.08 | 2.13.0 | 20.04 | NVIDIA CUDA 11.4.1 | TensorRT 8.0.1.6 |
| 21.07 | 2.12.0 | 20.04 | NVIDIA CUDA 11.4.0 | TensorRT 8.0.1.6 |
| 21.06.1 | 2.11.0 | 20.04 | NVIDIA CUDA 11.3.1 | TensorRT 7.2.3.4 |
| 21.06 | 2.11.0 | 20.04 | NVIDIA CUDA 11.3.1 | TensorRT 7.2.3.4 |
| 21.05 | 2.10.0 | 20.04 | NVIDIA CUDA 11.3.0 | TensorRT 7.2.3.4 |
| 21.04 | 2.9.0 | 20.04 | NVIDIA CUDA 11.3.0 | TensorRT 7.2.3.4 |
| 21.03 | 2.8.0 | 20.04 | NVIDIA CUDA 11.2.1 | TensorRT 7.2.2.3 |
| 21.02 | 2.7.0 | 20.04 | NVIDIA CUDA 11.2.0 | TensorRT 7.2.2.3+cuda11.1.0.024 |
| 20.12 | 2.6.0 | 20.04 | NVIDIA CUDA 11.1.1 | TensorRT 7.2.2 |
| 20.11 | 2.5.0 | 18.04 | NVIDIA CUDA 11.1.0 | TensorRT 7.2.1 |
| 20.10 | 2.4.0 | 18.04 | NVIDIA CUDA 11.1.0 | TensorRT 7.2.1 |
| 20.09 | 2.3.0 | 18.04 | NVIDIA CUDA 11.0.3 | TensorRT 7.1.3 |
| 20.08 | 2.2.0 | 18.04 | NVIDIA CUDA 11.0.3 | TensorRT 7.1.3 |
| 20.07 | 1.15.0 and 2.1.0 | 18.04 | NVIDIA CUDA 11.0.194 | TensorRT 7.1.3 |
| 20.06 | 1.14.0 and 2.0.0 | 18.04 | NVIDIA CUDA 11.0.167 | TensorRT 7.1.2 |
| 20.03.1 | 1.13.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 7.0.0 |
| 20.03 | 1.12.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 7.0.0 |
| 20.02 | 1.11.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 7.0.0 |
| 20.01 | 1.10.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 7.0.0 |
| 19.12 | 1.9.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 6.0.1 |
| 19.11 | 1.8.0 | 18.04 | NVIDIA CUDA 10.2.89 | TensorRT 6.0.1 |
| 19.10 | 1.7.0 | 18.04 | NVIDIA CUDA 10.1.243 | TensorRT 6.0.1 |
| 19.09 | 1.6.0 | 18.04 | NVIDIA CUDA 10.1.243 | TensorRT 6.0.1 |
| 19.08 | 1.5.0 | 18.04 | NVIDIA CUDA 10.1.243 | TensorRT 5.1.5 |

Known Issues

  • The core Python binding may incur an additional device-to-host (D2H) and host-to-device (H2D) copy if the backend and frontend both specify device memory to be used for response tensors.
  • A segmentation fault related to DCGM and NSCQ may be encountered during server shutdown on NVSwitch systems. A possible workaround is to disable the collection of GPU metrics: `tritonserver --allow-gpu-metrics false ...`
  • The vLLM backend currently does not take advantage of the vLLM v0.6 performance improvements when metrics are enabled.
  • When using TensorRT models, if auto-complete configuration is disabled and `is_non_linear_format_io: true` is not set for reformat-free tensors in the model configuration, the model may not load successfully (see the sketch after this list).
  • When using Python models in decoupled mode, users need to ensure that the ResponseSender goes out of scope or is properly cleaned up before unloading the model, to guarantee that the unloading process executes correctly.
  • The Triton Inference Server with the vLLM backend currently does not support running vLLM models with tensor parallelism sizes greater than 1 and the default "distributed_executor_backend" setting when using explicit model control mode. When attempting to load a vLLM model (tp > 1) in explicit mode, users may see a failure at the `initialize` step: `could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads`. For the default model control mode, vLLM-related sub-processes are not killed after server shutdown. Related vLLM issue: https://github.com/vllm-project/vllm/issues/6766. Please specify "distributed_executor_backend": "ray" in the model.json when deploying vLLM models with tensor parallelism > 1 (see the sketch after this list).
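
For the reformat-free tensor issue above, a minimal sketch of the relevant model-configuration fragment; the tensor name, datatype, and dims are illustrative, and only is_non_linear_format_io comes from the note itself:

```shell
# Append an input definition that marks a reformat-free TensorRT tensor
# in the model configuration (shown being appended for illustration).
cat >> model_repository/my_trt_model/config.pbtxt <<'EOF'
input [
  {
    name: "input0"
    data_type: TYPE_FP16
    dims: [ 3, 224, 224 ]
    is_non_linear_format_io: true
  }
]
EOF
```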
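
For the vLLM workaround above, a minimal sketch of a model.json pinning the Ray executor; the model name and tensor_parallel_size value are illustrative vLLM engine arguments:

```shell
# Write a model.json that sets distributed_executor_backend to ray for a
# tensor-parallel vLLM model. Model name and parallelism degree are
# illustrative.
cat > model_repository/my_vllm_model/1/model.json <<'EOF'
{
  "model": "meta-llama/Llama-3.1-8B-Instruct",
  "tensor_parallel_size": 2,
  "distributed_executor_backend": "ray"
}
EOF
```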

  • When loading models with file override, multiple model configuration files are not supported. Users must provide the model configuration by setting the parameter config : <JSON> instead of supplying a custom configuration file in the format file:configs/<model-config-name>.pbtxt : <base64-encoded-file-content> (see the sketch after this list).
  • The TensorRT-LLM backend provides limited support for Triton extensions and features.
  • The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and does not affect inferencing.
  • The Java CAPI is known to have intermittent segfaults.
  • Some malloc() implementations may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD (see the sketch after this list). NVIDIA recommends experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
  • Auto-complete may increase server start time. To avoid this, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and their datatypes. Related PyTorch issue: https://github.com/pytorch/pytorch/issues/38273.
  • Triton Client PIP wheels for Arm SBSA are not available from PyPI, so pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and installed manually.
  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices.
  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
  • When cloud storage (AWS, GCS, Azure) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model's folder in a temporary directory, which is deleted upon server shutdown.
  • Python backend support for Windows is limited and does not currently support the following features:
    • GPU tensors
    • CPU and GPU-related metrics
    • Custom execution environments
    • The model load/unload APIs
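
For the file-override limitation above, a hedged sketch of loading a model through the model-repository load endpoint with the configuration passed inline as the config parameter; the model name, configuration body, and overridden file path are illustrative:

```shell
# Load a model with a file override, supplying the configuration via the
# "config" parameter instead of an overridden .pbtxt file. The model name,
# config JSON, and file path are illustrative.
curl -X POST localhost:8000/v2/repository/models/my_model/load \
  -H "Content-Type: application/json" \
  -d '{
        "parameters": {
          "config": "{\"backend\": \"onnxruntime\", \"max_batch_size\": 8}",
          "file:1/model.onnx": "<base64-encoded-file-content>"
        }
      }'
```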
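
For the malloc note above, a minimal sketch of preloading an alternative allocator; the library paths are typical Ubuntu locations inside the container and may differ:

```shell
# Start Triton with tcmalloc preloaded.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 \
  tritonserver --model-repository=/models

# Or start Triton with jemalloc preloaded.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 \
  tritonserver --model-repository=/models
```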