Triton Inference Server Release 23.04
The Triton Inference Server container image, release 23.04, is available on NGC and is open source on GitHub.
Contents of the Triton Inference Server container
The Triton Inference Server Docker image contains the inference server executable and related shared libraries in /opt/tritonserver.
For a list of the contents of this container, refer to the Deep Learning Frameworks Support Matrix.
Driver Requirements
Release 23.04 is based on CUDA 12.1.0, which requires NVIDIA Driver release 530 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 450.51 (or later R450), 470.57 (or later R470), 510.47 (or later R510), 515.65 (or later R515), 525.85 (or later R525), or 530.30 (or later R530).
The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, R460, and R520 drivers, which are not forward-compatible with CUDA 12.1. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
GPU Requirements
Release 23.04 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, NVIDIA Hopper™, and NVIDIA Ada Lovelace architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
Key Features and Enhancements
This Inference Server release includes the following key features and enhancements.
- Triton can now load models concurrently, reducing server start-up time.
- The sequence batcher with the direct scheduling strategy now includes experimental support for schedule policies.
- Triton’s ragged batching support has been extended to the PyTorch backend.
- Triton can now forward HTTP/gRPC headers as inference request parameters to the backend; see the header-forwarding sketch after this list.
- The Triton Python backend's business logic scripting (BLS) now allows developers to select a specific device to receive output tensors from a BLS call; see the BLS sketch after this list.
- Triton latency metrics can now be obtained as configurable quantiles over a sliding time window using experimental metrics summary support.
- Users can now restrict protocol access on a given Triton endpoint.
- Triton now provides limited support for tracing inference requests using OpenTelemetry Trace APIs.
- Model Analyzer now supports BLS Models.
- Refer to the 23.04 column of the Frameworks Support Matrix for container image versions on which the 23.04 inference server container is based.
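As an illustration of the header-forwarding feature, the following is a minimal sketch that attaches a custom HTTP header on the client side and reads it back as a request parameter inside a Python-backend model. The header name my-trace-id, the model name my_model, the tensor names, and the assumption that the server was launched with a header-forward pattern matching this header are all hypothetical; consult the protocol documentation for the exact server option.

```python
# Client side: attach a custom HTTP header to an inference request.
# Assumes a model named "my_model" with a single FP32 input "INPUT0" is loaded,
# and that tritonserver was started with a header-forward pattern matching
# "my-trace-id"; all names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
inp = httpclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.zeros((1, 4), dtype=np.float32))
result = client.infer("my_model", inputs=[inp], headers={"my-trace-id": "abc123"})

# Backend side (model.py of a Python-backend model): a forwarded header shows
# up in the request parameters, exposed as a JSON string.
#
#   import json
#   import triton_python_backend_utils as pb_utils
#
#   class TritonPythonModel:
#       def execute(self, requests):
#           for request in requests:
#               params = json.loads(request.parameters())
#               trace_id = params.get("my-trace-id")  # forwarded header value
#               ...
```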
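For the BLS device-selection enhancement, the sketch below shows a Python-backend model.py that asks for the output tensors of a BLS call to be placed in GPU memory on device 0. The composing model name and tensor names are placeholders, and the preferred_memory argument follows the python_backend BLS documentation for this release; verify the exact naming there.

```python
# Minimal sketch of a Python-backend model.py issuing a BLS call whose output
# tensors are requested in GPU memory on device 0. "composing_model", "INPUT0",
# and "OUTPUT0" are placeholder names.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            input0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            bls_request = pb_utils.InferenceRequest(
                model_name="composing_model",
                inputs=[input0],
                requested_output_names=["OUTPUT0"],
                # New in this release: ask for BLS output tensors on GPU 0.
                preferred_memory=pb_utils.PreferredMemory(
                    pb_utils.TRITONSERVER_MEMORY_GPU, 0
                ),
            )
            bls_response = bls_request.exec()
            if bls_response.has_error():
                raise pb_utils.TritonModelException(bls_response.error().message())

            output0 = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT0")
            responses.append(pb_utils.InferenceResponse(output_tensors=[output0]))
        return responses
```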
NVIDIA Triton Inference Server Container Versions
The following table shows what versions of Ubuntu, CUDA, Triton Inference Server, and NVIDIA TensorRT™ are supported in each of the NVIDIA containers for Triton Inference Server. For older container versions, refer to the Frameworks Support Matrix.
Known Issues
- The TensorFlow backend no longer supports TensorFlow version 1.
- The Triton Inferentia guide is out of date. Some users have reported issues running Triton on AWS Inferentia instances.
- Some malloc() implementations may not release memory back to the operating system right away, causing an apparent memory leak. This can be mitigated by using a different malloc implementation. tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD, as sketched after this list.
- Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
- Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: https://github.com/pytorch/pytorch/issues/38273
- Triton Client PIP wheels for Arm SBSA are not available from PyPI, and pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
- Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to https://github.com/pytorch/pytorch/issues/66930 for more information.
- Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
- Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
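As a concrete illustration of the tcmalloc workaround above, the snippet below launches tritonserver from Python with LD_PRELOAD pointing at the tcmalloc shared library. The library path shown is typical for the gperftools package on Ubuntu and the model repository path is a placeholder; adjust both for your environment.

```python
# Launch tritonserver with tcmalloc preloaded to mitigate the malloc()
# memory-release behavior described above. Both paths below are assumptions;
# check the actual locations inside your container.
import os
import subprocess

env = os.environ.copy()
env["LD_PRELOAD"] = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"  # assumed path

subprocess.run(
    ["tritonserver", "--model-repository=/models"],  # placeholder repo path
    env=env,
    check=True,
)
```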