Triton Inference Server Release 23.06
The Triton Inference Server container image, release 23.06, is available on NGC and is open source on GitHub.
Contents of the Triton Inference Server container
The Triton Inference Server Docker image contains the inference server executable and related shared libraries in /opt/tritonserver.
For the full list of software included in the container, refer to the Deep Learning Frameworks Support Matrix.
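As an illustration of how the container is typically launched, the sketch below pulls the 23.06 image and starts the server binary from /opt/tritonserver against a mounted model repository; the local path /path/to/model_repository and the port mappings are placeholders, not requirements.

```shell
# Pull the 23.06 release container and start the server against a local model repository.
docker pull nvcr.io/nvidia/tritonserver:23.06-py3
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:23.06-py3 \
  tritonserver --model-repository=/models
```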
Driver Requirements
Release 23.06 is based on CUDA 12.1.1, which requires NVIDIA Driver release 530 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 450.51 (or later R450), 470.57 (or later R470), 510.47 (or later R510), 515.65 (or later R515), 525.85 (or later R525), or 530.30 (or later R530).
The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, R460, and R520 drivers, which are not forward-compatible with CUDA 12.1. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.
GPU Requirements
Release 23.06 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, NVIDIA Hopper™, and NVIDIA Ada Lovelace architecture families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.
Key Features and Enhancements
This Inference Server release includes the following key features and enhancements.
- Support for the KIND_MODEL instance type has been extended to the PyTorch backend.
- gRPC clients can now indicate whether they want to receive the flags associated with each response. This helps clients programmatically determine, for decoupled models, when all responses for a given request have been received; see the client sketch after this list.
- Added beta support for using Redis as a cache for inference requests; a sample launch command follows this list.
- The statistics extension now reports the memory usage of loaded models. These statistics are currently implemented only for the TensorRT and ONNX Runtime backends.
- Added support for batch inputs in ragged batching for PyTorch backend.
- Added serial sequences for Perf Analyzer.
- Refer to the 23.06 column of the Frameworks Support Matrix for container image versions on which the 23.06 inference server container is based.
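The following is a minimal sketch of how a streaming gRPC client might use the new per-response flags to detect when a decoupled request is complete. The model name, input name, and shape are hypothetical, and the enable_empty_final_response keyword and triton_final_response response parameter are taken from the decoupled-model documentation; verify both against your tritonclient version.

```python
# Minimal sketch: detect the final response of a decoupled request over gRPC streaming.
# "my_decoupled_model" and INPUT0 are hypothetical; adapt to your model.
import queue

import numpy as np
import tritonclient.grpc as grpcclient

responses = queue.Queue()

def callback(result, error):
    # Each streamed response (or error) is handed back on the stream's callback thread.
    responses.put((result, error))

client = grpcclient.InferenceServerClient(url="localhost:8001")
client.start_stream(callback=callback)

inp = grpcclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.ones((1, 4), dtype=np.float32))

# Ask the server to flag the last response, even if that response carries no outputs
# (assumed keyword; check your client version).
client.async_stream_infer(
    model_name="my_decoupled_model",
    inputs=[inp],
    enable_empty_final_response=True,
)

while True:
    result, error = responses.get()
    if error is not None:
        raise error
    params = result.get_response().parameters
    # The per-response flags surface as response parameters; triton_final_response
    # marks the last response for the request.
    if "triton_final_response" in params and params["triton_final_response"].bool_param:
        break

client.stop_stream()
```

Similarly, the beta Redis cache is enabled at server launch through the cache configuration options; the host and port values below assume a Redis instance on localhost, so check them against the Redis cache documentation for your release.

```shell
# Illustrative launch with the Redis response cache pointed at a local Redis server.
tritonserver --model-repository=/models \
  --cache-config redis,host=localhost \
  --cache-config redis,port=6379
```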
NVIDIA Triton Inference Server Container Versions
The following table shows what versions of Ubuntu, CUDA, Triton Inference Server, and NVIDIA TensorRT™ are supported in each of the NVIDIA containers for Triton Inference Server. For older container versions, refer to the Frameworks Support Matrix.
Known Issues
- The FasterTransformer backend build only works with Triton 23.04 and older releases.
- OpenVINO 2022.1 is used in the OpenVINO backend and the OpenVINO execution provider for the ONNX Runtime backend. OpenVINO 2022.1 is not officially supported on Ubuntu 22.04 and should be treated as beta.
- Some systems that implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD, as shown in the example after this list. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
- Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
- Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and their datatypes. Related PyTorch bug: https://github.com/pytorch/pytorch/issues/38273
- Triton Client PIP wheels for Arm SBSA are not available from PyPI, so pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
- Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to https://github.com/pytorch/pytorch/issues/66930 for more information.
- Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
- Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
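For the malloc-related issue above, an alternative allocator can be selected at launch through LD_PRELOAD. The library path below is illustrative; confirm the actual tcmalloc or jemalloc .so location inside the container before use.

```shell
# Illustrative: preload tcmalloc when starting the server (use libjemalloc.so.2 to try jemalloc).
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} \
  tritonserver --model-repository=/models
```

Likewise, the auto-complete start-time cost noted above can be avoided by providing complete model configurations and launching with the documented flag:

```shell
tritonserver --model-repository=/models --disable-auto-complete-config
```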