Triton Inference Server Release 21.02

The Triton Inference Server container image, release 21.02, is available on NGC and is open source on GitHub.

Contents of the Triton Inference Server container

The Triton Inference Server Docker image contains the inference server executable and related shared libraries in /opt/tritonserver.

Driver Requirements

Release 21.02 is based on NVIDIA CUDA 11.2.0, which requires NVIDIA Driver release 460.27.04 or later. However, if you are running on Data Center GPUs (formerly Tesla), for example, T4, you may use NVIDIA driver release 418.40 (or later R418), 440.33 (or later R440), or 450.51 (or later R450). The CUDA driver's compatibility package only supports particular drivers. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades and NVIDIA CUDA and Drivers Support.

GPU Requirements

Release 21.02 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the NVIDIA Pascal, Volta, Turing, and Ampere architecture families. For the list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.

Key Features and Enhancements

This Inference Server release includes the following key features and enhancements.

  • Refer to the 21.02 column of the Frameworks Support Matrix for container image versions that the 21.02 inference server container is based on.
  • Fixed a bug in the TensorRT backend that could, in rare cases, lead to corruption of output tensors.
  • Fixed a performance issue in the HTTP/REST client that occurred when the client did not explicitly request specific outputs. In this case all outputs are now returned as binary data, where previously they were returned as JSON (see the first sketch after this list).
  • Added an example Java and Scala client based on the GRPC-generated API (a rough Python equivalent appears as the second sketch after this list).
  • Extended perf_analyzer to work with TFServing and TorchServe.
  • The legacy custom backend API is deprecated and will be removed in a future release. Custom backends should be written against the Triton Backend API instead, which remains fully supported; that support will continue indefinitely.
  • Model Analyzer parameters and test model configurations can now be specified with a JSON configuration file.
  • Model Analyzer now reports performance metrics for end-to-end latency and CPU memory usage.
  • Ubuntu 20.04 with January 2021 updates.
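
As a minimal sketch of the HTTP/REST client change above, assuming the Python tritonclient package and a hypothetical model named my_model with tensors INPUT0 and OUTPUT0: when no outputs are listed explicitly, every model output is returned, and as of this release that data comes back as binary rather than JSON; an individual output can still be requested explicitly with binary_data=True.

    import numpy as np
    import tritonclient.http as httpclient

    # Hypothetical server address, model name, and tensor names.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    inputs = [httpclient.InferInput("INPUT0", [1, 16], "FP32")]
    inputs[0].set_data_from_numpy(
        np.random.rand(1, 16).astype(np.float32), binary_data=True
    )

    # No outputs listed: all model outputs are returned, now as binary data.
    result = client.infer(model_name="my_model", inputs=inputs)

    # Explicitly requesting a specific output as binary data also works.
    requested = [httpclient.InferRequestedOutput("OUTPUT0", binary_data=True)]
    result = client.infer(model_name="my_model", inputs=inputs, outputs=requested)

    print(result.as_numpy("OUTPUT0"))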
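
The Java and Scala examples are built on clients generated from Triton's GRPC protocol definition. A rough Python equivalent using the generated stubs that ship with the tritonclient package is sketched below; the module paths, server address, and model name are assumptions and may differ by client version.

    import grpc
    from tritonclient.grpc import service_pb2, service_pb2_grpc

    # Hypothetical address; Triton's GRPC endpoint listens on port 8001 by default.
    channel = grpc.insecure_channel("localhost:8001")
    stub = service_pb2_grpc.GRPCInferenceServiceStub(channel)

    # Liveness check through the generated API.
    live = stub.ServerLive(service_pb2.ServerLiveRequest())
    print("Server live:", live.live)

    # Metadata for a hypothetical model.
    metadata = stub.ModelMetadata(
        service_pb2.ModelMetadataRequest(name="my_model", version="1")
    )
    print(metadata)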

NVIDIA Triton Inference Server Container Versions

The following table shows what versions of Ubuntu, CUDA, Triton Inference Server, and TensorRT are supported in each of the NVIDIA containers for Triton Inference Server. For older container versions, refer to the Frameworks Support Matrix.

Known Issues