Triton Inference Server Container Release Notes

The inference server itself is packaged within the Triton Inference Server container. This document walks you through getting up and running with the container, from the prerequisites to running it. The release notes also provide a list of key features, the packaged software included in the container, software enhancements and improvements, known issues, and instructions for running Triton Inference Server 2.17.0 (V2 API) for the 21.12 and earlier releases. The Triton Inference Server container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.

For a single view of the supported software and the specific versions packaged with each framework container image, see the Frameworks Support Matrix.

For previously released Triton Inference Server documentation, see Triton Inference Server Archives.

Table of Contents