Triton Inference Server Container Release Notes
The inference server itself is packaged in the Triton Inference Server container. This document explains how to set up and run the container, from the prerequisites through launching the server. The release notes also list key features, the software packaged in the container, software enhancements and improvements, and known issues, and describe how to run Triton Inference Server 2.43.0 (V2 API) for the 24.02 and earlier releases. The Triton Inference Server container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream. The libraries and contributions have all been tested, tuned, and optimized.
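As a minimal sketch of the pull-and-run workflow described above, the following commands assume Docker with the NVIDIA Container Toolkit is installed; the 24.02 tag matches the release discussed here, and the host model repository path is an illustrative placeholder:

    # Pull the 24.02 release of the Triton Inference Server container from NGC
    docker pull nvcr.io/nvidia/tritonserver:24.02-py3

    # Launch the server, exposing the HTTP (8000), gRPC (8001), and metrics (8002)
    # endpoints and mounting a local model repository into the container
    # (/path/to/model_repository is a placeholder, not a path from this document)
    docker run --gpus=all --rm \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v /path/to/model_repository:/models \
      nvcr.io/nvidia/tritonserver:24.02-py3 \
      tritonserver --model-repository=/models

The model repository is the directory of model configurations and weights that the server loads at startup; the sections that follow cover the prerequisites and options in detail.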
For a complete view of the supported software and the specific versions packaged with each framework container image, see the Frameworks Support Matrix.
For previously released Triton Inference Server documentation, see Triton Inference Server Archives.