TensorRT Inference Server Container Release Notes

The inference server itself is packaged within the TensorRT Inference Server container. This document walks you through getting up and running with the TensorRT Inference Server container, from the prerequisites to running the container. The release notes also provide a list of key features, the packaged software included in the container, software enhancements and improvements, any known issues, and how to run TensorRT Inference Server 1.10.0 for the 20.01 and earlier releases. The TensorRT Inference Server container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
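As a quick orientation, pulling and running the container from the NGC registry typically looks like the following sketch. This assumes Docker 19.03 or later with NVIDIA GPU support; the 20.01-py3 tag corresponds to this release, and the local model repository path is a placeholder you must replace with your own.

```shell
# Pull this release's inference server image from the NGC registry
docker pull nvcr.io/nvidia/tensorrtserver:20.01-py3

# Run the server, exposing the HTTP (8000), gRPC (8001), and metrics (8002)
# endpoints and mounting a local model repository (placeholder path)
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model/repository:/models \
  nvcr.io/nvidia/tensorrtserver:20.01-py3 \
  trtserver --model-repository=/models
```

These commands are illustrative only; consult the sections that follow for the prerequisites and the full set of supported run options.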

For a single view of the supported software and the specific versions packaged in each framework container image, see the Frameworks Support Matrix.

For previously released TensorRT inference server documentation, see TensorRT Inference Server Archives.