TensorRT Inference Server Release 19.01 Beta

The TensorRT Inference Server container image, release 19.01, is available as a beta release and is open source on GitHub.

Contents of the TensorRT Inference Server

This container image contains the TensorRT Inference Server executable and related shared libraries in /opt/tensorrtserver.

Driver Requirements

Release 19.01 is based on CUDA 10.0, which requires NVIDIA driver release 410.xx. However, if you are running on Tesla GPUs (for example, Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.

GPU Requirements

Release 19.01 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, and Turing families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.

Key Features and Enhancements

This Inference Server release includes the following key features and enhancements.
  • The inference server container image version 19.01 is based on NVIDIA TensorRT Inference Server 0.10.0 beta, TensorFlow 1.12.0, and Caffe2 0.8.2.
  • Latest version of NVIDIA cuDNN 7.4.2
  • Custom backend support. The inference server allows individual models to be implemented with custom backends instead of by a deep learning framework. With a custom backend, a model can implement any logic desired while still benefiting from the GPU support, concurrent execution, dynamic batching, and other features provided by the inference server. An illustrative sketch of a custom backend follows this list.
  • Ubuntu 16.04 with December 2018 updates
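
To illustrate the custom backend feature, the sketch below shows the kind of shared-library entry points such a backend exports: an initialize/finalize pair that manages per-instance state, an error-string lookup, and an execute call that receives batched requests. This is a minimal, hypothetical sketch only; the entry-point names mirror the custom backend interface in the open-source repository, but the struct definitions and argument lists shown here are simplified placeholders, and the custom backend header on GitHub is the authoritative reference.

  /* Illustrative sketch only. The types below are simplified, hypothetical
   * stand-ins for the real interface structs; consult the custom backend
   * header in the open-source repository for the authoritative definitions. */

  #include <stdint.h>
  #include <stdlib.h>

  /* Hypothetical, simplified initialization data for one model instance. */
  typedef struct {
    const char* instance_name;           /* name of the model instance      */
    const char* serialized_model_config; /* the model's configuration       */
    int gpu_device_id;                   /* GPU assigned to this instance   */
  } InitializeData;

  /* Hypothetical, simplified per-request payload (inputs/outputs omitted). */
  typedef struct {
    uint32_t batch_size;                 /* requests arrive already batched */
  } Payload;

  /* Per-instance state kept by this backend. */
  typedef struct {
    int device_id;
  } Context;

  /* Called once when a model instance is created. */
  int CustomInitialize(const InitializeData* data, void** custom_context) {
    Context* ctx = malloc(sizeof(Context));
    if (ctx == NULL) {
      return 1;                          /* non-zero signals an error code  */
    }
    ctx->device_id = data->gpu_device_id;
    *custom_context = ctx;
    return 0;                            /* zero signals success            */
  }

  /* Called once when the model instance is unloaded. */
  int CustomFinalize(void* custom_context) {
    free(custom_context);
    return 0;
  }

  /* Maps an error code returned by this backend to a readable string. */
  const char* CustomErrorString(void* custom_context, int errcode) {
    (void)custom_context;
    return (errcode == 0) ? "success" : "custom backend error";
  }

  /* Called for each (dynamically batched) set of inference requests. A real
   * backend would read input tensors and write output tensors through
   * callbacks provided by the server; that is omitted from this sketch. */
  int CustomExecute(void* custom_context, uint32_t payload_cnt, Payload* payloads) {
    (void)custom_context;
    for (uint32_t i = 0; i < payload_cnt; ++i) {
      /* ... produce outputs for payloads[i] ... */
      (void)payloads[i];
    }
    return 0;
  }

In rough terms, the compiled shared library is placed with the model in the model repository and the model configuration identifies the model as using a custom backend rather than one of the supported frameworks; the server then handles scheduling, dynamic batching, and GPU placement around these calls.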

Known Issues

This is a beta release of the Inference Server. All features are expected to be available; however, some aspects of functionality and performance will likely be limited compared to a non-beta release.