Triton Inference Server Release 19.02 Beta
The TensorRT Inference Server container image, release 19.02, is available as a beta release and is open source on GitHub.
Contents of the Triton Inference Server
This container image contains the TensorRT Inference Server executable and related shared libraries in /opt/tensorrtserver.
Driver Requirements
Release 19.02 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on a Tesla GPU (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
GPU Requirements
Release 19.02 supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, and Turing families. For a list of GPUs with these compute capabilities, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
Key Features and Enhancements
- The inference server container image version 19.02 is based on NVIDIA TensorRT Inference Server 0.11.0 beta, TensorFlow 1.13.0-rc0, and Caffe2 0.8.2.
- Variable-size input and output tensors are now supported (see the example model configuration after this list).
- The STRING datatype is now supported for input and output tensors of TensorFlow models and custom backends.
- The inference server can now run on systems that do not have GPUs or that do not have CUDA installed (see the example command after this list).
- Ubuntu 16.04 with January 2019 updates
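
The following is a minimal sketch of a model configuration (the config.pbtxt file in the model store) that exercises both of these features. The model name, tensor names, platform, and max_batch_size are placeholder assumptions; -1 marks a dimension whose size is determined at inference time.

    name: "example_tf_model"          # placeholder model name
    platform: "tensorflow_savedmodel"
    max_batch_size: 8
    input [
      {
        name: "INPUT0"                # placeholder tensor name
        data_type: TYPE_STRING        # STRING input tensor
        dims: [ -1 ]                  # -1 indicates a variable-size dimension
      }
    ]
    output [
      {
        name: "OUTPUT0"               # placeholder tensor name
        data_type: TYPE_FP32
        dims: [ -1 ]                  # variable-size output dimension
      }
    ]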
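
As an illustrative sketch of CPU-only operation, the 19.02 container can be started with plain docker run, without the NVIDIA container runtime. The model store path is a placeholder, ports 8000, 8001, and 8002 are the default HTTP, gRPC, and metrics endpoints, and the command assumes the trtserver executable and its --model-store flag.

    # Start the server on a system without GPUs; /path/to/model/store is a placeholder
    docker run --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v /path/to/model/store:/models \
      nvcr.io/nvidia/tensorrtserver:19.02-py3 \
      trtserver --model-store=/models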
Known Issues
- This is a beta release of the Inference Server. All features are expected to be available; however, some aspects of functionality and performance will likely be limited compared with a non-beta release.
- If you are using or upgrading to a 3-part-version driver, that is, a driver version in the format xxx.yy.zz, you will receive a "Failed to detect NVIDIA driver version." message. This is due to a known bug in the entry-point script's parsing of 3-part driver versions. The message is non-fatal and can be ignored. This will be fixed in the 19.04 release.