Triton Inference Server Release 18.11 Beta
Release 18.11 of the Triton inference server container image, previously referred to as Inference Server, is available as a beta release.
Contents of the Triton inference server
This container image contains the Triton inference server executable in /opt/tensorrtserver.
The container also includes the following:
- Ubuntu 16.04 including Python 3.5
- NVIDIA CUDA 10.0.130 including CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS) 10.0.130
- NVIDIA CUDA® Deep Neural Network library™ (cuDNN) 7.4.1
- NCCL 2.3.7 (optimized for NVLink™)
- OpenMPI 3.1.2
- TensorRT 5.0.2
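As a quick reference, a minimal sketch of launching the server from this container is shown below. The image tag, port mappings, model store path, trtserver binary name, and --model-store flag are assumptions based on the 18.11 packaging and may differ in your environment.

    # Minimal sketch of starting the inference server from the 18.11 container.
    # Image tag, ports (8000 HTTP, 8001 gRPC, 8002 metrics), paths, binary name,
    # and flags are assumptions; adjust them to match your installation.
    nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
        -p 8000:8000 -p 8001:8001 -p 8002:8002 \
        -v /path/to/model/store:/models \
        nvcr.io/nvidia/tensorrtserver:18.11-py3 \
        /opt/tensorrtserver/bin/trtserver --model-store=/models

The -v mount makes a host directory available inside the container as the model repository that the server serves from.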
Driver Requirements
Release 18.11 is based on CUDA 10, which requires NVIDIA driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This Inference Server release includes the following key features and enhancements.
- The Inference Server container image version 18.11 is based on NVIDIA Inference Server 0.8.0 beta, TensorFlow 1.12.0-rc2, and Caffe2 0.8.2.
- Models may now be added to and removed from the model repository without requiring an inference server restart (see the repository update sketch after this list).
- Fixed an issue with models that do not support batching. For these models, set max_batch_size = 0 in the model configuration (see the configuration sketch after this list).
- Added a metric that reports GPU energy consumption (see the metrics query sketch after this list).
- Latest version of NCCL 2.3.7.
- Latest version of NVIDIA cuDNN 7.4.1.
- Latest version of TensorRT 5.0.2.
- Ubuntu 16.04 with October 2018 updates.
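Repository update sketch. A minimal illustration of adding and removing models while the server is running, assuming the server is configured to detect model repository changes; the directory and model names below are illustrative, not part of the release.

    # Illustrative model store layout; names are assumptions:
    #   /models/resnet50/config.pbtxt
    #   /models/resnet50/1/model.plan
    #
    # Add a new model to the live repository; the running server loads it
    # without a restart (assuming repository change detection is enabled).
    cp -r /staging/new_model /models/new_model
    #
    # Remove a model directory to unload that model from the running server.
    rm -r /models/old_model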
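Configuration sketch. A minimal config.pbtxt for a model that does not support batching, with max_batch_size set to 0 as described above; the model name, platform, tensor names, and shapes are illustrative assumptions.

    # Illustrative model configuration (config.pbtxt) for a non-batching model.
    # Name, platform, tensor names, and shapes are assumptions.
    name: "nonbatching_model"
    platform: "tensorflow_graphdef"
    max_batch_size: 0
    input [
      {
        name: "INPUT"
        data_type: TYPE_FP32
        dims: [ 1, 224, 224, 3 ]
      }
    ]
    output [
      {
        name: "OUTPUT"
        data_type: TYPE_FP32
        dims: [ 1, 1000 ]
      }
    ]

With max_batch_size = 0, the dims describe the full tensor shape; no implicit batch dimension is added by the server.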
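Metrics query sketch. The new metric is exposed through the server's Prometheus-format metrics endpoint; the port shown matches the default mapping in the launch sketch above, and since these notes do not give the metric name, the grep pattern is an assumption.

    # Query the metrics endpoint (default metrics port 8002) and filter for the
    # GPU energy consumption metric. The exact metric name is an assumption;
    # inspect the full /metrics output to confirm it.
    curl -s localhost:8002/metrics | grep -i energy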