Triton Inference Server Release 18.10 Beta
The Triton Inference Server container image, previously referred to as the Inference Server container image, release 18.10, is available as a beta release.
Contents of the Triton Inference Server
This container image contains the Triton inference server executable in /opt/tensorrtserver.
The container also includes the following:
- Ubuntu 16.04 including Python 3.5
- NVIDIA CUDA 10.0.130 including CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS) 10.0.130
- NVIDIA CUDA® Deep Neural Network library™ (cuDNN) 7.3.0
- NCCL 2.3.6 (optimized for NVLink™)
- OpenMPI 3.1.2
- TensorRT 5.0.0 RC
Driver Requirements
Release 18.10 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This Inference Server release includes the following key features and enhancements.
- The Inference Server container image version 18.10 is based on NVIDIA Inference Server 0.7.0 beta, TensorFlow 1.10.0, and Caffe2 0.8.2.
- Latest version of NCCL 2.3.6.
- Latest version of OpenMPI 3.1.2.
- Dynamic batching support is added for all model types. Dynamic batching can be enabled and configured on a per-model basis; see the configuration sketch after this list.
- An improved inference request scheduler provides better handling of inference requests.
- Added new metrics that report the GPU power limit, GPU utilization, and per-model execution counts (useful for determining the impact of dynamic batching).
- Prometheus metrics are now tagged with GPU UUID, model name, and model version as appropriate, so that metric values can be correlated with specific GPUs and models; see the example after this list.
- Request latencies reported by the status API and by the metrics now state more clearly what they measure; for example, total request time, queuing time, and inference compute time are reported separately.
- Ubuntu 16.04 with September 2018 updates
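Dynamic batching is enabled in a model's configuration rather than server-wide. The snippet below is a minimal sketch assuming the model configuration (config.pbtxt) protobuf text format used by the inference server; the model name, platform, preferred batch sizes, and queue delay are illustrative placeholder values, not recommendations.

```
# config.pbtxt sketch: enable dynamic batching for a single model.
# The model name, platform, batch sizes, and delay below are placeholders.
name: "example_plan_model"
platform: "tensorrt_plan"
max_batch_size: 16
dynamic_batching {
  # Batch sizes the scheduler should prefer to form from queued requests.
  preferred_batch_size: [ 4, 8 ]
  # Maximum time a request may wait in the queue while a batch is being formed.
  max_queue_delay_microseconds: 100
}
```

Because the setting lives in the per-model configuration, different models served by the same server can use different dynamic batching settings, or none at all.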
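For reference, the metrics are exposed in the standard Prometheus text exposition format, with labels identifying the GPU and model. The lines below are an illustrative sketch only; the metric names, label keys, and values shown are assumptions and may not match the server's exact output.

```
# Illustrative Prometheus exposition format (names, labels, and values are placeholders).
# GPU utilization, labeled with the GPU UUID:
nv_gpu_utilization{gpu_uuid="GPU-0000aaaa-bbbb-cccc-dddd-eeeeffff0000"} 0.45
# Per-model execution count, labeled with model name and version:
nv_inference_exec_count{model="example_plan_model",model_version="1"} 512
```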