Triton Inference Server Release 18.09 Beta
The Triton Inference Server container image, previously referred to as the Inference Server container image, release 18.09, is available as a beta release.
Contents of the Triton inference server
This container image contains the Triton inference server executable in /opt/tensorrtserver.
The container also includes the following:
- Ubuntu 16.04 including Python 3.5
- NVIDIA CUDA 10.0.130 including CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS) 10.0.130
- NVIDIA CUDA® Deep Neural Network library™ (cuDNN) 7.3.0
- NCCL 2.3.4 (optimized for NVLink™)
- OpenMPI 2.0
- TensorRT 5.0.0 RC
Driver Requirements
Release 18.09 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.
Key Features and Enhancements
This Inference Server release includes the following key features and enhancements.
- The Inference Server container image version 18.09 is based on NVIDIA Inference Server 0.6.0 beta, TensorFlow 1.10.0, and Caffe2 0.8.1.
- Latest version of cuDNN 7.3.0.
- Latest version of CUDA 10.0.130, which includes support for DGX-2, Turing, and Jetson Xavier.
- Latest version of cuBLAS 10.0.130.
- Latest version of NCCL 2.3.4.
- Latest version of TensorRT 5.0.0 RC.
- Google Cloud Storage paths are now allowed when specifying the location of the model store. For example, --model-store=gs://<bucket>/<model store path> (see the launch sketch after this list).
- Additional Prometheus metrics are exposed on the metrics endpoint: GPU power usage; GPU power limit; and per-model request, queue, and compute time (see the scraping sketch after this list).
- The C++ and Python client API now supports asynchronous requests.
- Ubuntu 16.04 with August 2018 updates.
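The following is a minimal sketch of launching the server against a Google Cloud Storage model store, not an official launch script. It assumes the trtserver executable (shipped under /opt/tensorrtserver) is on the PATH inside the container, and the bucket and path shown are placeholders you would replace with your own.

# Minimal sketch: start trtserver with a model store hosted on Google Cloud
# Storage. The bucket and path below are placeholders, not real locations.
import subprocess

model_store = "gs://example-bucket/model-store"  # placeholder gs:// location

# --model-store accepts gs:// paths in addition to local filesystem paths.
# Assumes trtserver (shipped under /opt/tensorrtserver) is on the PATH.
server = subprocess.Popen(["trtserver", "--model-store=" + model_store])
server.wait()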
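The sketch below illustrates scraping the Prometheus-format metrics using only the Python standard library. It assumes the server is running locally and that metrics are served at http://localhost:8002/metrics (the server's usual default, not stated in these notes); adjust the URL if your deployment differs.

# Minimal sketch: fetch the Prometheus-format metrics text and print the GPU
# power and per-model timing lines described above. The endpoint URL below is
# an assumption (the server's usual default), not taken from these notes.
import urllib.request

METRICS_URL = "http://localhost:8002/metrics"  # assumed default metrics endpoint

with urllib.request.urlopen(METRICS_URL) as response:
    text = response.read().decode("utf-8")

# Filter by substring because the exact metric names are not listed in these
# release notes; print only non-comment lines that look relevant.
keywords = ("power", "request", "queue", "compute")
for line in text.splitlines():
    if not line.startswith("#") and any(k in line for k in keywords):
        print(line)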
Known Issues
- This is a beta release of the Inference Server. All features are expected to be available; however, some aspects of functionality and performance will likely be limited compared to a non-beta release.
- Starting with the 18.09 release, the directory holding the Triton inference server components has changed from /opt/inference_server to /opt/tensorrtserver and the Triton inference server executable name has changed from inference_server to trtserver.