TensorRT inference server Release 18.09 Beta

Release 18.09 of the TensorRT inference server container image, previously referred to simply as the inference server, is available as a beta release.

Contents of the TensorRT inference server

This container image contains the TensorRT inference server executable in /opt/tensorrtserver.

Driver Requirements

Release 18.09 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on a Tesla GPU (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you can use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.

Key Features and Enhancements

This TensorRT inference server release includes the following key features and enhancements.
  • The TensorRT inference server container image version 18.09 is based on NVIDIA TensorRT inference server 0.6.0 beta, TensorFlow 1.10.0, and Caffe2 0.8.1.
  • Latest version of cuDNN 7.3.0.
  • Latest version of CUDA 10.0, which includes support for DGX-2, Turing, and Jetson Xavier.
  • Latest version of cuBLAS 10.0.
  • Latest version of NCCL 2.3.4.
  • Latest version of TensorRT 5.0.0 RC.
  • Google Cloud Storage paths are now allowed when specifying the location of the model store. For example, --model-store=gs://<bucket>/<model store path> (see the example after this list).
  • Additional Prometheus metrics are exposed on the metrics endpoint: GPU power usage; GPU power limit; and per-model request, queue, and compute time.
  • The C++ and Python client APIs now support asynchronous requests.
  • Ubuntu 16.04 with August 2018 updates.
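
As an illustration of the new model store and metrics features, the commands below sketch one possible invocation. The bucket name, model store path, server host, and metrics port are placeholders chosen for illustration; they are not values documented in this release.

    # Start the server with a model store hosted in Google Cloud Storage
    # (example-bucket and model_repository are placeholder names).
    trtserver --model-store=gs://example-bucket/model_repository

    # Scrape the Prometheus metrics endpoint; substitute the host and
    # metrics port configured for your deployment.
    curl http://<server-host>:<metrics-port>/metrics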

Known Issues

  • This is a beta release of the TensorRT inference server. All features are expected to be available; however, some aspects of functionality and performance will likely be limited compared to a non-beta release.
  • Starting with the 18.09 release, the directory holding the TensorRT inference server components has changed from /opt/inference_server to /opt/tensorrtserver and the TensorRT inference server executable name has changed from inference_server to trtserver.