TensorRT Inference Server Release 18.09 Beta

The NVIDIA container image of the TensorRT inference server, release 18.09, is available as a beta release.

Contents of the TensorRT inference server

This container image contains the TensorRT inference server executable in /opt/tensorrtserver.
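
As a rough illustration, the sketch below starts the bundled executable from inside the container and points it at a model store. The exact executable path under /opt/tensorrtserver and the model store location are assumptions for this example; adjust them to your layout (see also the executable name change noted under Known Issues).

    # Sketch: start the inference server from inside the container.
    # The executable path and model store location are assumptions for
    # illustration only; adjust them to your container layout and models.
    import subprocess

    server = subprocess.Popen([
        "/opt/tensorrtserver/bin/trtserver",   # assumed location under /opt/tensorrtserver
        "--model-store=/tmp/models",           # hypothetical local model store path
    ])

    # ... the server now serves inference requests until it is stopped ...

    server.terminate()
    server.wait()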

Driver Requirements

Release 18.09 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you can use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.

Key Features and Enhancements

This TensorRT inference server release includes the following key features and enhancements.
  • The TensorRT inference server container image version 18.09 is based on NVIDIA TensorRT Inference Server 0.6.0 Beta and TensorFlow 1.10.0.
  • Latest version of cuDNN 7.3.0.
  • Latest version of CUDA 10.0 which includes support for DGX-2, Turing, and Jetson Xavier.
  • Latest version of cuBLAS 10.0.
  • Latest version of NCCL 2.3.4.
  • Latest version of TensorRT 5.0.0 RC.
  • Google Cloud Storage paths are now allowed when specifying the location of the model store. For example, --model-store=gs://<bucket>/<model store path>.
  • Additional Prometheus metrics are exposed on the metrics endpoint: GPU power usage; GPU power limit; and per-model request, queue, and compute time. A scraping sketch follows this list.
  • The C++ and Python client APIs now support asynchronous requests; a client sketch follows this list.
  • Ubuntu 16.04 with August 2018 updates.
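
The new metrics can be read with any Prometheus-compatible scraper or with plain HTTP. The following is a minimal sketch, assuming the server's default metrics port of 8002 and a /metrics path (both are assumptions for this example):

    # Minimal sketch: poll the inference server's Prometheus metrics endpoint.
    # The host, port, and path are assumptions for this example.
    import urllib.request

    def fetch_metrics(url="http://localhost:8002/metrics"):
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8")

    if __name__ == "__main__":
        for line in fetch_metrics().splitlines():
            # As a simple filter, print only non-comment lines that mention power.
            if "power" in line and not line.startswith("#"):
                print(line)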
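
The asynchronous client support can be used roughly as in the sketch below. The module, class, and method names shown (tensorrtserver.api.InferContext, ProtocolType, async_run, get_async_run_results), as well as the model and tensor names, are assumptions and should be verified against the client examples bundled with this release:

    # Hedged sketch of an asynchronous inference request with the Python client.
    # All API names, the model name, and the tensor names here are assumptions;
    # verify them against the bundled client examples.
    import numpy as np
    from tensorrtserver.api import InferContext, ProtocolType

    ctx = InferContext("localhost:8000", ProtocolType.HTTP, "example_model")  # hypothetical model

    input_data = np.zeros((3, 224, 224), dtype=np.float32)  # placeholder input tensor

    # Submit the request without blocking; an identifier is returned immediately.
    request_id = ctx.async_run({"input": (input_data,)},
                               {"output": InferContext.ResultFormat.RAW},
                               batch_size=1)

    # ... other client-side work can overlap with the server-side computation ...

    # Block until this request completes, then read the output tensor.
    result = ctx.get_async_run_results(request_id, True)
    print(result["output"][0])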

Known Issues

Starting with the 18.09 release, the directory holding the inference server components has changed from /opt/inference_server to /opt/tensorrtserver, and the inference server executable name has changed from inference_server to trtserver.