TensorRT Inference Server Release 18.11 Beta

The TensorRT inference server container image, previously referred to as the Inference Server container image, release 18.11, is available as a beta release.

Contents of the TensorRT inference server

This container image contains the TensorRT inference server executable in /opt/tensorrtserver.

Driver Requirements

Release 18.11 is based on CUDA 10, which requires NVIDIA Driver release 410.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384. For more information, see CUDA Compatibility and Upgrades.

Key Features and Enhancements

This Inference Server release includes the following key features and enhancements.
  • The Inference Server container image version 18.11 is based on NVIDIA Inference Server 0.8.0 beta, TensorFlow 1.12.0-rc2, and Caffe2 0.8.2.
  • Models may now be added to and removed from the model repository without requiring an inference server restart.
  • Fixed an issue with models that don’t support batching. For such models, set max_batch_size to 0 in the model configuration (a sketch follows this list).
  • Added a metric to indicate GPU energy consumption.
  • Latest version of NCCL 2.3.7.
  • Latest version of NVIDIA cuDNN 7.4.1.
  • Latest version of TensorRT 5.0.2.
  • Ubuntu 16.04 with October 2018 updates.
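
As context for the model repository and max_batch_size items above, the following is a minimal sketch of one model repository entry and its config.pbtxt, assuming a TensorFlow GraphDef model. The repository path, model name, tensor names, and shapes are illustrative assumptions only and are not part of this release.

    <model-repository-path>/
      my_model/                # one subdirectory per model
        config.pbtxt           # model configuration
        1/                     # numeric version subdirectory
          model.graphdef       # TensorFlow GraphDef file

    # config.pbtxt for a model that does not support batching
    name: "my_model"
    platform: "tensorflow_graphdef"
    max_batch_size: 0          # model does not support batching
    input [
      {
        name: "input_tensor"
        data_type: TYPE_FP32
        dims: [ 1, 224, 224, 3 ]   # full input shape expected by the model
      }
    ]
    output [
      {
        name: "output_tensor"
        data_type: TYPE_FP32
        dims: [ 1, 1000 ]
      }
    ]

Because the model repository can now be modified while the server is running, adding or removing a model directory such as my_model/ is intended to take effect without restarting the inference server.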

Known Issues

This is a beta release of the Inference Server. All features are expected to be available; however, some aspects of functionality and performance will likely be limited compared to a non-beta release.