TensorRT inference server Release 18.05 Beta

The NVIDIA container image of the TensorRT inference server, release 18.05, is available as a beta release.

Contents of the TensorRT inference server

This container image contains the TensorRT inference server executable in /opt/inference_server.

The container also includes the Ubuntu 16.04 base image with April 2018 updates and TensorFlow 1.7.0 (see Key Features and Enhancements below).

Driver Requirements

Release 18.05 is based on CUDA 9, which requires NVIDIA Driver release 384.xx.
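A minimal Python sketch for checking this requirement before pulling the container is shown below. It only queries the installed driver branch with nvidia-smi; the 384 threshold comes from the requirement above, and the comparison against the major version number is an assumption for illustration.

    # Sketch: confirm the host NVIDIA driver is from the 384.xx branch required
    # by CUDA 9 containers. Assumes nvidia-smi is on PATH.
    import subprocess

    REQUIRED_BRANCH = 384  # driver branch stated in the release notes

    def driver_version():
        """Return the installed NVIDIA driver version string, e.g. '384.145'."""
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]
        )
        # One line per GPU; all GPUs share the same driver, so take the first.
        return out.decode().splitlines()[0].strip()

    if __name__ == "__main__":
        version = driver_version()
        major = int(version.split(".")[0])
        if major < REQUIRED_BRANCH:
            raise SystemExit(
                "Driver %s found; release 18.05 requires an NVIDIA Driver "
                "release 384.xx." % version
            )
        print("Driver %s satisfies the requirement." % version)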

Key Features and Enhancements

This TensorRT inference server release includes the following key features and enhancements.
  • The TensorRT inference server container image version 18.05 is based on NVIDIA Inference Server 0.2.0 Beta and TensorFlow 1.7.0.
  • Multiple model support. The Inference Server can manage any number and mix of TensorRT and TensorFlow models (limited by system disk and memory resources); see the client sketch after this list.
  • TensorFlow to TensorRT integrated model support. The Inference Server can manage TensorFlow models that have been optimized with TensorRT.
  • Multi-GPU support. The Inference Server can distribute inferencing across all system GPUs. Systems with heterogeneous GPUs are supported.
  • Multi-tenancy support. Multiple models (or multiple instances of the same model) can run simultaneously on the same GPU.
  • Batching support.
  • Ubuntu 16.04 with April 2018 updates
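
The sketch below illustrates the client side of the multiple-model and batching features listed above. The server address, port, endpoint path, model names, and request payload layout are all assumptions made for illustration; they are not taken from the 0.2.0 Beta API documentation.

    # Illustrative sketch only: URL scheme, port, model names, and payload layout
    # below are assumptions, not the documented 0.2.0 Beta client API. It shows the
    # client-side shape of batched requests against two models managed by one server.
    import json
    import urllib.request

    SERVER = "http://localhost:8000"                   # assumed server address
    MODELS = ["resnet50_tensorflow", "resnet50_trt"]   # hypothetical model names

    def infer(model_name, batch):
        """Send one batched request to the named model and return the parsed reply."""
        url = "%s/api/infer/%s" % (SERVER, model_name)  # assumed endpoint path
        body = json.dumps({"batch_size": len(batch), "inputs": batch}).encode("utf-8")
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    if __name__ == "__main__":
        # One batch of two placeholder inputs, sent to each managed model in turn.
        batch = [[0.0] * 224 * 224 * 3, [1.0] * 224 * 224 * 3]
        for model in MODELS:
            result = infer(model, batch)
            print(model, "->", result)

Because the server can run multiple models (or multiple instances of one model) on the same GPU, the two requests above can be serviced concurrently by a single server process.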

Known Issues

There are no known issues in this release.