TensorRT Inference Server Release 18.06 Beta

The NVIDIA container image of the TensorRT Inference Server, release 18.06, is available as a beta release.

Contents of the TensorRT Inference Server

This container image contains the TensorRT Inference Server executable in /opt/inference_server.

Driver Requirements

Release 18.06 is based on CUDA 9, which requires NVIDIA Driver release 384.xx.

Key Features and Enhancements

This TensorRT inference server release includes the following key features and enhancements.
  • The TensorRT Inference Server container image version 18.06 is based on NVIDIA Inference Server 0.3.0 Beta, TensorFlow 1.8.0, and Caffe2 0.8.1.
  • Support added for Caffe2 NetDef models.
  • Support added for CPU-only servers in addition to servers that have one or more GPUs. The Inference Server can simultaneously use both CPUs and GPUs for inferencing.
  • Logging format and control is unified across all inferencing backends: TensorFlow, TensorRT, and Caffe2.
  • The inference server exits gracefully upon receiving SIGTERM or SIGINT. Any in-flight inferences are allowed to complete before exiting, subject to a timeout (a sketch of this drain-then-exit pattern appears after this list).
  • Server status is enhanced to report the readiness and availability of the server and of each model (and model version); see the status-query example after this list.
  • Ubuntu 16.04 with May 2018 updates
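
The graceful-shutdown behavior noted above follows a common drain-then-exit pattern: stop accepting new requests, let in-flight work finish, and give up after a timeout. The Python sketch below illustrates that pattern only; it is not the inference server's actual implementation, and the 30-second grace period is an assumed value, not a documented server setting.

import signal
import threading
import time

# Minimal sketch of drain-then-exit shutdown; illustrative only, not the
# inference server's actual code. GRACE_PERIOD_S is an assumed value.
GRACE_PERIOD_S = 30
shutdown_requested = threading.Event()
inflight = 0                       # count of inferences currently executing
inflight_lock = threading.Lock()

def handle_signal(signum, frame):
    # On SIGTERM/SIGINT: stop accepting new requests and begin draining.
    shutdown_requested.set()

signal.signal(signal.SIGTERM, handle_signal)
signal.signal(signal.SIGINT, handle_signal)

def drain_and_exit():
    # Wait for in-flight inferences to finish, but no longer than the timeout.
    deadline = time.monotonic() + GRACE_PERIOD_S
    while time.monotonic() < deadline:
        with inflight_lock:
            if inflight == 0:
                return True        # all in-flight work completed
        time.sleep(0.1)
    return False                   # timeout expired; exit anyway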
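
The enhanced server status can be queried over the server's HTTP interface. The sketch below assumes the HTTP service listens on localhost port 8000 and exposes status at /api/status (with per-model status at /api/status/<model name>); verify both against the documentation shipped with this release. The model name my_model is a placeholder.

import requests  # third-party library: pip install requests

# Assumed endpoint and port; check the release documentation before use.
SERVER = "http://localhost:8000"

def server_status(model_name=None):
    """Fetch overall server status, or the status of a single model."""
    path = "/api/status" if model_name is None else f"/api/status/{model_name}"
    resp = requests.get(SERVER + path, timeout=5.0)
    resp.raise_for_status()
    return resp.text  # readiness/availability of the server and models

if __name__ == "__main__":
    print(server_status())             # status of the server and all models
    print(server_status("my_model"))   # "my_model" is a placeholder name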

Known Issues

There are no known issues in this release.