Accelerating TensorFlow 1.9 With TensorRT 4.0.1 Using The 18.08 Container

These release notes are for accelerating TensorFlow 1.9 with TensorRT version 4.0.1 using the TensorFlow 18.08 container. For specific details about TensorRT, see the TensorRT 4.0.1 Release Notes.

Key Features and Enhancements

This release includes the following key features and enhancements.
  • TensorRT conversion has been integrated into an optimization pass. The tensorflow/contrib/tensorrt/test/test_tftrt.py script includes an example showing how to use the optimization pass; a minimal conversion sketch also appears below.
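
The conversion can also be invoked directly through the standalone tensorflow.contrib.tensorrt API rather than the optimization pass. The sketch below uses that standalone API; the frozen-graph path, output node name, batch size, and workspace size are illustrative placeholders, not values prescribed by this release.

  # Sketch only: assumes TensorFlow 1.9 with tensorflow.contrib.tensorrt,
  # as provided in the 18.08 container. Paths and node names are examples.
  import tensorflow as tf
  import tensorflow.contrib.tensorrt as trt

  # Load a frozen GraphDef whose input shapes are static except the batch dimension.
  with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
      frozen_graph = tf.GraphDef()
      frozen_graph.ParseFromString(f.read())

  # Replace TensorRT-compatible subgraphs with TensorRT engine ops.
  trt_graph = trt.create_inference_graph(
      input_graph_def=frozen_graph,
      outputs=['logits'],                  # output node name(s) of the model
      max_batch_size=8,                    # fixed at conversion time
      max_workspace_size_bytes=1 << 30,    # TensorRT workspace size (1 GB here)
      precision_mode='FP16')               # 'FP32' or 'FP16'; the pass does not support INT8

  # Import the converted graph and run inference as usual.
  with tf.Graph().as_default():
      tf.import_graph_def(trt_graph, name='')
      with tf.Session() as sess:
          logits = sess.graph.get_tensor_by_name('logits:0')
          # sess.run(logits, feed_dict={'input:0': batch})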

Compatibility

Limitations Of Accelerating TensorFlow With TensorRT

You may experience the following limitations after accelerating TensorFlow 1.9 with TensorRT 4.0.1:
  • TensorRT conversion relies on static shape inference; the frozen graph must provide explicit dimensions for every axis other than the first (batch) dimension (see the sketch after this list).

  • The batch size for converted TensorRT engines is fixed at conversion time. Inference can only run with a batch size smaller than the specified value.

  • Currently supported models are limited to CNNs. Object detection models and RNNs are not yet supported.

  • The current optimization pass does not yet support INT8.
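
As a concrete illustration of the static-shape and batch-size constraints above, the following sketch defines an input whose only undefined dimension is the batch dimension and runs inference with a batch smaller than an assumed max_batch_size of 8; the shapes, layer, and sizes are hypothetical.

  # Sketch only: all dimensions except the first (batch) dimension are explicit.
  import numpy as np
  import tensorflow as tf

  images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='input')
  logits = tf.layers.conv2d(images, filters=10, kernel_size=3)  # stand-in for a CNN

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      # If the engine were built with max_batch_size=8, keep batches below that value.
      batch = np.zeros((4, 224, 224, 3), dtype=np.float32)
      sess.run(logits, feed_dict={images: batch})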

Known Issues

  • Input tensors are required to have rank 4 for quantization mode (INT8 precision).
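
For reference, a rank-4 (NHWC) input such as the hypothetical placeholder below satisfies this requirement; the dimensions are examples only.

  # Sketch only: quantization (INT8) mode expects rank-4 input tensors.
  import tensorflow as tf
  images = tf.placeholder(tf.float32, shape=[8, 224, 224, 3], name='input')  # rank 4: N, H, W, C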