TensorRT Release 22.04

The NVIDIA container image for TensorRT, release 22.04, is available on NGC.

Contents of the TensorRT container

This container includes the following:
  • The TensorRT C++ samples and C++ API documentation.
    • The samples can be built by running make in the /workspace/tensorrt/samples directory.
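      For example:
      cd /workspace/tensorrt/samples
      make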
    • The resulting executables are in the /workspace/tensorrt/bin directory.
    • The C++ API documentation is in the /workspace/tensorrt/doc/cpp directory.
  • The TensorRT Python samples and Python API documentation.
    • The Python samples are in the /workspace/tensorrt/samples/python directory.

      Refer to the respective README documents for more information.

    • Many Python samples can be run by using python <script.py> -d /workspace/tensorrt/data.
      For example:
      python onnx_resnet50.py -d /workspace/tensorrt/data
    • The Python API documentation is in the /workspace/tensorrt/doc/python directory; a minimal usage sketch of the API appears after this list.
  • TensorRT 8.2.4.2.
    The ONNX parser and plug-in libraries that are bundled with this container are built from TensorRT Open Source Software.
Here are the major updates to the 22.04 TensorRT Open Source Software release:
  • Fixed bugs in and refactored the PyramidROIAlign plugin.
  • Fixed a crash in the MultilevelCropAndResize plugin on Windows.
  • Added a Detectron2 Mask R-CNN R50-FPN Python sample.
  • Removed sampleNMT.
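
As an example of exercising the bundled ONNX parser through the Python API, here is a minimal sketch; the model path is a placeholder, so substitute any ONNX file (for example, one obtained under /workspace/tensorrt/data):

import tensorrt as trt

# Create a logger, builder, and an explicit-batch network definition.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse an ONNX model; the path below is a placeholder.
with open("/workspace/tensorrt/data/resnet50/ResNet50.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Build a serialized engine using the TensorRT 8.2 builder-config API.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB of build workspace
engine_bytes = builder.build_serialized_network(network, config)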

Driver Requirements

Release 22.04 is based on CUDA 11.6.2, which requires NVIDIA Driver release 510 or later. However, if you are running on a Data Center GPU (for example, T4 or any other Tesla board), use NVIDIA driver release 418.40 (or later R418), 440.33 (or later R440), 450.51 (or later R450), 460.27 (or later R460), or 470.57 (or later R470). The CUDA driver's compatibility package only supports particular drivers. For a complete list of supported drivers, see CUDA Application Compatibility. For more information, see CUDA Compatibility and Upgrades and NVIDIA CUDA and Drivers Support.
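
One way to check the driver version installed on the host is with nvidia-smi, which ships with the driver:

nvidia-smi --query-gpu=driver_version --format=csv,noheader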

GPU Requirements

Release 22.04 supports CUDA compute capability 3.5 and later. This corresponds to GPUs in the Kepler, Maxwell, Pascal, Volta, Turing, and NVIDIA Ampere Architecture GPU families. For a list of GPUs to which this compute capability corresponds, see CUDA GPUs. For additional support details, see the Deep Learning Frameworks Support Matrix.
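
To check the compute capability of the GPUs that are visible to the container, here is a minimal Python sketch; it assumes the nvidia-ml-py (pynvml) package is available, which can be installed with pip install nvidia-ml-py:

import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
    # Release 22.04 requires compute capability 3.5 or later.
    print(f"GPU {i}: compute capability {major}.{minor}")
pynvml.nvmlShutdown()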

Key Features and Enhancements

This TensorRT container release includes the following key features and enhancements.
  • TensorRT container image version 22.04 is based on TensorRT 8.2.4.2.

    For a list of the features and enhancements that were introduced in TensorRT 8.2.4.2, refer to the TensorRT 8.2.4.2 release notes. A quick way to verify the version inside a running container is shown after this list.

  • Ubuntu 20.04 with March 2022 updates.
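
To verify the TensorRT version bundled in the container:

python -c "import tensorrt; print(tensorrt.__version__)"

This prints 8.2.4.2 for this release.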

Announcements

  • Starting with the 21.12 release, a beta version of the TensorRT container is available for the Arm SBSA platform.

    For example, when you pull the nvcr.io/nvidia/tensorrt:22.04-py3 Docker image on an Arm SBSA machine, the Arm-specific image is automatically fetched.
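
    The pull command is the same on both platforms:
    docker pull nvcr.io/nvidia/tensorrt:22.04-py3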

  • NVIDIA Deep Learning Profiler (DLProf) v1.8, which was included in the 21.12 container, was the last release of DLProf.

    Starting with the 22.01 container, DLProf is no longer included. It can still be installed manually by using a pip wheel from the nvidia-pyindex.
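
    For example (the package name follows the published DLProf install instructions; verify it against the DLProf documentation):
    pip install nvidia-pyindex
    pip install nvidia-dlprof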

Obtaining Missing Data Files

Some samples require data files that are not included in the TensorRT container because of licensing restrictions, or because they are too large. For each such sample, a README.md file in the corresponding source directory provides information about how to obtain the necessary data files.

Installing Required Python Modules

  • To complete some of the samples, you might need to first install missing Python modules and their dependencies.
  • To install them, run the /opt/tensorrt/python/python_setup.sh script.

Installing Open Source Components

A script has been added to clone, build, and replace the provided plug-in, the Caffe parser, and the ONNX parser libraries with the open source ones that are based on the 22.04 tag on the official TensorRT open source repository.

To install the open source components in the container, run the following script:

/opt/tensorrt/install_opensource.sh

For more information, see GitHub: TensorRT 22.04.

Known Issues

None.