Extending TensorRT with Custom Layers#

NVIDIA TensorRT supports many layers, and its functionality is continually extended; however, there can be cases in which the layers supported do not cater to a model’s specific needs. In such cases, TensorRT can be extended by implementing custom layers, often called plugins.

TensorRT contains standard plugins that can be loaded into your application. For a list of open-source plugins, refer to GitHub: TensorRT plugins.

To use standard TensorRT plugins in your application, the libnvinfer_plugin.so (nvinfer_plugin.dll on Windows) library must be loaded, and all plugins must be registered by calling initLibNvInferPlugins in your application code. For more information about these plugins, refer to the NvInferPlugin.h file.

If the standard plugins do not meet your needs, you can write and register your own.