TensorRT  6.0.0.6
nvinfer1::IPluginExt Class Reference  [abstract]

Plugin class for user-implemented layers.

#include <NvInferRTExt.h>

Inheritance diagram for nvinfer1::IPluginExt:
nvinfer1::IPlugin

Public Member Functions

virtual int getTensorRTVersion () const TRTNOEXCEPT
 Return the API version with which this plugin was built.
 
virtual bool supportsFormat (DataType type, PluginFormat format) const TRTNOEXCEPT=0
 Check format support.
 
virtual void configureWithFormat (const Dims *inputDims, int nbInputs, const Dims *outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) TRTNOEXCEPT=0
 Configure the layer.
 
- Public Member Functions inherited from nvinfer1::IPlugin
virtual int getNbOutputs () const TRTNOEXCEPT=0
 Get the number of outputs from the layer.
 
virtual Dims getOutputDimensions (int index, const Dims *inputs, int nbInputDims) TRTNOEXCEPT=0
 Get the dimension of an output tensor.
 
virtual int initialize () TRTNOEXCEPT=0
 Initialize the layer for execution. This is called when the engine is created.
 
virtual void terminate () TRTNOEXCEPT=0
 Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.
 
virtual size_t getWorkspaceSize (int maxBatchSize) const TRTNOEXCEPT=0
 Find the workspace size required by the layer.
 
virtual int enqueue (int batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream) TRTNOEXCEPT=0
 Execute the layer.
 
virtual size_t getSerializationSize () TRTNOEXCEPT=0
 Find the size of the serialization buffer required.
 
virtual void serialize (void *buffer) TRTNOEXCEPT=0
 Serialize the layer.
 

Protected Member Functions

void configure (const Dims *, int, const Dims *, int, int) _TENSORRT_FINAL TRTNOEXCEPT
 Derived classes should not implement this. In a C++11 API it would be override final.
 

Detailed Description

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.
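The ownership rule above means the application constructs the plugin object, hands a pointer to TensorRT, and must keep the object alive until the engine (and any execution contexts using it) are destroyed. The overall shape of such a class can be sketched as follows; note that Dims, DataType, and PluginFormat here are simplified stand-ins for illustration, not the real declarations from NvInferRTExt.h, and the class does not actually derive from nvinfer1::IPluginExt:

```cpp
#include <cassert>

// Simplified stand-ins for the TensorRT types; the real ones live in NvInferRTExt.h.
enum class DataType { kFLOAT, kHALF, kINT8 };
enum class PluginFormat { kNCHW, kNHWC };
struct Dims { int nbDims; int d[8]; };

// Skeleton mirroring the IPluginExt call sequence: the builder queries
// supportsFormat(), fixes its choice via configureWithFormat(), then calls
// initialize() when the engine is created and terminate() when destroyed.
class IdentityPlugin {
public:
    int getNbOutputs() const { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) {
        return inputs[0];                     // identity layer: output shape == input shape
    }

    bool supportsFormat(DataType type, PluginFormat format) const {
        return type == DataType::kFLOAT && format == PluginFormat::kNCHW;
    }

    void configureWithFormat(const Dims* inputDims, int nbInputs,
                             const Dims* outputDims, int nbOutputs,
                             DataType type, PluginFormat format, int maxBatchSize) {
        inputDims_ = inputDims[0];            // remember what the builder chose
        type_ = type;
        maxBatchSize_ = maxBatchSize;
    }

    int initialize() { return 0; }            // acquire resources; 0 == success
    void terminate() {}                       // release them

private:
    Dims inputDims_{};
    DataType type_{DataType::kFLOAT};
    int maxBatchSize_{0};
};
```

In the real API these methods are virtual overrides of nvinfer1::IPluginExt, and the object must outlive the ICudaEngine that uses it.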

Member Function Documentation

virtual void nvinfer1::IPluginExt::configureWithFormat ( const Dims *  inputDims,
int  nbInputs,
const Dims *  outputDims,
int  nbOutputs,
DataType  type,
PluginFormat  format,
int  maxBatchSize 
)
pure virtual

Configure the layer.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size.

Parameters
inputDims     The input tensor dimensions.
nbInputs      The number of inputs.
outputDims    The output tensor dimensions.
nbOutputs     The number of outputs.
type          The data type selected for the engine.
format        The format selected for the engine.
maxBatchSize  The maximum batch size.

The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions).
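Because the dimensions passed to configureWithFormat() exclude the batch dimension, a plugin that sizes its buffers here must multiply by maxBatchSize itself. A hedged illustration of that arithmetic, using a stand-in Dims struct rather than the TensorRT definition:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for nvinfer1::Dims; illustration only.
struct Dims { int nbDims; int d[8]; };

// Element count of one sample. The dims are CHW (no batch dimension),
// so this is the per-sample volume.
size_t volume(const Dims& d) {
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i) v *= static_cast<size_t>(d.d[i]);
    return v;
}

// Total elements a buffer must hold for the largest batch the builder
// announced via maxBatchSize.
size_t elementsForBatch(const Dims& chw, int maxBatchSize) {
    return volume(chw) * static_cast<size_t>(maxBatchSize);
}
```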

virtual int nvinfer1::IPluginExt::getTensorRTVersion ( ) const
inline, virtual

Return the API version with which this plugin was built.

Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.

virtual bool nvinfer1::IPluginExt::supportsFormat ( DataType  type,
PluginFormat  format 
) const
pure virtual

Check format support.

Parameters
type    DataType requested.
format  PluginFormat requested.
Returns
true if the plugin supports the type-format combination.

This function is called by the implementations of INetworkDefinition, IBuilder, and ICudaEngine. In particular, it is called when creating an engine and when deserializing an engine.
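A supportsFormat() implementation typically whitelists the type/format pairs its kernels can handle; the builder will then only pass one of those combinations on to configureWithFormat(). A sketch of that pattern, with simplified stand-in enums rather than the TensorRT definitions:

```cpp
#include <cassert>

// Stand-ins for nvinfer1::DataType and nvinfer1::PluginFormat; illustration only.
enum class DataType { kFLOAT, kHALF, kINT8, kINT32 };
enum class PluginFormat { kNCHW, kNC2HW2, kNHWC8 };

// Accept FP32 in linear NCHW and FP16 in a vectorized layout; reject
// everything else so the builder considers other combinations instead.
bool supportsFormat(DataType type, PluginFormat format) {
    return (type == DataType::kFLOAT && format == PluginFormat::kNCHW) ||
           (type == DataType::kHALF  && format == PluginFormat::kNC2HW2);
}
```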


The documentation for this class was generated from the following file:
NvInferRTExt.h