TensorRT  7.1.3.0
nvinfer1::IPluginV2Ext Class Reference (abstract)

Plugin class for user-implemented layers. More...

#include <NvInferRuntimeCommon.h>

Inheritance diagram for nvinfer1::IPluginV2Ext:
Inherits nvinfer1::IPluginV2; inherited by nvinfer1::IPluginV2DynamicExt and nvinfer1::IPluginV2IOExt.

Public Member Functions

virtual nvinfer1::DataType getOutputDataType (int index, const nvinfer1::DataType *inputTypes, int nbInputs) const =0
 Return the DataType of the plugin output at the requested index. The default behavior should be to return the type of the first input, or DataType::kFLOAT if the layer has no inputs. The returned data type must have a format that is supported by the plugin. More...
 
virtual bool isOutputBroadcastAcrossBatch (int outputIndex, const bool *inputIsBroadcasted, int nbInputs) const =0
 Return true if output tensor is broadcast across a batch. More...
 
virtual bool canBroadcastInputAcrossBatch (int inputIndex) const =0
 Return true if plugin can use input that is broadcast across batch without replication. More...
 
virtual void configurePlugin (const Dims *inputDims, int nbInputs, const Dims *outputDims, int nbOutputs, const DataType *inputTypes, const DataType *outputTypes, const bool *inputIsBroadcast, const bool *outputIsBroadcast, PluginFormat floatFormat, int maxBatchSize)=0
 Configure the layer with input and output data types. More...
 
virtual void attachToContext (cudnnContext *, cublasContext *, IGpuAllocator *)
 Attach the plugin object to an execution context and grant the plugin the access to some context resource. More...
 
virtual void detachFromContext ()
 Detach the plugin object from its execution context. More...
 
virtual IPluginV2Ext * clone () const _TENSORRT_OVERRIDE=0
 Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object should also be pre-configured. The returned object should allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resource (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication.
 
- Public Member Functions inherited from nvinfer1::IPluginV2
virtual const char * getPluginType () const =0
 Return the plugin type. Should match the plugin name returned by the corresponding plugin creator.
 
virtual const char * getPluginVersion () const =0
 Return the plugin version. Should match the plugin version returned by the corresponding plugin creator.
 
virtual int getNbOutputs () const =0
 Get the number of outputs from the layer. More...
 
virtual Dims getOutputDimensions (int index, const Dims *inputs, int nbInputDims)=0
 Get the dimension of an output tensor. More...
 
virtual bool supportsFormat (DataType type, PluginFormat format) const =0
 Check format support. More...
 
virtual int initialize ()=0
 Initialize the layer for execution. This is called when the engine is created. More...
 
virtual void terminate ()=0
 Release resources acquired during plugin layer initialization. This is called when the engine is destroyed. More...
 
virtual size_t getWorkspaceSize (int maxBatchSize) const =0
 Find the workspace size required by the layer. More...
 
virtual int enqueue (int batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream)=0
 Execute the layer. More...
 
virtual size_t getSerializationSize () const =0
 Find the size of the serialization buffer required. More...
 
virtual void serialize (void *buffer) const =0
 Serialize the layer. More...
 
virtual void destroy ()=0
 Destroy the plugin object. This will be called when the network, builder or engine is destroyed.
 
virtual void setPluginNamespace (const char *pluginNamespace)=0
 Set the namespace that this plugin object belongs to. Ideally, all plugin objects from the same plugin library should have the same namespace.
 
virtual const char * getPluginNamespace () const =0
 Return the namespace of the plugin object.
 

Protected Member Functions

int getTensorRTVersion () const _TENSORRT_OVERRIDE
 Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2. More...
 
void configureWithFormat (const Dims *, int, const Dims *, int, DataType, PluginFormat, int) _TENSORRT_OVERRIDE
 Derived classes should not implement this. In a C++11 API it would be override final.
 

Detailed Description

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. This interface provides additional capabilities to the IPluginV2 interface by supporting different output data types and broadcast across batch.

See also
IPluginV2
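
As a rough orientation only (the class name FooPlugin is illustrative and not part of TensorRT), a plugin deriving from this interface declares the IPluginV2Ext-specific overrides in addition to the inherited IPluginV2 methods:

    #include <NvInferRuntimeCommon.h>

    // Illustrative skeleton: only the members that IPluginV2Ext adds on top of
    // IPluginV2 are shown. The inherited IPluginV2 methods (getPluginType,
    // enqueue, serialize, ...) must also be implemented before the class can
    // be instantiated; as written, FooPlugin is still abstract.
    class FooPlugin : public nvinfer1::IPluginV2Ext
    {
    public:
        nvinfer1::DataType getOutputDataType(int index, const nvinfer1::DataType* inputTypes,
                                             int nbInputs) const override;
        bool isOutputBroadcastAcrossBatch(int outputIndex, const bool* inputIsBroadcasted,
                                          int nbInputs) const override;
        bool canBroadcastInputAcrossBatch(int inputIndex) const override;
        void configurePlugin(const nvinfer1::Dims* inputDims, int nbInputs,
                             const nvinfer1::Dims* outputDims, int nbOutputs,
                             const nvinfer1::DataType* inputTypes, const nvinfer1::DataType* outputTypes,
                             const bool* inputIsBroadcast, const bool* outputIsBroadcast,
                             nvinfer1::PluginFormat floatFormat, int maxBatchSize) override;
        void attachToContext(cudnnContext* cudnn, cublasContext* cublas,
                             nvinfer1::IGpuAllocator* allocator) override;
        void detachFromContext() override;
        nvinfer1::IPluginV2Ext* clone() const override;
    };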

Member Function Documentation

◆ attachToContext()

virtual void nvinfer1::IPluginV2Ext::attachToContext ( cudnnContext *  ,
cublasContext *  ,
IGpuAllocator *  
)
inline virtual

Attach the plugin object to an execution context and grant the plugin the access to some context resource.

Parameters
cudnn - The cuDNN context handle of the execution context
cublas - The cuBLAS context handle of the execution context
allocator - The allocator used by the execution context

This function is called automatically for each plugin when a new execution context is created. If the plugin needs per-context resources, they can be allocated here. The plugin can also obtain the context-owned cuDNN and cuBLAS handles here.
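
As a hedged sketch (the member fields mCudnn, mCublas, and mAllocator are illustrative, not part of the API), an override inside an IPluginV2Ext-derived class typically just records the borrowed handles for later use:

    void attachToContext(cudnnContext* cudnn, cublasContext* cublas,
                         nvinfer1::IGpuAllocator* allocator) override
    {
        // The handles are owned by the execution context; keep pointers only
        // and never destroy them from the plugin.
        mCudnn = cudnn;
        mCublas = cublas;
        mAllocator = allocator;
        // Per-context scratch memory could also be allocated here via mAllocator.
    }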

◆ canBroadcastInputAcrossBatch()

virtual bool nvinfer1::IPluginV2Ext::canBroadcastInputAcrossBatch ( int  inputIndex) const
pure virtual

Return true if plugin can use input that is broadcast across batch without replication.

Parameters
inputIndex - Index of input that could be broadcast.

For each input whose tensor is semantically broadcast across a batch, TensorRT calls this method before calling configurePlugin. If canBroadcastInputAcrossBatch returns true, TensorRT will not replicate the input tensor; i.e., there will be a single copy that the plugin should share across the batch. If it returns false, TensorRT will replicate the input tensor so that it appears like a non-broadcasted tensor.

This method is called only for inputs that can be broadcast.

Implemented in nvinfer1::IPluginV2DynamicExt.
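
For illustration only (whether returning true is safe depends entirely on the plugin's kernel), an elementwise-style plugin that only reads its inputs could accept the shared copy:

    bool canBroadcastInputAcrossBatch(int inputIndex) const override
    {
        // The kernel only reads this input, so a single shared copy across the
        // batch is acceptable and TensorRT can skip replicating it.
        return true;
    }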

◆ configurePlugin()

virtual void nvinfer1::IPluginV2Ext::configurePlugin ( const Dims *  inputDims,
int  nbInputs,
const Dims *  outputDims,
int  nbOutputs,
const DataType *  inputTypes,
const DataType *  outputTypes,
const bool *  inputIsBroadcast,
const bool *  outputIsBroadcast,
PluginFormat  floatFormat,
int  maxBatchSize 
)
pure virtual

Configure the layer with input and output data types.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, data types and maximum batch size.

Parameters
inputDims - The input tensor dimensions.
nbInputs - The number of inputs.
outputDims - The output tensor dimensions.
nbOutputs - The number of outputs.
inputTypes - The data types selected for the plugin inputs.
outputTypes - The data types selected for the plugin outputs.
inputIsBroadcast - True for each input that the plugin must broadcast across the batch.
outputIsBroadcast - True for each output that TensorRT will broadcast across the batch.
floatFormat - The format selected for the engine for the floating point inputs/outputs.
maxBatchSize - The maximum batch size.

The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions). When inputIsBroadcast or outputIsBroadcast is true, the outermost batch size for that input or output should be treated as if it is one. inputIsBroadcast[i] is true only if the input is semantically broadcast across the batch and canBroadcastInputAcrossBatch(i) returned true. outputIsBroadcast[i] is true only if isOutputBroadcastAcrossBatch(i) returned true.

Warning
For the floatFormat field, the values PluginFormat::kCHW4, PluginFormat::kCHW16, and PluginFormat::kCHW32 will not be passed in; this is to keep backward compatibility with the TensorRT 5.x series. Use IPluginV2IOExt or IPluginV2DynamicExt for other PluginFormats.

Implemented in nvinfer1::IPluginV2IOExt, and nvinfer1::IPluginV2DynamicExt.
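
A minimal sketch of an override inside an IPluginV2Ext-derived class (the members mInputDims, mDataType, and mMaxBatchSize are illustrative) that records the negotiated configuration for later use by enqueue() and getWorkspaceSize():

    void configurePlugin(const nvinfer1::Dims* inputDims, int nbInputs,
                         const nvinfer1::Dims* outputDims, int nbOutputs,
                         const nvinfer1::DataType* inputTypes, const nvinfer1::DataType* outputTypes,
                         const bool* inputIsBroadcast, const bool* outputIsBroadcast,
                         nvinfer1::PluginFormat floatFormat, int maxBatchSize) override
    {
        // mInputDims is assumed to be a std::vector<nvinfer1::Dims> member.
        // Dimensions passed here exclude the outermost batch dimension.
        mInputDims.assign(inputDims, inputDims + nbInputs);
        mDataType = inputTypes[0];
        mMaxBatchSize = maxBatchSize;
    }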

◆ detachFromContext()

virtual void nvinfer1::IPluginV2Ext::detachFromContext ( )
inline virtual

Detach the plugin object from its execution context.

This function is called automatically for each plugin when an execution context is destroyed. If the plugin owns per-context resources, they can be released here.
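
A short sketch, assuming the illustrative members from the attachToContext() example above:

    void detachFromContext() override
    {
        // Release anything acquired in attachToContext() and drop the borrowed
        // handles; the context itself outlives this call.
        mCudnn = nullptr;
        mCublas = nullptr;
        mAllocator = nullptr;
    }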

◆ getOutputDataType()

virtual nvinfer1::DataType nvinfer1::IPluginV2Ext::getOutputDataType ( int  index,
const nvinfer1::DataType *  inputTypes,
int  nbInputs 
) const
pure virtual

Return the DataType of the plugin output at the requested index. The default behavior should be to return the type of the first input, or DataType::kFLOAT if the layer has no inputs. The returned data type must have a format that is supported by the plugin.

See also
supportsFormat()
Warning
DataType::kBOOL is not supported.
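
A minimal sketch of an override inside an IPluginV2Ext-derived class that follows the documented default behavior:

    nvinfer1::DataType getOutputDataType(int index, const nvinfer1::DataType* inputTypes,
                                         int nbInputs) const override
    {
        // Propagate the type of the first input, falling back to kFLOAT when
        // the layer has no inputs.
        return nbInputs > 0 ? inputTypes[0] : nvinfer1::DataType::kFLOAT;
    }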

◆ getTensorRTVersion()

int nvinfer1::IPluginV2Ext::getTensorRTVersion ( ) const
inline protected virtual

Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2.

Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.

Reimplemented from nvinfer1::IPluginV2.

Reimplemented in nvinfer1::IPluginV2IOExt.

◆ isOutputBroadcastAcrossBatch()

virtual bool nvinfer1::IPluginV2Ext::isOutputBroadcastAcrossBatch ( int  outputIndex,
const bool *  inputIsBroadcasted,
int  nbInputs 
) const
pure virtual

Return true if output tensor is broadcast across a batch.

Parameters
outputIndex - The index of the output
inputIsBroadcasted - The ith element is true if the tensor for the ith input is broadcast across a batch.
nbInputs - The number of inputs

The values in inputIsBroadcasted refer to broadcasting at the semantic level, i.e. are unaffected by whether method canBroadcastInputAcrossBatch requests physical replication of the values.

Implemented in nvinfer1::IPluginV2DynamicExt.
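
For illustration only (the broadcast rule depends on the plugin's semantics), an elementwise-style plugin might report its output as broadcast only when every input is:

    bool isOutputBroadcastAcrossBatch(int outputIndex, const bool* inputIsBroadcasted,
                                      int nbInputs) const override
    {
        // The output can be broadcast across the batch only if every input
        // that feeds it is itself semantically broadcast.
        bool allBroadcast = true;
        for (int i = 0; i < nbInputs; ++i)
        {
            allBroadcast = allBroadcast && inputIsBroadcasted[i];
        }
        return allBroadcast;
    }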


The documentation for this class was generated from the following file: NvInferRuntimeCommon.h