nvinfer1::IPluginExt Class Reference [abstract]

Plugin class for user-implemented layers.

#include <NvInferRuntime.h>

Inheritance diagram for nvinfer1::IPluginExt:

Public Member Functions

virtual int32_t getTensorRTVersion () const
 Return the API version with which this plugin was built.
virtual bool supportsFormat (DataType type, PluginFormat format) const =0
 Check format support.
virtual void configureWithFormat (const Dims *inputDims, int32_t nbInputs, const Dims *outputDims, int32_t nbOutputs, DataType type, PluginFormat format, int32_t maxBatchSize)=0
 Configure the layer.
- Public Member Functions inherited from nvinfer1::IPlugin
virtual int32_t getNbOutputs () const =0
 Get the number of outputs from the layer.
virtual Dims getOutputDimensions (int32_t index, const Dims *inputs, int32_t nbInputDims)=0
 Get the dimension of an output tensor.
virtual int32_t initialize ()=0
 Initialize the layer for execution. This is called when the engine is created.
virtual void terminate ()=0
 Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.
virtual size_t getWorkspaceSize (int32_t maxBatchSize) const =0
 Find the workspace size required by the layer.
virtual int32_t enqueue (int32_t batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream)=0
 Execute the layer.
virtual size_t getSerializationSize ()=0
 Find the size of the serialization buffer required.
virtual void serialize (void *buffer)=0
 Serialize the layer.

Protected Member Functions

void configure (const Dims *, int32_t, const Dims *, int32_t, int32_t) _TENSORRT_FINAL
 Derived classes should not implement this. In a C++11 API it would be override final.

Detailed Description

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.

Member Function Documentation

◆ configureWithFormat()

virtual void nvinfer1::IPluginExt::configureWithFormat ( const Dims *  inputDims,
int32_t  nbInputs,
const Dims *  outputDims,
int32_t  nbOutputs,
DataType  type,
PluginFormat  format,
int32_t  maxBatchSize 
)
pure virtual

Configure the layer.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size.

Parameters
inputDims	The input tensor dimensions.
nbInputs	The number of inputs.
outputDims	The output tensor dimensions.
nbOutputs	The number of outputs.
type	The data type selected for the engine.
format	The format selected for the engine.
maxBatchSize	The maximum batch size.

The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions).

Warning
DataType::kBOOL is not supported.

◆ getTensorRTVersion()

virtual int32_t nvinfer1::IPluginExt::getTensorRTVersion ( ) const

Return the API version with which this plugin was built.

Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.

◆ supportsFormat()

virtual bool nvinfer1::IPluginExt::supportsFormat ( DataType  type,
PluginFormat  format 
) const
pure virtual

Check format support.

Parameters
type	DataType requested.
format	PluginFormat requested.

Returns
true if the plugin supports the type-format combination.

This function is called by the implementations of INetworkDefinition, IBuilder, and ICudaEngine. In particular, it is called when creating an engine and when deserializing an engine.

Warning
DataType::kBOOL is not supported.

The documentation for this class was generated from the following file:
NvInferRuntime.h