TensorRT 8.6.1
nvinfer1::IPluginV2Ext Class Reference

Plugin class for user-implemented layers.
#include <NvInferRuntimePlugin.h>
Public Member Functions

virtual nvinfer1::DataType getOutputDataType(int32_t index, nvinfer1::DataType const *inputTypes, int32_t nbInputs) const noexcept = 0
    Return the DataType of the plugin output at the requested index.

virtual bool isOutputBroadcastAcrossBatch(int32_t outputIndex, bool const *inputIsBroadcasted, int32_t nbInputs) const noexcept = 0
    Return true if the output tensor is broadcast across the batch.

virtual bool canBroadcastInputAcrossBatch(int32_t inputIndex) const noexcept = 0
    Return true if the plugin can use an input that is broadcast across the batch without replication.

virtual void configurePlugin(Dims const *inputDims, int32_t nbInputs, Dims const *outputDims, int32_t nbOutputs, DataType const *inputTypes, DataType const *outputTypes, bool const *inputIsBroadcast, bool const *outputIsBroadcast, PluginFormat floatFormat, int32_t maxBatchSize) noexcept = 0
    Configure the layer with input and output data types.

IPluginV2Ext() = default

~IPluginV2Ext() override = default

virtual void attachToContext(cudnnContext *, cublasContext *, IGpuAllocator *) noexcept
    Attach the plugin object to an execution context and grant the plugin access to some context resources.

virtual void detachFromContext() noexcept
    Detach the plugin object from its execution context.

IPluginV2Ext *clone() const noexcept override = 0
    Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object should also be pre-configured. The returned object should allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resources (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication.
Public Member Functions inherited from nvinfer1::IPluginV2
virtual AsciiChar const *getPluginType() const noexcept = 0
    Return the plugin type. Should match the plugin name returned by the corresponding plugin creator.

virtual AsciiChar const *getPluginVersion() const noexcept = 0
    Return the plugin version. Should match the plugin version returned by the corresponding plugin creator.

virtual int32_t getNbOutputs() const noexcept = 0
    Get the number of outputs from the layer.

virtual Dims getOutputDimensions(int32_t index, Dims const *inputs, int32_t nbInputDims) noexcept = 0
    Get the dimensions of an output tensor.

virtual bool supportsFormat(DataType type, PluginFormat format) const noexcept = 0
    Check format support.

virtual int32_t initialize() noexcept = 0
    Initialize the layer for execution. This is called when the engine is created.

virtual void terminate() noexcept = 0
    Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.

virtual size_t getWorkspaceSize(int32_t maxBatchSize) const noexcept = 0
    Find the workspace size required by the layer.

virtual int32_t enqueue(int32_t batchSize, void const *const *inputs, void *const *outputs, void *workspace, cudaStream_t stream) noexcept = 0
    Execute the layer.

virtual size_t getSerializationSize() const noexcept = 0
    Find the size of the serialization buffer required.

virtual void serialize(void *buffer) const noexcept = 0
    Serialize the layer.

virtual void destroy() noexcept = 0
    Destroy the plugin object. This will be called when the network, builder or engine is destroyed.

virtual void setPluginNamespace(AsciiChar const *pluginNamespace) noexcept = 0
    Set the namespace that this plugin object belongs to. Ideally, all plugin objects from the same plugin library should have the same namespace.

virtual AsciiChar const *getPluginNamespace() const noexcept = 0
    Return the namespace of the plugin object.
Protected Member Functions

int32_t getTensorRTVersion() const noexcept override
    Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2.

void configureWithFormat(Dims const *, int32_t, Dims const *, int32_t, DataType, PluginFormat, int32_t) noexcept override
    Derived classes should not implement this. In a C++11 API it would be override final.
Plugin class for user-implemented layers.
Plugins are a mechanism for applications to implement custom layers. This interface extends IPluginV2 with support for different output data types and for inputs and outputs that are broadcast across the batch.
IPluginV2Ext() = default

~IPluginV2Ext() override = default
virtual void attachToContext(cudnnContext *cudnn, cublasContext *cublas, IGpuAllocator *allocator) noexcept

Attach the plugin object to an execution context and grant the plugin access to some context resources.
Parameters
    cudnn      The cuDNN context handle of the execution context
    cublas     The cuBLAS context handle of the execution context
    allocator  The allocator used by the execution context
This function is called automatically for each plugin when a new execution context is created. If the context was created without resources, this method is not called until the resources are assigned. It is also called if new resources are assigned to the context.
If the plugin needs per-context resources, they can be allocated here. The plugin can also obtain the context-owned cuDNN and cuBLAS handles here.
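As a sketch of this lifecycle, a plugin might cache the context-owned handles on attach and forget them on detach. The empty structs below are illustrative stand-ins, not the real declarations, which come from cudnn.h, cublas_v2.h, and the NvInfer headers; the class and its members are hypothetical:

```cpp
// Self-contained sketch: empty stand-ins replace the real cudnnContext,
// cublasContext, and nvinfer1::IGpuAllocator types.
struct cudnnContext {};
struct cublasContext {};
struct IGpuAllocator {};

// Hypothetical plugin fragment: cache the context-owned handles on attach,
// release per-context state and drop the handles on detach.
class PluginContextState
{
public:
    void attachToContext(cudnnContext* cudnn, cublasContext* cublas,
                         IGpuAllocator* allocator) noexcept
    {
        // The handles are owned by the execution context; the plugin only
        // borrows them and must not destroy them.
        mCudnn = cudnn;
        mCublas = cublas;
        mAllocator = allocator;
    }

    void detachFromContext() noexcept
    {
        // Any per-context resources allocated in attachToContext() would be
        // freed here before dropping the handles.
        mCudnn = nullptr;
        mCublas = nullptr;
        mAllocator = nullptr;
    }

    bool hasContext() const noexcept
    {
        return mCudnn != nullptr && mCublas != nullptr && mAllocator != nullptr;
    }

private:
    cudnnContext* mCudnn{nullptr};
    cublasContext* mCublas{nullptr};
    IGpuAllocator* mAllocator{nullptr};
};
```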
virtual bool canBroadcastInputAcrossBatch(int32_t inputIndex) const noexcept = 0

Return true if the plugin can use an input that is broadcast across the batch without replication.
Parameters
    inputIndex  Index of input that could be broadcast.
For each input whose tensor is semantically broadcast across a batch, TensorRT calls this method before calling configurePlugin. If canBroadcastInputAcrossBatch returns true, TensorRT will not replicate the input tensor; i.e., there will be a single copy that the plugin should share across the batch. If it returns false, TensorRT will replicate the input tensor so that it appears like a non-broadcasted tensor.
This method is called only for inputs that can be broadcast.
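The practical consequence of returning true shows up at enqueue() time: a shared broadcast input has a single copy, so its per-sample stride is zero, while a replicated input advances by its per-sample volume. The helper functions below are illustrative only, not part of the TensorRT API:

```cpp
#include <cstddef>
#include <cstdint>

// Per-sample stride of an input inside enqueue(): zero when the plugin
// agreed to share a single broadcast copy across the batch, otherwise the
// per-sample element count.
std::size_t perSampleStride(bool sharedAcrossBatch, std::size_t sampleVolume) noexcept
{
    return sharedAcrossBatch ? 0u : sampleVolume;
}

// Pointer to sample n of an input, given the stride computed above.
float const* samplePtr(float const* base, std::size_t stride, int32_t n) noexcept
{
    return base + stride * static_cast<std::size_t>(n);
}
```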
IPluginV2Ext *clone() const noexcept override = 0
Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object should also be pre-configured. The returned object should allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resource (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication.
Implements nvinfer1::IPluginV2.
Implemented in nvinfer1::IPluginV2DynamicExt.
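A minimal sketch of this contract, assuming a hypothetical plugin class (names and fields are illustrative, not TensorRT API): parameters are copied into the clone, while the per-engine immutable weights are shared through a ref-counted handle rather than duplicated.

```cpp
#include <memory>
#include <vector>

// Hypothetical plugin illustrating the clone() contract described above.
class ScalePlugin
{
public:
    ScalePlugin(std::shared_ptr<const std::vector<float>> weights, float alpha)
        : mWeights(std::move(weights)), mAlpha(alpha)
    {
    }

    // Copies the configurable parameters; shares the immutable weights via
    // ref-counting instead of duplicating them.
    ScalePlugin* clone() const noexcept
    {
        return new ScalePlugin(mWeights, mAlpha);
    }

    float alpha() const noexcept { return mAlpha; }
    long weightRefCount() const noexcept { return mWeights.use_count(); }

private:
    std::shared_ptr<const std::vector<float>> mWeights; // shared, immutable
    float mAlpha;                                       // copied per clone
};
```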
virtual void configurePlugin(Dims const *inputDims, int32_t nbInputs, Dims const *outputDims, int32_t nbOutputs, DataType const *inputTypes, DataType const *outputTypes, bool const *inputIsBroadcast, bool const *outputIsBroadcast, PluginFormat floatFormat, int32_t maxBatchSize) noexcept = 0
Configure the layer with input and output data types.
This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, data types and maximum batch size.
Parameters
    inputDims          The input tensor dimensions.
    nbInputs           The number of inputs.
    outputDims         The output tensor dimensions.
    nbOutputs          The number of outputs.
    inputTypes         The data types selected for the plugin inputs.
    outputTypes        The data types selected for the plugin outputs.
    inputIsBroadcast   True for each input that the plugin must broadcast across the batch.
    outputIsBroadcast  True for each output that TensorRT will broadcast across the batch.
    floatFormat        The format selected for the engine for the floating point inputs/outputs.
    maxBatchSize       The maximum batch size.
The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions). When inputIsBroadcast or outputIsBroadcast is true, the outermost batch size for that input or output should be treated as if it is one. Index 'i' of inputIsBroadcast is true only if the input is semantically broadcast across the batch and calling canBroadcastInputAcrossBatch with argument 'i' returns true. Index 'i' of outputIsBroadcast is true only if calling isOutputBroadcastAcrossBatch with argument 'i' returns true.
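The broadcast rule above can be made concrete with a small helper: any tensor whose broadcast flag is set behaves as if its outermost batch size were one. The function name is illustrative only:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Effective batch size per tensor inside configurePlugin(): a broadcast
// tensor has a single copy shared across the batch, so it should be treated
// as if its batch size were one.
std::vector<int32_t> effectiveBatchSizes(bool const* isBroadcast, int32_t nbTensors,
                                         int32_t maxBatchSize)
{
    std::vector<int32_t> batch(static_cast<std::size_t>(nbTensors));
    for (int32_t i = 0; i < nbTensors; ++i)
    {
        batch[static_cast<std::size_t>(i)] = isBroadcast[i] ? 1 : maxBatchSize;
    }
    return batch;
}
```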
void configureWithFormat(Dims const *, int32_t, Dims const *, int32_t, DataType, PluginFormat, int32_t) noexcept override
Derived classes should not implement this. In a C++11 API it would be override final.
Implements nvinfer1::IPluginV2.
virtual void detachFromContext() noexcept
Detach the plugin object from its execution context.
This function is called automatically for each plugin when an execution context is destroyed or when context resources are unassigned from the context.
If the plugin owns per-context resources, they can be released here.
virtual nvinfer1::DataType getOutputDataType(int32_t index, nvinfer1::DataType const *inputTypes, int32_t nbInputs) const noexcept = 0
Return the DataType of the plugin output at the requested index.
The default behavior should be to return the type of the first input, or DataType::kFLOAT if the layer has no inputs. The returned data type must have a format that is supported by the plugin.
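This default policy can be sketched directly. The enum below is a self-contained stand-in for nvinfer1::DataType (the real enum lives in the NvInfer headers), and the free function is illustrative, not a TensorRT API:

```cpp
#include <cstdint>

// Stand-in for nvinfer1::DataType so the sketch compiles on its own.
enum class DataType : int32_t { kFLOAT, kHALF, kINT8, kINT32 };

// The default policy described above: the output takes the type of the
// first input, or kFLOAT when the layer has no inputs.
DataType defaultOutputDataType(DataType const* inputTypes, int32_t nbInputs) noexcept
{
    return nbInputs > 0 ? inputTypes[0] : DataType::kFLOAT;
}
```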
int32_t getTensorRTVersion() const noexcept override

Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2.
Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.
Reimplemented from nvinfer1::IPluginV2.
Reimplemented in nvinfer1::IPluginV2IOExt.
virtual bool isOutputBroadcastAcrossBatch(int32_t outputIndex, bool const *inputIsBroadcasted, int32_t nbInputs) const noexcept = 0

Return true if the output tensor is broadcast across the batch.
Parameters
    outputIndex         The index of the output
    inputIsBroadcasted  The ith element is true if the tensor for the ith input is broadcast across the batch.
    nbInputs            The number of inputs
The values in inputIsBroadcasted refer to broadcasting at the semantic level, i.e. are unaffected by whether method canBroadcastInputAcrossBatch requests physical replication of the values.
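One plausible policy, for an elementwise-style plugin, is that the output can be broadcast only when every input it depends on is semantically broadcast. This is an illustrative sketch of such a policy under that assumption; each plugin defines its own rule:

```cpp
#include <cstdint>

// Hypothetical isOutputBroadcastAcrossBatch() policy for an elementwise
// layer: the output is broadcast only if all inputs are broadcast.
bool outputBroadcastElementwise(bool const* inputIsBroadcasted, int32_t nbInputs) noexcept
{
    if (nbInputs == 0)
    {
        return false;
    }
    for (int32_t i = 0; i < nbInputs; ++i)
    {
        if (!inputIsBroadcasted[i])
        {
            return false;
        }
    }
    return true;
}
```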