TensorRT 10.0.0
nvinfer1::IPluginV2Ext Class Reference (abstract)

Plugin class for user-implemented layers. More...

#include <NvInferRuntimePlugin.h>

Inheritance diagram for nvinfer1::IPluginV2Ext:
Inherits nvinfer1::IPluginV2. Inherited by nvinfer1::IPluginV2DynamicExt and nvinfer1::IPluginV2IOExt.

Public Member Functions

virtual nvinfer1::DataType getOutputDataType (int32_t index, nvinfer1::DataType const *inputTypes, int32_t nbInputs) const noexcept=0
 Return the DataType of the plugin output at the requested index. More...
 
virtual TRT_DEPRECATED bool isOutputBroadcastAcrossBatch (int32_t outputIndex, bool const *inputIsBroadcasted, int32_t nbInputs) const noexcept=0
 Return true if the output tensor is broadcast across a batch. More...
 
virtual TRT_DEPRECATED bool canBroadcastInputAcrossBatch (int32_t inputIndex) const noexcept=0
 Return true if the plugin can use an input tensor that is broadcast across batch without replication. More...
 
virtual void configurePlugin (Dims const *inputDims, int32_t nbInputs, Dims const *outputDims, int32_t nbOutputs, DataType const *inputTypes, DataType const *outputTypes, bool const *inputIsBroadcast, bool const *outputIsBroadcast, PluginFormat floatFormat, int32_t maxBatchSize) noexcept=0
 Configure the layer with input and output data types. More...
 
 IPluginV2Ext ()=default
 
 ~IPluginV2Ext () override=default
 
virtual void attachToContext (cudnnContext *, cublasContext *, IGpuAllocator *) noexcept
 Attach the plugin object to an execution context and grant the plugin the access to some context resources. More...
 
virtual void detachFromContext () noexcept
 Detach the plugin object from its execution context. More...
 
virtual IPluginV2Ext * clone () const noexcept override=0
 Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object must also be pre-configured. The returned object must allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resource (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication. More...
 
- Public Member Functions inherited from nvinfer1::IPluginV2
virtual AsciiChar const * getPluginType () const noexcept=0
 Return the plugin type. Should match the plugin name returned by the corresponding plugin creator. More...
 
virtual AsciiChar const * getPluginVersion () const noexcept=0
 Return the plugin version. Should match the plugin version returned by the corresponding plugin creator. More...
 
virtual int32_t getNbOutputs () const noexcept=0
 Get the number of outputs from the layer. More...
 
virtual Dims getOutputDimensions (int32_t index, Dims const *inputs, int32_t nbInputDims) noexcept=0
 Get the dimension of an output tensor. More...
 
virtual bool supportsFormat (DataType type, PluginFormat format) const noexcept=0
 Check format support. More...
 
virtual int32_t initialize () noexcept=0
 Initialize the layer for execution. This is called when the engine is created. More...
 
virtual void terminate () noexcept=0
 Release resources acquired during plugin layer initialization. This is called when the engine is destroyed. More...
 
virtual size_t getWorkspaceSize (int32_t maxBatchSize) const noexcept=0
 Find the workspace size required by the layer. More...
 
virtual int32_t enqueue (int32_t batchSize, void const *const *inputs, void *const *outputs, void *workspace, cudaStream_t stream) noexcept=0
 Execute the layer. More...
 
virtual size_t getSerializationSize () const noexcept=0
 Find the size of the serialization buffer required to store the plugin configuration in a binary file. More...
 
virtual void serialize (void *buffer) const noexcept=0
 Serialize the layer. More...
 
virtual void destroy () noexcept=0
 Destroy the plugin object. This will be called when the network, builder or engine is destroyed. More...
 
virtual void setPluginNamespace (AsciiChar const *pluginNamespace) noexcept=0
 Set the namespace that this plugin object belongs to. Ideally, all plugin objects from the same plugin library must have the same namespace. More...
 
virtual AsciiChar const * getPluginNamespace () const noexcept=0
 Return the namespace of the plugin object. More...
 

Protected Member Functions

int32_t getTensorRTVersion () const noexcept override
 Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2. More...
 
void configureWithFormat (Dims const *, int32_t, Dims const *, int32_t, DataType, PluginFormat, int32_t) noexcept override
 Derived classes must not implement this. In a C++11 API it would be override final. More...
 

Detailed Description

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. This interface provides additional capabilities to the IPluginV2 interface by supporting different output data types and broadcast across batches.

See also
IPluginV2
Deprecated:
Deprecated in TensorRT 8.5. Implement IPluginV2DynamicExt or IPluginV2IOExt depending on your requirement.

Constructor & Destructor Documentation

◆ IPluginV2Ext()

nvinfer1::IPluginV2Ext::IPluginV2Ext ( )
default

◆ ~IPluginV2Ext()

nvinfer1::IPluginV2Ext::~IPluginV2Ext ( )
override default

Member Function Documentation

◆ attachToContext()

virtual void nvinfer1::IPluginV2Ext::attachToContext ( cudnnContext *  ,
cublasContext *  ,
IGpuAllocator *  
)
inline virtual noexcept

Attach the plugin object to an execution context and grant the plugin the access to some context resources.

Parameters
cudnn      The cuDNN context handle of the execution context. Will be a valid cuDNN context handle, or nullptr if TacticSource::kCUDNN is disabled.
cublas     The cuBLAS context handle of the execution context. Will be a valid cuBLAS context handle, or nullptr if TacticSource::kCUBLAS is disabled.
allocator  The allocator used by the execution context.

This function is called automatically for each plugin when a new execution context is created. If the context was created without resources, this method is not called until the resources are assigned. It is also called if new resources are assigned to the context.

If the plugin needs per-context resource, it can be allocated here. The plugin can also get context-owned cuDNN and cuBLAS context here.

Note
The TacticSource::kCUDNN and TacticSource::kCUBLAS flags are disabled by default. The allocator pointer is unique to each building or execution context instance having overlapping lifetimes. It can be used as a key to manage resources across plugin instances sharing the same context. Plugins attached to different contexts will have different handles as their execution will not overlap.
See also
TacticSources
getPluginCudnnHandle(void* executionContextIdentifier)
getPluginCublasHandle(void* executionContextIdentifier)
Note
In the automotive safety context, the cuDNN and cuBLAS parameters will be nullptr because cuDNN and cuBLAS are not used by the safe runtime.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.
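The bookkeeping a plugin typically does in this callback and its counterpart detachFromContext() can be sketched with opaque stand-in handle types; the real signature takes cudnnContext*, cublasContext* and IGpuAllocator*, which are replaced by void* here so the sketch stays self-contained, and ContextResources and both free functions are hypothetical names:

```cpp
#include <cassert>

// Hypothetical per-context resource record for a plugin. The handles may be
// nullptr when the corresponding TacticSource is disabled, so the plugin
// must not assume they are valid.
struct ContextResources {
    void* cudnn{nullptr};
    void* cublas{nullptr};
    void* allocator{nullptr};
    bool attached{false};
};

// Sketch of attachToContext(): remember the context-owned handles and the
// allocator so enqueue() can use them later.
void attachToContext(ContextResources& r, void* cudnn, void* cublas,
                     void* allocator) noexcept {
    r.cudnn = cudnn;        // may be nullptr if TacticSource::kCUDNN is off
    r.cublas = cublas;      // may be nullptr if TacticSource::kCUBLAS is off
    r.allocator = allocator;
    r.attached = true;
}

// Sketch of detachFromContext(): release/forget per-context resources.
void detachFromContext(ContextResources& r) noexcept {
    r = ContextResources{};
}
```

Because the allocator pointer is unique to each context with an overlapping lifetime, a real plugin could also use it as a key when sharing resources across plugin instances attached to the same context.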

◆ canBroadcastInputAcrossBatch()

virtual TRT_DEPRECATED bool nvinfer1::IPluginV2Ext::canBroadcastInputAcrossBatch ( int32_t  inputIndex) const
pure virtual noexcept

Return true if the plugin can use an input tensor that is broadcast across batch without replication.

Parameters
inputIndex  Index of input that could be broadcast. Will be in the valid range between 0 and nbInputs - 1 where nbInputs is the maximum number of input tensors supported by this plugin.
Returns
true if the index is in the valid range and the plugin is able to broadcast a single copy of this input tensor across the batch. False otherwise.

For each input whose tensor is semantically broadcast across a batch, TensorRT calls this method before calling configurePlugin. If canBroadcastInputAcrossBatch returns true, TensorRT will not replicate the input tensor; i.e., there will be a single copy that the plugin must share across the batch. If it returns false, TensorRT will replicate the input tensor so that it appears like a non-broadcasted tensor.

This method is called only for inputs that can be broadcast.


Usage considerations

  • Allowed context for the API call

    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.

Deprecated:
Deprecated in TensorRT 10.0. Implicit batch support is removed in TensorRT 10.0.
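As a hypothetical illustration of the contract, consider a two-input plugin whose second input is a bias tensor that is identical for every batch item; such a plugin can consume a single broadcast copy of that input and would report:

```cpp
#include <cstdint>

// Hypothetical example: input 1 is a bias tensor shared by every batch item,
// so the plugin can work from a single broadcast copy; input 0 varies per
// batch item and must be replicated by TensorRT if it is broadcast.
bool canBroadcastInputAcrossBatch(int32_t inputIndex) noexcept {
    return inputIndex == 1;
}
```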

◆ clone()

IPluginV2Ext * nvinfer1::IPluginV2Ext::clone ( ) const
override pure virtual noexcept

Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object must also be pre-configured. The returned object must allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resource (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication.

Returns
A pointer to a cloned plugin object if cloning was successful, otherwise nullptr.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.

Implements nvinfer1::IPluginV2.

Implemented in nvinfer1::IPluginV2DynamicExt.
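The sharing of per-engine immutable resources mentioned above can be sketched with a hypothetical plugin-like class (not derived from the real interface): the weights are held via std::shared_ptr, so clone() copies configuration state but only bumps a reference count on the weights instead of duplicating them.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-in for a plugin; MyPlugin is not a real TensorRT type.
class MyPlugin {
public:
    explicit MyPlugin(std::shared_ptr<std::vector<float>> weights)
        : mWeights(std::move(weights)) {}

    // clone() contract: copy configuration state; share immutable weights
    // via ref-counting rather than duplicating them.
    MyPlugin* clone() const noexcept { return new MyPlugin(*this); }

    void configure() noexcept { mConfigured = true; }
    bool configured() const noexcept { return mConfigured; }
    std::vector<float> const* weights() const noexcept { return mWeights.get(); }

private:
    std::shared_ptr<std::vector<float>> mWeights;  // per-engine, immutable
    bool mConfigured{false};  // set by configure(); copied into clones
};
```

A clone made after configuration is itself pre-configured, matching the requirement stated above; in a real plugin the caller would eventually release the clone through destroy().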

◆ configurePlugin()

virtual void nvinfer1::IPluginV2Ext::configurePlugin ( Dims const *  inputDims,
int32_t  nbInputs,
Dims const *  outputDims,
int32_t  nbOutputs,
DataType const *  inputTypes,
DataType const *  outputTypes,
bool const *  inputIsBroadcast,
bool const *  outputIsBroadcast,
PluginFormat  floatFormat,
int32_t  maxBatchSize 
)
pure virtual noexcept

Configure the layer with input and output data types.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, data types and maximum batch size.

Parameters
inputDims          The input tensor dimensions. Will be an array of length nbInputs.
nbInputs           The number of inputs. Will be a non-negative integer.
outputDims         The output tensor dimensions. Will be an array of length nbOutputs.
nbOutputs          The number of outputs. Will be a positive integer.
inputTypes         The data types selected for the plugin inputs. Will be an array of length nbInputs.
outputTypes        The data types selected for the plugin outputs. Will be an array of length nbOutputs.
inputIsBroadcast   True for each input that the plugin must broadcast across the batch. Will be an array of length nbInputs.
outputIsBroadcast  True for each output that TensorRT will broadcast across the batch. Will be an array of length nbOutputs.
floatFormat        The format selected for the engine for the floating-point inputs/outputs.
maxBatchSize       The maximum batch size. Will be a positive integer.

The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions). When inputIsBroadcast or outputIsBroadcast is true, the outermost batch size for that input or output must be treated as if it is one. Index 'i' of inputIsBroadcast is true only if the input is semantically broadcast across the batch and calling canBroadcastInputAcrossBatch with argument 'i' returns true. Index 'i' of outputIsBroadcast is true only if calling isOutputBroadcastAcrossBatch with argument 'i' returns true.

Warning
For the floatFormat field, the values PluginFormat::kCHW4, PluginFormat::kCHW16, and PluginFormat::kCHW32 will not be passed in; this is to keep backward compatibility with the TensorRT 5.x series. Use IPluginV2IOExt or IPluginV2DynamicExt for other PluginFormat values.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin. However, TensorRT will not call this method from two threads simultaneously on a given clone of a plugin.
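A typical configurePlugin() body simply records what it was given so that initialize() and enqueue() can make algorithm choices later. The sketch below uses minimal stand-ins for Dims and DataType (the real definitions live in the TensorRT headers) and a hypothetical PluginConfig struct; the broadcast arrays and floatFormat are omitted for brevity:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-ins for the TensorRT types used by configurePlugin().
struct Dims { int32_t nbDims; int32_t d[8]; };
enum class DataType : int32_t { kFLOAT = 0, kHALF = 1 };

// Hypothetical configuration record a plugin might keep between
// configurePlugin() and initialize()/enqueue().
struct PluginConfig {
    std::vector<Dims> inputDims, outputDims;
    std::vector<DataType> inputTypes, outputTypes;
    int32_t maxBatchSize{0};
};

// Sketch of a configurePlugin() body: copy the per-tensor shapes and types
// and remember the max batch size. Note the dims exclude the batch dimension.
void configurePlugin(PluginConfig& cfg, Dims const* inputDims, int32_t nbInputs,
                     Dims const* outputDims, int32_t nbOutputs,
                     DataType const* inputTypes, DataType const* outputTypes,
                     int32_t maxBatchSize) noexcept {
    cfg.inputDims.assign(inputDims, inputDims + nbInputs);
    cfg.outputDims.assign(outputDims, outputDims + nbOutputs);
    cfg.inputTypes.assign(inputTypes, inputTypes + nbInputs);
    cfg.outputTypes.assign(outputTypes, outputTypes + nbOutputs);
    cfg.maxBatchSize = maxBatchSize;
}
```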

◆ configureWithFormat()

void nvinfer1::IPluginV2Ext::configureWithFormat ( Dims const *  ,
int32_t  ,
Dims const *  ,
int32_t  ,
DataType  ,
PluginFormat  ,
int32_t   
)
inline override protected virtual noexcept

Derived classes must not implement this. In a C++11 API it would be override final.

IPluginV2Ext::configureWithFormat() is a NOP operation for all classes derived from IPluginV2Ext. These classes call configurePlugin() instead.

Implements nvinfer1::IPluginV2.

◆ detachFromContext()

virtual void nvinfer1::IPluginV2Ext::detachFromContext ( )
inline virtual noexcept

Detach the plugin object from its execution context.

This function is called automatically for each plugin when an execution context is destroyed or the context resources are unassigned from the context.

If the plugin owns per-context resource, it can be released here.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.

◆ getOutputDataType()

virtual nvinfer1::DataType nvinfer1::IPluginV2Ext::getOutputDataType ( int32_t  index,
nvinfer1::DataType const *  inputTypes,
int32_t  nbInputs 
) const
pure virtual noexcept

Return the DataType of the plugin output at the requested index.

Parameters
index       The output tensor index in the valid range between 0 and getNbOutputs()-1.
inputTypes  The data types of the input tensors, stored in an array of length nbInputs.
nbInputs    The number of input tensors. Will be a non-negative integer.
Returns
The data type of the output tensor with the provided index if the input tensors have the data types provided in inputTypes, provided the output tensor index is in the valid range. DataType::kFLOAT must be returned if the index is not in the valid range.

The default behavior must be to return the type of the first input, or DataType::kFLOAT if the layer has no inputs. The returned data type must have a format that is supported by the plugin.

See also
supportsFormat()
Warning
DataType::kBOOL and DataType::kUINT8 are not supported.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.
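The documented default behavior can be sketched as a free function. Note one hedge: the real method does not receive the number of outputs; an nbOutputs argument is added here purely so the out-of-range rule can be expressed, and DataType is a minimal stand-in for the TensorRT enum.

```cpp
#include <cstdint>

// Minimal stand-in for nvinfer1::DataType.
enum class DataType : int32_t { kFLOAT = 0, kHALF = 1, kINT8 = 2, kINT32 = 3 };

// Sketch of the documented default for getOutputDataType(): return the type
// of the first input; return kFLOAT when there are no inputs or when the
// output index is out of the valid range.
DataType getOutputDataType(int32_t index, DataType const* inputTypes,
                           int32_t nbInputs, int32_t nbOutputs) noexcept {
    if (index < 0 || index >= nbOutputs) {
        return DataType::kFLOAT;  // out-of-range index: kFLOAT per the contract
    }
    return nbInputs > 0 ? inputTypes[0] : DataType::kFLOAT;
}
```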

◆ getTensorRTVersion()

int32_t nvinfer1::IPluginV2Ext::getTensorRTVersion ( ) const
inline override protected virtual noexcept

Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2.

Returns
In the lower three bytes, the TensorRT version in the format (major * 100 + minor) * 100 + patch. In the upper byte, the value 1.

Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.


Usage considerations

  • Allowed context for the API call
    • Thread-safe: Yes, the implementation provided here is safe to call from any thread.

Reimplemented from nvinfer1::IPluginV2.

Reimplemented in nvinfer1::IPluginV2IOExt.
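The encoding described above can be illustrated with a small hypothetical helper (encodePluginVersion is not a TensorRT API): the lower three bytes carry (major * 100 + minor) * 100 + patch, and the upper byte carries the value 1 that distinguishes IPluginV2Ext from plain IPluginV2.

```cpp
#include <cstdint>

// Hypothetical helper showing the documented version layout, e.g. for
// TensorRT 10.0.0 the lower three bytes are (10 * 100 + 0) * 100 + 0 = 100000.
int32_t encodePluginVersion(int32_t major, int32_t minor, int32_t patch) {
    int32_t const version = (major * 100 + minor) * 100 + patch;
    return (1 << 24) | (version & 0x00FFFFFF);  // upper byte = 1
}
```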

◆ isOutputBroadcastAcrossBatch()

virtual TRT_DEPRECATED bool nvinfer1::IPluginV2Ext::isOutputBroadcastAcrossBatch ( int32_t  outputIndex,
bool const *  inputIsBroadcasted,
int32_t  nbInputs 
) const
pure virtual noexcept

Return true if the output tensor is broadcast across a batch.

Parameters
outputIndex         The index of the output tensor, which will be in the valid range between 0 and getNbOutputs()-1.
inputIsBroadcasted  A boolean array of length nbInputs. The i-th element will be true if and only if the tensor for the i-th input is broadcast across a batch.
nbInputs            The number of inputs. Will be a non-negative integer.

The values in inputIsBroadcasted refer to broadcasting at the semantic level, i.e. are unaffected by whether method canBroadcastInputAcrossBatch requests physical replication of the values.


Usage considerations

  • Allowed context for the API call

    • Thread-safe: Yes, this method is required to be thread-safe and may be called from multiple threads when building networks on multiple devices sharing the same plugin.

Deprecated:
Deprecated in TensorRT 10.0. Implicit batch support is removed in TensorRT 10.0.
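For an elementwise plugin, a common implementation (a hypothetical sketch, not mandated by the interface) reports its output as broadcast only when every input is semantically broadcast:

```cpp
#include <cstdint>

// Sketch for a single-output elementwise plugin: the output is identical
// across the batch exactly when all of its inputs are.
bool isOutputBroadcastAcrossBatch(int32_t outputIndex,
                                  bool const* inputIsBroadcasted,
                                  int32_t nbInputs) noexcept {
    (void)outputIndex;  // this sketch has only one output
    for (int32_t i = 0; i < nbInputs; ++i) {
        if (!inputIsBroadcasted[i]) {
            return false;  // any per-batch input makes the output per-batch
        }
    }
    return true;
}
```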

The documentation for this class was generated from the following file:

  • NvInferRuntimePlugin.h

  Copyright © 2024 NVIDIA Corporation