TensorRT 7.0.0.11

Plugin class for user-implemented layers.

#include <NvInferRuntimeCommon.h>
Public Member Functions

virtual void configurePlugin(const PluginTensorDesc *in, int nbInput, const PluginTensorDesc *out, int nbOutput) = 0
    Configure the layer.
virtual bool supportsFormatCombination(int pos, const PluginTensorDesc *inOut, int nbInputs, int nbOutputs) const = 0
    Return true if the plugin supports the format and datatype for the input/output indexed by pos.
Public Member Functions inherited from nvinfer1::IPluginV2Ext

virtual nvinfer1::DataType getOutputDataType(int index, const nvinfer1::DataType *inputTypes, int nbInputs) const = 0
    Return the DataType of the plugin output at the requested index. The default behavior should be to return the type of the first input, or DataType::kFLOAT if the layer has no inputs. The returned data type must have a format that is supported by the plugin.
virtual bool isOutputBroadcastAcrossBatch(int outputIndex, const bool *inputIsBroadcasted, int nbInputs) const = 0
    Return true if the output tensor is broadcast across a batch.
virtual bool canBroadcastInputAcrossBatch(int inputIndex) const = 0
    Return true if the plugin can use an input that is broadcast across the batch without replication.
virtual void attachToContext(cudnnContext *, cublasContext *, IGpuAllocator *)
    Attach the plugin object to an execution context and grant the plugin access to some context resources.
virtual void detachFromContext()
    Detach the plugin object from its execution context.
virtual IPluginV2Ext * clone() const _TENSORRT_OVERRIDE = 0
    Clone the plugin object. This copies over internal plugin parameters as well and returns a new plugin object with these parameters. If the source plugin is pre-configured with configurePlugin(), the returned object should also be pre-configured. The returned object should allow attachToContext() with a new execution context. Cloned plugin objects can share the same per-engine immutable resources (e.g. weights) with the source object (e.g. via ref-counting) to avoid duplication.
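As an illustration of these cloning semantics, the sketch below copies per-instance parameters while sharing immutable weights via reference counting. The MyPlugin class and its members are invented for this example; a real plugin derives from nvinfer1::IPluginV2IOExt and implements the full interface.

```cpp
#include <memory>
#include <vector>

// Hypothetical plugin sketch (not the real nvinfer1 interface): clone()
// copies per-instance parameters while sharing the immutable per-engine
// weights through a ref-counted pointer instead of duplicating them.
class MyPlugin {
public:
    MyPlugin(float alpha, std::vector<float> weights)
        : mAlpha(alpha),
          mWeights(std::make_shared<const std::vector<float>>(std::move(weights))) {}

    MyPlugin* clone() const {
        // The copy constructor copies mAlpha (and any configured state)
        // and bumps the ref-count on mWeights; the weight data itself
        // is not duplicated.
        return new MyPlugin(*this);
    }

    float alpha() const { return mAlpha; }
    const std::vector<float>* weights() const { return mWeights.get(); }

private:
    float mAlpha;
    std::shared_ptr<const std::vector<float>> mWeights;
};
```

The same pattern extends naturally to carrying over configurePlugin() state, so that a clone is usable without reconfiguration.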
Public Member Functions inherited from nvinfer1::IPluginV2

virtual const char * getPluginType() const = 0
    Return the plugin type. Should match the plugin name returned by the corresponding plugin creator.
virtual const char * getPluginVersion() const = 0
    Return the plugin version. Should match the plugin version returned by the corresponding plugin creator.
virtual int getNbOutputs() const = 0
    Get the number of outputs from the layer.
virtual Dims getOutputDimensions(int index, const Dims *inputs, int nbInputDims) = 0
    Get the dimensions of an output tensor.
virtual int initialize() = 0
    Initialize the layer for execution. This is called when the engine is created.
virtual void terminate() = 0
    Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.
virtual size_t getWorkspaceSize(int maxBatchSize) const = 0
    Find the workspace size required by the layer.
virtual int enqueue(int batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream) = 0
    Execute the layer.
virtual size_t getSerializationSize() const = 0
    Find the size of the serialization buffer required.
virtual void serialize(void *buffer) const = 0
    Serialize the layer.
virtual void destroy() = 0
    Destroy the plugin object. This will be called when the network, builder or engine is destroyed.
virtual void setPluginNamespace(const char *pluginNamespace) = 0
    Set the namespace that this plugin object belongs to. Ideally, all plugin objects from the same plugin library should have the same namespace.
virtual const char * getPluginNamespace() const = 0
    Return the namespace of the plugin object.
Protected Member Functions

TRT_DEPRECATED int getTensorRTVersion() const _TENSORRT_OVERRIDE
    Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2 and IPluginV2Ext.
TRT_DEPRECATED void configureWithFormat(const Dims *, int, const Dims *, int, DataType, PluginFormat, int) _TENSORRT_OVERRIDE _TENSORRT_FINAL
    Deprecated interface inherited from the base class. Derived classes should not implement this. In a C++11 API it would be override final.
TRT_DEPRECATED void configurePlugin(const Dims *, int, const Dims *, int, const DataType *, const DataType *, const bool *, const bool *, PluginFormat, int) _TENSORRT_OVERRIDE _TENSORRT_FINAL
    Deprecated interface inherited from the base class. Derived classes should not implement this. In a C++11 API it would be override final.
TRT_DEPRECATED bool supportsFormat(DataType, PluginFormat) const _TENSORRT_OVERRIDE _TENSORRT_FINAL
    Deprecated interface inherited from the base class. Derived classes should not implement this. In a C++11 API it would be override final.
Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. This interface adds capabilities to the IPluginV2Ext interface by supporting different data types and tensor formats for each I/O tensor.
configurePlugin() [pure virtual]

Configure the layer.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of the I/O PluginTensorDesc and the maximum batch size.

Parameters:
    in        The input tensor attributes used for configuration.
    nbInput   The number of input tensors.
    out       The output tensor attributes used for configuration.
    nbOutput  The number of output tensors.
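As a sketch of how configurePlugin() is typically used, the override below records the negotiated input type and format so that enqueue() can later select the matching kernel. The MyPlugin class and the simplified stand-in types are invented for this illustration; a real plugin includes NvInferRuntimeCommon.h and overrides nvinfer1::IPluginV2IOExt.

```cpp
// Simplified stand-ins for the nvinfer1 types used below (illustration only).
enum class DataType { kFLOAT, kHALF };
enum class TensorFormat { kLINEAR, kCHW4 };
struct Dims { int nbDims; int d[8]; };
struct PluginTensorDesc { Dims dims; DataType type; TensorFormat format; float scale; };

// Hypothetical plugin: configurePlugin() caches what the builder negotiated.
class MyPlugin {
public:
    void configurePlugin(const PluginTensorDesc* in, int nbInput,
                         const PluginTensorDesc* out, int nbOutput) {
        // The builder only passes combinations that the plugin accepted in
        // supportsFormatCombination(), so these values can be trusted here.
        mInputType = in[0].type;
        mInputFormat = in[0].format;
        (void)nbInput; (void)out; (void)nbOutput;
    }
    DataType inputType() const { return mInputType; }
    TensorFormat inputFormat() const { return mInputFormat; }

private:
    DataType mInputType{DataType::kFLOAT};
    TensorFormat mInputFormat{TensorFormat::kLINEAR};
};
```

Caching the configuration here, rather than re-deriving it in enqueue(), keeps the hot path free of per-call decisions.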
getTensorRTVersion() [inline, protected, virtual]

Return the API version with which this plugin was built. The upper byte is reserved by TensorRT and is used to differentiate this from IPluginV2 and IPluginV2Ext.

Do not override this method, as it is used by the TensorRT library to maintain backwards compatibility with plugins.

Reimplemented from nvinfer1::IPluginV2Ext.
supportsFormatCombination() [pure virtual]

Return true if the plugin supports the format and datatype for the input/output indexed by pos.

For this method, inputs are numbered 0..(nbInputs-1) and outputs are numbered nbInputs..(nbInputs+nbOutputs-1). Using this numbering, pos is an index into inOut, where 0 <= pos < nbInputs+nbOutputs.

TensorRT invokes this method to ask whether the input/output indexed by pos supports the format/datatype specified by inOut[pos].format and inOut[pos].type. The override should return true if the format/datatype at inOut[pos] is supported by the plugin. If support is conditional on other input/output formats/datatypes, the plugin can make its result conditional on the formats/datatypes of inOut[0..pos-1], which will be set to values that the plugin supports. The override must not inspect inOut[pos+1..nbInputs+nbOutputs-1], which will have invalid values. In other words, the decision for pos must be based on inOut[0..pos] only.
Some examples:

* A plugin that supports only FP16 linear I/O:
      return inOut[pos].format == TensorFormat::kLINEAR && inOut[pos].type == DataType::kHALF;

* A plugin with two FP16 linear inputs and FP32 linear output(s):
      return inOut[pos].format == TensorFormat::kLINEAR && inOut[pos].type == (pos < 2 ? DataType::kHALF : DataType::kFLOAT);

* A "polymorphic" plugin that accepts any format and type, provided all inputs and outputs share them:
      return pos == 0 || (inOut[pos].format == inOut[0].format && inOut[pos].type == inOut[0].type);
Warning: TensorRT will stop asking for format combinations once it has found kFORMAT_COMBINATION_LIMIT combinations.
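Putting these rules together, here is a self-contained sketch of the mixed-precision case (two FP16 linear inputs at pos 0 and 1, one FP32 linear output at pos 2). The enum and struct definitions are simplified stand-ins for the nvinfer1 types, invented so the snippet compiles on its own.

```cpp
// Simplified stand-ins for the nvinfer1 types (illustration only).
enum class DataType { kFLOAT, kHALF };
enum class TensorFormat { kLINEAR, kCHW4 };
struct PluginTensorDesc { DataType type; TensorFormat format; };

// supportsFormatCombination() for a plugin with two FP16 linear inputs
// (pos 0 and 1) and one FP32 linear output (pos 2). The decision for each
// pos depends only on inOut[pos] itself, which satisfies the rule that
// only inOut[0..pos] may be inspected.
bool supportsFormatCombination(int pos, const PluginTensorDesc* inOut,
                               int nbInputs, int nbOutputs) {
    if (pos < 0 || pos >= nbInputs + nbOutputs) return false;
    return inOut[pos].format == TensorFormat::kLINEAR
        && inOut[pos].type == (pos < 2 ? DataType::kHALF : DataType::kFLOAT);
}
```

Because TensorRT fills inOut[0..pos-1] only with values the plugin has already accepted, a rule like the "polymorphic" example above can safely compare inOut[pos] against inOut[0].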