IPluginV2

class tensorrt.IPluginV2

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. When combined with IPluginCreator, they provide a mechanism to register plugins and to look them up in the Plugin Registry during deserialization.

Variables:
  • num_outputs – int The number of outputs from the layer. This is used by the implementations of INetworkDefinition and Builder. In particular, it is queried prior to any call to initialize().
  • tensorrt_version – int The API version with which this plugin was built.
  • plugin_type – str The plugin type. Should match the plugin name returned by the corresponding plugin creator.
  • plugin_version – str The plugin version. Should match the plugin version returned by the corresponding plugin creator.
  • plugin_namespace – str The namespace that this plugin object belongs to. Ideally, all plugin objects from the same plugin library should have the same namespace.
  • serialization_size – int The size of the serialization buffer required.
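The attribute contract above can be illustrated with a minimal pure-Python mock. This is a sketch only: `ClipPluginMock` and its clipping parameters are hypothetical, and real plugins are implemented against the TensorRT plugin API rather than as plain classes like this.

```python
import struct

class ClipPluginMock:
    """Hypothetical mock of a single-output 'clip' plugin with two float parameters."""

    def __init__(self, clip_min, clip_max):
        self.clip_min = clip_min
        self.clip_max = clip_max
        # Attributes mirroring the documented IPluginV2 variables:
        self.num_outputs = 1                 # the layer produces one output tensor
        self.plugin_type = "ClipPlugin"      # should match the creator's plugin name
        self.plugin_version = "1"            # should match the creator's plugin version
        self.plugin_namespace = ""           # default (empty) namespace

    @property
    def serialization_size(self):
        # Two float32 parameters -> 8 bytes of serialized state.
        return struct.calcsize("ff")

plugin = ClipPluginMock(0.0, 6.0)
print(plugin.plugin_type, plugin.num_outputs, plugin.serialization_size)  # ClipPlugin 1 8
```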
clone(self: tensorrt.tensorrt.IPluginV2) → tensorrt.tensorrt.IPluginV2

Clone the plugin object. This copies over internal plugin parameters and returns a new plugin object with these parameters.

configure_with_format(self: tensorrt.tensorrt.IPluginV2, input_shapes: List[tensorrt.tensorrt.Dims], output_shapes: List[tensorrt.tensorrt.Dims], dtype: tensorrt.tensorrt.DataType, format: nvinfer1::TensorFormat, max_batch_size: int) → None

Configure the layer.

This function is called by the Builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size.

The dimensions passed here do not include the outermost batch size (i.e. for 2D image networks, they will be 3D CHW dimensions).

Parameters:
  • input_shapes – The shapes of the input tensors.
  • output_shapes – The shapes of the output tensors.
  • dtype – The data type selected for the engine.
  • format – The format selected for the engine.
  • max_batch_size – The maximum batch size.
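A sketch of what a plugin typically records in this callback, continuing the hypothetical pure-Python mock (the class name and the string placeholders for dtype and format are illustrative, not TensorRT API):

```python
class ClipPluginMock:
    """Hypothetical mock illustrating configure_with_format()'s role."""

    def configure_with_format(self, input_shapes, output_shapes,
                              dtype, fmt, max_batch_size):
        # The shapes passed here exclude the batch dimension
        # (e.g. CHW for a 2D image network), per the documentation above.
        self.input_shapes = input_shapes
        self.output_shapes = output_shapes
        self.dtype = dtype                    # data type selected for the engine
        self.fmt = fmt                        # tensor format selected for the engine
        self.max_batch_size = max_batch_size  # upper bound for later execution

p = ClipPluginMock()
p.configure_with_format([(3, 224, 224)], [(3, 224, 224)],
                        "float32", "linear", max_batch_size=8)
print(p.max_batch_size)  # 8
```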
destroy(self: tensorrt.tensorrt.IPluginV2) → None

Destroy the plugin object. This will be called when the INetworkDefinition, Builder, or ICudaEngine is destroyed.

execute_async(self: tensorrt.tensorrt.IPluginV2, batch_size: int, inputs: List[capsule], outputs: List[capsule], workspace: capsule, stream_handle: int) → int

Execute the layer asynchronously.

Parameters:
  • batch_size – The number of inputs in the batch.
  • inputs – The memory for the input tensors.
  • outputs – The memory for the output tensors.
  • workspace – Workspace for execution.
  • stream_handle – The stream in which to execute the kernels.
Returns:

0 for success, else non-zero (which will cause engine termination).
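The status-code convention can be sketched with a synchronous host-side mock. A real plugin enqueues CUDA kernels on `stream_handle` against device memory and returns immediately; this hypothetical `ClipPluginMock` instead computes on plain Python lists, purely to illustrate the 0-on-success contract:

```python
class ClipPluginMock:
    """Hypothetical mock of an elementwise clip layer's execution."""

    def __init__(self, clip_min, clip_max):
        self.clip_min = clip_min
        self.clip_max = clip_max

    def execute(self, batch_size, inputs, outputs):
        # inputs/outputs: one entry per tensor; each entry holds batch items.
        try:
            for tensor_in, tensor_out in zip(inputs, outputs):
                for i in range(batch_size):
                    tensor_out[i] = [min(max(v, self.clip_min), self.clip_max)
                                     for v in tensor_in[i]]
            return 0   # success
        except Exception:
            return 1   # non-zero would cause engine termination

p = ClipPluginMock(0.0, 6.0)
in_tensor = [[-1.0, 3.0, 9.0], [7.0, -2.0, 5.0]]   # batch of 2 items
out_tensor = [None, None]
status = p.execute(2, [in_tensor], [out_tensor])
print(status, out_tensor)  # 0 [[0.0, 3.0, 6.0], [6.0, 0.0, 5.0]]
```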

get_output_shape(self: tensorrt.tensorrt.IPluginV2, index: int, input_shapes: List[tensorrt.tensorrt.Dims]) → tensorrt.tensorrt.Dims

Get the dimensions of an output tensor.

This function is called by the implementations of INetworkDefinition and Builder. In particular, it is called prior to any call to initialize().

Parameters:
  • index – The index of the output tensor.
  • input_shapes – The shapes of the input tensors.
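For an elementwise layer, the natural implementation returns the input shape unchanged. A hypothetical pure-Python sketch of that behavior (tuples stand in for tensorrt.Dims):

```python
class ClipPluginMock:
    """Hypothetical mock: one output whose shape mirrors the input."""

    num_outputs = 1

    def get_output_shape(self, index, input_shapes):
        if index >= self.num_outputs:
            raise IndexError(f"plugin has {self.num_outputs} output(s)")
        # Elementwise op: the output shape matches the (only) input shape.
        return input_shapes[0]

p = ClipPluginMock()
print(p.get_output_shape(0, [(3, 224, 224)]))  # (3, 224, 224)
```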

get_workspace_size(self: tensorrt.tensorrt.IPluginV2, max_batch_size: int) → int

Find the workspace size required by the layer.

This function is called during engine startup, after initialize(). The workspace size returned should be sufficient for any batch size up to the maximum.

Parameters:
  • max_batch_size – int The maximum possible batch size during inference.
Returns:

The workspace size.
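A common pattern is to size the workspace as a fixed per-item scratch requirement times the maximum batch size, so any smaller batch also fits. A hypothetical sketch (the per-item byte count is illustrative):

```python
class ClipPluginMock:
    """Hypothetical mock sizing scratch memory for the worst-case batch."""

    # Assumed scratch per batch item: one float32 CHW buffer (illustrative).
    BYTES_PER_ITEM = 4 * 3 * 224 * 224

    def get_workspace_size(self, max_batch_size):
        # Must be sufficient for any batch size up to the maximum.
        return self.BYTES_PER_ITEM * max_batch_size

p = ClipPluginMock()
print(p.get_workspace_size(8))  # 4816896
```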
initialize(self: tensorrt.tensorrt.IPluginV2) → int

Initialize the layer for execution. This is called when the engine is created.

Returns:

0 for success, else non-zero (which will cause engine termination).
serialize(self: tensorrt.tensorrt.IPluginV2) → memoryview

Serialize the plugin.
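The serialized buffer must be exactly `serialization_size` bytes, and a plugin creator must be able to reconstruct an equivalent plugin from it. A hypothetical sketch using `struct` to pack the mock's two float parameters (the class and field layout are illustrative, not TensorRT API):

```python
import struct

class ClipPluginMock:
    """Hypothetical mock: serialize two float32 parameters."""

    def __init__(self, clip_min, clip_max):
        self.clip_min = clip_min
        self.clip_max = clip_max

    @property
    def serialization_size(self):
        return struct.calcsize("ff")

    def serialize(self):
        # The returned buffer's length must equal serialization_size.
        return memoryview(struct.pack("ff", self.clip_min, self.clip_max))

p = ClipPluginMock(0.0, 6.0)
buf = p.serialize()
print(len(buf), struct.unpack("ff", buf))  # 8 (0.0, 6.0)
```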

supports_format(self: tensorrt.tensorrt.IPluginV2, dtype: tensorrt.tensorrt.DataType, format: nvinfer1::TensorFormat) → bool

Check format support.

This function is called by the implementations of INetworkDefinition, Builder, and ICudaEngine. In particular, it is called when creating an engine and when deserializing an engine.

Parameters:
  • dtype – Data type requested.
  • format – TensorFormat requested.
Returns:

True if the plugin supports the type-format combination.
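A plugin typically answers this by checking the requested pair against a fixed set of supported combinations. A hypothetical pure-Python sketch (strings stand in for tensorrt.DataType and TensorFormat values, and the supported set is invented for illustration):

```python
class ClipPluginMock:
    """Hypothetical mock advertising a fixed set of type/format pairs."""

    # Illustrative supported combinations, not from any real plugin.
    _SUPPORTED = {("float32", "linear"), ("float16", "linear")}

    def supports_format(self, dtype, fmt):
        # True only for combinations the layer's kernels can handle.
        return (dtype, fmt) in self._SUPPORTED

p = ClipPluginMock()
print(p.supports_format("float32", "linear"))  # True
print(p.supports_format("int8", "chw4"))       # False
```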

terminate(self: tensorrt.tensorrt.IPluginV2) → None

Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.