IPlugin

tensorrt.PluginFormat
Format of the input/output tensors.
Members:
    NCHW : NCHW
    NC2HW2 : NCHW with 2-element packed channels
    NHWC8 : NHWC with 8-element packed channels (C must be a multiple of 8)
class tensorrt.IPlugin
Plugin class for user-implemented layers. Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.
Variables:
    num_outputs – int The number of outputs from the layer. This is used by the implementations of INetworkDefinition and Builder. In particular, it is read prior to any call to initialize().
    serialization_size – int The size of the serialization buffer required.
configure(self: tensorrt.tensorrt.IPlugin, input_shapes: List[tensorrt.tensorrt.Dims], output_shapes: List[tensorrt.tensorrt.Dims], max_batch_size: int) → None
Configure the layer.
This function is called by the Builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size. The type is assumed to be FP32 and the format NCHW.
Parameters:
    input_shapes – The shapes of the input tensors.
    output_shapes – The shapes of the output tensors.
    max_batch_size – The maximum batch size.
The shapes passed here do not include the outermost batch dimension (i.e. for 2-D image networks, they will be 3-dimensional CHW shapes).
This method is not called for IPluginExt classes; configure_with_format() is called instead.
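To illustrate the kind of decision configure() enables, here is a minimal pure-Python sketch. Plain tuples stand in for tensorrt.Dims, and the class name and kernel-variant strings are hypothetical, not part of the TensorRT API: the layer inspects its (batch-less) CHW input shape and picks an algorithm before initialization.

```python
# Sketch only: tuples stand in for tensorrt.Dims; the class name and
# kernel-variant names are hypothetical, not part of the TensorRT API.
class LeakyReLUPlugin:
    def configure(self, input_shapes, output_shapes, max_batch_size):
        # Shapes here are CHW -- the outermost batch dim is not included.
        c, h, w = input_shapes[0]
        self.max_batch_size = max_batch_size
        # Algorithm choice based on dimensions, as configure() permits:
        # use a vectorized kernel when the channel count allows it.
        self.kernel = "vectorized" if c % 4 == 0 else "scalar"

plugin = LeakyReLUPlugin()
plugin.configure([(8, 32, 32)], [(8, 32, 32)], max_batch_size=16)
print(plugin.kernel)  # -> vectorized
```

The same pattern applies to any per-shape tuning: configure() runs once at build time, so decisions recorded here cost nothing at inference.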
execute_async(self: tensorrt.tensorrt.IPlugin, batch_size: int, inputs: List[capsule], outputs: List[capsule], workspace: capsule, stream_handle: int) → int
Execute the layer asynchronously.
Parameters:
    batch_size – The number of inputs in the batch.
    inputs – The memory for the input tensors.
    outputs – The memory for the output tensors.
    workspace – Workspace for execution.
    stream_handle – The stream in which to execute the kernels.
Returns: 0 for success, else non-zero (which will cause engine termination).
get_output_shape(self: tensorrt.tensorrt.IPlugin, index: int, input_shapes: List[tensorrt.tensorrt.Dims]) → tensorrt.tensorrt.Dims
Get the dimensions of an output tensor.
This function is called by the implementations of INetworkDefinition and Builder. In particular, it is called prior to any call to initialize().
Parameters:
    index – The index of the output tensor.
    input_shapes – The shapes of the input tensors.
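For example, an elementwise plugin produces one output shaped like its first input. The sketch below is a pure-Python illustration (tuples stand in for tensorrt.Dims; the class name is hypothetical):

```python
# Sketch only: tuples stand in for tensorrt.Dims. An elementwise plugin
# has one output whose shape matches its first input.
class ElementwisePlugin:
    num_outputs = 1  # read by the builder before initialize()

    def get_output_shape(self, index, input_shapes):
        assert index < self.num_outputs
        # Shapes are batch-less CHW; the output mirrors the input.
        return input_shapes[0]

print(ElementwisePlugin().get_output_shape(0, [(3, 224, 224)]))
# -> (3, 224, 224)
```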
get_workspace_size(self: tensorrt.tensorrt.IPlugin, max_batch_size: int) → int
Find the workspace size required by the layer.
This function is called during engine startup, after initialize(). The workspace size returned should be sufficient for any batch size up to the maximum.
Parameters:
    max_batch_size – int The maximum possible batch size during inference.
Returns: The workspace size.
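A common sizing rule is bytes-per-sample times the maximum batch size, which by construction covers every smaller batch. The sketch below assumes a hypothetical plugin that needs one FP32 scratch copy of its input per sample (tuples stand in for tensorrt.Dims):

```python
# Sketch only: a hypothetical plugin that needs one FP32 scratch copy
# of its input per sample. Sizing for max_batch_size keeps the buffer
# valid for any smaller batch, as get_workspace_size() requires.
class ScratchPlugin:
    def __init__(self, chw_shape):
        self.chw_shape = chw_shape  # per-sample CHW dims, no batch dim

    def get_workspace_size(self, max_batch_size):
        c, h, w = self.chw_shape
        bytes_per_sample = c * h * w * 4  # 4 bytes per FP32 element
        return max_batch_size * bytes_per_sample

print(ScratchPlugin((3, 4, 4)).get_workspace_size(8))  # -> 1536
```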
initialize(self: tensorrt.tensorrt.IPlugin) → int
Initialize the layer for execution. This is called when the engine is created.
Returns: 0 for success, else non-zero (which will cause engine termination).
serialize(self: tensorrt.tensorrt.IPlugin, buffer: capsule) → None
Serialize the layer.
Parameters:
    buffer – A buffer of size at least serialization_size.
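The contract is that serialize() writes exactly the bytes that serialization_size promises. A minimal sketch, with a bytearray standing in for the capsule buffer TensorRT passes and a hypothetical plugin that persists a single float parameter:

```python
import struct

# Sketch only: a bytearray stands in for the capsule buffer TensorRT
# passes. This hypothetical plugin persists one float parameter, so
# serialization_size and serialize() must agree on the byte count.
class AlphaPlugin:
    def __init__(self, alpha):
        self.alpha = alpha

    @property
    def serialization_size(self):
        # Number of bytes serialize() will write.
        return struct.calcsize("f")

    def serialize(self, buffer):
        # buffer must be at least serialization_size bytes long.
        struct.pack_into("f", buffer, 0, self.alpha)

p = AlphaPlugin(0.25)
buf = bytearray(p.serialization_size)
p.serialize(buf)
print(struct.unpack_from("f", buf, 0)[0])  # -> 0.25
```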
terminate(self: tensorrt.tensorrt.IPlugin) → None
Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.
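Putting the pieces together, the sketch below is a pure-Python stand-in for the interface that records the call order described above: configure() at build time, then initialize(), get_workspace_size(), execute_async() per batch, and terminate() at engine destruction. Tuples stand in for tensorrt.Dims and plain ints for device pointers/capsules; the class name is illustrative, not a real binding.

```python
# Sketch only: a pure-Python stand-in for the IPlugin interface that
# records the lifecycle order TensorRT drives. Tuples stand in for Dims
# and plain ints for device pointers / capsules.
class IdentityPlugin:
    num_outputs = 1

    def __init__(self):
        self.calls = []

    def get_output_shape(self, index, input_shapes):
        return input_shapes[0]  # identity: output mirrors input

    def configure(self, input_shapes, output_shapes, max_batch_size):
        self.calls.append("configure")

    def initialize(self):
        self.calls.append("initialize")
        return 0  # 0 for success

    def get_workspace_size(self, max_batch_size):
        self.calls.append("get_workspace_size")
        return 0  # identity needs no scratch space

    def execute_async(self, batch_size, inputs, outputs, workspace,
                      stream_handle):
        self.calls.append("execute_async")
        return 0  # 0 for success, non-zero terminates the engine

    def terminate(self):
        self.calls.append("terminate")

# Drive the plugin in the order the engine would:
p = IdentityPlugin()
p.configure([(3, 8, 8)], [(3, 8, 8)], 4)  # builder, before initialize()
assert p.initialize() == 0                 # engine creation
p.get_workspace_size(4)                    # engine startup
p.execute_async(4, [0], [0], 0, 0)         # inference
p.terminate()                              # engine destruction
print(p.calls)
```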