IPlugin¶
- tensorrt.PluginFormat¶ Format of the input/output tensors.
This enum is extended to be used by both plugins and reformat-free network I/O tensors.
For more information about data formats, see the topic “Data Format Description” located in the TensorRT Developer Guide (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html).
Members:
- LINEAR :
Row major linear format.
For a tensor with dimensions {N, C, H, W} or {numbers, channels, rows, columns}, the dimensional index corresponds to {3, 2, 1, 0}; W is the innermost dimension, with unit stride, and varies fastest.
- CHW2 :
Two wide channel vectorized row major format.
This format is bound to FP16. It is only available for dimensions >= 3.
For a tensor with dimensions {N, C, H, W}, the memory layout is equivalent to a C array with dimensions [N][(C+1)/2][H][W][2], with the tensor coordinates (n, c, h, w) mapping to array subscript [n][c/2][h][w][c%2].
- HWC8 :
Eight channel format where C is padded to a multiple of 8.
This format is bound to FP16. It is only available for dimensions >= 3.
For a tensor with dimensions {N, H, W, C}, the memory layout is equivalent to a C array with dimensions [N][H][W][(C+7)/8*8], with the tensor coordinates (n, h, w, c) mapping to array subscript [n][h][w][c].
- CHW4 :
Four wide channel vectorized row major format. This format is bound to INT8. It is only available for dimensions >= 3.
For a tensor with dimensions {N, C, H, W}, the memory layout is equivalent to a C array with dimensions [N][(C+3)/4][H][W][4], with the tensor coordinates (n, c, h, w) mapping to array subscript [n][c/4][h][w][c%4].
- CHW16 :
Sixteen wide channel vectorized row major format.
This format is bound to FP16. It is only available for dimensions >= 3.
For a tensor with dimensions {N, C, H, W}, the memory layout is equivalent to a C array with dimensions [N][(C+15)/16][H][W][16], with the tensor coordinates (n, c, h, w) mapping to array subscript [n][c/16][h][w][c%16].
- CHW32 :
Thirty-two wide channel vectorized row major format.
This format is bound to INT8. It is only available for dimensions >= 3.
For a tensor with dimensions {N, C, H, W}, the memory layout is equivalent to a C array with dimensions [N][(C+31)/32][H][W][32], with the tensor coordinates (n, c, h, w) mapping to array subscript [n][c/32][h][w][c%32].
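The vectorized layouts above are plain re-indexings of a row-major array, so the coordinate-to-subscript mappings are easy to check in a few lines of Python. The helpers below are illustrative only (they are not part of the tensorrt API); they compute the linear element offset for the LINEAR and CHW4 layouts:

```python
def linear_offset(n, c, h, w, shape):
    """Row-major (LINEAR) offset for an {N, C, H, W} tensor."""
    N, C, H, W = shape
    return ((n * C + c) * H + h) * W + w

def chw4_offset(n, c, h, w, shape):
    """CHW4 offset: C array [N][(C+3)/4][H][W][4], with coordinate
    (n, c, h, w) mapping to subscript [n][c//4][h][w][c%4]."""
    N, C, H, W = shape
    blocks = (C + 3) // 4  # channel blocks: C padded up to a multiple of 4
    return (((n * blocks + c // 4) * H + h) * W + w) * 4 + c % 4

shape = (1, 6, 2, 3)  # N, C, H, W
# Every (n, c, h, w) coordinate maps to a distinct CHW4 slot.
offsets = {chw4_offset(n, c, h, w, shape)
           for n in range(1) for c in range(6)
           for h in range(2) for w in range(3)}
assert len(offsets) == 1 * 6 * 2 * 3
```

Note that the CHW4 buffer is padded: for the shape above it holds 1 * 2 * 2 * 3 * 4 = 48 element slots even though the tensor has only 36 logical elements.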
- class tensorrt.IPlugin¶ Plugin class for user-implemented layers. Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.
Variables:
- num_outputs – int
The number of outputs from the layer. This is used by the implementations of INetworkDefinition and Builder. In particular, it is queried prior to any call to initialize().
- serialization_size – int
The size of the serialization buffer required.
- configure(self: tensorrt.tensorrt.IPlugin, input_shapes: List[tensorrt.tensorrt.Dims], output_shapes: List[tensorrt.tensorrt.Dims], max_batch_size: int) → None¶ Configure the layer.
This function is called by the Builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size. The type is assumed to be FP32 and the format NCHW.
Parameters:
- input_shapes – The shapes of the input tensors.
- output_shapes – The shapes of the output tensors.
- max_batch_size – The maximum batch size.
The shapes passed here do not include the outermost batch dimension (i.e. for 2D image networks, they will be 3D CHW dimensions).
This method is not called for IPluginExt classes; configure_with_format() is called instead.
- execute_async(self: tensorrt.tensorrt.IPlugin, batch_size: int, inputs: List[capsule], outputs: List[capsule], workspace: capsule, stream_handle: int) → int¶ Execute the layer asynchronously.
Parameters:
- batch_size – The number of inputs in the batch.
- inputs – The memory for the input tensors.
- outputs – The memory for the output tensors.
- workspace – Workspace for execution.
- stream_handle – The stream in which to execute the kernels.
Returns: 0 for success, else non-zero (which will cause engine termination).
- get_output_shape(self: tensorrt.tensorrt.IPlugin, index: int, input_shapes: List[tensorrt.tensorrt.Dims]) → tensorrt.tensorrt.Dims¶ Get the dimensions of an output tensor.
Parameters:
- index – The index of the output tensor.
- input_shapes – The shapes of the input tensors.
This function is called by the implementations of INetworkDefinition and Builder. In particular, it is called prior to any call to initialize().
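As a sketch of this contract (shapes exclude the batch dimension, so for image networks they are CHW triples): a hypothetical plugin that concatenates its inputs along the channel axis would implement get_output_shape roughly as below. Plain tuples stand in for tensorrt.Dims, and the plugin class is invented for illustration, not taken from the tensorrt API.

```python
class ConcatPlugin:
    """Hypothetical plugin: concatenate inputs along the channel axis.
    Plain (C, H, W) tuples stand in for tensorrt.Dims."""

    num_outputs = 1

    def get_output_shape(self, index, input_shapes):
        assert index == 0  # this plugin produces a single output
        c0, h0, w0 = input_shapes[0]
        # All inputs must agree on H and W; channel counts are summed.
        assert all((h, w) == (h0, w0) for _, h, w in input_shapes)
        return (sum(c for c, _, _ in input_shapes), h0, w0)

plugin = ConcatPlugin()
# Two CHW inputs sharing H=4, W=5 concatenate to C=3+2=5.
assert plugin.get_output_shape(0, [(3, 4, 5), (2, 4, 5)]) == (5, 4, 5)
```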
- get_workspace_size(self: tensorrt.tensorrt.IPlugin, max_batch_size: int) → int¶ Find the workspace size required by the layer.
This function is called during engine startup, after initialize(). The workspace size returned should be sufficient for any batch size up to the maximum.
Parameters: max_batch_size – int
The maximum possible batch size during inference.
Returns: The workspace size in bytes.
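For example, a plugin that needs one FP32 scratch buffer per batch element might size its workspace as in the sketch below. The scratch shape, the 4-byte element size, and the helper names are assumptions for illustration, not tensorrt API.

```python
def volume(shape):
    """Number of elements in a shape tuple."""
    n = 1
    for d in shape:
        n *= d
    return n

def get_workspace_size(max_batch_size, scratch_shape=(64, 32, 32),
                       dtype_bytes=4):
    """Bytes of scratch space sufficient for any batch size up to the
    maximum: one FP32 (4-byte) buffer of scratch_shape per element."""
    return max_batch_size * volume(scratch_shape) * dtype_bytes
```

Because the returned size must cover every batch size up to the maximum, it scales with max_batch_size rather than with the batch size seen at execution time.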
- initialize(self: tensorrt.tensorrt.IPlugin) → int¶ Initialize the layer for execution. This is called when the engine is created.
Returns: 0 for success, else non-zero (which will cause engine termination).
- serialize(self: tensorrt.tensorrt.IPlugin, buffer: capsule) → None¶ Serialize the layer.
Parameters: buffer – A buffer of size at least serialization_size.
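A common pattern is to pack the plugin's parameters with the standard struct module and report the packed size as serialization_size. A pure-Python sketch, with a bytearray standing in for the capsule buffer and the plugin's fields invented for illustration:

```python
import struct

class ScalePlugin:
    """Hypothetical plugin holding one float scale and one int axis."""

    FORMAT = "<fi"  # little-endian: float32 scale, int32 axis

    def __init__(self, scale=2.0, axis=1):
        self.scale, self.axis = scale, axis

    @property
    def serialization_size(self):
        # Size of the packed parameter record, in bytes.
        return struct.calcsize(self.FORMAT)

    def serialize(self, buffer):
        # The caller must supply a buffer of at least serialization_size bytes.
        struct.pack_into(self.FORMAT, buffer, 0, self.scale, self.axis)

plugin = ScalePlugin()
buf = bytearray(plugin.serialization_size)
plugin.serialize(buf)
```

Keeping serialization_size derived from the same format string as serialize avoids the two ever drifting apart.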
- terminate(self: tensorrt.tensorrt.IPlugin) → None¶ Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.