Format of the input/output tensors.
NHWC8 : NHWC with 8-element packed channels (C must be a multiple of 8)
NCHW : NCHW
NC2HW2 : NCHW with 2-element packed channels
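The packed formats interleave channel elements in memory. A minimal numpy sketch (no TensorRT involved, purely illustrating the layout) of repacking an NCHW tensor into the NC2HW2-style 2-element channel packing, where pairs of channels are stored together innermost:

```python
import numpy as np

# Illustration only: repack NCHW into an NC2HW2-style layout,
# i.e. channel pairs stored innermost as [N, C/2, H, W, 2].
def nchw_to_nc2hw2(x):
    n, c, h, w = x.shape
    assert c % 2 == 0, "C must be even for 2-element channel packing"
    # Split C into C/2 groups of 2, then move the pair axis innermost.
    return x.reshape(n, c // 2, 2, h, w).transpose(0, 1, 3, 4, 2)

x = np.arange(2 * 4 * 3 * 3).reshape(2, 4, 3, 3)
packed = nchw_to_nc2hw2(x)
print(packed.shape)  # (2, 2, 3, 3, 2)
```

NHWC8 follows the same idea with groups of 8 channels (hence the multiple-of-8 requirement above).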
Plugin class for user-implemented layers. Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.
configure(self: tensorrt.tensorrt.IPlugin, input_shapes: List[tensorrt.tensorrt.Dims], output_shapes: List[tensorrt.tensorrt.Dims], max_batch_size: int) → None¶
Configure the layer.
This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size. The data type is assumed to be FP32 and the format NCHW.
- input_shapes – The shapes of the input tensors.
- output_shapes – The shapes of the output tensors.
- max_batch_size – The maximum batch size.
The shapes passed here do not include the outermost batch size (i.e. for 2D image networks, they will be 3D CHW dimensions).
This method is not called for plugins derived from IPluginExt; configure_with_format() is called instead.
execute_async(self: tensorrt.tensorrt.IPlugin, batch_size: int, inputs: List[capsule], outputs: List[capsule], workspace: capsule, stream_handle: int) → int¶
Execute the layer asynchronously.
- batch_size – The number of inputs in the batch.
- inputs – The memory for the input tensors.
- outputs – The memory for the output tensors.
- workspace – Workspace for execution.
- stream_handle – The stream in which to execute the kernels.
Returns: 0 for success, else non-zero (which will cause engine termination).
get_output_shape(self: tensorrt.tensorrt.IPlugin, index: int, input_shapes: List[tensorrt.tensorrt.Dims]) → tensorrt.tensorrt.Dims¶
Get the dimension of an output tensor.
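For example, a shape-preserving plugin (an elementwise op, say) with a single output would simply return the first input's shape. A hypothetical sketch, with plain tuples standing in for tensorrt.Dims:

```python
# Hypothetical sketch: plain tuples stand in for tensorrt.Dims.
# An elementwise-style plugin has one output whose shape matches input 0.
def get_output_shape(index, input_shapes):
    assert index == 0, "this plugin has a single output"
    return input_shapes[0]

print(get_output_shape(0, [(8, 32, 32)]))  # (8, 32, 32)
```

As with configure(), the shapes here are per-sample CHW dimensions and do not include the batch dimension.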
get_workspace_size(self: tensorrt.tensorrt.IPlugin, max_batch_size: int) → int¶
Find the workspace size required by the layer.
This function is called during engine startup, after initialize(). The workspace size returned should be sufficient for any batch size up to the maximum.
Parameters: max_batch_size (int) – The maximum possible batch size during inference.
Returns: The workspace size.
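For instance, a plugin that needs one float32 scratch value per input element could size its workspace like this (the (C, H, W) shape is an assumed example, not anything prescribed by the API):

```python
# Hypothetical sizing: one float32 scratch value per input element,
# for an assumed per-sample input shape of (C, H, W) = (8, 32, 32).
FLOAT32_BYTES = 4
C, H, W = 8, 32, 32

def get_workspace_size(max_batch_size):
    # Must cover the worst case: the maximum batch size.
    return max_batch_size * C * H * W * FLOAT32_BYTES

print(get_workspace_size(16))  # 524288
```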
initialize(self: tensorrt.tensorrt.IPlugin) → int¶
Initialize the layer for execution. This is called when the engine is created.
Returns: 0 for success, else non-zero (which will cause engine termination).
serialize(self: tensorrt.tensorrt.IPlugin, buffer: capsule) → None¶
Serialize the layer.
Parameters: buffer – A buffer of size at least that returned by get_serialization_size().
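A hedged sketch of what a serialize() implementation might write, using the struct module to pack two assumed plugin parameters into a caller-provided buffer (a real plugin receives a raw capsule pointer; a bytearray stands in here):

```python
import struct

# Hypothetical plugin state: two assumed int parameters.
kernel_size, num_channels = 3, 8
FMT = "<ii"  # little-endian, two int32s

def get_serialization_size():
    return struct.calcsize(FMT)

def serialize(buffer):
    # The buffer must be at least get_serialization_size() bytes.
    struct.pack_into(FMT, buffer, 0, kernel_size, num_channels)

buf = bytearray(get_serialization_size())
serialize(buf)
print(struct.unpack_from(FMT, buf, 0))  # (3, 8)
```

Whatever layout is chosen, the matching deserialization code must read the fields back in the same order.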
terminate(self: tensorrt.tensorrt.IPlugin) → None¶
Release resources acquired during plugin layer initialization. This is called when the engine is destroyed.
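The call order across the engine lifetime (configure() at build time; initialize(), execute_async(), and terminate() at runtime) can be sketched with a plain-Python stand-in for the plugin base class. The method names mirror IPlugin, but no TensorRT objects are involved and the "kernel" runs on the CPU, so this is purely illustrative:

```python
# Plain-Python stand-in illustrating the plugin lifecycle; the method
# names mirror IPlugin, but nothing here touches TensorRT or CUDA.
class DoublePlugin:
    def configure(self, input_shapes, output_shapes, max_batch_size):
        self.input_shapes = input_shapes     # per-sample shapes, no batch dim
        self.max_batch_size = max_batch_size

    def initialize(self):
        self.ready = True   # acquire resources here
        return 0            # 0 signals success

    def execute_async(self, batch_size, inputs, outputs, workspace, stream):
        # CPU stand-in for the kernel launch: double every input value.
        outputs[0][:] = [2 * v for v in inputs[0]]
        return 0            # non-zero would terminate the engine

    def terminate(self):
        self.ready = False  # release resources here

plugin = DoublePlugin()
plugin.configure([(4,)], [(4,)], max_batch_size=1)
assert plugin.initialize() == 0
out = [0] * 4
plugin.execute_async(1, [[1, 2, 3, 4]], [out], None, None)
plugin.terminate()
print(out)  # [2, 4, 6, 8]
```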