INetworkDefinition
- class tensorrt.INetworkDefinition
Represents a TensorRT Network from which the Builder can build an Engine.
- Variables
num_layers – int The number of layers in the network.
num_inputs – int The number of inputs of the network.
num_outputs – int The number of outputs of the network.
name – str The name of the network. This is used so that it can be associated with a built engine. The name must be at most 128 characters in length. TensorRT makes no use of this string except storing it as part of the engine so that it may be retrieved at runtime. A name unique to the builder will be generated by default.
has_implicit_batch_dimension – bool [DEPRECATED] Deprecated in TensorRT 10.0. Always False since support for the implicit batch dimension has been removed.
error_recorder – IErrorRecorder Application-implemented error reporting interface for TensorRT objects.
- Flags
- int A bitset of the NetworkDefinitionCreationFlags set for this network.
- __del__(self: tensorrt.tensorrt.INetworkDefinition) None
- __exit__(exc_type, exc_value, traceback)
Context managers are deprecated and have no effect. Objects are automatically freed when the reference count reaches 0.
- __getitem__(self: tensorrt.tensorrt.INetworkDefinition, arg0: int) tensorrt.tensorrt.ILayer
- __init__(*args, **kwargs)
- __len__(self: tensorrt.tensorrt.INetworkDefinition) int
- add_activation(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, type: tensorrt.tensorrt.ActivationType) tensorrt.tensorrt.IActivationLayer
Add an activation layer to the network. See IActivationLayer for more information.
- Parameters
input – The input tensor to the layer.
type – The type of activation function to apply.
- Returns
The new activation layer, or None if it could not be created.
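A minimal usage sketch (the tensor name and shape are illustrative, not part of the API), showing how an activation layer fits into network construction:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # default flags: explicit-batch network in TensorRT 10

# Declare a float32 input and append a ReLU activation layer.
x = network.add_input("x", trt.float32, (1, 3, 8, 8))
relu = network.add_activation(x, type=trt.ActivationType.RELU)
network.mark_output(relu.get_output(0))
```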
- add_assertion(self: tensorrt.tensorrt.INetworkDefinition, condition: tensorrt.tensorrt.ITensor, message: str) tensorrt.tensorrt.IAssertionLayer
Add an assertion layer. See IAssertionLayer for more information.
- Parameters
condition – The condition tensor to the layer.
message – The message to print if the assertion fails.
- Returns
The new assertion layer, or None if it could not be created.
- add_cast(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, to_type: tensorrt.tensorrt.DataType) tensorrt.tensorrt.ICastLayer
Add a cast layer. See ICastLayer for more information.
- Parameters
input – The input tensor to the layer.
to_type – The data type the output tensor should be cast into.
- Returns
The new cast layer, or None if it could not be created.
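A minimal sketch (input name and shape are illustrative) that casts a float32 tensor to float16:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 16))
cast = network.add_cast(x, trt.float16)  # cast the float32 input to float16
network.mark_output(cast.get_output(0))
```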
- add_concatenation(self: tensorrt.tensorrt.INetworkDefinition, inputs: List[tensorrt.tensorrt.ITensor]) tensorrt.tensorrt.IConcatenationLayer
Add a concatenation layer to the network. Note that all tensors must have the same dimension except for the Channel dimension. See IConcatenationLayer for more information.
- Parameters
inputs – The input tensors to the layer.
- Returns
The new concatenation layer, or None if it could not be created.
- add_constant(self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) tensorrt.tensorrt.IConstantLayer
Add a constant layer to the network. See IConstantLayer for more information.
- Parameters
shape – The shape of the constant.
weights – The constant value, represented as weights.
- Returns
The new constant layer, or None if it could not be created.
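A minimal sketch (the array contents are illustrative) that embeds a NumPy array as a constant tensor:

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# Bake a 2x3 float32 array into the network as a constant tensor.
# TensorRT does not copy the array, so keep `values` alive until the engine is built.
values = np.arange(6, dtype=np.float32).reshape(2, 3)
const = network.add_constant(shape=(2, 3), weights=trt.Weights(values))
network.mark_output(const.get_output(0))
```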
- add_convolution_nd(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, num_output_maps: int, kernel_shape: tensorrt.tensorrt.Dims, kernel: tensorrt.tensorrt.Weights, bias: tensorrt.tensorrt.Weights = None) tensorrt.tensorrt.IConvolutionLayer
Add a multi-dimension convolution layer to the network. See IConvolutionLayer for more information.
- Parameters
input – The input tensor to the convolution.
num_output_maps – The number of output feature maps for the convolution.
kernel_shape – The dimensions of the convolution kernel.
kernel – The kernel weights for the convolution.
bias – The optional bias weights for the convolution.
- Returns
The new convolution layer, or None if it could not be created.
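A minimal sketch (shapes and the zero-valued weights are placeholders) of a 2D convolution with stride and padding set on the returned layer:

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 3, 32, 32))

# 16 output maps, 3x3 kernel; the weight layout is (out_maps, in_channels, kH, kW).
kernel = np.zeros((16, 3, 3, 3), dtype=np.float32)
bias = np.zeros(16, dtype=np.float32)
conv = network.add_convolution_nd(x, num_output_maps=16, kernel_shape=(3, 3),
                                  kernel=trt.Weights(kernel), bias=trt.Weights(bias))
conv.stride_nd = (1, 1)
conv.padding_nd = (1, 1)
network.mark_output(conv.get_output(0))
```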
- add_deconvolution_nd(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, num_output_maps: int, kernel_shape: tensorrt.tensorrt.Dims, kernel: tensorrt.tensorrt.Weights, bias: tensorrt.tensorrt.Weights = None) tensorrt.tensorrt.IDeconvolutionLayer
Add a multi-dimension deconvolution layer to the network. See IDeconvolutionLayer for more information.
- Parameters
input – The input tensor to the layer.
num_output_maps – The number of output feature maps.
kernel_shape – The dimensions of the convolution kernel.
kernel – The kernel weights for the convolution.
bias – The optional bias weights for the convolution.
- Returns
The new deconvolution layer, or None if it could not be created.
- add_dequantize(*args, **kwargs)
Overloaded function.
add_dequantize(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, scale: tensorrt.tensorrt.ITensor) -> tensorrt.tensorrt.IDequantizeLayer
Add a dequantization layer to the network. See IDequantizeLayer for more information.
- arg input
A tensor to dequantize.
- arg scale
A tensor with the scale coefficients.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.float32).
- returns
The new dequantization layer, or None if it could not be created.
add_dequantize(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, scale: tensorrt.tensorrt.ITensor, output_type: tensorrt.tensorrt.DataType) -> tensorrt.tensorrt.IDequantizeLayer
Add a dequantization layer to the network. See IDequantizeLayer for more information.
- arg input
A tensor to dequantize.
- arg scale
A tensor with the scale coefficients.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.float32).
- returns
The new dequantization layer, or None if it could not be created.
- add_einsum(self: tensorrt.tensorrt.INetworkDefinition, inputs: List[tensorrt.tensorrt.ITensor], equation: str) tensorrt.tensorrt.IEinsumLayer
Adds an Einsum layer to the network. See IEinsumLayer for more information.
- Parameters
inputs – The input tensors to the layer.
equation – The Einsum equation of the layer.
- Returns
The new Einsum layer, or None if it could not be created.
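A minimal sketch (input names and shapes are illustrative) that expresses a matrix multiplication as an Einsum equation:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

a = network.add_input("a", trt.float32, (2, 3))
b = network.add_input("b", trt.float32, (3, 4))

# "ij,jk->ik" expresses a plain matrix multiplication as an Einsum equation.
einsum = network.add_einsum([a, b], "ij,jk->ik")
network.mark_output(einsum.get_output(0))
```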
- add_elementwise(self: tensorrt.tensorrt.INetworkDefinition, input1: tensorrt.tensorrt.ITensor, input2: tensorrt.tensorrt.ITensor, op: tensorrt.tensorrt.ElementWiseOperation) tensorrt.tensorrt.IElementWiseLayer
Add an elementwise layer to the network. See IElementWiseLayer for more information.
- Parameters
input1 – The first input tensor to the layer.
input2 – The second input tensor to the layer.
op – The binary operation that the layer applies.
The input tensors must have the same number of dimensions. For each dimension, their lengths must match, or one of them must be one. In the latter case, the tensor is broadcast along that axis.
The output tensor has the same number of dimensions as the inputs. For each dimension, its length is the maximum of the lengths of the corresponding input dimension.
- Returns
The new element-wise layer, or None if it could not be created.
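A minimal sketch (names and shapes are illustrative) of a broadcast addition, following the broadcasting rule described above:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# The second operand has length 1 in the last dimension, so it broadcasts against 4.
a = network.add_input("a", trt.float32, (2, 4))
b = network.add_input("b", trt.float32, (2, 1))
add = network.add_elementwise(a, b, trt.ElementWiseOperation.SUM)
network.mark_output(add.get_output(0))
```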
- add_fill(*args, **kwargs)
Overloaded function.
add_fill(self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, op: tensorrt.tensorrt.FillOperation) -> tensorrt.tensorrt.IFillLayer
Add a fill layer. See IFillLayer for more information.
- arg shape
The output tensor dimensions.
- arg op
The fill operation that the layer applies.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.float32).
- returns
The new fill layer, or None if it could not be created.
add_fill(self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, op: tensorrt.tensorrt.FillOperation, output_type: tensorrt.tensorrt.DataType) -> tensorrt.tensorrt.IFillLayer
Add a fill layer. See IFillLayer for more information.
- arg shape
The output tensor dimensions.
- arg op
The fill operation that the layer applies.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.float32).
- returns
The new fill layer, or None if it could not be created.
- add_gather(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, indices: tensorrt.tensorrt.ITensor, axis: int) tensorrt.tensorrt.IGatherLayer
Add a gather layer to the network. See IGatherLayer for more information.
- Parameters
input – The tensor to gather values from.
indices – The tensor to get indices from to populate the output tensor.
axis – The non-batch dimension axis in the data tensor to gather on.
- Returns
The new gather layer, or None if it could not be created.
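A minimal sketch (names and shapes are illustrative) that gathers rows of a data tensor by index:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

data = network.add_input("data", trt.float32, (4, 8))
indices = network.add_input("indices", trt.int32, (2,))

# Select rows of `data` given by `indices` along axis 0.
gather = network.add_gather(data, indices, axis=0)
network.mark_output(gather.get_output(0))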
- add_gather_v2(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, indices: tensorrt.tensorrt.ITensor, mode: tensorrt.tensorrt.GatherMode) tensorrt.tensorrt.IGatherLayer
Add a gather layer to the network. See IGatherLayer for more information.
- Parameters
input – The tensor to gather values from.
indices – The tensor to get indices from to populate the output tensor.
mode – The gather mode.
- Returns
The new gather layer, or None if it could not be created.
- add_grid_sample(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, grid: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IGridSampleLayer
Creates a GridSample layer with trt.InterpolationMode.LINEAR, unaligned corners, and trt.SampleMode.FILL for 4D-shape input tensors. See IGridSampleLayer for more information.
- Parameters
input – The input tensor to the layer.
grid – The grid tensor to the layer.
- Variables
interpolation_mode – InterpolationMode The interpolation mode to use in the layer. Default is LINEAR.
align_corners – bool The align mode to use in the layer. Default is False.
padding_mode – SampleMode The padding mode to use in the layer. Default is FILL.
- Returns
The new grid sample layer, or None if it could not be created.
- add_identity(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IIdentityLayer
Add an identity layer. See IIdentityLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new identity layer, or None if it could not be created.
- add_if_conditional(self: tensorrt.tensorrt.INetworkDefinition) tensorrt.tensorrt.IIfConditional
Adds an if-conditional to the network, which provides a way to specify subgraphs that will be conditionally executed using lazy evaluation. See IIfConditional for more information.
- Returns
The new if-conditional, or None if it could not be created.
- add_input(self: tensorrt.tensorrt.INetworkDefinition, name: str, dtype: tensorrt.tensorrt.DataType, shape: tensorrt.tensorrt.Dims) tensorrt.tensorrt.ITensor
Adds an input to the network.
- Parameters
name – The name of the tensor.
dtype – The data type of the tensor. Currently, tensorrt.int8 is not supported for inputs.
shape – The dimensions of the tensor. The total volume must be less than 2^30 elements.
- Returns
The newly added Tensor.
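A minimal sketch (the input name and shape are illustrative) of declaring an input with a dynamic batch dimension:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# -1 marks the batch dimension as dynamic; an optimization profile must supply
# its min/opt/max range when the engine is built.
images = network.add_input("images", trt.float32, (-1, 3, 224, 224))
print(images.shape)  # (-1, 3, 224, 224)
```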
- add_loop(self: tensorrt.tensorrt.INetworkDefinition) tensorrt.tensorrt.ILoop
Adds a loop to the network, which provides a way to specify a recurrent subgraph. See ILoop for more information.
- Returns
The new loop layer, or None if it could not be created.
- add_lrn(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, window: int, alpha: float, beta: float, k: float) tensorrt.tensorrt.ILRNLayer
Add an LRN layer to the network. See ILRNLayer for more information.
- Parameters
input – The input tensor to the layer.
window – The size of the window.
alpha – The alpha value for the LRN computation.
beta – The beta value for the LRN computation.
k – The k value for the LRN computation.
- Returns
The new LRN layer, or None if it could not be created.
- add_matrix_multiply(self: tensorrt.tensorrt.INetworkDefinition, input0: tensorrt.tensorrt.ITensor, op0: tensorrt.tensorrt.MatrixOperation, input1: tensorrt.tensorrt.ITensor, op1: tensorrt.tensorrt.MatrixOperation) tensorrt.tensorrt.IMatrixMultiplyLayer
Add a matrix multiply layer to the network. See IMatrixMultiplyLayer for more information.
- Parameters
input0 – The first input tensor (commonly A).
op0 – Whether to treat input0 as matrices, transposed matrices, or vectors.
input1 – The second input tensor (commonly B).
op1 – Whether to treat input1 as matrices, transposed matrices, or vectors.
- Returns
The new matrix multiply layer, or None if it could not be created.
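A minimal sketch (names and shapes are illustrative) computing a @ b.T by marking the second operand as transposed:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

a = network.add_input("a", trt.float32, (2, 3))
b = network.add_input("b", trt.float32, (4, 3))

# Compute a @ b.T by treating the second operand as transposed.
mm = network.add_matrix_multiply(a, trt.MatrixOperation.NONE,
                                 b, trt.MatrixOperation.TRANSPOSE)
network.mark_output(mm.get_output(0))
```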
- add_nms(self: tensorrt.tensorrt.INetworkDefinition, boxes: tensorrt.tensorrt.ITensor, scores: tensorrt.tensorrt.ITensor, max_output_boxes_per_class: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.INMSLayer
Add a non-maximum suppression layer to the network. See INMSLayer for more information.
- Parameters
boxes – The input boxes tensor to the layer.
scores – The input scores tensor to the layer.
max_output_boxes_per_class – The maxOutputBoxesPerClass tensor to the layer.
- Variables
bounding_box_format – BoundingBoxFormat The bounding box format used by the layer. Default is CORNER_PAIRS.
topk_box_limit – int The maximum number of filtered boxes considered for selection per batch item. Default is 2000 for SM 5.3 and 6.2 devices, and 5000 otherwise. The TopK box limit must be less than or equal to 2000 for SM 5.3 and 6.2 devices, and 5000 otherwise.
- Returns
The new NMS layer, or None if it could not be created.
- add_non_zero(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.INonZeroLayer
Adds a NonZero layer to the network. See INonZeroLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new NonZero layer, or None if it could not be created.
- add_normalization(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, scale: tensorrt.tensorrt.ITensor, bias: tensorrt.tensorrt.ITensor, axesMask: int) tensorrt.tensorrt.INormalizationLayer
Adds a Normalization layer to the network. See INormalizationLayer for more information.
- Parameters
input – The input tensor to the layer.
scale – The scale tensor used to scale the normalized output.
bias – The bias tensor used to scale the normalized output.
axesMask – The axes on which to perform mean calculations. The bit in position i of bitmask axes corresponds to explicit dimension i of the result. E.g., the least significant bit corresponds to the first explicit dimension and the next to least significant bit corresponds to the second explicit dimension.
- Returns
The new Normalization layer, or None if it could not be created.
- add_one_hot(self: tensorrt.tensorrt.INetworkDefinition, indices: tensorrt.tensorrt.ITensor, values: tensorrt.tensorrt.ITensor, depth: tensorrt.tensorrt.ITensor, axis: int) tensorrt.tensorrt.IOneHotLayer
Add a OneHot layer to the network. See IOneHotLayer for more information.
- Parameters
indices – The tensor to get indices from to populate the output tensor.
values – The tensor to get the off (cold) value and on (hot) value from.
depth – The tensor to get the depth (number of classes) of the one-hot encoding from.
axis – The axis to append the one-hot encoding to.
- Returns
The new OneHot layer, or None if it could not be created.
- add_padding_nd(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, pre_padding: tensorrt.tensorrt.Dims, post_padding: tensorrt.tensorrt.Dims) tensorrt.tensorrt.IPaddingLayer
Add a multi-dimensional padding layer to the network. See IPaddingLayer for more information.
- Parameters
input – The input tensor to the layer.
pre_padding – The padding to apply to the start of the tensor.
post_padding – The padding to apply to the end of the tensor.
- Returns
The new padding layer, or None if it could not be created.
- add_parametric_relu(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, slopes: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IParametricReLULayer
Add a parametric ReLU layer. See IParametricReLULayer for more information.
- Parameters
input – The input tensor to the layer.
slopes – The slopes tensor (input elements are multiplied with the slopes where the input is negative).
- Returns
The new parametric ReLU layer, or None if it could not be created.
- add_plugin_v2(self: tensorrt.tensorrt.INetworkDefinition, inputs: List[tensorrt.tensorrt.ITensor], plugin: tensorrt.tensorrt.IPluginV2) tensorrt.tensorrt.IPluginV2Layer
Add a plugin layer to the network using an IPluginV2 interface. See IPluginV2 for more information.
- Parameters
inputs – The input tensors to the layer.
plugin – The layer plugin.
- Returns
The new plugin layer, or None if it could not be created.
- add_plugin_v3(self: tensorrt.tensorrt.INetworkDefinition, inputs: List[tensorrt.tensorrt.ITensor], shape_inputs: List[tensorrt.tensorrt.ITensor], plugin: tensorrt.tensorrt.IPluginV3) tensorrt.tensorrt.IPluginV3Layer
Add a plugin layer to the network using an IPluginV3 interface. See IPluginV3 for more information.
- Parameters
inputs – The input tensors to the layer.
shape_inputs – The shape input tensors to the layer.
plugin – The layer plugin.
- Returns
The new plugin layer, or None if it could not be created.
- add_pooling_nd(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, type: tensorrt.tensorrt.PoolingType, window_size: tensorrt.tensorrt.Dims) tensorrt.tensorrt.IPoolingLayer
Add a multi-dimension pooling layer to the network. See IPoolingLayer for more information.
- Parameters
input – The input tensor to the layer.
type – The type of pooling to apply.
window_size – The size of the pooling window.
- Returns
The new pooling layer, or None if it could not be created.
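A minimal sketch (names and shapes are illustrative) of a 2x2 max pooling with a matching stride set on the returned layer:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 8, 32, 32))
pool = network.add_pooling_nd(x, trt.PoolingType.MAX, window_size=(2, 2))
pool.stride_nd = (2, 2)  # non-overlapping 2x2 windows halve the spatial size
network.mark_output(pool.get_output(0))
```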
- add_quantize(*args, **kwargs)
Overloaded function.
add_quantize(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, scale: tensorrt.tensorrt.ITensor) -> tensorrt.tensorrt.IQuantizeLayer
Add a quantization layer to the network. See IQuantizeLayer for more information.
- arg input
A tensor to quantize.
- arg scale
A tensor with the scale coefficients.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.int8).
- returns
The new quantization layer, or None if it could not be created.
add_quantize(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, scale: tensorrt.tensorrt.ITensor, output_type: tensorrt.tensorrt.DataType) -> tensorrt.tensorrt.IQuantizeLayer
Add a quantization layer to the network. See IQuantizeLayer for more information.
- arg input
A tensor to quantize.
- arg scale
A tensor with the scale coefficients.
- arg output_type
The datatype of the output tensor. Specifying output_type is optional (default value tensorrt.int8).
- returns
The new quantization layer, or None if it could not be created.
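A minimal sketch (shapes and the 0.05 scale value are illustrative assumptions) of a quantize/dequantize pair using a scalar, per-tensor scale supplied as a constant:

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 16))

# A scalar (per-tensor) scale provided as a build-time constant.
scale = network.add_constant((), trt.Weights(np.array([0.05], dtype=np.float32)))
q = network.add_quantize(x, scale.get_output(0))                    # float32 -> int8
dq = network.add_dequantize(q.get_output(0), scale.get_output(0))   # int8 -> float32
network.mark_output(dq.get_output(0))
```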
- add_ragged_softmax(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, bounds: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IRaggedSoftMaxLayer
Add a ragged softmax layer to the network. See IRaggedSoftMaxLayer for more information.
- Parameters
input – The ZxS input tensor.
bounds – The Zx1 bounds tensor.
- Returns
The new ragged softmax layer, or None if it could not be created.
- add_reduce(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, op: tensorrt.tensorrt.ReduceOperation, axes: int, keep_dims: bool) tensorrt.tensorrt.IReduceLayer
Add a reduce layer to the network. See IReduceLayer for more information.
- Parameters
input – The input tensor to the layer.
op – The reduction operation to perform.
axes – The reduction dimensions. The bit in position i of bitmask axes corresponds to explicit dimension i of the result. E.g., the least significant bit corresponds to the first explicit dimension and the next to least significant bit corresponds to the second explicit dimension.
keep_dims – The boolean that specifies whether or not to keep the reduced dimensions in the output of the layer.
- Returns
The new reduce layer, or None if it could not be created.
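A minimal sketch (names and shapes are illustrative) showing how the axes bitmask selects reduction dimensions, here averaging over the two spatial axes:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 8, 16, 16))

# Average over spatial dimensions 2 and 3: axes bitmask = (1 << 2) | (1 << 3).
mean = network.add_reduce(x, trt.ReduceOperation.AVG,
                          axes=(1 << 2) | (1 << 3), keep_dims=True)
network.mark_output(mean.get_output(0))
```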
- add_resize(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IResizeLayer
Add a resize layer. See IResizeLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new resize layer, or None if it could not be created.
- add_reverse_sequence(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, sequence_lens: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IReverseSequenceLayer
Adds a ReverseSequence layer to the network. See IReverseSequenceLayer for more information.
- Parameters
input – The input tensor to the layer.
sequence_lens – 1D tensor specifying lengths of sequences to reverse in a batch. The length of sequence_lens must be equal to the size of the dimension in input specified by batch_axis.
- Returns
The new ReverseSequence layer, or None if it could not be created.
- add_scale(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, mode: tensorrt.tensorrt.ScaleMode, shift: tensorrt.tensorrt.Weights = None, scale: tensorrt.tensorrt.Weights = None, power: tensorrt.tensorrt.Weights = None) tensorrt.tensorrt.IScaleLayer
Add a scale layer to the network. See IScaleLayer for more information.
- Parameters
input – The input tensor to the layer. This tensor is required to have a minimum of 3 dimensions.
mode – The scaling mode.
shift – The shift value.
scale – The scale value.
power – The power value.
If the weights are available, then the size of the weights depends on the ScaleMode. For UNIFORM, the number of weights is equal to 1. For CHANNEL, the number of weights is equal to the channel dimension. For ELEMENTWISE, the number of weights is equal to the volume of the input.
- Returns
The new scale layer, or None if it could not be created.
- add_scale_nd(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, mode: tensorrt.tensorrt.ScaleMode, shift: tensorrt.tensorrt.Weights = None, scale: tensorrt.tensorrt.Weights = None, power: tensorrt.tensorrt.Weights = None, channel_axis: int) tensorrt.tensorrt.IScaleLayer
Add a multi-dimension scale layer to the network. See IScaleLayer for more information.
- Parameters
input – The input tensor to the layer. This tensor is required to have a minimum of 3 dimensions.
mode – The scaling mode.
shift – The shift value.
scale – The scale value.
power – The power value.
channel_axis – The channel dimension axis.
If the weights are available, then the size of the weights depends on the ScaleMode. For UNIFORM, the number of weights is equal to 1. For CHANNEL, the number of weights is equal to the channel dimension. For ELEMENTWISE, the number of weights is equal to the volume of the input.
- Returns
The new scale layer, or None if it could not be created.
- add_scatter(self: tensorrt.tensorrt.INetworkDefinition, data: tensorrt.tensorrt.ITensor, indices: tensorrt.tensorrt.ITensor, updates: tensorrt.tensorrt.ITensor, mode: tensorrt.tensorrt.ScatterMode) tensorrt.tensorrt.IScatterLayer
Add a scatter layer to the network. See IScatterLayer for more information.
- Parameters
data – The tensor to get default values from.
indices – The tensor to get indices from to populate the output tensor.
updates – The tensor to get values from to populate the output tensor.
mode – The operation mode. See IScatterLayer for more information.
- Returns
The new Scatter layer, or None if it could not be created.
- add_select(self: tensorrt.tensorrt.INetworkDefinition, condition: tensorrt.tensorrt.ITensor, then_input: tensorrt.tensorrt.ITensor, else_input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.ISelectLayer
Add a select layer. See ISelectLayer for more information.
- Parameters
condition – The condition tensor to the layer.
then_input – The then input tensor to the layer.
else_input – The else input tensor to the layer.
- Returns
The new select layer, or None if it could not be created.
- add_shape(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IShapeLayer
Add a shape layer to the network. See IShapeLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new shape layer, or None if it could not be created.
- add_shuffle(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.IShuffleLayer
Add a shuffle layer to the network. See IShuffleLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new shuffle layer, or None if it could not be created.
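A minimal sketch (names and shapes are illustrative) that uses a shuffle layer to permute NCHW to NHWC and then reshape:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (1, 3, 4, 4))

# NCHW -> NHWC: permute the axes, then (optionally) reshape the result.
shuffle = network.add_shuffle(x)
shuffle.first_transpose = (0, 2, 3, 1)
shuffle.reshape_dims = (1, 4, 4, 3)
network.mark_output(shuffle.get_output(0))
```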
- add_slice(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, start: tensorrt.tensorrt.Dims, shape: tensorrt.tensorrt.Dims, stride: tensorrt.tensorrt.Dims) tensorrt.tensorrt.ISliceLayer
Add a slice layer to the network. See ISliceLayer for more information.
- Parameters
input – The input tensor to the layer.
start – The start offset.
shape – The output shape.
stride – The slicing stride. Positive, negative, zero stride values, and combinations of them in different dimensions are allowed.
- Returns
The new slice layer, or None if it could not be created.
- add_softmax(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor) tensorrt.tensorrt.ISoftMaxLayer
Add a softmax layer to the network. See ISoftMaxLayer for more information.
- Parameters
input – The input tensor to the layer.
- Returns
The new softmax layer, or None if it could not be created.
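A minimal sketch (names and shapes are illustrative) that normalizes over the class axis by setting the layer's axes bitmask:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

logits = network.add_input("logits", trt.float32, (1, 1000))
softmax = network.add_softmax(logits)
softmax.axes = 1 << 1  # normalize over dimension 1 (the class axis)
network.mark_output(softmax.get_output(0))
```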
- add_topk(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, op: tensorrt.tensorrt.TopKOperation, k: int, axes: int) tensorrt.tensorrt.ITopKLayer
Add a TopK layer to the network. See ITopKLayer for more information.
The TopK layer has two outputs of the same dimensions. The first contains data values, the second contains index positions for the values. Output values are sorted, largest first for operation TopKOperation.MAX and smallest first for operation TopKOperation.MIN. Currently only values of K up to 3840 are supported.
- Parameters
input – The input tensor to the layer.
op – Operation to perform.
k – Number of elements to keep.
axes – The reduction dimensions. The bit in position i of bitmask axes corresponds to explicit dimension i of the result. E.g., the least significant bit corresponds to the first explicit dimension and the next to least significant bit corresponds to the second explicit dimension. Currently axes must specify exactly one dimension, and it must be one of the last four dimensions.
- Returns
The new TopK layer, or None if it could not be created.
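A minimal sketch (names and shapes are illustrative) selecting the largest 5 values along dimension 1 and exposing both outputs:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

scores = network.add_input("scores", trt.float32, (1, 1000))

# Largest 5 values along dimension 1; output 0 holds the values, output 1 the indices.
topk = network.add_topk(scores, trt.TopKOperation.MAX, k=5, axes=1 << 1)
network.mark_output(topk.get_output(0))
network.mark_output(topk.get_output(1))
```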
- add_unary(self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, op: tensorrt.tensorrt.UnaryOperation) tensorrt.tensorrt.IUnaryLayer
Add a unary layer to the network. See IUnaryLayer for more information.
- Parameters
input – The input tensor to the layer.
op – The operation to apply.
- Returns
The new unary layer, or None if it could not be created.
- are_weights_marked_refittable(self: tensorrt.tensorrt.INetworkDefinition, name: str) bool
Whether the weights have been marked as refittable.
- Parameters
name – The name of the weights to check.
- property builder
The builder from which this INetworkDefinition was created.
See IBuilder for more information.
- get_flag(self: tensorrt.tensorrt.INetworkDefinition, flag: tensorrt.NetworkDefinitionCreationFlag) bool
Returns True if the specified NetworkDefinitionCreationFlag is set.
- Parameters
flag – The NetworkDefinitionCreationFlag.
- Returns
Whether the flag is set.
- get_input(self: tensorrt.tensorrt.INetworkDefinition, index: int) tensorrt.tensorrt.ITensor
Get the input tensor specified by the given index.
- Parameters
index – The index of the input tensor.
- Returns
The tensor, or None if it is out of range.
- get_layer(self: tensorrt.tensorrt.INetworkDefinition, index: int) tensorrt.tensorrt.ILayer
Get the layer specified by the given index.
- Parameters
index – The index of the layer.
- Returns
The layer, or None if it is out of range.
- get_output(self: tensorrt.tensorrt.INetworkDefinition, index: int) tensorrt.tensorrt.ITensor
Get the output tensor specified by the given index.
- Parameters
index – The index of the output tensor.
- Returns
The tensor, or None if it is out of range.
- is_debug_tensor(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) bool
Check if a tensor is marked as debug.
- Parameters
tensor – The tensor to be checked.
- mark_debug(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) bool
Mark a tensor as a debug tensor in the network.
- Parameters
tensor – The tensor to be marked as debug tensor.
- Returns
True on success, False otherwise.
- mark_output(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) None
Mark a tensor as an output.
- Parameters
tensor – The tensor to mark.
- mark_output_for_shapes(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) bool
Enable a tensor's value to be computed by IExecutionContext.get_shape_binding().
- Parameters
tensor – The tensor to mark as an output tensor. The tensor must be of type int32 and have no more than one dimension.
- Returns
True if successful, False if the tensor is already marked as an output.
- mark_weights_refittable(self: tensorrt.tensorrt.INetworkDefinition, name: str) bool
Mark a weight as refittable.
- Parameters
name – The weight to mark.
- remove_tensor(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) None
Remove a tensor from the network.
- Parameters
tensor – The tensor to remove.
It is illegal to remove a tensor that is the input or output of a layer. If this method is called with such a tensor, a warning will be emitted on the log and the call will be ignored.
- set_weights_name(self: tensorrt.tensorrt.INetworkDefinition, weights: tensorrt.tensorrt.Weights, name: str) bool
Associate a name with all current uses of the given weights.
The name must be set after the Weights are used in the network. Lookup is associative: the name applies to all Weights with matching type, value pointer, and count. If Weights with a matching value pointer but a different type or count exist in the network, an error message is issued, the name is rejected, and this method returns False. If the name has already been used for other weights, this method returns False. Passing None as the name causes the weights to become unnamed, i.e. it clears any previous name.
- Parameters
weights – The weights to be named.
name – The name to associate with the weights.
- Returns
True on success.
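A minimal sketch (the weight values and the name "fc1_weights" are illustrative) of naming weights after use and then marking them refittable by that name with mark_weights_refittable():

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

values = np.ones((4, 4), dtype=np.float32)
weights = trt.Weights(values)
const = network.add_constant((4, 4), weights)
network.mark_output(const.get_output(0))

# Name the weights after they are used in the network, then mark them refittable
# by that name (refitting also requires the appropriate builder flag at build time).
network.set_weights_name(weights, "fc1_weights")
network.mark_weights_refittable("fc1_weights")
```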
- unmark_debug(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) bool
Unmark a tensor as a debug tensor in the network.
- Parameters
tensor – The tensor to be unmarked as debug tensor.
- Returns
True on success, False otherwise.
- unmark_output(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) None
Unmark a tensor as a network output.
- Parameters
tensor – The tensor to unmark as an output tensor.
- unmark_output_for_shapes(self: tensorrt.tensorrt.INetworkDefinition, tensor: tensorrt.tensorrt.ITensor) bool
Undo mark_output_for_shapes().
- Parameters
tensor – The tensor to unmark as an output tensor.
- Returns
True if successful, False if the tensor is not marked as an output.
- unmark_weights_refittable(self: tensorrt.tensorrt.INetworkDefinition, name: str) bool
Unmark a weight as refittable.
- Parameters
name – The weight to unmark.