A tensor in a network definition.
To remove a tensor from a network definition, use INetworkDefinition::removeTensor().
When using the DLA, the cumulative size of all tensors that are not marked as network input or output tensors must be less than 1 GB to fit into a single subgraph. If the build option kGPU_FALLBACK is specified, multiple subgraphs can be created, with each subgraph limited to less than 1 GB of internal tensor data.
 Warning
 The volume of the tensor must be less than 2^31 elements. If the tensor is a shape tensor, its volume must not exceed 64.

Do not inherit from this class, as doing so will break forward compatibility of the API and ABI.
bool nvinfer1::ITensor::isExecutionTensor() const noexcept
Whether the tensor is an execution tensor.
Tensors are usually execution tensors. The exceptions are tensors used solely for shape calculations or whose contents are not needed to compute the outputs.
The result of isExecutionTensor() is reliable only when network construction is complete. For example, if a partially built network has no path from a tensor to a network output, isExecutionTensor() returns false. Completing the path would cause it to become true.
If a tensor is an execution tensor and becomes an engine input or output, then ICudaEngine::isExecutionBinding will be true for that tensor.
A tensor with isShapeTensor() == false and isExecutionTensor() == false can still show up as an input to the engine if its dimensions are required. In that case, only its dimensions need to be set at runtime and a nullptr can be passed instead of a pointer to its contents.
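As a sketch of how these classifications combine, the following hypothetical helper (names are illustrative; it assumes network construction is already complete, since the results are only reliable then) queries both predicates on a tensor:

```cpp
#include "NvInfer.h"

// Sketch: classify a tensor once the network is fully constructed.
// `t` is assumed to be an ITensor* belonging to a completed network.
void describeTensor(nvinfer1::ITensor* t)
{
    bool const exec  = t->isExecutionTensor();
    bool const shape = t->isShapeTensor();
    if (exec && shape)  { /* participates in both data flow and shape calculations */ }
    else if (exec)      { /* ordinary data tensor */ }
    else if (shape)     { /* used solely for shape calculations */ }
    else                { /* only its dimensions matter; nullptr may be passed
                             for its contents if it becomes an engine input */ }
}
```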
bool nvinfer1::ITensor::isShapeTensor() const noexcept
Whether the tensor is a shape tensor.
A shape tensor is a tensor that is related to shape calculations. It must have type Int32, Bool, or Float, and its shape must be determinable at build time. Furthermore, it must be needed as a shape tensor, either marked as a network shape output via markOutputForShapes(), or as a layer input that is required to be a shape tensor, such as the second input to IShuffleLayer. Some layers are "polymorphic" in this respect. For example, the inputs to IElementWiseLayer must be shape tensors if the output is a shape tensor.
The TensorRT Developer Guide gives the formal rules for what tensors are shape tensors.
The result of isShapeTensor() is reliable only when network construction is complete. For example, if a partially built network sums two tensors T1 and T2 to create tensor T3, and none are yet needed as shape tensors, isShapeTensor() returns false for all three tensors. Setting the second input of IShuffleLayer to be T3 would cause all three tensors to be shape tensors, because IShuffleLayer requires that its second optional input be a shape tensor, and IElementWiseLayer is "polymorphic".
If a tensor is a shape tensor and becomes an engine input or output, then ICudaEngine::isShapeBinding will be true for that tensor. Such a shape tensor must have type Int32.
It is possible for a tensor to be both a shape tensor and an execution tensor.
 Returns
 True if tensor is a shape tensor, false otherwise.
 See also
 INetworkDefinition::markOutputForShapes(), ICudaEngine::isShapeBinding()
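The IShuffleLayer case described above can be sketched as follows. This is a minimal illustration with made-up tensor names; it assumes `network` is an `INetworkDefinition*` for an explicit-batch network:

```cpp
#include "NvInfer.h"

// Sketch: a tensor becomes a shape tensor by being wired into a position
// that requires one, here the second input of IShuffleLayer.
void buildReshape(nvinfer1::INetworkDefinition* network)
{
    auto* data    = network->addInput("data",    nvinfer1::DataType::kFLOAT,
                                      nvinfer1::Dims{2, {3, 4}});
    auto* newDims = network->addInput("newDims", nvinfer1::DataType::kINT32,
                                      nvinfer1::Dims{1, {2}});
    auto* shuffle = network->addShuffle(*data);
    shuffle->setInput(1, *newDims); // second input must be a shape tensor
    // Once construction completes, newDims->isShapeTensor() returns true.
}
```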
void nvinfer1::ITensor::setAllowedFormats(TensorFormats formats) noexcept
Set allowed formats for this tensor. By default all formats are allowed. Shape tensors (for which isShapeTensor() returns true) may only have row-major linear format.
When running a network on the DLA and the build option kGPU_FALLBACK is not specified, if a DLA format (kCHW4 with Int8, kCHW4 with FP16, kCHW16 with FP16, or kCHW32 with Int8) is set, the input format is treated as a native DLA format with line-stride requirements. Input/output bindings with these formats must have the correct layout during inference.
 Parameters

formats  A bitmask of TensorFormat values that are supported for this tensor. 
 See also
 ITensor::getAllowedFormats()
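Since TensorFormats is a bitmask over TensorFormat values, restricting a tensor to a subset of formats might look like the following sketch (assuming `tensor` is a network input or output ITensor*):

```cpp
#include "NvInfer.h"

// Sketch: allow only linear and CHW32 layouts for this tensor.
// Each bit position in the mask corresponds to one TensorFormat value.
void restrictFormats(nvinfer1::ITensor* tensor)
{
    nvinfer1::TensorFormats const formats =
        1U << static_cast<uint32_t>(nvinfer1::TensorFormat::kLINEAR)
      | 1U << static_cast<uint32_t>(nvinfer1::TensorFormat::kCHW32);
    tensor->setAllowedFormats(formats);
}
```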

void nvinfer1::ITensor::setBroadcastAcrossBatch(bool broadcastAcrossBatch) noexcept
Set whether to enable broadcast of tensor across the batch.
When a tensor is broadcast across a batch, it has the same value for every member in the batch. Memory is only allocated once for the single member.
This method is only valid for network input tensors, since the flags of layer output tensors are inferred based on layer inputs and parameters. If this state is modified for a tensor in the network, the states of all dependent tensors will be recomputed. If the tensor is for an explicit batch network, then this function does nothing.
 Warning
 The broadcast flag is ignored when using explicit batch network mode.
 Parameters

broadcastAcrossBatch  Whether to enable broadcast of tensor across the batch. 
 See also
 getBroadcastAcrossBatch()
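For an implicit-batch network, marking a weights-like input as shared across the batch might look like this sketch (tensor name and shape are illustrative; recall the call is a no-op for explicit-batch networks):

```cpp
#include "NvInfer.h"

// Sketch: store one copy of "bias" and reuse it for every batch member.
// Valid only for network input tensors in implicit-batch mode.
void shareAcrossBatch(nvinfer1::INetworkDefinition* network)
{
    auto* bias = network->addInput("bias", nvinfer1::DataType::kFLOAT,
                                   nvinfer1::Dims{1, {256}});
    bias->setBroadcastAcrossBatch(true);
}
```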
void nvinfer1::ITensor::setDimensionName(int32_t index, char const* name) noexcept
Name a dimension of an input tensor.
Associate a runtime dimension of an input tensor with a symbolic name. Dimensions with the same non-empty name must be equal at runtime. Knowing this equality for runtime dimensions may help the TensorRT optimizer. Both runtime and build-time dimensions can be named.
For example, setDimensionName(0, "n") associates the symbolic name "n" with the leading dimension.
This method copies the name string. If the function is called again with the same index, it overwrites the previous name. If nullptr is passed as name, the name of the dimension is cleared.
 Parameters

index  Index of the dimension.
name  Name of the dimension, as a pointer to a null-terminated character sequence.
 Warning
The string name must be null-terminated and at most 4096 bytes including the terminator.
 See also
 getDimensionName()
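Asserting that two dynamic dimensions are equal at runtime might look like this sketch (tensor names and shapes are made up; -1 marks a runtime dimension):

```cpp
#include "NvInfer.h"

// Sketch: giving two leading dimensions the same name tells the optimizer
// they must be equal at runtime.
void nameBatchDims(nvinfer1::INetworkDefinition* network)
{
    auto* a = network->addInput("a", nvinfer1::DataType::kFLOAT,
                                nvinfer1::Dims{2, {-1, 64}});
    auto* b = network->addInput("b", nvinfer1::DataType::kFLOAT,
                                nvinfer1::Dims{2, {-1, 64}});
    a->setDimensionName(0, "batch");
    b->setDimensionName(0, "batch"); // same name => same runtime value
}
```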
void nvinfer1::ITensor::setDimensions(Dims dimensions) noexcept
Set the dimensions of a tensor.
For a network input, the dimensions are assigned by the application. For a network output, the dimensions are computed based on the layer parameters and the inputs to the layer. If a tensor size or a parameter is modified in the network, the dimensions of all dependent tensors will be recomputed.
This call is only legal for network input tensors, since the dimensions of layer output tensors are inferred based on layer inputs and parameters. The volume must be less than 2^31 elements.
 Parameters

dimensions  The dimensions of the tensor. 
 See also
 getDimensions()
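Changing an input tensor's dimensions after creation might look like this sketch (names and shapes are illustrative):

```cpp
#include "NvInfer.h"

// Sketch: resize a network input. Legal only for network input tensors;
// dependent tensor dimensions are recomputed automatically.
void resizeInput(nvinfer1::INetworkDefinition* network)
{
    auto* input = network->addInput("input", nvinfer1::DataType::kFLOAT,
                                    nvinfer1::Dims{3, {3, 224, 224}});
    input->setDimensions(nvinfer1::Dims{3, {3, 112, 112}});
}
```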
bool nvinfer1::ITensor::setDynamicRange(float min, float max) noexcept
Set dynamic range for the tensor.
Currently, only symmetric ranges are supported. Therefore, the larger of the absolute values of the provided bounds is used.
Requires that min and max be finite, and min <= max.
 Returns
 Whether the dynamic range was set successfully.
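Because only symmetric ranges are supported, the larger magnitude wins; setting a range might look like this sketch (assuming `tensor` is an ITensor* in a network being calibrated for Int8):

```cpp
#include "NvInfer.h"

// Sketch: set a symmetric dynamic range. Since ranges are symmetric,
// (-2.0f, 3.0f) behaves the same as (-3.0f, 3.0f).
void setRange(nvinfer1::ITensor* tensor)
{
    if (!tensor->setDynamicRange(-3.0f, 3.0f))
    {
        // Rejected: e.g. non-finite bounds or min > max.
    }
}
```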