TensorRT 10.0.0
nvinfer1::IExecutionContext Class Reference

Context for executing inference using an engine, with functionally unsafe features. More...

#include <NvInferRuntime.h>

Inheritance diagram for nvinfer1::IExecutionContext:
nvinfer1::INoCopy

Public Member Functions

virtual ~IExecutionContext () noexcept=default
 
void setDebugSync (bool sync) noexcept
 Set the debug sync flag. More...
 
bool getDebugSync () const noexcept
 Get the debug sync flag. More...
 
void setProfiler (IProfiler *profiler) noexcept
 Set the profiler. More...
 
IProfiler * getProfiler () const noexcept
 Get the profiler. More...
 
ICudaEngine const & getEngine () const noexcept
 Get the associated engine. More...
 
void setName (char const *name) noexcept
 Set the name of the execution context. More...
 
char const * getName () const noexcept
 Return the name of the execution context. More...
 
void setDeviceMemory (void *memory) noexcept
 Set the device memory for use by this execution context. More...
 
Dims getTensorStrides (char const *tensorName) const noexcept
 Return the strides of the buffer for the given tensor name. More...
 
int32_t getOptimizationProfile () const noexcept
 Get the index of the currently selected optimization profile. More...
 
bool setInputShape (char const *tensorName, Dims const &dims) noexcept
 Set shape of given input. More...
 
Dims getTensorShape (char const *tensorName) const noexcept
 Return the shape of the given input or output. More...
 
bool allInputDimensionsSpecified () const noexcept
 Whether all dynamic dimensions of input tensors have been specified. More...
 
TRT_DEPRECATED bool allInputShapesSpecified () const noexcept
 Whether all input shape bindings have been specified. More...
 
void setErrorRecorder (IErrorRecorder *recorder) noexcept
 Set the ErrorRecorder for this interface. More...
 
IErrorRecorder * getErrorRecorder () const noexcept
 Get the ErrorRecorder assigned to this interface. More...
 
bool executeV2 (void *const *bindings) noexcept
 Synchronously execute a network. More...
 
bool setOptimizationProfileAsync (int32_t profileIndex, cudaStream_t stream) noexcept
 Select an optimization profile for the current context with async semantics. More...
 
void setEnqueueEmitsProfile (bool enqueueEmitsProfile) noexcept
 Set whether enqueue emits layer timing to the profiler. More...
 
bool getEnqueueEmitsProfile () const noexcept
 Get the enqueueEmitsProfile state. More...
 
bool reportToProfiler () const noexcept
 Calculate layer timing info for the current optimization profile in IExecutionContext and update the profiler after one iteration of inference launch. More...
 
bool setTensorAddress (char const *tensorName, void *data) noexcept
 Set memory address for given input or output tensor. More...
 
void const * getTensorAddress (char const *tensorName) const noexcept
 Get memory address bound to given input or output tensor, or nullptr if the provided name does not map to an input or output tensor. More...
 
bool setOutputTensorAddress (char const *tensorName, void *data) noexcept
 Set the memory address for a given output tensor. More...
 
bool setInputTensorAddress (char const *tensorName, void const *data) noexcept
 Set memory address for given input. More...
 
void * getOutputTensorAddress (char const *tensorName) const noexcept
 Get memory address for given output. More...
 
int32_t inferShapes (int32_t nbMaxNames, char const **tensorNames) noexcept
 Run shape calculations. More...
 
size_t updateDeviceMemorySizeForShapes () noexcept
 Recompute the internal activation buffer sizes based on the current input shapes, and return the total amount of memory required. More...
 
bool setInputConsumedEvent (cudaEvent_t event) noexcept
 Mark input as consumed. More...
 
cudaEvent_t getInputConsumedEvent () const noexcept
 The event associated with consuming the input. More...
 
bool setOutputAllocator (char const *tensorName, IOutputAllocator *outputAllocator) noexcept
 Set output allocator to use for output tensor of given name. Pass nullptr to outputAllocator to unset. The allocator is called by enqueueV3(). More...
 
IOutputAllocator * getOutputAllocator (char const *tensorName) const noexcept
 Get output allocator associated with output tensor of given name, or nullptr if the provided name does not map to an output tensor. More...
 
int64_t getMaxOutputSize (char const *tensorName) const noexcept
 Get upper bound on an output tensor's size, in bytes, based on the current optimization profile and input dimensions. More...
 
bool setTemporaryStorageAllocator (IGpuAllocator *allocator) noexcept
 Specify allocator to use for internal temporary storage. More...
 
IGpuAllocator * getTemporaryStorageAllocator () const noexcept
 Get allocator set by setTemporaryStorageAllocator. More...
 
bool enqueueV3 (cudaStream_t stream) noexcept
 Enqueue inference on a stream. More...
 
void setPersistentCacheLimit (size_t size) noexcept
 Set the maximum size for persistent cache usage. More...
 
size_t getPersistentCacheLimit () const noexcept
 Get the maximum size for persistent cache usage. More...
 
bool setNvtxVerbosity (ProfilingVerbosity verbosity) noexcept
 Set the verbosity of the NVTX markers in the execution context. More...
 
ProfilingVerbosity getNvtxVerbosity () const noexcept
 Get the NVTX verbosity of the execution context. More...
 
void setAuxStreams (cudaStream_t *auxStreams, int32_t nbStreams) noexcept
 Set the auxiliary streams that TensorRT should launch kernels on in the next enqueueV3() call. More...
 
bool setDebugListener (IDebugListener *listener) noexcept
 Set DebugListener for this execution context. More...
 
IDebugListener * getDebugListener () noexcept
 Get the DebugListener of this execution context. More...
 
bool setTensorDebugState (char const *name, bool flag) noexcept
 Set debug state of tensor given the tensor name. More...
 
bool setAllTensorsDebugState (bool flag) noexcept
 Turn the debug state of all debug tensors on or off. More...
 
bool getDebugState (char const *name) const noexcept
 Get the debug state of a tensor given the tensor name. More...
 

Protected Attributes

apiv::VExecutionContext * mImpl
 

Additional Inherited Members

- Protected Member Functions inherited from nvinfer1::INoCopy
 INoCopy ()=default
 
virtual ~INoCopy ()=default
 
 INoCopy (INoCopy const &other)=delete
 
INoCopy & operator= (INoCopy const &other)=delete
 
 INoCopy (INoCopy &&other)=delete
 
INoCopy & operator= (INoCopy &&other)=delete
 

Detailed Description

Context for executing inference using an engine, with functionally unsafe features.

Multiple execution contexts may exist for one ICudaEngine instance, allowing the same engine to be used for the execution of multiple batches simultaneously. If the engine supports dynamic shapes, each execution context in concurrent use must use a separate optimization profile.

Warning
Do not inherit from this class, as doing so will break forward-compatibility of the API and ABI.

Constructor & Destructor Documentation

◆ ~IExecutionContext()

virtual nvinfer1::IExecutionContext::~IExecutionContext ( )
virtual default noexcept

Member Function Documentation

◆ allInputDimensionsSpecified()

bool nvinfer1::IExecutionContext::allInputDimensionsSpecified ( ) const
inline noexcept

Whether all dynamic dimensions of input tensors have been specified.

Returns
True if all dynamic dimensions of input tensors have been specified by calling setInputShape().

Trivially true if network has no dynamically shaped input tensors.

Does not work with name-based interfaces, e.g. IExecutionContext::setInputShape(). Use IExecutionContext::inferShapes() instead.

◆ allInputShapesSpecified()

TRT_DEPRECATED bool nvinfer1::IExecutionContext::allInputShapesSpecified ( ) const
inline noexcept

Whether all input shape bindings have been specified.

Returns
True if all input shape bindings have been specified by setInputShapeBinding().

Trivially true if network has no input shape bindings.

Does not work with name-based interfaces, e.g. IExecutionContext::setInputShape(). Use IExecutionContext::inferShapes() instead.

Deprecated:
Deprecated in TensorRT 10.0. setInputShapeBinding() is removed since TensorRT 10.0.

◆ enqueueV3()

bool nvinfer1::IExecutionContext::enqueueV3 ( cudaStream_t  stream)
inline noexcept

Enqueue inference on a stream.

Parameters
stream    A cuda stream on which the inference kernels will be enqueued.
Returns
True if the kernels were enqueued successfully, false otherwise.

Modifying or releasing memory that has been registered for the tensors before stream synchronization, or before the event passed to setInputConsumedEvent() has been triggered, results in undefined behavior. Input tensors can be released once the setInputConsumedEvent() event has been triggered, whereas output tensors require stream synchronization.

Warning
Using the default stream may lead to performance issues due to additional cudaDeviceSynchronize() calls by TensorRT to ensure correct synchronization. Use a non-default stream instead.
If the Engine is streaming weights, enqueueV3 will become synchronous, and the graph will not be capturable.
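
For illustration only, a minimal sketch of an enqueueV3() call is shown below; the tensor names "input" and "output", the preallocated device buffers, and the assumption that all dynamic shapes are already set are not part of the header documentation:

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>.
    bool runAsync(nvinfer1::IExecutionContext& context, void* dInput, void* dOutput)
    {
        cudaStream_t stream{};
        cudaStreamCreate(&stream);                       // non-default stream, as recommended above
        context.setTensorAddress("input", dInput);       // hypothetical input tensor name
        context.setTensorAddress("output", dOutput);     // hypothetical output tensor name
        bool const ok = context.enqueueV3(stream);       // returns once the work is enqueued
        cudaStreamSynchronize(stream);                   // outputs are valid only after this
        cudaStreamDestroy(stream);
        return ok;
    }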

◆ executeV2()

bool nvinfer1::IExecutionContext::executeV2 ( void *const *  bindings)
inline noexcept

Synchronously execute a network.

This method requires an array of input and output buffers. The mapping from indices to tensor names can be queried using ICudaEngine::getIOTensorName().

Parameters
bindings    An array of pointers to input and output buffers for the network.
Returns
True if execution succeeded.
See also
ICudaEngine::getIOTensorName()
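
A hedged sketch of filling the bindings array in I/O tensor index order follows; the name-to-pointer map and the assumption that every buffer is preallocated device memory are illustrative only:

    // Sketch only: requires <NvInferRuntime.h>, <map>, <string>, and <vector>.
    bool runSync(nvinfer1::ICudaEngine const& engine, nvinfer1::IExecutionContext& context,
                 std::map<std::string, void*> const& buffers)
    {
        std::vector<void*> bindings(engine.getNbIOTensors());
        for (int32_t i = 0; i < engine.getNbIOTensors(); ++i)
        {
            // Index order matches ICudaEngine::getIOTensorName().
            bindings[i] = buffers.at(engine.getIOTensorName(i));
        }
        return context.executeV2(bindings.data());
    }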

◆ getDebugListener()

IDebugListener * nvinfer1::IExecutionContext::getDebugListener ( )
inline noexcept

Get the DebugListener of this execution context.

Returns
DebugListener of this execution context.

◆ getDebugState()

bool nvinfer1::IExecutionContext::getDebugState ( char const *  name) const
inline noexcept

Get the debug state.

Returns
true if there is a debug tensor with the given name and it has debug state turned on.

◆ getDebugSync()

bool nvinfer1::IExecutionContext::getDebugSync ( ) const
inline noexcept

Get the debug sync flag.

See also
setDebugSync()

◆ getEngine()

ICudaEngine const & nvinfer1::IExecutionContext::getEngine ( ) const
inline noexcept

Get the associated engine.

See also
ICudaEngine

◆ getEnqueueEmitsProfile()

bool nvinfer1::IExecutionContext::getEnqueueEmitsProfile ( ) const
inline noexcept

Get the enqueueEmitsProfile state.

Returns
The enqueueEmitsProfile state.
See also
IExecutionContext::setEnqueueEmitsProfile()

◆ getErrorRecorder()

IErrorRecorder * nvinfer1::IExecutionContext::getErrorRecorder ( ) const
inline noexcept

Get the ErrorRecorder assigned to this interface.

Retrieves the assigned error recorder object for the given class. A nullptr will be returned if an error handler has not been set.

Returns
A pointer to the IErrorRecorder object that has been registered.
See also
setErrorRecorder()

◆ getInputConsumedEvent()

cudaEvent_t nvinfer1::IExecutionContext::getInputConsumedEvent ( ) const
inline noexcept

The event associated with consuming the input.

Returns
The cuda event. Nullptr will be returned if the event is not set yet.

◆ getMaxOutputSize()

int64_t nvinfer1::IExecutionContext::getMaxOutputSize ( char const *  tensorName) const
inline noexcept

Get upper bound on an output tensor's size, in bytes, based on the current optimization profile and input dimensions.

If the profile or input dimensions are not yet set, or the provided name does not map to an output, returns -1.

Parameters
tensorName    The name of an output tensor.
Returns
Upper bound in bytes.
Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
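
The bound can be used to preallocate an output buffer before the exact output shape is known. A minimal sketch, with the tensor name and allocation policy as assumptions:

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>.
    void* preallocateOutput(nvinfer1::IExecutionContext const& context, char const* tensorName)
    {
        int64_t const bound = context.getMaxOutputSize(tensorName);
        if (bound < 0)
        {
            return nullptr;  // profile/input dimensions not set yet, or not an output tensor
        }
        void* dOutput{nullptr};
        cudaMalloc(&dOutput, static_cast<size_t>(bound));  // may overallocate; exact size known later
        return dOutput;
    }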

◆ getName()

char const * nvinfer1::IExecutionContext::getName ( ) const
inline noexcept

Return the name of the execution context.

See also
setName()

◆ getNvtxVerbosity()

ProfilingVerbosity nvinfer1::IExecutionContext::getNvtxVerbosity ( ) const
inline noexcept

Get the NVTX verbosity of the execution context.

Returns
The current NVTX verbosity of the execution context.
See also
setNvtxVerbosity()

◆ getOptimizationProfile()

int32_t nvinfer1::IExecutionContext::getOptimizationProfile ( ) const
inline noexcept

Get the index of the currently selected optimization profile.

If the profile index has not been set yet (implicitly to 0 if no other execution context has been set to profile 0, or explicitly for all subsequent contexts), an invalid value of -1 will be returned, and all calls to enqueueV3()/executeV2() will fail until a valid profile index has been set. This behavior is deprecated in TensorRT 8.6; contexts will default to optimization profile 0 and -1 will no longer be returned.

◆ getOutputAllocator()

IOutputAllocator * nvinfer1::IExecutionContext::getOutputAllocator ( char const *  tensorName) const
inline noexcept

Get output allocator associated with output tensor of given name, or nullptr if the provided name does not map to an output tensor.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
IOutputAllocator

◆ getOutputTensorAddress()

void * nvinfer1::IExecutionContext::getOutputTensorAddress ( char const *  tensorName) const
inline noexcept

Get memory address for given output.

Parameters
tensorName    The name of an output tensor.
Returns
Raw output data pointer (void*) for given output tensor, or nullptr if the provided name does not map to an output tensor.

If only a (void const*) pointer is needed, an alternative is to call method getTensorAddress().

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
getTensorAddress()

◆ getPersistentCacheLimit()

size_t nvinfer1::IExecutionContext::getPersistentCacheLimit ( ) const
inline noexcept

Get the maximum size for persistent cache usage.

Returns
The size of the persistent cache limit
See also
setPersistentCacheLimit

◆ getProfiler()

IProfiler * nvinfer1::IExecutionContext::getProfiler ( ) const
inline noexcept

Get the profiler.

See also
IProfiler setProfiler()

◆ getTemporaryStorageAllocator()

IGpuAllocator * nvinfer1::IExecutionContext::getTemporaryStorageAllocator ( ) const
inline noexcept

Get allocator set by setTemporaryStorageAllocator.

Returns a nullptr if a nullptr was passed with setTemporaryStorageAllocator().

◆ getTensorAddress()

void const * nvinfer1::IExecutionContext::getTensorAddress ( char const *  tensorName) const
inline noexcept

Get memory address bound to given input or output tensor, or nullptr if the provided name does not map to an input or output tensor.

Parameters
tensorName    The name of an input or output tensor.

Use method getOutputTensorAddress() if a non-const pointer for an output tensor is required.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
getOutputTensorAddress()

◆ getTensorShape()

Dims nvinfer1::IExecutionContext::getTensorShape ( char const *  tensorName) const
inline noexcept

Return the shape of the given input or output.

Parameters
tensorName    The name of an input or output tensor.

Return Dims{-1, {}} if the provided name does not map to an input or output tensor. Otherwise return the shape of the input or output tensor.

A dimension in an input tensor will have a -1 wildcard value if all the following are true:

  • setInputShape() has not yet been called for this tensor
  • The dimension is a runtime dimension that is not implicitly constrained to be a single value.

A dimension in an output tensor will have a -1 wildcard value if the dimension depends on values of execution tensors OR if all the following are true:

  • It is a runtime dimension.
  • setInputShape() has NOT been called for some input tensor(s) with a runtime shape.
  • setTensorAddress() has NOT been called for some input tensor(s) with isShapeInferenceIO() = true.

An output tensor may also have -1 wildcard dimensions if its shape depends on values of tensors supplied to enqueueV3().

If the request is for the shape of an output tensor with runtime dimensions, all input tensors with isShapeInferenceIO() = true should have their value already set, since these values might be needed to compute the output shape.

Examples of an input dimension that is implicitly constrained to a single value:

  • The optimization profile specifies equal min and max values.
  • The dimension is named and only one value meets the optimization profile requirements for dimensions with that name.
Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
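
As a sketch (not from the header), the element count of an I/O tensor can be computed once its shape is fully resolved; the -1 wildcard handling follows the rules above:

    // Sketch only: requires <NvInferRuntime.h>. Returns -1 if the shape is not resolved yet.
    int64_t tensorVolume(nvinfer1::IExecutionContext const& context, char const* tensorName)
    {
        nvinfer1::Dims const dims = context.getTensorShape(tensorName);
        if (dims.nbDims < 0)
        {
            return -1;  // name does not map to an input or output tensor
        }
        int64_t volume = 1;
        for (int32_t i = 0; i < dims.nbDims; ++i)
        {
            if (dims.d[i] < 0)
            {
                return -1;  // a runtime dimension is still the -1 wildcard
            }
            volume *= dims.d[i];
        }
        return volume;
    }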

◆ getTensorStrides()

Dims nvinfer1::IExecutionContext::getTensorStrides ( char const *  tensorName) const
inline noexcept

Return the strides of the buffer for the given tensor name.

The strides are in units of elements, not components or bytes. For example, for TensorFormat::kHWC8, a stride of one spans 8 scalars.

Note that strides can be different for different execution contexts with dynamic shapes.

If the provided name does not map to an input or output tensor, or there are dynamic dimensions that have not been set yet, return Dims{-1, {}}

Parameters
tensorName    The name of an input or output tensor.
Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.

◆ inferShapes()

int32_t nvinfer1::IExecutionContext::inferShapes ( int32_t nbMaxNames, char const ** tensorNames )
inline noexcept

Run shape calculations.

Parameters
nbMaxNames    Maximum number of names to write to tensorNames. When the return value is a positive value n and tensorNames != nullptr, the names of min(n,nbMaxNames) insufficiently specified input tensors are written to tensorNames.
tensorNames    Buffer in which to place names of insufficiently specified input tensors.
Returns
0 on success. Positive value n if n input tensors were not sufficiently specified. -1 for other errors.

An input tensor is insufficiently specified if either of the following is true:

  • It has dynamic dimensions and its runtime dimensions have not yet been specified via IExecutionContext::setInputShape.
  • isShapeInferenceIO(t)=true and the tensor's address has not yet been set.

If an output tensor has isShapeInferenceIO(t)=true and its address has been specified, then its value is written.

Returns -1 if tensorNames == nullptr and nbMaxNames != 0. Returns -1 if nbMaxNames < 0. Returns -1 if a tensor's dimensions are invalid, e.g. a tensor ends up with a negative dimension.
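
A minimal sketch of using the return value to report underspecified inputs; the buffer capacity of 16 and the use of std::printf are arbitrary choices for this example:

    // Sketch only: requires <NvInferRuntime.h>, <algorithm>, and <cstdio>.
    bool shapesReady(nvinfer1::IExecutionContext& context)
    {
        char const* missing[16];                          // arbitrary capacity for this example
        int32_t const n = context.inferShapes(16, missing);
        if (n == 0)
        {
            return true;                                  // all input shapes are resolved
        }
        if (n > 0)
        {
            for (int32_t i = 0; i < std::min<int32_t>(n, 16); ++i)
            {
                std::printf("underspecified input: %s\n", missing[i]);
            }
        }
        return false;                                     // n < 0 indicates another error
    }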

◆ reportToProfiler()

bool nvinfer1::IExecutionContext::reportToProfiler ( ) const
inline noexcept

Calculate layer timing info for the current optimization profile in IExecutionContext and update the profiler after one iteration of inference launch.

If IExecutionContext::getEnqueueEmitsProfile() returns true, the enqueue function will calculate layer timing implicitly if a profiler is provided. In that case, this function returns true and does nothing.

If IExecutionContext::getEnqueueEmitsProfile() returns false, the enqueue function will record the CUDA event timers if a profiler is provided. But it will not perform the layer timing calculation. IExecutionContext::reportToProfiler() needs to be called explicitly to calculate layer timing for the previous inference launch.

In the CUDA graph launch scenario, it will record the same set of CUDA events as in regular enqueue functions if the graph is captured from an IExecutionContext with profiler enabled. This function needs to be called after graph launch to report the layer timing info to the profiler.

Warning
Profiling CUDA graphs is only available from CUDA 11.1 onwards.
reportToProfiler() uses the stream of the previous enqueue call, so the stream must be live; otherwise, behavior is undefined.
Returns
true if the call succeeded, else false (e.g. profiler not provided, in CUDA graph capture mode, etc.)
See also
IExecutionContext::setEnqueueEmitsProfile()
IExecutionContext::getEnqueueEmitsProfile()
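
A hedged sketch of deferred profiling follows; the PrintProfiler type is illustrative only, and the I/O setup of the context is assumed to happen elsewhere:

    // Sketch only: requires <NvInferRuntime.h>, <cuda_runtime_api.h>, and <cstdio>.
    struct PrintProfiler : public nvinfer1::IProfiler
    {
        void reportLayerTime(char const* layerName, float ms) noexcept override
        {
            std::printf("%s: %.3f ms\n", layerName, ms);
        }
    };

    // With enqueueEmitsProfile(false), enqueueV3() stays asynchronous; timings reach the
    // profiler only when reportToProfiler() is called after the iteration has finished.
    void profileOneIteration(nvinfer1::IExecutionContext& context, cudaStream_t stream)
    {
        static PrintProfiler profiler;           // must outlive the context's use of it
        context.setProfiler(&profiler);
        context.setEnqueueEmitsProfile(false);
        context.enqueueV3(stream);               // I/O addresses and shapes set elsewhere
        cudaStreamSynchronize(stream);           // the launched iteration must have completed
        context.reportToProfiler();
    }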

◆ setAllTensorsDebugState()

bool nvinfer1::IExecutionContext::setAllTensorsDebugState ( bool  flag)
inline noexcept

Turn the debug state of all debug tensors on or off.

Parameters
flag    true if turning on debug state, false if turning off debug state.
Returns
true if successful, false otherwise.

The default is off.

◆ setAuxStreams()

void nvinfer1::IExecutionContext::setAuxStreams ( cudaStream_t * auxStreams, int32_t nbStreams )
inline noexcept

Set the auxiliary streams that TensorRT should launch kernels on in the next enqueueV3() call.

If set, TensorRT will launch the kernels that are supposed to run on the auxiliary streams using the streams provided by the user with this API. If this API is not called before the enqueueV3() call, then TensorRT will use the auxiliary streams created by TensorRT internally.

TensorRT will always insert event synchronizations between the main stream provided via enqueueV3() call and the auxiliary streams:

  • At the beginning of the enqueueV3() call, TensorRT will make sure that all the auxiliary streams wait on the activities on the main stream.
  • At the end of the enqueueV3() call, TensorRT will make sure that the main stream waits on the activities on all the auxiliary streams.
Parameters
auxStreams    The pointer to an array of cudaStream_t with the array length equal to nbStreams.
nbStreams    The number of auxiliary streams provided. If nbStreams is greater than engine->getNbAuxStreams(), then only the first engine->getNbAuxStreams() streams will be used. If nbStreams is less than engine->getNbAuxStreams(), such as setting nbStreams to 0, then TensorRT will use the provided streams for the first nbStreams auxiliary streams, and will create additional streams internally for the rest of the auxiliary streams.
Note
The provided auxiliary streams must not be the default stream and must all be different to avoid deadlocks.
See also
enqueueV3(), IBuilderConfig::setMaxAuxStreams(), ICudaEngine::getNbAuxStreams()
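
A minimal sketch of supplying user-owned auxiliary streams is shown below; stream lifetime management is omitted and left as an assumption:

    // Sketch only: requires <NvInferRuntime.h>, <cuda_runtime_api.h>, and <vector>.
    void useUserAuxStreams(nvinfer1::ICudaEngine const& engine, nvinfer1::IExecutionContext& context,
                           cudaStream_t mainStream)
    {
        int32_t const nbAux = engine.getNbAuxStreams();
        std::vector<cudaStream_t> auxStreams(nbAux);
        for (cudaStream_t& s : auxStreams)
        {
            cudaStreamCreate(&s);                 // must be non-default and all distinct
        }
        context.setAuxStreams(auxStreams.data(), nbAux);
        context.enqueueV3(mainStream);            // TensorRT inserts the event synchronizations
        // The auxiliary streams must stay valid until the enqueued work has completed.
    }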

◆ setDebugListener()

bool nvinfer1::IExecutionContext::setDebugListener ( IDebugListener * listener )
inline noexcept

Set DebugListener for this execution context.

Parameters
listener    DebugListener for this execution context.
Returns
True if successful, false otherwise.

◆ setDebugSync()

void nvinfer1::IExecutionContext::setDebugSync ( bool  sync)
inline noexcept

Set the debug sync flag.

If this flag is set to true, the engine will log the successful execution for each kernel during executeV2(). It has no effect when using enqueueV3().

See also
getDebugSync()

◆ setDeviceMemory()

void nvinfer1::IExecutionContext::setDeviceMemory ( void *  memory)
inline noexcept

Set the device memory for use by this execution context.

The memory must be aligned with cuda memory alignment property (using cudaGetDeviceProperties()), and its size must be large enough for performing inference with the given network inputs. getDeviceMemorySize() and getDeviceMemorySizeForProfile() report upper bounds of the size. Setting memory to nullptr is acceptable if the reported size is 0. If using enqueueV3() to run the network, the memory is in use from the invocation of enqueueV3() until network execution is complete. If using executeV2(), it is in use until executeV2() returns. Releasing or otherwise using the memory for other purposes during this time will result in undefined behavior.

See also
ICudaEngine::getDeviceMemorySize()
ICudaEngine::getDeviceMemorySizeForProfile()
ExecutionContextAllocationStrategy
ICudaEngine::createExecutionContext()
ICudaEngine::createExecutionContextWithoutDeviceMemory()
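
A hedged sketch of user-managed activation memory follows; it assumes the context is created with ExecutionContextAllocationStrategy::kUSER_MANAGED (the older createExecutionContextWithoutDeviceMemory() serves the same purpose):

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>.
    nvinfer1::IExecutionContext* makeContextWithUserMemory(nvinfer1::ICudaEngine& engine, void*& scratch)
    {
        nvinfer1::IExecutionContext* context
            = engine.createExecutionContext(nvinfer1::ExecutionContextAllocationStrategy::kUSER_MANAGED);
        size_t const bytes = engine.getDeviceMemorySize();   // upper bound across profiles
        scratch = nullptr;
        if (bytes > 0)
        {
            cudaMalloc(&scratch, bytes);                      // cudaMalloc satisfies the alignment requirement
        }
        context->setDeviceMemory(scratch);                    // must stay valid while execution is in flight
        return context;
    }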

◆ setEnqueueEmitsProfile()

void nvinfer1::IExecutionContext::setEnqueueEmitsProfile ( bool  enqueueEmitsProfile)
inline noexcept

Set whether enqueue emits layer timing to the profiler.

If set to true (default), enqueue is synchronous and does layer timing profiling implicitly if there is a profiler attached. If set to false, enqueue will be asynchronous if there is a profiler attached. An extra method reportToProfiler() needs to be called to obtain the profiling data and report to the profiler attached.

See also
IExecutionContext::getEnqueueEmitsProfile()
IExecutionContext::reportToProfiler()

◆ setErrorRecorder()

void nvinfer1::IExecutionContext::setErrorRecorder ( IErrorRecorder * recorder )
inline noexcept

Set the ErrorRecorder for this interface.

Assigns the ErrorRecorder to this interface. The ErrorRecorder will track all errors during execution. This function will call incRefCount of the registered ErrorRecorder at least once. Setting recorder to nullptr unregisters the recorder with the interface, resulting in a call to decRefCount if a recorder has been registered.

If an error recorder is not set, messages will be sent to the global log stream.

Parameters
recorder    The error recorder to register with this interface.
See also
getErrorRecorder()

◆ setInputConsumedEvent()

bool nvinfer1::IExecutionContext::setInputConsumedEvent ( cudaEvent_t  event)
inline noexcept

Mark input as consumed.

Parameters
event    The cuda event that is triggered after all input tensors have been consumed.
Warning
The set event must be valid during the inference.
Returns
True on success, false if error occurred.

Passing event==nullptr removes whatever event was set, if any.
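
For illustration, a sketch of overlapping input preparation with inference; the addresses and shapes of the I/O tensors are assumed to be set elsewhere:

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>.
    void pipelineInputs(nvinfer1::IExecutionContext& context, cudaStream_t stream)
    {
        cudaEvent_t inputConsumed{};
        cudaEventCreate(&inputConsumed);
        context.setInputConsumedEvent(inputConsumed);

        context.enqueueV3(stream);
        cudaEventSynchronize(inputConsumed);   // input buffers may be overwritten from here on
        // ... stage the next batch into the input buffers ...
        cudaStreamSynchronize(stream);         // outputs are ready only after this
        // Keep the event valid (or unset it) before the next enqueueV3() call.
    }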

◆ setInputShape()

bool nvinfer1::IExecutionContext::setInputShape ( char const * tensorName, Dims const & dims )
inline noexcept

Set shape of given input.

Parameters
tensorName    The name of an input tensor.
dims    The shape of an input tensor.
Returns
True on success, false if the provided name does not map to an input tensor, or if some other error occurred.

Each dimension must agree with the network dimension unless the latter was -1.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
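
A minimal sketch, assuming a context pointer and a hypothetical input named "images" declared with dynamic dimensions in the network:

    // Sketch only: "images" was declared as (-1, 3, -1, -1) in this example network.
    if (!context->setInputShape("images", nvinfer1::Dims4{8, 3, 480, 640}))
    {
        // unknown tensor name, or dimensions outside the optimization profile bounds
    }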

◆ setInputTensorAddress()

bool nvinfer1::IExecutionContext::setInputTensorAddress ( char const * tensorName, void const * data )
inline noexcept

Set memory address for given input.

Parameters
tensorName    The name of an input tensor.
data    The pointer (void const*) to the const data owned by the user.
Returns
True on success, false if the provided name does not map to an input tensor, does not meet alignment requirements, or some other error occurred.

Input addresses can also be set using method setTensorAddress, which requires a (void*).

See description of method setTensorAddress() for alignment and data type constraints.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
setTensorAddress()

◆ setName()

void nvinfer1::IExecutionContext::setName ( char const *  name)
inline noexcept

Set the name of the execution context.

This method copies the name string.

Warning
The string name must be null-terminated, and be at most 4096 bytes including the terminator.
See also
getName()

◆ setNvtxVerbosity()

bool nvinfer1::IExecutionContext::setNvtxVerbosity ( ProfilingVerbosity  verbosity)
inline noexcept

Set the verbosity of the NVTX markers in the execution context.

Building with kDETAILED verbosity will generally increase latency in enqueueV3(). Call this method to select NVTX verbosity in this execution context at runtime.

The default is the verbosity with which the engine was built, and the verbosity may not be raised above that level.

This function does not affect how IEngineInspector interacts with the engine.

Parameters
verbosity    The verbosity of the NVTX markers.
Returns
True if the NVTX verbosity is set successfully. False if the provided verbosity level is higher than the profiling verbosity of the corresponding engine.
See also
getNvtxVerbosity()
ICudaEngine::getProfilingVerbosity()

◆ setOptimizationProfileAsync()

bool nvinfer1::IExecutionContext::setOptimizationProfileAsync ( int32_t profileIndex, cudaStream_t stream )
inline noexcept

Select an optimization profile for the current context with async semantics.

Parameters
profileIndex    Index of the profile. The value must lie between 0 and getEngine().getNbOptimizationProfiles() - 1
stream    A cuda stream on which the cudaMemcpyAsyncs may be enqueued

When an optimization profile is switched via this API, TensorRT may require that data is copied via cudaMemcpyAsync. It is the application’s responsibility to guarantee that synchronization between the profile sync stream and the enqueue stream occurs.

The selected profile will be used in subsequent calls to executeV2()/enqueueV3(). If the associated CUDA engine has inputs with dynamic shapes, the optimization profile must be set with its corresponding profileIndex before calling execute or enqueue. The newly created execution context will be assigned optimization profile 0.

If the associated CUDA engine does not have inputs with dynamic shapes, this method need not be called, in which case the default profile index of 0 will be used.

setOptimizationProfileAsync() must be called before calling setInputShape() for all dynamic input tensors or input shape tensors, which in turn must be called before executeV2()/enqueueV3().

Warning
This function will trigger layer resource updates on the next call of executeV2()/enqueueV3(), possibly resulting in performance bottlenecks.
Not synchronizing the stream used at enqueue with the stream used to set optimization profile asynchronously using this API will result in undefined behavior.
Returns
true if the call succeeded, else false (e.g. input out of range)
See also
ICudaEngine::getNbOptimizationProfiles()
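
A hedged sketch of switching to a non-default profile is shown below; using the same stream for the profile switch and the subsequent enqueue keeps the two ordered without extra synchronization. The input name "images" and profile index 1 are assumptions:

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>.
    bool runOnProfile1(nvinfer1::IExecutionContext& context, cudaStream_t stream)
    {
        if (!context.setOptimizationProfileAsync(1, stream))   // 1 must be < getNbOptimizationProfiles()
        {
            return false;                                      // e.g. profile index out of range
        }
        context.setInputShape("images", nvinfer1::Dims4{16, 3, 480, 640});   // hypothetical input
        return context.enqueueV3(stream);
    }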

◆ setOutputAllocator()

bool nvinfer1::IExecutionContext::setOutputAllocator ( char const * tensorName, IOutputAllocator * outputAllocator )
inline noexcept

Set output allocator to use for output tensor of given name. Pass nullptr to outputAllocator to unset. The allocator is called by enqueueV3().

Parameters
tensorName    The name of an output tensor.
outputAllocator    IOutputAllocator for the tensors.
Returns
True on success, false if the provided name does not map to an output tensor, or if some other error occurred.
Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
enqueueV3() IOutputAllocator

◆ setOutputTensorAddress()

bool nvinfer1::IExecutionContext::setOutputTensorAddress ( char const * tensorName, void * data )
inline noexcept

Set the memory address for a given output tensor.

Parameters
tensorName    The name of an output tensor.
data    The pointer to the buffer to which to write the output.
Returns
True on success, false if the provided name does not map to an output tensor, does not meet alignment requirements, or some other error occurred.

Output addresses can also be set using method setTensorAddress. This method is provided for applications which prefer to use different methods for setting input and output tensors.

See setTensorAddress() for alignment and data type constraints.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
setTensorAddress()

◆ setPersistentCacheLimit()

void nvinfer1::IExecutionContext::setPersistentCacheLimit ( size_t  size)
inline noexcept

Set the maximum size for persistent cache usage.

This function sets the maximum persistent L2 cache that this execution context may use for activation caching. Activation caching is not supported on all architectures; see "How TensorRT uses Memory" in the developer guide for details.

Parameters
size    The size of the persistent cache limit in bytes. The default is 0 bytes.
See also
getPersistentCacheLimit
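
As a sketch (not from the header), the limit can be capped at the device's persisting L2 capacity, which is reported as 0 on GPUs that do not support it; the context pointer is assumed:

    // Sketch only: requires <cuda_runtime_api.h> (CUDA 11+ for this device attribute).
    int device{0};
    cudaGetDevice(&device);
    int maxPersistingL2{0};
    cudaDeviceGetAttribute(&maxPersistingL2, cudaDevAttrMaxPersistingL2CacheSize, device);
    context->setPersistentCacheLimit(static_cast<size_t>(maxPersistingL2));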

◆ setProfiler()

void nvinfer1::IExecutionContext::setProfiler ( IProfiler * profiler )
inline noexcept

Set the profiler.

See also
IProfiler getProfiler()

◆ setTemporaryStorageAllocator()

bool nvinfer1::IExecutionContext::setTemporaryStorageAllocator ( IGpuAllocator * allocator )
inline noexcept

Specify allocator to use for internal temporary storage.

This allocator is used only by enqueueV3() for temporary storage whose size cannot be predicted ahead of enqueueV3(). It is not used for output tensors, because memory allocation for those is allocated by the allocator set by setOutputAllocator(). All memory allocated is freed by the time enqueueV3() returns.

Parameters
allocator    Pointer to the allocator to use. Pass nullptr to revert to using TensorRT's default allocator.
Returns
True on success, false if error occurred.
See also
enqueueV3() setOutputAllocator()

◆ setTensorAddress()

bool nvinfer1::IExecutionContext::setTensorAddress ( char const * tensorName, void * data )
inline noexcept

Set memory address for given input or output tensor.

Parameters
tensorName    The name of an input or output tensor.
data    The pointer (void*) to the data owned by the user.
Returns
True on success, false if error occurred.

An address defaults to nullptr. Pass data=nullptr to reset to the default state.

Return false if the provided name does not map to an input or output tensor.

If an input pointer has type (void const*), use setInputTensorAddress() instead.

Before calling enqueueV3(), each input must have a non-null address and each output must have a non-null address or an IOutputAllocator to set it later.

If the TensorLocation of the tensor is kHOST, the pointer must point to a host buffer of sufficient size. If the TensorLocation of the tensor is kDEVICE, the pointer must point to a device buffer of sufficient size and alignment, or be nullptr if the tensor is an output tensor that will be allocated by IOutputAllocator.

If getTensorShape(name) reports a -1 for any dimension of an output after all input shapes have been set, then to find out the dimensions, use setOutputAllocator() to associate an IOutputAllocator to which the dimensions will be reported when known.

Calling both setTensorAddress and setOutputAllocator() for the same output is allowed, and can be useful for preallocating memory, and then reallocating if it's not big enough.

The pointer must have at least 256-byte alignment.

Warning
The string tensorName must be null-terminated, and be at most 4096 bytes including the terminator.
See also
setInputTensorAddress() setOutputTensorAddress() getTensorShape() setOutputAllocator() IOutputAllocator
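
A minimal sketch of binding all I/O tensors by name before enqueueV3(); the name-to-pointer map is a user-side convenience assumed for this example:

    // Sketch only: requires <NvInferRuntime.h>, <map>, and <string>.
    bool bindAll(nvinfer1::ICudaEngine const& engine, nvinfer1::IExecutionContext& context,
                 std::map<std::string, void*> const& buffers)
    {
        for (int32_t i = 0; i < engine.getNbIOTensors(); ++i)
        {
            char const* name = engine.getIOTensorName(i);
            if (!context.setTensorAddress(name, buffers.at(name)))   // pointers need 256-byte alignment
            {
                return false;                                        // unknown name or bad alignment
            }
        }
        return true;
    }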

◆ setTensorDebugState()

bool nvinfer1::IExecutionContext::setTensorDebugState ( char const * name, bool flag )
inline noexcept

Set debug state of tensor given the tensor name.

Turn the debug state of a tensor on or off. A tensor with the parameter tensor name must exist in the network, and the tensor must have been marked as a debug tensor during build time. Otherwise, an error is thrown.

Parameters
name    Name of target tensor.
flag    True if turning on the debug state, false if turning off the debug state of the tensor. The default is off.
Returns
True if successful, false otherwise.

◆ updateDeviceMemorySizeForShapes()

size_t nvinfer1::IExecutionContext::updateDeviceMemorySizeForShapes ( )
inline noexcept

Recompute the internal activation buffer sizes based on the current input shapes, and return the total amount of memory required.

Users can allocate device memory based on the size returned and provide the memory to TensorRT with IExecutionContext::setDeviceMemory(). All input shapes and the optimization profile to use must be specified before calling this function; otherwise, the partition will be invalidated.

Returns
Total amount of memory required on success, 0 if error occurred.
See also
IExecutionContext::setDeviceMemory()
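
A hedged sketch of growing a user-managed scratch buffer when new input shapes need more memory; the scratch pointer, its byte count, and the input name are assumptions:

    // Sketch only: requires <NvInferRuntime.h> and <cuda_runtime_api.h>; the context is
    // assumed to have been created with user-managed device memory.
    void resizeScratch(nvinfer1::IExecutionContext& context, void*& scratch, size_t& scratchBytes)
    {
        context.setInputShape("images", nvinfer1::Dims4{32, 3, 480, 640});   // hypothetical input
        size_t const required = context.updateDeviceMemorySizeForShapes();
        if (required > scratchBytes)
        {
            cudaFree(scratch);
            cudaMalloc(&scratch, required);
            scratchBytes = required;
        }
        context.setDeviceMemory(scratch);
    }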

Member Data Documentation

◆ mImpl

apiv::VExecutionContext* nvinfer1::IExecutionContext::mImpl
protected

The documentation for this class was generated from the following file:

  • NvInferRuntime.h

  Copyright © 2024 NVIDIA Corporation