Runtime
- class tensorrt.Runtime(self: tensorrt.tensorrt.Runtime, logger: tensorrt.tensorrt.ILogger) → None
Allows a serialized ICudaEngine to be deserialized.
- Variables
  - error_recorder – IErrorRecorder. Application-implemented error reporting interface for TensorRT objects.
  - gpu_allocator – IGpuAllocator. The GPU allocator to be used by the Runtime. All GPU memory acquired will use this allocator. If set to None, the default allocator will be used (Default: cudaMalloc/cudaFree).
  - DLA_core – int. The DLA core that the engine executes on. Must be between 0 and N-1, where N is the number of available DLA cores.
  - num_DLA_cores – int. The number of DLA cores available to this Runtime.
  - logger – ILogger. The logger provided when creating the Runtime.
  - max_threads – int. The maximum number of threads that can be used by the Runtime.
- Parameters
logger – The logger to use.
- __del__(self: tensorrt.tensorrt.Runtime) → None
- __exit__(exc_type, exc_value, traceback)
Context managers are deprecated and have no effect. Objects are automatically freed when the reference count reaches 0.
- __init__(self: tensorrt.tensorrt.Runtime, logger: tensorrt.tensorrt.ILogger) → None
- Parameters
logger – The logger to use.
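A minimal usage sketch for constructing a Runtime (the import guard and the choice of WARNING severity are assumptions for illustration, not part of this API page; TensorRT must be installed for the body to execute):

```python
try:
    import tensorrt as trt  # requires a TensorRT installation
except ImportError:
    trt = None  # sketch degrades gracefully where TensorRT is unavailable

if trt is not None:
    # The Runtime takes the logger it will use for all subsequent messages.
    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)

    # Optionally route execution to a specific DLA core, if the device has any.
    if runtime.num_DLA_cores > 0:
        runtime.DLA_core = 0
```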
- deserialize_cuda_engine(self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) → tensorrt.tensorrt.ICudaEngine
  Deserialize an ICudaEngine from a stream.
  - Parameters
    serialized_engine – The buffer that holds the serialized ICudaEngine.
  - Returns
    The ICudaEngine, or None if it could not be deserialized.
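A hedged sketch of deserializing an engine from a file: since deserialize_cuda_engine returns None on failure rather than raising, the caller should check the result. The helper name `load_engine` and the file path are illustrative assumptions.

```python
try:
    import tensorrt as trt  # requires a TensorRT installation
except ImportError:
    trt = None  # sketch degrades gracefully where TensorRT is unavailable

def load_engine(runtime, path):
    # Read the serialized engine bytes; any object supporting the buffer
    # protocol (here, bytes) is accepted by deserialize_cuda_engine.
    with open(path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    # deserialize_cuda_engine returns None on failure, so check explicitly.
    if engine is None:
        raise RuntimeError(f"failed to deserialize engine from {path}")
    return engine
```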