Runtime
- class tensorrt.Runtime(self: tensorrt.tensorrt.Runtime, logger: tensorrt.tensorrt.ILogger) → None
Allows a serialized ICudaEngine to be deserialized.
- Variables
  - error_recorder – IErrorRecorder – Application-implemented error reporting interface for TensorRT objects.
  - gpu_allocator – IGpuAllocator – The GPU allocator to be used by the Runtime. All GPU memory acquired will use this allocator. If set to None, the default allocator will be used (Default: cudaMalloc/cudaFree).
  - DLA_core – int – The DLA core that the engine executes on. Must be between 0 and N-1, where N is the number of available DLA cores.
  - num_DLA_cores – int – The number of DLA cores available to this runtime.
  - logger – ILogger – The logger provided when creating the Runtime.
- Parameters
- Parameters
logger – The logger to use.
- __del__(self: tensorrt.tensorrt.Runtime) → None
- __exit__(exc_type, exc_value, traceback)
Context managers are deprecated and have no effect. Objects are automatically freed when the reference count reaches 0.
- __init__(self: tensorrt.tensorrt.Runtime, logger: tensorrt.tensorrt.ILogger) → None
- Parameters
logger – The logger to use.
- deserialize_cuda_engine(self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) → tensorrt.tensorrt.ICudaEngine
  Deserialize an ICudaEngine from a stream.
  - Parameters
    - serialized_engine – The buffer that holds the serialized ICudaEngine.
  - Returns
    The ICudaEngine, or None if it could not be deserialized.