TensorRT
nvinfer1::IRuntime Class Reference (abstract)

Allows a serialized engine to be deserialized.

#include <NvInfer.h>

Public Member Functions

virtual nvinfer1::ICudaEngine * deserializeCudaEngine (const void *blob, std::size_t size, IPluginFactory *pluginFactory)=0
 Deserialize an engine from a stream.
 
virtual void destroy ()=0
 Destroy this object.
 
virtual void setGpuAllocator (IGpuAllocator *allocator)=0
 Set the GPU allocator.
 

Detailed Description

Allows a serialized engine to be deserialized.
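
An IRuntime is typically obtained from nvinfer1::createInferRuntime() with an application-supplied ILogger, and released with destroy() when it is no longer needed. Below is a minimal sketch of that lifecycle under the TensorRT generation documented here; the Logger class is an illustrative implementation, not part of this interface.

#include <NvInfer.h>
#include <iostream>

// Minimal ILogger implementation required by createInferRuntime();
// shown for illustration only.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main()
{
    Logger logger;

    // Create the runtime that will deserialize engines.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    if (!runtime)
        return 1;

    // ... deserialize an engine and run inference ...

    // Release the runtime with destroy() in this API generation.
    runtime->destroy();
    return 0;
}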

Member Function Documentation

virtual nvinfer1::ICudaEngine* nvinfer1::IRuntime::deserializeCudaEngine (const void *blob, std::size_t size, IPluginFactory *pluginFactory)
pure virtual

Deserialize an engine from a stream.

Parameters
blob	The memory that holds the serialized engine.
size	The size of the memory.
pluginFactory	The plugin factory, if any plugins are used by the network, otherwise nullptr.
Returns
The engine, or nullptr if it could not be deserialized.
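
A hedged sketch of typical use: read a serialized engine (for example, one previously written out via ICudaEngine::serialize()) into memory and hand the blob to deserializeCudaEngine(). The file name engine.plan and the readBlob() helper are illustrative assumptions; nullptr is passed for the plugin factory because no plugins are assumed.

#include <NvInfer.h>
#include <fstream>
#include <vector>

// Hypothetical helper: read a serialized engine file into a byte buffer.
std::vector<char> readBlob(const char* path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<char> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(blob.data(), blob.size());
    return blob;
}

nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime)
{
    std::vector<char> blob = readBlob("engine.plan");

    // No custom plugins in this sketch, so pass nullptr for the factory.
    nvinfer1::ICudaEngine* engine =
        runtime.deserializeCudaEngine(blob.data(), blob.size(), nullptr);

    // engine is nullptr if the blob could not be deserialized.
    return engine;
}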
virtual void nvinfer1::IRuntime::setGpuAllocator (IGpuAllocator *allocator)
pure virtual

Set the GPU allocator.

Parameters
allocator	The GPU allocator to be used by the runtime. All GPU memory acquired will use this allocator. If nullptr is passed, the default allocator will be used.

Default: uses cudaMalloc/cudaFree.
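
A sketch of a custom allocator installed on the runtime. It assumes the legacy two-method IGpuAllocator interface (allocate/free) of this TensorRT generation; the exact signatures should be confirmed against NvInfer.h.

#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Pass-through allocator built on cudaMalloc/cudaFree; a real allocator
// might pool, track, or log allocations here.
class TrackingAllocator : public nvinfer1::IGpuAllocator
{
public:
    void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) override
    {
        // alignment and flags are ignored in this sketch; cudaMalloc
        // already satisfies common alignment requirements.
        void* memory{nullptr};
        if (cudaMalloc(&memory, size) != cudaSuccess)
            return nullptr;
        return memory;
    }

    void free(void* memory) override
    {
        cudaFree(memory);
    }
};

// Usage: install the allocator before deserializing any engines.
// TrackingAllocator allocator;
// runtime->setGpuAllocator(&allocator);
// runtime->setGpuAllocator(nullptr); // revert to the default allocator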


The documentation for this class was generated from the following file:
NvInfer.h