TensorRT
nvinfer1::IRuntime Class Reference [abstract]

Allows a serialized engine to be deserialized. More...

#include <NvInfer.h>

Public Member Functions

virtual nvinfer1::ICudaEngine * deserializeCudaEngine (const void *blob, std::size_t size, IPluginFactory *pluginFactory)=0
 Deserialize an engine from a stream. More...
 
virtual void setDLACore (int dlaCore)=0
 Set the DLA core that the deserialized engine must execute on. More...
 
virtual int getDLACore () const =0
 Get the DLA core that the engine executes on. More...
 
virtual int getNbDLACores () const =0
 Returns the number of accessible DLA hardware cores.
 
virtual void destroy ()=0
 Destroy this object.
 
virtual void setGpuAllocator (IGpuAllocator *allocator)=0
 Set the GPU allocator. More...
 

Detailed Description

Allows a serialized engine to be deserialized.

Member Function Documentation

virtual nvinfer1::ICudaEngine* nvinfer1::IRuntime::deserializeCudaEngine (const void *blob, std::size_t size, IPluginFactory *pluginFactory)
pure virtual
Deserialize an engine from a stream.

Parameters
blob	The memory that holds the serialized engine.
size	The size of the memory.
pluginFactory	The plugin factory, if any plugins are used by the network, otherwise nullptr.
Returns
The engine, or nullptr if it could not be deserialized.
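A minimal sketch of the typical deserialization flow, in the style of the classic TensorRT samples. The logger reference and the plan-file path are assumptions; a real application supplies its own ILogger implementation and serialized engine.

```cpp
#include <NvInfer.h>
#include <fstream>
#include <vector>

// Sketch: read a serialized engine ("plan") from disk and deserialize it.
// `logger` is an application-provided nvinfer1::ILogger implementation.
nvinfer1::ICudaEngine* loadEngine(nvinfer1::ILogger& logger, const char* path)
{
    // Read the whole plan file into memory.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        return nullptr;
    std::size_t size = static_cast<std::size_t>(file.tellg());
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    // Pass nullptr as the plugin factory because this network uses no plugins.
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    // As in the TensorRT samples, the runtime can be destroyed once
    // deserialization has finished; the engine remains usable.
    runtime->destroy();
    return engine;  // nullptr if deserialization failed
}
```

The caller is responsible for destroying the returned engine with `engine->destroy()` when it is no longer needed.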
virtual int nvinfer1::IRuntime::getDLACore ( ) const
pure virtual

Get the DLA core that the engine executes on.

Returns
If setDLACore has been called, returns the assigned DLA core (0 to N-1); otherwise returns 0.
virtual void nvinfer1::IRuntime::setDLACore ( int  dlaCore)
pure virtual

Set the DLA core that the deserialized engine must execute on.

Parameters
dlaCore	The DLA core to execute the engine on (0 to N-1, where N is the number of DLA cores present on the device). Default value is 0.
See Also
getDLACore()
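The DLA-related calls above compose as follows; this is a hedged sketch, and the helper name and requested core index are illustrative only.

```cpp
#include <NvInfer.h>

// Sketch: direct the deserialized engine to a specific DLA core,
// falling back to the default (core 0 / GPU) if the core is absent.
void selectDlaCore(nvinfer1::IRuntime* runtime, int requestedCore)
{
    // getNbDLACores() reports N; valid core indices are 0 .. N-1.
    if (requestedCore >= 0 && requestedCore < runtime->getNbDLACores())
    {
        runtime->setDLACore(requestedCore);
    }
    // getDLACore() now returns the core any subsequently
    // deserialized engine will execute on.
}
```

Note that setDLACore must be called before deserializeCudaEngine for it to affect the resulting engine.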
virtual void nvinfer1::IRuntime::setGpuAllocator (IPluginFactory is not involved here) (IGpuAllocator *allocator)
pure virtual

Set the GPU allocator.

Parameters
allocator	The GPU allocator to be used by the runtime. All GPU memory acquired by the runtime will use this allocator. If nullptr is passed, the default allocator is used.

Default: uses cudaMalloc/cudaFree.
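A custom allocator is a class deriving from IGpuAllocator. The sketch below assumes the allocate/free signatures of this TensorRT release (later releases add noexcept qualifiers and a reallocate method); the class name and logging are illustrative.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>

// Sketch: an allocator that wraps the default cudaMalloc/cudaFree
// behavior while logging each allocation.
class LoggingAllocator : public nvinfer1::IGpuAllocator
{
public:
    void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) override
    {
        void* memory = nullptr;
        if (cudaMalloc(&memory, size) != cudaSuccess)
            return nullptr;  // TensorRT treats nullptr as allocation failure
        std::printf("allocated %llu bytes\n",
                    static_cast<unsigned long long>(size));
        return memory;
    }

    void free(void* memory) override
    {
        cudaFree(memory);
    }
};
```

Usage: `runtime->setGpuAllocator(&myAllocator);` before deserializing; the allocator object must outlive the runtime and any engines that use it.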


The documentation for this class was generated from the following file: NvInfer.h