Allows a serialized engine to be deserialized.
#include <NvInfer.h>
- Warning
- Do not inherit from this class, as doing so will break forward-compatibility of the API and ABI.
virtual ICudaEngine* nvinfer1::IRuntime::deserializeCudaEngine (const void* blob, std::size_t size, IPluginFactory* pluginFactory) [pure virtual]

Deserialize an engine from a stream.
- Parameters
  - blob: The memory that holds the serialized engine.
  - size: The size of the memory.
  - pluginFactory: The plugin factory, if any plugins are used by the network; otherwise nullptr.
- Returns
- The engine, or nullptr if it could not be deserialized.
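A minimal sketch of deserializing a plan file with this API, assuming the TensorRT 7-era signature shown above (three-argument `deserializeCudaEngine` with a plugin-factory parameter) and a hypothetical engine file name `engine.plan`; error handling is largely elided:

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <NvInfer.h>

// ILogger is abstract, so the runtime needs a concrete subclass.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the serialized engine ("plan file") into host memory.
    std::ifstream file("engine.plan", std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Create a runtime and deserialize. Passing nullptr for the plugin
    // factory is valid when the network uses no plugins.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    if (!engine)
    {
        std::cerr << "Engine could not be deserialized" << std::endl;
        return 1;
    }

    // ... create an IExecutionContext and run inference ...

    engine->destroy();
    runtime->destroy();
    return 0;
}
```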
virtual int nvinfer1::IRuntime::getDLACore ( ) const [pure virtual]
Get the DLA core that the engine executes on.
- Returns
- If setDLACore() has been called, returns the assigned DLA core (0 to N-1); otherwise returns 0.
virtual void nvinfer1::IRuntime::setDLACore (int dlaCore) [pure virtual]
Set the DLA core that the deserialized engine must execute on.
- Parameters
  - dlaCore: The DLA core on which to execute the engine (0 to N-1, where N is the number of DLA cores present on the device). Default value is 0.
- See Also
- getDLACore()
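A short sketch of using these two calls together. It assumes the device exposes at least two DLA cores and that the engine was built for DLA; the core index 1 is purely illustrative:

```cpp
#include <NvInfer.h>

void selectDlaCore(nvinfer1::IRuntime* runtime)
{
    // Direct subsequently deserialized engines to DLA core 1.
    // Valid indices run from 0 to N-1 for N DLA cores.
    runtime->setDLACore(1);

    // getDLACore() reports the core chosen above; it returns 0
    // if setDLACore() was never called.
    int core = runtime->getDLACore();  // core == 1 here

    // ... call deserializeCudaEngine() afterwards so the engine
    // executes on the selected core ...
    (void)core;
}
```

Note that the core must be set before deserializing the engine, since the selection applies to the deserialized engine.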
virtual void nvinfer1::IRuntime::setGpuAllocator (IGpuAllocator* allocator) [pure virtual]
Set the GPU allocator.
- Parameters
  - allocator: The GPU allocator to be used by the runtime. All GPU memory acquired by the runtime will use this allocator. If nullptr is passed, the default allocator (which uses cudaMalloc/cudaFree) will be used.
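A minimal custom allocator sketch, assuming the TensorRT 7-era `IGpuAllocator` interface with `allocate(size, alignment, flags)` and `free(memory)` pure-virtual methods. It simply mirrors the default cudaMalloc/cudaFree behavior; production code would honor the alignment argument and might add pooling or logging:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

class SimpleGpuAllocator : public nvinfer1::IGpuAllocator
{
public:
    void* allocate(uint64_t size, uint64_t /*alignment*/, uint32_t /*flags*/) override
    {
        void* ptr = nullptr;
        // Returning nullptr signals allocation failure to TensorRT.
        if (cudaMalloc(&ptr, size) != cudaSuccess)
            return nullptr;
        return ptr;
    }

    void free(void* memory) override
    {
        cudaFree(memory);
    }
};

// Usage: install the allocator before deserializing, so all GPU memory
// the runtime acquires goes through it. The allocator must outlive the
// runtime and any engines it creates.
//
//   SimpleGpuAllocator gpuAllocator;
//   runtime->setGpuAllocator(&gpuAllocator);
```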
The documentation for this class was generated from the following file: NvInfer.h