Interface for a custom processor that is created and loaded at runtime through CreateCustomProcessorFunc.
Note: Full dimensions are used for all input and output tensors, so IBatchBuffer::getBatchSize() usually returns 0. This matches the public Triton V2 shape convention. To get the buffer pointer, users should always call IBatchBuffer::getBufPtr(idx=0).
Definition at line 34 of file infer_custom_process.h.
Public Member Functions

virtual ~IInferCustomProcessor() = default
    IInferCustomProcessor will be deleted by the nvdsinferserver lib.

virtual void supportInputMemType(InferMemType &type)
    Query the memory type that the extraInputProcess() implementation supports.

virtual bool requireInferLoop() const
    Indicates whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order for each stream id.

virtual NvDsInferStatus extraInputProcess(const std::vector< IBatchBuffer * > &primaryInputs, std::vector< IBatchBuffer * > &extraInputs, const IOptions *options) = 0
    Custom processor for extra input data.

virtual NvDsInferStatus inferenceDone(const IBatchArray *outputs, const IOptions *inOptions) = 0
    Inference-done callback for custom postprocessing.

virtual void notifyError(NvDsInferStatus status) = 0
    Notification of an error to the interface implementation.
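The member functions above can be sketched as a minimal subclass. This is an illustrative sketch, not the real header: the stub declarations of InferMemType, NvDsInferStatus, IBatchBuffer, IBatchArray, and IOptions below are simplified stand-ins for the actual types defined in infer_custom_process.h and its dependencies.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// --- Simplified stand-ins for the real nvdsinferserver types (assumption:
// --- the actual definitions live in the DeepStream headers).
enum class InferMemType { kNone, kCpu, kGpuCuda };
enum NvDsInferStatus { NVDSINFER_SUCCESS = 0, NVDSINFER_CUSTOM_LIB_FAILED };
class IBatchBuffer { public: virtual ~IBatchBuffer() = default;
                     virtual void* getBufPtr(uint32_t idx) = 0; };
class IBatchArray  { public: virtual ~IBatchArray() = default; };
class IOptions     { public: virtual ~IOptions() = default; };

// Interface mirroring the documented member functions.
class IInferCustomProcessor {
public:
    virtual ~IInferCustomProcessor() = default;
    virtual void supportInputMemType(InferMemType& type) { type = InferMemType::kNone; }
    virtual bool requireInferLoop() const { return false; }
    virtual NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
    virtual NvDsInferStatus inferenceDone(const IBatchArray* outputs,
                                          const IOptions* inOptions) = 0;
    virtual void notifyError(NvDsInferStatus status) = 0;
};

// A minimal custom processor that fills pre-allocated extra inputs on the CPU.
class MyCustomProcessor : public IInferCustomProcessor {
public:
    void supportInputMemType(InferMemType& type) override {
        type = InferMemType::kCpu;  // extra input buffers will be CPU memory
    }
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& /*primaryInputs*/,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* /*options*/) override {
        for (IBatchBuffer* buf : extraInputs) {
            // Full dims are used, so data is fetched with getBufPtr(idx=0).
            float* data = static_cast<float*>(buf->getBufPtr(0));
            data[0] = 1.0f;  // write tensor data into the pre-allocated buffer
        }
        return NVDSINFER_SUCCESS;
    }
    NvDsInferStatus inferenceDone(const IBatchArray*, const IOptions*) override {
        return NVDSINFER_SUCCESS;  // custom postprocessing would go here
    }
    void notifyError(NvDsInferStatus status) override {
        std::cerr << "infer error: " << status << "\n";
    }
};
```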
~IInferCustomProcessor() [virtual, default]

IInferCustomProcessor will be deleted by the nvdsinferserver lib.
extraInputProcess() [pure virtual]

Custom processor for extra input data.

Parameters:
    primaryInputs [input]: the primary image input.
    extraInputs [input/output]: buffers for custom processing to generate extra tensor input data. The memory is pre-allocated; its memory type is the type returned by supportInputMemType().
    options [input]: options associated with the input buffers. They carry most of the common DeepStream metadata along with the primary data, e.g. NvDsBatchMeta, NvDsObjectMeta, NvDsFrameMeta, stream ids, and so on. See infer_ioptions.h for all the potential key names and structures in the key-value table.
inferenceDone() [pure virtual]

Inference-done callback for custom postprocessing.

Parameters:
    outputs [input]: the inference output tensors. The tensor memory type can be controlled by infer_config { backend { output_mem_type: MEMORY_TYPE_DEFAULT } }. The default output tensor memory type is decided by the Triton model; users can instead set MEMORY_TYPE_CPU or MEMORY_TYPE_GPU.
    inOptions [input]: options corresponding to the input tensors. They are the same as the options in extraInputProcess().
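To illustrate the kind of custom postprocessing an inferenceDone() implementation typically performs, the sketch below runs an argmax over a flat float score tensor. The helper and the tensor layout are assumptions for illustration; the actual output-buffer accessors are defined by the IBatchArray and IBatchBuffer interfaces, not shown here.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical helper: pick the top class from a flat score tensor, the kind
// of raw float data a custom inferenceDone() would read from an output buffer.
std::size_t argmaxClass(const float* scores, std::size_t numClasses) {
    return static_cast<std::size_t>(
        std::max_element(scores, scores + numClasses) - scores);
}
```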
notifyError() [pure virtual]

Notification of an error to the interface implementation.

Parameters:
    status [input]: error code.
requireInferLoop() [inline, virtual]

Indicates whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order for each stream id. Users can process the last frame's output tensors from inferenceDone() and feed them into the next frame's inference input tensors in extraInputProcess().
Definition at line 57 of file infer_custom_process.h.
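When requireInferLoop() returns true, a common pattern is to keep per-stream state that inferenceDone() writes and extraInputProcess() reads on the next frame. The cache below is an illustrative sketch: the stream-id key type and the float-vector payload are assumptions, not part of the documented API.

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

// Per-stream feedback cache: inferenceDone() stores the latest output tensor,
// and extraInputProcess() fetches it to seed the next frame's extra input.
// The per-stream ordering guarantee of the inference loop makes this pattern
// consistent for each stream id.
class StreamStateCache {
public:
    void store(uint64_t streamId, std::vector<float> tensor) {
        std::lock_guard<std::mutex> lock(mutex_);
        states_[streamId] = std::move(tensor);
    }
    // Returns the cached tensor, or an empty vector on a stream's first frame.
    std::vector<float> fetch(uint64_t streamId) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = states_.find(streamId);
        return it == states_.end() ? std::vector<float>{} : it->second;
    }
private:
    std::mutex mutex_;
    std::unordered_map<uint64_t, std::vector<float>> states_;
};
```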
supportInputMemType() [inline, virtual]

Query the memory type that the extraInputProcess() implementation supports. Memory will be allocated based on the returned type and passed to extraInputProcess().

Parameters:
    type [output]: must be chosen from InferMemType::kCpu or InferMemType::kGpuCuda.
Definition at line 47 of file infer_custom_process.h.
References nvdsinferserver::kCpu.