Interface for a custom processor that is created and loaded at runtime through CreateCustomProcessorFunc.
Note: Full dimensions are used for all input and output tensors, so IBatchBuffer::getBatchSize() usually returns 0. This matches the public Triton-V2 shape convention. To get the buffer pointer, always use IBatchBuffer::getBufPtr(idx=0).
Definition at line 34 of file sources/includes/nvdsinferserver/infer_custom_process.h.
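The members below can be pulled together into a skeleton implementation. This is a hedged sketch: the stand-in declarations for IBatchBuffer, IBatchArray, IOptions, InferMemType, and the interface itself are simplified stand-ins so the example is self-contained; in a real plugin they come from nvdsinferserver/infer_custom_process.h and related headers, and the exported factory name/signature here is illustrative of the CreateCustomProcessorFunc shape, not the exact symbol.

```cpp
#include <cstdint>
#include <vector>

// --- Simplified stand-ins for the real nvdsinferserver types (assumption) ---
enum NvDsInferStatus { NVDSINFER_SUCCESS = 0, NVDSINFER_CUSTOM_LIB_FAILED };
enum class InferMemType { kNone, kCpu, kGpuCuda };

struct IBatchBuffer {
    virtual ~IBatchBuffer() = default;
    virtual void* getBufPtr(uint32_t idx) = 0;   // per the note: always use idx=0
    virtual uint32_t getBatchSize() const = 0;   // usually 0 (full dims)
};
struct IBatchArray { virtual ~IBatchArray() = default; };
struct IOptions { virtual ~IOptions() = default; };

// Interface shape mirroring the documented members.
struct IInferCustomProcessor {
    virtual ~IInferCustomProcessor() = default;  // deleted by the nvdsinferserver lib
    virtual void supportInputMemType(InferMemType& type) { type = InferMemType::kNone; }
    virtual bool requireInferLoop() const { return false; }
    virtual NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
    virtual NvDsInferStatus inferenceDone(const IBatchArray* outputs,
                                          const IOptions* inOptions) = 0;
    virtual void notifyError(NvDsInferStatus status) = 0;
};

// Example processor: requests CPU memory for extra inputs and asks for the
// per-stream inference loop so extraInputProcess()/inferenceDone() run in order.
class MyCustomProcessor : public IInferCustomProcessor {
public:
    void supportInputMemType(InferMemType& type) override { type = InferMemType::kCpu; }
    bool requireInferLoop() const override { return true; }
    NvDsInferStatus extraInputProcess(const std::vector<IBatchBuffer*>& primary,
                                      std::vector<IBatchBuffer*>& extra,
                                      const IOptions*) override {
        if (primary.empty() || extra.empty()) return NVDSINFER_CUSTOM_LIB_FAILED;
        // Fill each pre-allocated extra input via getBufPtr(0) here.
        return NVDSINFER_SUCCESS;
    }
    NvDsInferStatus inferenceDone(const IBatchArray*, const IOptions*) override {
        return NVDSINFER_SUCCESS;   // parse output tensors here
    }
    void notifyError(NvDsInferStatus status) override { lastError = status; }
    NvDsInferStatus lastError = NVDSINFER_SUCCESS;
};

// Factory in the style of CreateCustomProcessorFunc (name/signature assumed);
// the lib takes ownership and deletes the processor itself.
extern "C" IInferCustomProcessor* CreateCustomProcessor(const char* /*config*/,
                                                        uint32_t /*configLen*/) {
    return new MyCustomProcessor;
}
```

The processor is returned raw (not via a smart pointer) because, per the destructor note above, the nvdsinferserver lib deletes it.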
|
| virtual | ~IInferCustomProcessor ()=default |
| | IInferCustomProcessor will be deleted by nvdsinferserver lib. More...
|
| |
| virtual void | supportInputMemType (InferMemType &type) |
| | Query the memory type that the extraInputProcess() implementation supports. More...
|
| |
| virtual bool | requireInferLoop () const |
| | Indicates whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order per stream id. More...
|
| |
| virtual NvDsInferStatus | extraInputProcess (const std::vector< IBatchBuffer * > &primaryInputs, std::vector< IBatchBuffer * > &extraInputs, const IOptions *options)=0 |
| | Custom processor for extra input data. More...
|
| |
| virtual NvDsInferStatus | inferenceDone (const IBatchArray *outputs, const IOptions *inOptions)=0 |
| | Inference done callback for custom postprocessing. More...
|
| |
| virtual void | notifyError (NvDsInferStatus status)=0 |
| | Notification of an error to the interface implementation. More...
|
◆ ~IInferCustomProcessor()
virtual nvdsinferserver::IInferCustomProcessor::~IInferCustomProcessor ( )  [virtual, default]
IInferCustomProcessor will be deleted by the nvdsinferserver lib.
◆ extraInputProcess()
virtual NvDsInferStatus nvdsinferserver::IInferCustomProcessor::extraInputProcess ( const std::vector< IBatchBuffer * > & primaryInputs, std::vector< IBatchBuffer * > & extraInputs, const IOptions * options )  [pure virtual]
Custom processor for extra input data.
- Parameters
-
| primaryInputs | [input] The primary image input. |
| extraInputs | [input/output] Extra tensor input data generated by custom processing. The memory is pre-allocated; its memory type is the one returned by supportInputMemType(). |
| options | [input] Options associated with the input buffers. They carry most of the common DeepStream metadata along with the primary data, e.g. NvDsBatchMeta, NvDsObjectMeta, NvDsFrameMeta, stream ids, and so on. See infer_ioptions.h for all the potential key names and structures in the key-value table. |
- Returns
- NvDsInferStatus. A successful implementation must return NVDSINFER_SUCCESS, or an error value in case of error.
◆ inferenceDone()
virtual NvDsInferStatus nvdsinferserver::IInferCustomProcessor::inferenceDone ( const IBatchArray * outputs, const IOptions * inOptions )  [pure virtual]
Inference done callback for custom postprocessing.
- Parameters
-
| outputs | [input] The inference output tensors. The tensor memory type can be controlled by infer_config{ backend{ output_mem_type: MEMORY_TYPE_DEFAULT } }. By default the output tensor memory type is decided by the Triton model; the user can instead set MEMORY_TYPE_CPU or MEMORY_TYPE_GPU. |
| inOptions | [input] The options corresponding to the input tensors; they are the same as the options in extraInputProcess(). |
- Returns
- NvDsInferStatus. A successful implementation must return NVDSINFER_SUCCESS, or an error value in case of error.
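The output memory type described above is selected in the nvinferserver configuration file. A minimal fragment, sketched from the field names quoted in this documentation (any surrounding fields are omitted), might look like:

```
infer_config {
  backend {
    # Default (MEMORY_TYPE_DEFAULT): output tensor memory type is decided
    # by the Triton model. Force host-visible output so inferenceDone()
    # can read the tensors directly on the CPU:
    output_mem_type: MEMORY_TYPE_CPU
  }
}
```

Requesting MEMORY_TYPE_CPU avoids a manual device-to-host copy in inferenceDone(), at the cost of an extra transfer if later stages need the data on the GPU.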
◆ notifyError()
virtual void nvdsinferserver::IInferCustomProcessor::notifyError ( NvDsInferStatus status )  [pure virtual]
Notification of an error to the interface implementation.
- Parameters
-
| status | [input] The error status code. |
◆ requireInferLoop()
virtual bool nvdsinferserver::IInferCustomProcessor::requireInferLoop ( ) const  [inline, virtual]
Indicates whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order per stream id.
◆ supportInputMemType()
virtual void nvdsinferserver::IInferCustomProcessor::supportInputMemType ( InferMemType & type )  [inline, virtual]
Query the memory type that the extraInputProcess() implementation supports.
The documentation for this class was generated from the following file: sources/includes/nvdsinferserver/infer_custom_process.h