NVIDIA DeepStream SDK API Reference

6.4 Release
nvdsinferserver::IInferCustomProcessor Class Reference [abstract]

Detailed Description

Interface for a custom processor, which is created and loaded at runtime through CreateCustomProcessorFunc.

Note: Full dimensions are used for all input and output tensors, so IBatchBuffer::getBatchSize() usually returns 0. This matches the public Triton V2 shape convention. To get the buffer pointer, users should always call IBatchBuffer::getBufPtr(idx=0).

Definition at line 38 of file infer_custom_process.h.
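A minimal implementation of this interface can be sketched as follows. This is illustrative only, not SDK code: the stub declarations stand in for the real definitions in infer_custom_process.h and infer_ioptions.h, and the factory symbol name `CreateCustomProcessor` is a placeholder (the exported name comes from your configuration).

```cpp
#include <cstdint>
#include <vector>

// --- Simplified stand-ins for the real nvdsinferserver declarations ---
// (assumption: the actual, more complete definitions live in
//  infer_custom_process.h / infer_ioptions.h).
enum NvDsInferStatus { NVDSINFER_SUCCESS = 0, NVDSINFER_CUSTOM_LIB_FAILED };
enum class InferMemType { kNone, kCpu, kGpuCuda };
struct IBatchBuffer {
    virtual ~IBatchBuffer() = default;
    virtual void* getBufPtr(uint32_t idx) = 0;   // full dims: always idx = 0
    virtual uint32_t getBatchSize() const = 0;   // usually 0 with full dims
};
struct IBatchArray { virtual ~IBatchArray() = default; };
struct IOptions { virtual ~IOptions() = default; };

class IInferCustomProcessor {
public:
    virtual ~IInferCustomProcessor() = default;
    virtual void supportInputMemType(InferMemType& type) { type = InferMemType::kCpu; }
    virtual bool requireInferLoop() const { return false; }
    virtual NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
    virtual NvDsInferStatus inferenceDone(const IBatchArray* outputs,
                                          const IOptions* inOptions) = 0;
    virtual void notifyError(NvDsInferStatus status) = 0;
};

// A minimal concrete processor: requests CPU memory for the extra inputs
// and accepts every inference result.
class MyCustomProcessor : public IInferCustomProcessor {
public:
    void supportInputMemType(InferMemType& type) override {
        type = InferMemType::kCpu;  // extra input tensors allocated on CPU
    }
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& /*primaryInputs*/,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* /*options*/) override {
        // Fill each pre-allocated extra input tensor through getBufPtr(0).
        for (IBatchBuffer* buf : extraInputs) {
            if (!buf || !buf->getBufPtr(0)) return NVDSINFER_CUSTOM_LIB_FAILED;
            // ... write the extra tensor data here ...
        }
        return NVDSINFER_SUCCESS;
    }
    NvDsInferStatus inferenceDone(const IBatchArray* /*outputs*/,
                                  const IOptions* /*inOptions*/) override {
        return NVDSINFER_SUCCESS;  // parse the output tensors here
    }
    void notifyError(NvDsInferStatus /*status*/) override {}
};

// Factory with C linkage so the library can look the symbol up by name at
// runtime; the processor object it returns is deleted by nvdsinferserver.
extern "C" IInferCustomProcessor* CreateCustomProcessor() {
    return new MyCustomProcessor();
}
```

Note that ownership transfers to the library on return from the factory, which is why the destructor documentation below stresses that the object is deleted by the nvdsinferserver lib.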

Public Member Functions

virtual ~IInferCustomProcessor ()=default
 IInferCustomProcessor will be deleted by nvdsinferserver lib. More...
 
virtual void supportInputMemType (InferMemType &type)
 Query the memory type that the extraInputProcess() implementation supports. More...
 
virtual bool requireInferLoop () const
 Indicate whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order for each stream id. More...
 
virtual NvDsInferStatus extraInputProcess (const std::vector< IBatchBuffer * > &primaryInputs, std::vector< IBatchBuffer * > &extraInputs, const IOptions *options)=0
 Custom processor for extra input data. More...
 
virtual NvDsInferStatus inferenceDone (const IBatchArray *outputs, const IOptions *inOptions)=0
 Inference done callback for custom postprocessing. More...
 
virtual void notifyError (NvDsInferStatus status)=0
 Notification of an error to the interface implementation. More...
 

Constructor & Destructor Documentation

◆ ~IInferCustomProcessor()

virtual nvdsinferserver::IInferCustomProcessor::~IInferCustomProcessor ( )
virtual default

IInferCustomProcessor will be deleted by nvdsinferserver lib.

Member Function Documentation

◆ extraInputProcess()

virtual NvDsInferStatus nvdsinferserver::IInferCustomProcessor::extraInputProcess ( const std::vector< IBatchBuffer * > &  primaryInputs,
std::vector< IBatchBuffer * > &  extraInputs,
const IOptions *  options 
)
pure virtual

Custom processor for extra input data.

Parameters
primaryInputs	[input] The primary image input.
extraInputs	[input/output] The extra tensor inputs to be filled by the custom processing. The memory is pre-allocated; its memory type matches the types returned by supportInputMemType().
options	[input] The options associated with the input buffers. They carry most of the common DeepStream metadata along with the primary data, e.g. NvDsBatchMeta, NvDsObjectMeta, NvDsFrameMeta, stream ids, and so on. See infer_ioptions.h for all the potential key names and structures in the key-value table.
Returns
NvDsInferStatus: the implementation must return NVDSINFER_SUCCESS on success, or an error value on failure.

◆ inferenceDone()

virtual NvDsInferStatus nvdsinferserver::IInferCustomProcessor::inferenceDone ( const IBatchArray *  outputs,
const IOptions *  inOptions 
)
pure virtual

Inference done callback for custom postprocessing.

Parameters
outputs	[input] The inference output tensors. The tensor memory type can be controlled through infer_config{ backend{ output_mem_type: MEMORY_TYPE_DEFAULT } }. With MEMORY_TYPE_DEFAULT, the output tensor memory type is decided by the Triton model; users can instead set MEMORY_TYPE_CPU or MEMORY_TYPE_GPU.
inOptions	[input] The options corresponding to the input tensors; the same as the options in extraInputProcess().
Returns
NvDsInferStatus: the implementation must return NVDSINFER_SUCCESS on success, or an error value on failure.

◆ notifyError()

virtual void nvdsinferserver::IInferCustomProcessor::notifyError ( NvDsInferStatus  status)
pure virtual

Notification of an error to the interface implementation.

Parameters
status	[input] The error code.

◆ requireInferLoop()

virtual bool nvdsinferserver::IInferCustomProcessor::requireInferLoop ( ) const
inline virtual

Indicate whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarantees that extraInputProcess() and inferenceDone() run in order for each stream id.

Users can process the last frame's output tensors in inferenceDone() and feed them into the next frame's inference input tensors in extraInputProcess().

Returns
true if the loop is needed (e.g. LSTM-based processing);
false otherwise.

Definition at line 61 of file infer_custom_process.h.
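The per-stream ordering guarantee makes such state feedback straightforward. The sketch below shows only the pattern, with a hypothetical per-stream cache rather than SDK code: inferenceDone() stores a frame's output state, and the next extraInputProcess() on the same stream reads it back as the next input.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical per-stream state cache illustrating the inference loop that
// requireInferLoop() == true enables. Because the library orders
// extraInputProcess()/inferenceDone() per stream id, each stream's entry is
// never read and written for the same frame concurrently.
class StreamStateCache {
public:
    // Called from inferenceDone(): keep this frame's output state tensor.
    void store(uint64_t streamId, std::vector<float> state) {
        states_[streamId] = std::move(state);
    }
    // Called from extraInputProcess(): fetch the previous frame's state.
    // Empty on the first frame, i.e. the initial (zero-length) LSTM state.
    const std::vector<float>& load(uint64_t streamId) {
        return states_[streamId];  // default-constructs empty on first use
    }
private:
    std::map<uint64_t, std::vector<float>> states_;
};
```

Each stream id gets its own slot, so interleaved frames from different sources do not corrupt each other's recurrent state.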

◆ supportInputMemType()

virtual void nvdsinferserver::IInferCustomProcessor::supportInputMemType ( InferMemType &  type)
inline virtual

Query the memory type that the extraInputProcess() implementation supports.

Memory will be allocated based on the return type and passed to extraInputProcess().

Parameters
type	[output] Must be chosen from InferMemType::kCpu or InferMemType::kGpuCuda.

Definition at line 51 of file infer_custom_process.h.

References nvdsinferserver::kCpu.


The documentation for this class was generated from the following file:
infer_custom_process.h