NVIDIA DeepStream SDK API Reference
8.0 Release
#ifndef __NVDSINFERSERVER_EXTRA_PROCESSOR_H__
#define __NVDSINFERSERVER_EXTRA_PROCESSOR_H__

#include "infer_custom_process.h"
#include "infer_datatypes.h"

    SharedDllHandle dlHandle,
    const std::string& funcName,
    const std::string& config);

    BaseBackend& backend,
    const std::set<std::string>& excludes, int32_t poolSize,
    int gpuId);

bool requireLoop() const { return m_RequireInferLoop; }

uint32_t m_maxBatch = 0;
bool m_firstDimDynamicBatch = false;
bool m_RequireInferLoop = false;
Header file for pre-processing CUDA kernels with normalization and mean subtraction require...
std::shared_ptr<DlLibHandle> SharedDllHandle
std::unique_ptr<TensorMapPool> TensorMapPoolPtr
std::shared_ptr<BaseBatchArray> SharedBatchArray
std::shared_ptr<IInferCustomProcessor> InferCustomProcessorPtr
Header file of the common declarations for the nvinferserver library.
std::shared_ptr<CudaStream> SharedCuStream
    CUDA-based pointers.
std::unique_ptr<StreamManager> UniqStreamManager
MapBufferPool<std::string, UniqCudaTensorBuf> TensorMapPool
std::shared_ptr<IOptions> SharedOptions
std::vector<LayerDescription> LayerDescriptionList
Header file containing utility functions and classes used by the nvinferserver low level library.
Base class of inference backend processing.
Header file for inference processing backend base class.
NvDsInferStatus
    Enum for the status codes returned by NvDsInferContext.