NVIDIA DeepStream SDK API Reference
9.0 Release
#ifndef __INFER_CUDA_CONTEXT_H__
#define __INFER_CUDA_CONTEXT_H__

#include <shared_mutex>

#include "infer_datatypes.h"

class CropSurfaceConverter;
class NetworkPreprocessor;
class CudaEventInPool;

class InferCudaContext : public InferBaseContext {
    ...
    NvDsInferStatus fixateInferenceInfo(const ic::InferenceConfig& config,
                                        BaseBackend& backend) override;
    ...
    NvDsInferStatus createPreprocessor(const ic::PreProcessParams& params,
                                       std::vector<UniqPreprocessor>& processors) override;
    ...
    NvDsInferStatus allocateResource(const ic::InferenceConfig& config) override;
    ...
        int poolSize, int gpuId);
    ...
        const ic::InferenceConfig& config, BaseBackend& backend,
        const std::string& primaryTensor);
    ...
};
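The overridden hooks above are driven by the base context in a fixed order during initialization. A minimal sketch of that hook-based lifecycle, using stand-in types (`MiniBaseContext`, `MiniCudaContext`, and the `Status` enum are illustrative, not the library's real classes):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in status code mirroring the role of NvDsInferStatus.
enum Status { STATUS_SUCCESS = 0, STATUS_FAILED = 1 };

// The base class drives the hooks in a fixed order during initialize();
// the derived (CUDA) context supplies the concrete behavior.
struct MiniBaseContext {
    virtual ~MiniBaseContext() = default;
    Status initialize() {
        if (fixateInferenceInfo() != STATUS_SUCCESS) return STATUS_FAILED;
        if (createPreprocessor() != STATUS_SUCCESS) return STATUS_FAILED;
        return allocateResource();
    }
protected:
    virtual Status fixateInferenceInfo() = 0;
    virtual Status createPreprocessor() = 0;
    virtual Status allocateResource() = 0;
};

struct MiniCudaContext : MiniBaseContext {
    std::vector<std::string> calls; // records hook order for illustration
protected:
    Status fixateInferenceInfo() override { calls.push_back("fixate");  return STATUS_SUCCESS; }
    Status createPreprocessor() override  { calls.push_back("preproc"); return STATUS_SUCCESS; }
    Status allocateResource() override    { calls.push_back("alloc");   return STATUS_SUCCESS; }
};
```

Only the hook names follow the reference above; the call order shown (fixate, then preprocessor creation, then resource allocation) matches the sequence implied by the member documentation below.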
SharedBufPool< std::unique_ptr< CudaEventInPool > > m_HostTensorEvents
Pool of CUDA events for host tensor copy.
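The event pool follows a common acquire/auto-return pattern: `acquire` hands out a `shared_ptr` whose custom deleter puts the object back into the pool instead of destroying it. A minimal sketch, where `Event` and `EventPool` are stand-ins for `CudaEventInPool` and the library's `SharedBufPool` (the real pool also owns CUDA event handles):

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <vector>

struct Event { int id; };  // stand-in for a pooled CUDA event wrapper

class EventPool {
public:
    explicit EventPool(int n) {
        for (int i = 0; i < n; ++i) m_Free.push_back(new Event{i});
    }
    // Sketch only: assumes all events are back in the pool at destruction.
    ~EventPool() { for (Event* e : m_Free) delete e; }

    // Returns nullptr when the pool is exhausted.
    std::shared_ptr<Event> acquire() {
        std::lock_guard<std::mutex> lock(m_Mutex);
        if (m_Free.empty()) return nullptr;
        Event* e = m_Free.back();
        m_Free.pop_back();
        // The custom deleter returns the event instead of freeing it.
        return std::shared_ptr<Event>(e, [this](Event* ev) {
            std::lock_guard<std::mutex> l(m_Mutex);
            m_Free.push_back(ev);
        });
    }

    size_t freeCount() {
        std::lock_guard<std::mutex> lock(m_Mutex);
        return m_Free.size();
    }

private:
    std::mutex m_Mutex;
    std::vector<Event*> m_Free;
};
```

With this shape, `acquireTensorHostEvent()` can simply hand the `SharedCuEvent` to whichever thread performs the host tensor copy; the event returns to the pool when the last reference drops.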
This is a header file for pre-processing CUDA kernels with normalization and mean subtraction require...
void getNetworkInputInfo(NvDsInferNetworkInfo &networkInfo) override
Get the network input layer information.
InferDataType
Datatype of the tensor buffer.
UniqStreamManager m_MultiStreamManager
Stream-ID based management.
Postprocessor * m_FinalProcessor
Header file of the common declarations for the nvinferserver library.
Header file of the base class for inference context.
InferMediaFormat
Image formats.
NvDsInferStatus extraOutputTensorCheck(SharedBatchArray &outputs, SharedOptions inOptions) override
Post inference steps for the custom processor and LSTM controller.
NvDsInferStatus createPreprocessor(const ic::PreProcessParams &params, std::vector< UniqPreprocessor > &processors) override
Create the surface converter and network preprocessor.
NvDsInferStatus allocateResource(const ic::InferenceConfig &config) override
Allocate resources for the preprocessors and post-processor.
Header file containing utility functions and classes used by the nvinferserver low level library.
NetworkPreprocessor * m_NetworkPreprocessor
InferTensorOrder
The type of tensor order.
Stores the information of a layer in the inference model.
std::shared_ptr< BaseBatchArray > SharedBatchArray
UniqLstmController m_LstmController
LSTM controller.
std::shared_ptr< CudaEvent > SharedCuEvent
InferCudaContext()
Constructor.
kRGB
24-bit interleaved R-G-B
std::string m_NetworkImageName
The input layer name.
SharedCuEvent acquireTensorHostEvent()
Acquire a CUDA event from the events pool.
std::vector< SharedCudaTensorBuf > m_ExtraInputs
Array of buffers of the additional inputs.
int tensorPoolSize() const
Get the size of the tensor pool.
void notifyError(NvDsInferStatus status) override
In case of error, notify the waiting threads.
std::unique_ptr< BasePostprocessor > UniqPostprocessor
Processor interfaces.
Holds information about the model network.
std::unique_ptr< StreamManager > UniqStreamManager
InferDataType m_InputDataType
The input layer datatype.
std::shared_ptr< IOptions > SharedOptions
MapBufferPool< std::string, UniqSysMem > m_HostTensorPool
Map of pools for the output tensors.
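Keying the pools by tensor name keeps buffers of different sizes from mixing: each output tensor gets its own pre-sized pool. A minimal sketch of that map-of-pools idea, where `SysMem` and `MapBufferPool` are simplified stand-ins for the library's host-memory wrapper and pool template:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct SysMem {  // stand-in for the library's host-memory wrapper
    std::vector<unsigned char> data;
    explicit SysMem(size_t bytes) : data(bytes) {}
    size_t bytes() const { return data.size(); }
};

class MapBufferPool {
public:
    // Pre-populate the pool for one tensor with `count` buffers of `bytes`.
    void addPool(const std::string& name, int count, size_t bytes) {
        auto& pool = m_Pools[name];
        for (int i = 0; i < count; ++i)
            pool.push_back(std::make_unique<SysMem>(bytes));
    }

    // Take a buffer out of the named pool; nullptr if none is available.
    std::unique_ptr<SysMem> acquire(const std::string& name) {
        auto it = m_Pools.find(name);
        if (it == m_Pools.end() || it->second.empty()) return nullptr;
        auto buf = std::move(it->second.back());
        it->second.pop_back();
        return buf;
    }

private:
    std::map<std::string, std::vector<std::unique_ptr<SysMem>>> m_Pools;
};
```

`acquireTensorHostBuf(name, bytes)` plausibly maps onto this shape: look up the pool for `name`, then hand out a buffer of at least `bytes` (the real pool also handles waiting and resizing, which the sketch omits).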
NvDsInferStatus deinit() override
Release the host tensor pool buffers, extra input buffers, LSTM controller, and extra input processor.
std::unique_ptr< LstmController > UniqLstmController
std::shared_ptr< SysMem > SharedSysMem
NvDsInferStatus preInference(SharedBatchArray &inputs, const ic::InferenceConfig &config) override
Initialize non-image input layers if the custom library has implemented the interface.
UniqInferExtraProcessor m_ExtraProcessor
Extra and custom processing pre/post inference.
InferTensorOrder m_InputTensorOrder
The input layer tensor order.
const ic::InferenceConfig & config() const
SharedSysMem acquireTensorHostBuf(const std::string &name, size_t bytes)
Allocator: acquire a host buffer for an output tensor from the host tensor pool.
CropSurfaceConverter * m_SurfaceConverter
Preprocessor and post-processor handles.
NvDsInferStatus createPostprocessor(const ic::PostProcessParams &params, UniqPostprocessor &processor) override
Create the post-processor as per the network output type.
NvDsInferStatus fixateInferenceInfo(const ic::InferenceConfig &config, BaseBackend &backend) override
Check the tensor order, media format, and datatype for the input tensor.
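The validation `fixateInferenceInfo` performs amounts to rejecting input-tensor configurations the CUDA preprocessing path cannot serve. A minimal sketch of such a check, using stand-in enums (the real `InferTensorOrder`, `InferMediaFormat`, and `InferDataType` live in `infer_datatypes.h`; the specific constraint shown is illustrative only):

```cpp
#include <cassert>

// Stand-in enums; values are illustrative, not the library's definitions.
enum class TensorOrder { kLinear, kNHWC, kNone };
enum class MediaFormat { kRGB, kBGR, kGray };
enum class DataType    { kFp32, kFp16, kInt8 };

// Reject configurations the preprocessing path cannot handle.
bool checkInputTensor(TensorOrder order, MediaFormat fmt, DataType dt) {
    if (order == TensorOrder::kNone)
        return false;  // tensor order must be fixated before inference
    if (fmt == MediaFormat::kGray && order == TensorOrder::kNHWC)
        return false;  // illustrative unsupported combination
    (void)dt;          // all listed datatypes accepted in this sketch
    return true;
}
```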
std::unique_ptr< InferExtraProcessor > UniqInferExtraProcessor
InferMediaFormat m_NetworkImageFormat
The input layer media format.
~InferCudaContext() override
Destructor.
NvDsInferStatus
Enum for the status codes returned by NvDsInferContext.
NvDsInferNetworkInfo m_NetworkImageInfo
Network input height, width, channels for preprocessing.
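Once the network input dimensions are fixated, preprocessing buffer sizes follow directly from them. A minimal sketch of that arithmetic, with `NetworkInfo` as a stand-in for `NvDsInferNetworkInfo`:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for NvDsInferNetworkInfo (width, height, channels).
struct NetworkInfo {
    uint32_t width, height, channels;
};

// Bytes needed for one batched input buffer:
// width * height * channels * bytes-per-element * batch size.
size_t inputBufBytes(const NetworkInfo& info, size_t elemBytes, uint32_t batch) {
    return static_cast<size_t>(info.width) * info.height * info.channels *
           elemBytes * batch;
}
```

For example, a 224x224 RGB input in fp32 with batch size 4 needs 224 * 224 * 3 * 4 * 4 bytes.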