NVIDIA DeepStream SDK API Reference

6.4 Release
INFER_EXPORT_API Namespace Reference

Data Structures

class  BufferPool
 Template class for a buffer pool of the specified buffer type. More...
 
class  DlLibHandle
 Helper class for dynamic loading of a custom library. More...
 
class  GuardQueue
 Template class for creating a thread-safe queue for the given container class. More...
 
class  MapBufferPool
 Template class for a map of buffer pools. More...
 
class  QueueThread
 Template class for running the specified function on the queue items in a separate thread. More...
 
class  WakeupException
 Wrapper class for handling exceptions. More...
 

Typedefs

template<class UniPtr >
using SharedBufPool = std::shared_ptr< BufferPool< UniPtr > >
 

Functions

void dsInferLogPrint__ (NvDsInferLogLevel level, const char *fmt,...)
 Print the nvinferserver log messages as per the configured log level. More...
 
void dsInferLogVPrint__ (NvDsInferLogLevel level, const char *fmt, va_list args)
 Helper function to print the nvinferserver logs. More...
 
bool string_empty (const char *str)
 Helper function, returns true if the input C string is empty or null. More...
 
template<typename T >
bool isNonBatch (T b)
 Checks if the input batch size is zero. More...
 
bool fEqual (float a, float b)
 Check if two floating-point values are equal, i.e., their difference is less than or equal to the epsilon value. More...
 
uint32_t getElementSize (InferDataType t)
 Get the size of the element from the data type. More...
 
bool hasWildcard (const InferDims &dims)
 Check if any of the InferDims dimensions are of dynamic size (-1 or negative values). More...
 
size_t dimsSize (const InferDims &dims)
 Calculate the total number of elements for the given dimensions. More...
 
void normalizeDims (InferDims &dims)
 Recalculates the total number of elements for the dimensions. More...
 
NvDsInferLayerInfo toCapi (const LayerDescription &desc, void *bufPtr)
 Convert the layer description and buffer pointer to NvDsInferLayerInfo of the interface. More...
 
NvDsInferDims toCapi (const InferDims &dims)
 Convert the InferDims to NvDsInferDims of the library interface. More...
 
NvDsInferLayerInfo toCapiLayerInfo (const InferBufferDescription &desc, void *buf=nullptr)
 Generate NvDsInferLayerInfo of the interface from the buffer description and buffer pointer. More...
 
NvDsInferDataType toCapiDataType (InferDataType dt)
 Convert the InferDataType to NvDsInferDataType of the library interface. More...
 
bool intersectDims (const InferDims &a, const InferDims &b, InferDims &c)
 Get the intersection of the two input dimensions. More...
 
bool isPrivateTensor (const std::string &tensorName)
 Check if the given tensor is marked as private (contains INFER_SERVER_PRIVATE_BUF in the name). More...
 
bool isCpuMem (InferMemType type)
 Check if the memory type uses CPU memory (kCpu or kCpuCuda). More...
 
std::string memType2Str (InferMemType type)
 Returns a string object corresponding to the InferMemType name. More...
 
InferDims fullDims (int batchSize, const InferDims &in)
 Extend the dimensions to include batch size. More...
 
bool debatchFullDims (const InferDims &full, InferDims &debatched, uint32_t &batch)
 Separates batch size from given dimensions. More...
 
bool squeezeMatch (const InferDims &a, const InferDims &b)
 Check whether the two dimensions are equal, ignoring dimensions of size 1. More...
 
SharedBatchBuf ReshapeBuf (const SharedBatchBuf &in, uint32_t batch, const InferDims &dims, bool reCalcBytes=false)
 Update the buffer dimensions as per the provided new dimensions. More...
 
SharedBatchBuf reshapeToFullDimsBuf (const SharedBatchBuf &buf, bool reCalcBytes=false)
 Reshape the buffer dimensions with the batch size added as a new dimension. More...
 
NvDsInferStatus tensorBufferCopy (const SharedBatchBuf &in, const SharedBatchBuf &out, const SharedCuStream &stream)
 Copy one tensor buffer to another. More...
 

Typedef Documentation

◆ SharedBufPool

template<class UniPtr >
using INFER_EXPORT_API::SharedBufPool = std::shared_ptr<BufferPool<UniPtr> >

Definition at line 493 of file infer_utils.h.

Function Documentation

◆ batchDims2Str()

std::string INFER_EXPORT_API::batchDims2Str ( const InferBatchDims &  d)

◆ dataType2GrpcStr()

std::string INFER_EXPORT_API::dataType2GrpcStr ( const InferDataType  type)

◆ dataType2Str()

std::string INFER_EXPORT_API::dataType2Str ( const InferDataType  type)

◆ debatchFullDims()

bool INFER_EXPORT_API::debatchFullDims ( const InferDims &  full,
InferDims &  debatched,
uint32_t &  batch 
)

Separates batch size from given dimensions.

Parameters
[in] full  Input full dimensions with batch size.
[out] debatched  Output dimensions without the batch size.
[out] batch  Batch size of the input dimensions.
Returns
True if the batch size could be derived (number of dimensions >= 1), false otherwise.
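
A minimal usage sketch (assuming InferDims exposes the numDims, d and numElements fields declared in infer_datatypes.h):

    // Split an 8x3x224x224 full shape into a batch size and per-sample dims.
    InferDims full;
    full.numDims = 4;
    full.d[0] = 8; full.d[1] = 3; full.d[2] = 224; full.d[3] = 224;
    full.numElements = 8 * 3 * 224 * 224;

    InferDims perSample;
    uint32_t batch = 0;
    if (debatchFullDims(full, perSample, batch)) {
        // batch == 8; perSample.numDims == 3; perSample.d == {3, 224, 224}
    }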

◆ dims2ImageInfo()

NvDsInferNetworkInfo INFER_EXPORT_API::dims2ImageInfo ( const InferDims &  d,
InferTensorOrder  order 
)

◆ dims2Str()

std::string INFER_EXPORT_API::dims2Str ( const InferDims &  d)

Helper functions to convert the various data types to string values for debug and log information.

◆ dimsSize()

size_t INFER_EXPORT_API::dimsSize ( const InferDims &  dims)
inline

Calculate the total number of elements for the given dimensions.

Parameters
dims  Input dimensions.
Returns
Total number of elements, 0 in case of dynamic size.

Definition at line 670 of file infer_utils.h.

References hasWildcard().

Referenced by normalizeDims().
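
Conceptually, the result is the product of all dimension values, or 0 when any wildcard is present. A standalone sketch of the same arithmetic (an illustrative stand-in, not the SDK implementation):

    #include <cstddef>
    #include <vector>

    // Element count of a shape: product of the dims, 0 if any dim is dynamic.
    size_t elementCount(const std::vector<int>& dims) {
        if (dims.empty()) return 0;
        size_t total = 1;
        for (int v : dims) {
            if (v < 0) return 0;          // wildcard / dynamic dimension
            total *= static_cast<size_t>(v);
        }
        return total;
    }
    // elementCount({3, 224, 224}) == 150528; elementCount({-1, 224, 224}) == 0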

◆ dirName()

std::string INFER_EXPORT_API::dirName ( const std::string &  path)

◆ dsInferLogVPrint__()

void INFER_EXPORT_API::dsInferLogVPrint__ ( NvDsInferLogLevel  level,
const char *  fmt,
va_list  args 
)

Helper function to print the nvinferserver logs.

This function prints the log message to stderr or stdout depending on the input level. If the input level is higher than the log level configured by the environment variable NVDSINFERSERVER_LOG_LEVEL (default NVDSINFER_LOG_INFO), the message is discarded. Messages of level NVDSINFER_LOG_ERROR are written to stderr, all others to stdout. A global mutex guards concurrent prints from multiple threads.

Parameters
[in] level  Log level of the message.
[in] fmt  The fprintf format string of the log message.
[in] args  The variable argument list for fprintf.
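
A hedged usage sketch of the printf-style front end dsInferLogPrint__ (the NvDsInferLogLevel enumerators come from nvdsinfer.h; the message contents are examples only):

    // Info goes to stdout, error goes to stderr; messages above the configured
    // NVDSINFERSERVER_LOG_LEVEL are discarded.
    dsInferLogPrint__(NVDSINFER_LOG_INFO, "loaded model %s, batch size %d", "resnet10", 4);
    dsInferLogPrint__(NVDSINFER_LOG_ERROR, "inference request failed: %d", -1);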

◆ fEqual()

bool INFER_EXPORT_API::fEqual ( float  a,
float  b 
)

Check if two floating-point values are equal, i.e., their difference is less than or equal to the epsilon value.
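
The comparison is equivalent to checking |a - b| <= epsilon. A standalone sketch (the exact epsilon value used by the SDK is not specified here; std::numeric_limits<float>::epsilon() is an assumption):

    #include <cmath>
    #include <limits>

    // Illustrative epsilon comparison, mirroring what fEqual() does.
    bool nearlyEqual(float a, float b) {
        return std::fabs(a - b) <= std::numeric_limits<float>::epsilon();
    }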

◆ file_accessible() [1/2]

bool INFER_EXPORT_API::file_accessible ( const char *  path)
inline

Helper functions to check if the input file path is valid and accessible.

Definition at line 86 of file infer_utils.h.

◆ file_accessible() [2/2]

bool INFER_EXPORT_API::file_accessible ( const std::string &  path)
inline

Definition at line 91 of file infer_utils.h.

◆ fullDims()

InferDims INFER_EXPORT_API::fullDims ( int  batchSize,
const InferDims &  in 
)

Extend the dimensions to include batch size.

Parameters
[in] batchSize  Input batch size.
[in] in  Input dimensions.
Returns
Extended dimensions with batch size added as first dimension.

Referenced by nvdsinfer::convertFullDims().
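
For example, a batch size of 8 and input dimensions {3, 224, 224} yield {8, 3, 224, 224}. A usage sketch (InferDims field names assumed as in infer_datatypes.h):

    InferDims in;
    in.numDims = 3;
    in.d[0] = 3; in.d[1] = 224; in.d[2] = 224;

    InferDims withBatch = fullDims(8, in);
    // withBatch.numDims == 4; withBatch.d == {8, 3, 224, 224}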

◆ getElementSize()

uint32_t INFER_EXPORT_API::getElementSize ( InferDataType  t)
inline

Get the size of the element from the data type.

Definition at line 624 of file infer_utils.h.

References InferError.
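
The mapping is the usual byte width per data type. An illustrative stand-in (not the SDK implementation; the InferDataType enumerators shown are assumed from infer_datatypes.h):

    #include <cstdint>

    // Byte size of one element for a few common data types.
    uint32_t bytesPerElement(InferDataType t) {
        switch (t) {
            case InferDataType::kFp32:
            case InferDataType::kInt32: return 4;
            case InferDataType::kFp16:  return 2;
            case InferDataType::kInt8:  return 1;
            default:                    return 0;   // unknown or unsupported type
        }
    }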

◆ grpcStr2DataType()

InferDataType INFER_EXPORT_API::grpcStr2DataType ( const std::string &  type)

◆ hasWildcard()

bool INFER_EXPORT_API::hasWildcard ( const InferDims &  dims)
inline

Check if any of the InferDims dimensions are of dynamic size (-1 or negative values).

Definition at line 656 of file infer_utils.h.

Referenced by nvdsinferserver::DimsFromTriton(), and dimsSize().

◆ intersectDims()

bool INFER_EXPORT_API::intersectDims ( const InferDims &  a,
const InferDims &  b,
InferDims &  c 
)

Get the intersection of the two input dimensions.

This function derives the intersection of the two input dimensions by replacing wildcard (dynamically sized) dimensions with the corresponding value from the other input. The function returns failure if the two inputs have a different number of dimensions, or if two corresponding dimensions are of fixed but different sizes.

Parameters
[in] a  First input dimensions.
[in] b  Second input dimensions.
[out] c  The derived output intersection.
Returns
True if the intersection could be found, false otherwise.
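
For example, intersecting {-1, 3, 224, 224} with {8, 3, 224, 224} yields {8, 3, 224, 224}. A usage sketch (InferDims field names assumed as in infer_datatypes.h):

    InferDims a, b, c;
    a.numDims = 4; a.d[0] = -1; a.d[1] = 3; a.d[2] = 224; a.d[3] = 224;
    b.numDims = 4; b.d[0] =  8; b.d[1] = 3; b.d[2] = 224; b.d[3] = 224;
    if (intersectDims(a, b, c)) {
        // c.d == {8, 3, 224, 224}; the call fails if a fixed dimension differs.
    }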

◆ isAbsolutePath()

bool INFER_EXPORT_API::isAbsolutePath ( const std::string &  path)

◆ isCpuMem()

bool INFER_EXPORT_API::isCpuMem ( InferMemType  type)

Check if the memory type uses CPU memory (kCpu or kCpuCuda).

◆ isNonBatch()

template<typename T >
bool INFER_EXPORT_API::isNonBatch ( T  b)
inline

Checks if the input batch size is zero.

Definition at line 101 of file infer_utils.h.

Referenced by nvdsinferserver::BaseBackend::isNonBatching().

◆ isPrivateTensor()

bool INFER_EXPORT_API::isPrivateTensor ( const std::string &  tensorName)

Check if the given tensor is marked as private (contains INFER_SERVER_PRIVATE_BUF in the name).

Private tensors are skipped in inference output processing.

◆ joinPath()

std::string INFER_EXPORT_API::joinPath ( const std::string &  a,
const std::string &  b 
)

Helper functions for parsing the configuration file.

◆ memType2Str()

std::string INFER_EXPORT_API::memType2Str ( InferMemType  type)

Returns a string object corresponding to the InferMemType name.

◆ normalizeDims()

void INFER_EXPORT_API::normalizeDims ( InferDims &  dims)
inline

Recalculates the total number of elements for the dimensions.

Parameters
dims  Input dimensions.

Definition at line 686 of file infer_utils.h.

References dimsSize().

◆ operator!=()

bool INFER_EXPORT_API::operator!= ( const InferDims &  a,
const InferDims &  b 
)

◆ operator<=()

bool INFER_EXPORT_API::operator<= ( const InferDims &  a,
const InferDims &  b 
)

Comparison operators for the InferDims type.

◆ operator==()

bool INFER_EXPORT_API::operator== ( const InferDims &  a,
const InferDims &  b 
)

◆ operator>()

bool INFER_EXPORT_API::operator> ( const InferDims &  a,
const InferDims &  b 
)

◆ realPath()

bool INFER_EXPORT_API::realPath ( const std::string &  inPath,
std::string &  absPath 
)

◆ ReshapeBuf()

SharedBatchBuf INFER_EXPORT_API::ReshapeBuf ( const SharedBatchBuf &  in,
uint32_t  batch,
const InferDims &  dims,
bool  reCalcBytes = false 
)

Update the buffer dimensions as per the provided new dimensions.

Parameters
[in] in  Input batch buffer.
[in] batch  Expected batch size.
[in] dims  New buffer dimensions.
[in] reCalcBytes  Flag to enable recalculation of the total number of bytes in the buffer based on the expected batch size and new dimensions.
Returns
Shared pointer to a new batch buffer that points to the same memory but with updated dimensions.
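
A hedged usage sketch, e.g. flattening a per-sample CHW shape while keeping the batch size (inBuf is a placeholder for an existing SharedBatchBuf; InferDims field names assumed as in infer_datatypes.h):

    // inBuf currently describes a batch of 8 tensors of shape {3, 224, 224}.
    InferDims flat;
    flat.numDims = 1;
    flat.d[0] = 3 * 224 * 224;

    SharedBatchBuf flatBuf = ReshapeBuf(inBuf, /*batch*/ 8, flat, /*reCalcBytes*/ true);
    // flatBuf shares the underlying memory with inBuf; only the description changes.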

◆ reshapeToFullDimsBuf()

SharedBatchBuf INFER_EXPORT_API::reshapeToFullDimsBuf ( const SharedBatchBuf &  buf,
bool  reCalcBytes = false 
)

Reshape the buffer dimensions with the batch size added as a new dimension.

Parameters
[in] buf  Input batch buffer.
[in] reCalcBytes  Flag to enable recalculation of the total number of bytes in the buffer.
Returns
Shared pointer to a new batch buffer that points to the same memory but with updated dimensions.

◆ safeStr() [1/2]

const char* INFER_EXPORT_API::safeStr ( const char *  str)
inline

Helper functions to get a safe C-string representation for the input string.

Returns an empty string if the input pointer is null.

Definition at line 64 of file infer_utils.h.

◆ safeStr() [2/2]

◆ squeezeMatch()

bool INFER_EXPORT_API::squeezeMatch ( const InferDims &  a,
const InferDims &  b 
)

Check whether the two dimensions are equal, ignoring dimensions of size 1.

Parameters
[in] a  First set of inference dimensions.
[in] b  Second set of inference dimensions.
Returns
True if the two dimensions match.
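
For example, {1, 3, 224, 224} and {3, 224, 224} match once the size-1 dimension is ignored. A usage sketch (InferDims field names assumed as in infer_datatypes.h):

    InferDims a, b;
    a.numDims = 4; a.d[0] = 1; a.d[1] = 3; a.d[2] = 224; a.d[3] = 224;
    b.numDims = 3; b.d[0] = 3; b.d[1] = 224; b.d[2] = 224;

    bool same = squeezeMatch(a, b);   // true: only a size-1 dimension differs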

◆ string_empty()

bool INFER_EXPORT_API::string_empty ( const char *  str)
inline

Helper function, returns true if the input C string is empty or null.

Definition at line 77 of file infer_utils.h.

◆ tensorBufferCopy()

NvDsInferStatus INFER_EXPORT_API::tensorBufferCopy ( const SharedBatchBuf &  in,
const SharedBatchBuf &  out,
const SharedCuStream &  stream 
)

Copy one tensor buffer to another.

This function copies the data from one batch buffer to another. Both buffers must have the same total number of bytes. When copying to or from device memory, cudaMemcpyAsync is used with the provided CUDA stream and GPU ID. For copies between host memory buffers, memcpy is used.

Parameters
[in] in  Source batch buffer.
[out] out  Destination batch buffer.
[in] stream  CUDA stream for use with cudaMemcpyAsync.
Returns
NVDSINFER_SUCCESS on success or NVDSINFER_CUDA_ERROR on error.
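
A hedged usage sketch (srcBuf, dstBuf and copyStream are placeholders for existing SharedBatchBuf and SharedCuStream objects whose byte sizes match):

    NvDsInferStatus status = tensorBufferCopy(srcBuf, dstBuf, copyStream);
    if (status != NVDSINFER_SUCCESS) {
        // Handle NVDSINFER_CUDA_ERROR: the copy could not be issued or failed.
    }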

◆ tensorOrder2Str()

std::string INFER_EXPORT_API::tensorOrder2Str ( InferTensorOrder  order)

◆ toCapi() [1/2]

NvDsInferDims INFER_EXPORT_API::toCapi ( const InferDims &  dims)

Convert the InferDims to NvDsInferDims of the library interface.

◆ toCapi() [2/2]

NvDsInferLayerInfo INFER_EXPORT_API::toCapi ( const LayerDescription &  desc,
void *  bufPtr 
)

Convert the layer description and buffer pointer to NvDsInferLayerInfo of the interface.

◆ toCapiDataType()

NvDsInferDataType INFER_EXPORT_API::toCapiDataType ( InferDataType  dt)

Convert the InferDataType to NvDsInferDataType of the library interface.

◆ toCapiLayerInfo()

NvDsInferLayerInfo INFER_EXPORT_API::toCapiLayerInfo ( const InferBufferDescription &  desc,
void *  buf = nullptr 
)

Generate NvDsInferLayerInfo of the interface from the buffer description and buffer pointer.