NVIDIA DeepStream SDK API Reference

6.2 Release
nvdsinferserver::TritonGrpcBackend Class Reference

Detailed Description

Triton gRPC mode backend processing class.

Definition at line 34 of file infer_grpc_backend.h.

Inheritance diagram for nvdsinferserver::TritonGrpcBackend:
Collaboration diagram for nvdsinferserver::TritonGrpcBackend:

Public Member Functions

 TritonGrpcBackend (std::string model, int64_t version)
 
 ~TritonGrpcBackend () override
 
void setOutputs (const std::set< std::string > &names)
 
void setUrl (const std::string &url)
 
void setEnableCudaBufferSharing (const bool enableSharing)
 
NvDsInferStatus initialize () override
 
void addClassifyParams (const TritonClassParams &c)
 Add Triton Classification parameters to the list. More...
 
NvDsInferStatus specifyInputDims (const InputShapes &shapes) override
 Specify the input layers for the backend. More...
 
void setTensorMaxBytes (const std::string &name, size_t maxBytes)
 Set the maximum size for the tensor; the larger of the existing size and the new input size is used. More...
 

Protected Types

enum  {
  kName,
  kGpuId,
  kMemType
}
 Tuple keys as <tensor-name, gpu-id, memType> More...
 
using AsyncDone = std::function< void(NvDsInferStatus, SharedBatchArray)>
 Asynchronous inference done function: AsyncDone(Status, outputs). More...
 
using PoolKey = std::tuple< std::string, int64_t, InferMemType >
 Tuple holding tensor name, GPU ID, memory type. More...
 
using PoolValue = SharedBufPool< UniqSysMem >
 The buffer pool for the specified tensor, GPU and memory type combination. More...
 
using ReorderItemPtr = std::shared_ptr< ReorderItem >
 

Protected Member Functions

NvDsInferStatus enqueue (SharedBatchArray inputs, SharedCuStream stream, InputsConsumed bufConsumed, InferenceDone inferenceDone) override
 
void requestTritonOutputNames (std::set< std::string > &names) override
 
NvDsInferStatus ensureServerReady () override
 
NvDsInferStatus ensureModelReady () override
 
NvDsInferStatus setupLayersInfo () override
 
NvDsInferStatus Run (SharedBatchArray inputs, InputsConsumed bufConsumed, AsyncDone asyncDone) override
 
NvDsInferStatus setupReorderThread ()
 Create a loop thread that calls inferenceDoneReorderLoop on the queued items. More...
 
void setAllocator (UniqTritonAllocator allocator)
 Set the output tensor allocator. More...
 
TrtServerPtr & server ()
 Get the Triton server handle. More...
 
NvDsInferStatus fixateDims (const SharedBatchArray &bufs)
 Extend the dimensions to include the batch size for the buffers in the input array. More...
 
SharedSysMem allocateResponseBuf (const std::string &tensor, size_t bytes, InferMemType memType, int64_t devId)
 Acquire a buffer from the output buffer pool associated with the device ID and memory type. More...
 
void releaseResponseBuf (const std::string &tensor, SharedSysMem mem)
 Release the output tensor buffer. More...
 
NvDsInferStatus ensureInputs (SharedBatchArray &inputs)
 Ensure that the array of input buffers is as expected by the model, reshaping the input buffers if required. More...
 
PoolValue findResponsePool (PoolKey &key)
 Find the buffer pool for the given key. More...
 
PoolValue createResponsePool (PoolKey &key, size_t bytes)
 Create a new buffer pool for the key. More...
 
void serverInferCompleted (std::shared_ptr< TrtServerRequest > request, std::unique_ptr< TrtServerResponse > uniqResponse, InputsConsumed inputsConsumed, AsyncDone asyncDone)
 Call the inputs-consumed function, parse the inference response to form the array of output batch buffers, and call asyncDone on it. More...
 
bool inferenceDoneReorderLoop (ReorderItemPtr item)
 Add input buffers to the output buffer list if required. More...
 
bool debatchingOutput (SharedBatchArray &outputs, SharedBatchArray &inputs)
 Separate the batch dimension from the output buffer descriptors. More...
 

Member Typedef Documentation

◆ AsyncDone

using nvdsinferserver::TrtISBackend::AsyncDone = std::function<void(NvDsInferStatus, SharedBatchArray)>
protected inherited

Asynchronous inference done function: AsyncDone(Status, outputs).

Definition at line 169 of file infer_trtis_backend.h.

◆ PoolKey

using nvdsinferserver::TrtISBackend::PoolKey = std::tuple<std::string, int64_t, InferMemType>
protected inherited

Tuple holding tensor name, GPU ID, memory type.

Definition at line 224 of file infer_trtis_backend.h.

◆ PoolValue

using nvdsinferserver::TrtISBackend::PoolValue = SharedBufPool<UniqSysMem>
protected inherited

The buffer pool for the specified tensor, GPU and memory type combination.

Definition at line 229 of file infer_trtis_backend.h.

◆ ReorderItemPtr

using nvdsinferserver::TrtISBackend::ReorderItemPtr = std::shared_ptr<ReorderItem>
protected inherited

Definition at line 293 of file infer_trtis_backend.h.

Member Enumeration Documentation

◆ anonymous enum

anonymous enum
protected inherited

Tuple keys as <tensor-name, gpu-id, memType>

Enumerator
kName 
kGpuId 
kMemType 

Definition at line 220 of file infer_trtis_backend.h.

Constructor & Destructor Documentation

◆ TritonGrpcBackend()

nvdsinferserver::TritonGrpcBackend::TritonGrpcBackend ( std::string  model,
int64_t  version 
)

◆ ~TritonGrpcBackend()

nvdsinferserver::TritonGrpcBackend::~TritonGrpcBackend ( )
override

Member Function Documentation

◆ addClassifyParams()

void nvdsinferserver::TrtISBackend::addClassifyParams ( const TritonClassParams &  c)
inline inherited

Add Triton Classification parameters to the list.

Definition at line 58 of file infer_trtis_backend.h.

◆ allocateResponseBuf()

SharedSysMem nvdsinferserver::TrtISBackend::allocateResponseBuf ( const std::string &  tensor,
size_t  bytes,
InferMemType  memType,
int64_t  devId 
)
protected inherited

Acquire a buffer from the output buffer pool associated with the device ID and memory type.

Create the pool if it doesn't exist.

Parameters
[in]	tensor	Name of the output tensor.
[in]	bytes	Buffer size.
[in]	memType	Requested memory type.
[in]	devId	Device ID for the allocation.
Returns
Pointer to the allocated buffer.

◆ createResponsePool()

PoolValue nvdsinferserver::TrtISBackend::createResponsePool ( PoolKey &  key,
size_t  bytes 
)
protected inherited

Create a new buffer pool for the key.

Parameters
[in]	key	The pool key combination.
[in]	bytes	Size of the requested buffer.
Returns
The newly created buffer pool.

◆ debatchingOutput()

bool nvdsinferserver::TrtISBackend::debatchingOutput ( SharedBatchArray &  outputs,
SharedBatchArray &  inputs 
)
protected inherited

Separate the batch dimension from the output buffer descriptors.

Parameters
[in]	outputs	Array of output batch buffers.
[in]	inputs	Array of input batch buffers.
Returns
Boolean indicating success or failure.

◆ enqueue()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::enqueue ( SharedBatchArray  inputs,
SharedCuStream  stream,
InputsConsumed  bufConsumed,
InferenceDone  inferenceDone 
)
override protected

◆ ensureInputs()

NvDsInferStatus nvdsinferserver::TrtISBackend::ensureInputs ( SharedBatchArray &  inputs)
protected inherited

Ensure that the array of input buffers is as expected by the model, reshaping the input buffers if required.

Parameters
	inputs	Array of input batch buffers.
Returns
NVDSINFER_SUCCESS or NVDSINFER_TRITON_ERROR.

◆ ensureModelReady()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::ensureModelReady ( )
override protected virtual

Reimplemented from nvdsinferserver::TrtISBackend.

◆ ensureServerReady()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::ensureServerReady ( )
override protected virtual

Reimplemented from nvdsinferserver::TrtISBackend.

◆ findResponsePool()

PoolValue nvdsinferserver::TrtISBackend::findResponsePool ( PoolKey &  key)
protected inherited

Find the buffer pool for the given key.

◆ fixateDims()

NvDsInferStatus nvdsinferserver::TrtISBackend::fixateDims ( const SharedBatchArray &  bufs)
protected inherited

Extend the dimensions to include the batch size for the buffers in the input array.

Do nothing if batch input is not required.

◆ getClassifyParams()

std::vector<TritonClassParams> nvdsinferserver::TrtISBackend::getClassifyParams ( )
inline inherited

Definition at line 71 of file infer_trtis_backend.h.

◆ inferenceDoneReorderLoop()

bool nvdsinferserver::TrtISBackend::inferenceDoneReorderLoop ( ReorderItemPtr  item)
protected inherited

Add input buffers to the output buffer list if required.

De-batch and run inference done callback.

Parameters
[in]	item	The reorder task.
Returns
Boolean indicating success or failure.

◆ initialize()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::initialize ( )
override

◆ model()

const std::string& nvdsinferserver::TrtISBackend::model ( ) const
inline inherited

Definition at line 73 of file infer_trtis_backend.h.

◆ outputDevId()

int64_t nvdsinferserver::TrtISBackend::outputDevId ( ) const
inline inherited

Definition at line 70 of file infer_trtis_backend.h.

◆ outputMemType()

InferMemType nvdsinferserver::TrtISBackend::outputMemType ( ) const
inline inherited

Definition at line 68 of file infer_trtis_backend.h.

◆ outputPoolSize()

int nvdsinferserver::TrtISBackend::outputPoolSize ( ) const
inline inherited

Definition at line 66 of file infer_trtis_backend.h.

◆ releaseResponseBuf()

void nvdsinferserver::TrtISBackend::releaseResponseBuf ( const std::string &  tensor,
SharedSysMem  mem 
)
protected inherited

Release the output tensor buffer.

Parameters
[in]	tensor	Name of the output tensor.
[in]	mem	Pointer to the memory buffer.

◆ requestTritonOutputNames()

void nvdsinferserver::TritonGrpcBackend::requestTritonOutputNames ( std::set< std::string > &  names)
override protected virtual

Reimplemented from nvdsinferserver::TrtISBackend.

◆ Run()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::Run ( SharedBatchArray  inputs,
InputsConsumed  bufConsumed,
AsyncDone  asyncDone 
)
override protected virtual

Reimplemented from nvdsinferserver::TrtISBackend.

◆ server()

TrtServerPtr& nvdsinferserver::TrtISBackend::server ( )
inline protected inherited

Get the Triton server handle.

Definition at line 164 of file infer_trtis_backend.h.

◆ serverInferCompleted()

void nvdsinferserver::TrtISBackend::serverInferCompleted ( std::shared_ptr< TrtServerRequest >  request,
std::unique_ptr< TrtServerResponse >  uniqResponse,
InputsConsumed  inputsConsumed,
AsyncDone  asyncDone 
)
protected inherited

Call the inputs-consumed function, parse the inference response to form the array of output batch buffers, and call asyncDone on it.

Parameters
[in]	request	Pointer to the inference request.
[in]	uniqResponse	Pointer to the inference response from the server.
[in]	inputsConsumed	Callback function for releasing the input buffers.
[in]	asyncDone	Callback function for processing the response.

◆ setAllocator()

void nvdsinferserver::TrtISBackend::setAllocator ( UniqTritonAllocator  allocator)
inline protected inherited

Set the output tensor allocator.

Definition at line 148 of file infer_trtis_backend.h.

◆ setEnableCudaBufferSharing()

void nvdsinferserver::TritonGrpcBackend::setEnableCudaBufferSharing ( const bool  enableSharing)
inline

Definition at line 43 of file infer_grpc_backend.h.

◆ setOutputDevId()

void nvdsinferserver::TrtISBackend::setOutputDevId ( int64_t  devId)
inline inherited

Definition at line 69 of file infer_trtis_backend.h.

◆ setOutputMemType()

void nvdsinferserver::TrtISBackend::setOutputMemType ( InferMemType  memType)
inline inherited

Definition at line 67 of file infer_trtis_backend.h.

◆ setOutputPoolSize()

void nvdsinferserver::TrtISBackend::setOutputPoolSize ( int  size)
inline inherited

Helper function to access the member variables.

Definition at line 65 of file infer_trtis_backend.h.

◆ setOutputs()

void nvdsinferserver::TritonGrpcBackend::setOutputs ( const std::set< std::string > &  names)
inline

Definition at line 39 of file infer_grpc_backend.h.

◆ setTensorMaxBytes()

void nvdsinferserver::TrtISBackend::setTensorMaxBytes ( const std::string &  name,
size_t  maxBytes 
)
inline inherited

Set the maximum size for the tensor; the larger of the existing size and the new input size is used.

The size is rounded up to INFER_MEM_ALIGNMENT bytes.

Parameters
	name	Name of the tensor.
	maxBytes	New maximum number of bytes for the buffer.

Definition at line 110 of file infer_trtis_backend.h.

References INFER_MEM_ALIGNMENT, and INFER_ROUND_UP.

◆ setupLayersInfo()

NvDsInferStatus nvdsinferserver::TritonGrpcBackend::setupLayersInfo ( )
override protected virtual

Reimplemented from nvdsinferserver::TrtISBackend.

◆ setupReorderThread()

NvDsInferStatus nvdsinferserver::TrtISBackend::setupReorderThread ( )
protected inherited

Create a loop thread that calls inferenceDoneReorderLoop on the queued items.

Returns
NVDSINFER_SUCCESS or NVDSINFER_TRITON_ERROR.

◆ setUrl()

void nvdsinferserver::TritonGrpcBackend::setUrl ( const std::string &  url)
inline

Definition at line 42 of file infer_grpc_backend.h.

◆ specifyInputDims()

NvDsInferStatus nvdsinferserver::TrtISBackend::specifyInputDims ( const InputShapes &  shapes)
override inherited

Specify the input layers for the backend.

Parameters
	shapes	List of names and shapes of the input layers.
Returns
Status code of the type NvDsInferStatus.

◆ version()

int64_t nvdsinferserver::TrtISBackend::version ( ) const
inline inherited

Definition at line 74 of file infer_trtis_backend.h.


The documentation for this class was generated from the following file:

infer_grpc_backend.h