NVIDIA DeepStream SDK API Reference

8.0 Release
9.0/sources/libs/nvdsinferserver/infer_trtis_server.h File Reference

Detailed Description

Header file for the wrapper classes around the Triton Inference Server instance, inference request, and response.

This file declares the wrapper classes used for inference processing with the Triton Inference Server in C-API mode.

Definition in file 9.0/sources/libs/nvdsinferserver/infer_trtis_server.h.

Go to the source code of this file.

Data Structures

class  nvdsinferserver::TrtServerRequest
 Wrapper class for Triton inference request. More...
 
class  nvdsinferserver::TrtServerResponse
 Wrapper class for Triton output parsing. More...
 
class  nvdsinferserver::TrtServerAllocator
 Wrapper class for Triton server output memory allocator. More...
 
struct  nvdsinferserver::triton::BackendConfig
 The backend configuration settings. More...
 
struct  nvdsinferserver::triton::RepoSettings
 Model repository settings for the Triton Inference Server. More...
 
class  nvdsinferserver::TrtISServer
 Wrapper class for creating Triton Inference Server instance. More...
 

Namespaces

 nvdsinferserver
 Namespace containing the DeepStream Triton inference server interface and its wrapper classes.
 
 nvdsinferserver::triton
 

Macros

#define TRITON_DEFAULT_MINIMUM_COMPUTE_CAPABILITY   6.0
 
#define TRITON_DEFAULT_PINNED_MEMORY_BYTES   (1 << 28)
 
#define TRITON_DEFAULT_BACKEND_DIR   "/opt/tritonserver/backends"
 

Macro Definition Documentation

◆ TRITON_DEFAULT_BACKEND_DIR

#define TRITON_DEFAULT_BACKEND_DIR   "/opt/tritonserver/backends"

◆ TRITON_DEFAULT_MINIMUM_COMPUTE_CAPABILITY

#define TRITON_DEFAULT_MINIMUM_COMPUTE_CAPABILITY   6.0

◆ TRITON_DEFAULT_PINNED_MEMORY_BYTES

#define TRITON_DEFAULT_PINNED_MEMORY_BYTES   (1 << 28)