NVIDIA TensorRT Inference Server
1.4.0
Typedef CustomInitializeData
Defined in File custom.h
Typedef Documentation
typedef struct custom_initdata_struct CustomInitializeData
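
CustomInitializeData is the structure the server passes to a custom backend's CustomInitialize entry point when a model instance is loaded. The sketch below shows a minimal CustomInitialize that reads from this structure; it is illustrative only. The field names used (instance_name, gpu_device_id, serialized_model_config, serialized_model_config_size) are assumed from the typical custom_initdata_struct layout in custom.h and should be verified against the header you build with, and MyContext is a hypothetical backend-defined type.

// Minimal sketch of a custom backend's CustomInitialize entry point.
// Assumption: field names on CustomInitializeData follow the usual
// custom.h layout; check them against your copy of the header.

#include <string>

#include "custom.h"  // declares CustomInitializeData and CUSTOM_NO_GPU_DEVICE

// Hypothetical per-instance context owned by the backend.
struct MyContext {
  std::string instance_name;
  int gpu_device;
};

extern "C" int
CustomInitialize(const CustomInitializeData* data, void** custom_context)
{
  // Allocate the backend's own context and record where this instance
  // is expected to run (CUSTOM_NO_GPU_DEVICE indicates CPU execution).
  MyContext* ctx = new MyContext;
  ctx->instance_name = data->instance_name;
  ctx->gpu_device = data->gpu_device_id;

  // data->serialized_model_config / data->serialized_model_config_size
  // carry the model configuration, which a real backend would parse here.

  *custom_context = static_cast<void*>(ctx);
  return 0;  // zero indicates success; a non-zero code is translated by CustomErrorString
}

The returned context pointer is handed back to the backend on every subsequent CustomExecute and CustomFinalize call, so any state derived from CustomInitializeData should be stored in it here.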