NVIDIA TensorRT Inference Server
Version 0.11.0


File custom.h

This header declares the C interface that a custom backend shared library implements so the inference server can load and execute it.

Parent directory: src/servables/custom

Contents

  • Definition (src/servables/custom/custom.h)
  • Includes
  • Classes
  • Functions
  • Defines
  • Typedefs

Definition (src/servables/custom/custom.h)

  • Program Listing for File custom.h

Includes

  • stddef.h
  • stdint.h

Classes

  • Struct custom_payload_struct

Functions

  • Function CustomErrorString
  • Function CustomExecute
  • Function CustomFinalize
  • Function CustomInitialize
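Together these functions form the lifecycle of a custom backend: the server calls CustomInitialize when a model instance is created, CustomExecute for each batch of inference payloads, CustomFinalize on teardown, and CustomErrorString to turn backend error codes into messages. The sketch below illustrates that shape with hypothetical, simplified signatures; the real prototypes (and the CustomExecute payload/callback machinery, omitted here) are in the program listing for custom.h.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the real declarations in
 * src/servables/custom/custom.h -- consult the program listing for
 * the exact prototypes and error-code conventions. */
enum { ERROR_NONE = 0, ERROR_UNKNOWN = 1 };

typedef struct backend_context {
  int device_id; /* e.g. CUSTOM_NO_GPU_DEVICE when no GPU is assigned */
} BackendContext;

/* CustomInitialize: allocate per-model state and hand it back to the
 * server through 'custom_context'. */
int CustomInitialize(int gpu_device, void** custom_context)
{
  BackendContext* bctx = malloc(sizeof(BackendContext));
  if (bctx == NULL) {
    return ERROR_UNKNOWN;
  }
  bctx->device_id = gpu_device;
  *custom_context = bctx;
  return ERROR_NONE;
}

/* CustomFinalize: release everything CustomInitialize allocated. */
int CustomFinalize(void* custom_context)
{
  free(custom_context);
  return ERROR_NONE;
}

/* CustomErrorString: translate a backend error code into a
 * human-readable message for the server's logs. */
const char* CustomErrorString(void* custom_context, int errcode)
{
  (void)custom_context;
  return (errcode == ERROR_NONE) ? "success" : "unknown error";
}
```

Because the server only ever sees the context as a `void*`, the backend is free to keep whatever state it needs behind that pointer.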

Defines

  • Define CUSTOM_NO_GPU_DEVICE

Typedefs

  • Typedef CustomErrorStringFn_t
  • Typedef CustomExecuteFn_t
  • Typedef CustomFinalizeFn_t
  • Typedef CustomGetNextInputFn_t
  • Typedef CustomGetOutputFn_t
  • Typedef CustomInitializeFn_t
  • Typedef CustomPayload
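The function-pointer typedefs exist because the server loads a custom backend as a shared library at runtime, resolves its entry points by name, and stores them as typed pointers it can call through. A minimal sketch of that pattern, using a hypothetical simplified signature for CustomFinalizeFn_t (see the program listing for the real declaration):

```c
#include <stdlib.h>

/* Hypothetical simplified typedef; the real CustomFinalizeFn_t is
 * declared in src/servables/custom/custom.h. */
typedef int (*CustomFinalizeFn_t)(void* custom_context);

/* A backend's finalize implementation. In a real deployment the
 * server would locate this symbol in the backend's shared library
 * (e.g. via dlsym) rather than linking it directly. */
static int ExampleFinalize(void* custom_context)
{
  free(custom_context);
  return 0; /* 0 signals success */
}

/* After loading, the server keeps the resolved entry point in a
 * typed pointer and calls through it like any other function. */
static CustomFinalizeFn_t finalize_fn = ExampleFinalize;
```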

© Copyright 2018, NVIDIA Corporation

Built with Sphinx using a theme provided by Read the Docs.