NVIDIA TensorRT Inference Server
0.10.0-9717094

Class Hierarchy

    • Namespace nvidia
      • Namespace nvidia::inferenceserver
        • Namespace nvidia::inferenceserver::client
          • Class Error
          • Class InferContext
            • Struct InferContext::Stat
            • Class InferContext::Input
            • Class InferContext::Options
            • Class InferContext::Output
            • Class InferContext::Request
            • Class InferContext::RequestTimers
            • Class InferContext::Result
              • Struct Result::ClassResult
          • Class InferGrpcContext
          • Class InferHttpContext
          • Class ProfileContext
          • Class ProfileGrpcContext
          • Class ProfileHttpContext
          • Class ServerHealthContext
          • Class ServerHealthGrpcContext
          • Class ServerHealthHttpContext
          • Class ServerStatusContext
          • Class ServerStatusGrpcContext
          • Class ServerStatusHttpContext
    • Struct custom_payload_struct

© Copyright 2018, NVIDIA Corporation
