NVIDIA TensorRT Inference Server
0.9.0-06970c8
  • Documentation home

User Guide

  • Quickstart
  • Installing the Server
    • Installing Prebuilt Containers
  • Running the Server
    • Example Model Repository
    • Running The Inference Server
      • Checking Inference Server Status
  • Client Libraries and Examples
    • Building the Client Libraries and Examples
    • Image Classification Example Application
    • Performance Example Application
    • Client API
  • Model Repository
    • Modifying the Model Repository
    • Model Versions
    • Model Definition
      • TensorRT Models
      • TensorFlow Models
      • Caffe2 Models
      • TensorRT/TensorFlow Models
      • ONNX Models
  • Model Configuration
    • Generated Model Configuration
    • Version Policy
    • Instance Groups
    • Dynamic Batching
    • Optimization Policy
  • Inference Server API
    • Health
    • Status
    • Inference
  • Metrics

Developer Guide

  • Architecture
    • Concurrent Model Execution
  • Contributing
    • Coding Convention
  • Building
    • Building the Server
      • Incremental Builds
    • Building the Client Libraries and Examples
    • Building the Documentation
  • Testing
    • Generate QA Model Repository
    • Build QA Container
    • Run QA Container

API Reference

  • Protobuf API
    • HTTP/GRPC API
    • Model Configuration
    • Status
  • C++ API
    • Class Hierarchy
    • File Hierarchy
    • Full API
      • Namespaces
        • Namespace nvidia
        • Namespace nvidia::inferenceserver
        • Namespace nvidia::inferenceserver::client
      • Classes and Structs
        • Struct Result::ClassResult
        • Struct InferContext::Stat
        • Class Error
        • Class InferContext
        • Class InferContext::Input
        • Class InferContext::Options
        • Class InferContext::Output
        • Class InferContext::Request
        • Class InferContext::RequestTimers
        • Class InferContext::Result
        • Class InferGrpcContext
        • Class InferHttpContext
        • Class ProfileContext
        • Class ProfileGrpcContext
        • Class ProfileHttpContext
        • Class ServerHealthContext
        • Class ServerHealthGrpcContext
        • Class ServerHealthHttpContext
        • Class ServerStatusContext
        • Class ServerStatusGrpcContext
        • Class ServerStatusHttpContext
      • Functions
        • Function nvidia::inferenceserver::client::operator<<
      • Directories
        • Directory src
        • Directory clients
        • Directory c++
      • Files
        • File request.h
  • Python API
    • Client

Protobuf API

HTTP/GRPC API

  • src/core/api.proto
  • src/core/grpc_service.proto
  • src/core/request_status.proto
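
Broadly, api.proto describes the inference request and response headers, grpc_service.proto wraps them in the GRPC service definition, and request_status.proto defines the status code returned with every response; the same messages back both the HTTP and GRPC endpoints. As a rough illustration, a minimal inference request header in protobuf text format might look like the sketch below; the tensor names (input0, output0) are placeholders, not values from any shipped model.

    # InferRequestHeader sketch (placeholder tensor names)
    batch_size: 1
    input { name: "input0" }
    output { name: "output0" }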

Model Configuration

  • src/core/model_config.proto
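
model_config.proto is the schema for the config.pbtxt file that accompanies each model in the model repository (see Model Configuration in the User Guide). A minimal sketch in protobuf text format for a hypothetical TensorRT plan model follows; the model name, tensor names, and shapes are placeholders.

    # config.pbtxt sketch; model and tensor names are placeholders
    name: "mymodel"
    platform: "tensorrt_plan"
    max_batch_size: 8
    input [
      {
        name: "input0"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "output0"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]
    instance_group [
      {
        count: 1
        kind: KIND_GPU
      }
    ]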

Status

  • src/core/server_status.proto
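
server_status.proto defines the ServerStatus message returned by the Status endpoint, covering the server itself and each model it is serving. A rough text-format sketch, with a placeholder model name and the fields trimmed down, might look like:

    # ServerStatus sketch; "mymodel" is a placeholder
    id: "inference:0"
    version: "0.9.0"
    ready_state: SERVER_READY
    model_status {
      key: "mymodel"
      value {
        version_status {
          key: 1
          value { ready_state: MODEL_READY }
        }
      }
    }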

© Copyright 2018, NVIDIA Corporation
