..
  # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
  #
  # Redistribution and use in source and binary forms, with or without
  # modification, are permitted provided that the following conditions
  # are met:
  #  * Redistributions of source code must retain the above copyright
  #    notice, this list of conditions and the following disclaimer.
  #  * Redistributions in binary form must reproduce the above copyright
  #    notice, this list of conditions and the following disclaimer in the
  #    documentation and/or other materials provided with the distribution.
  #  * Neither the name of NVIDIA CORPORATION nor the names of its
  #    contributors may be used to endorse or promote products derived
  #    from this software without specific prior written permission.
  #
  # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
  # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  # PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
  # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
  # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
  # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
  # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
  # OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
  # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

.. _section-inference-server-api:

Inference Server API
====================

The TensorRT Inference Server exposes both HTTP and GRPC
endpoints. Three endpoints with identical functionality are exposed
for each protocol.

* :ref:`section-api-health`: The server health API for determining
  server liveness and readiness.

* :ref:`section-api-status`: The server status API for getting
  information about the server and about the models being served.

* :ref:`section-api-inference`: The inference API that accepts model
  inputs, runs inference and returns the requested outputs.

The HTTP endpoints can be used directly as described in this section,
but for most use cases the preferred way to access TRTIS is via the
C++ and Python client libraries. The GRPC endpoints can also be used
via the C++ and Python client libraries, or a GRPC-generated API can
be used directly as shown in the grpc_image_client.py example.

.. _section-api-health:

Health
------

Performing an HTTP GET to /api/health/live returns a 200 status if
the server is able to receive and process requests. Any other status
code indicates that the server is still initializing or has failed in
some way that prevents it from processing requests.

Once the liveness endpoint indicates that the server is active,
performing an HTTP GET to /api/health/ready returns a 200 status if
the server is able to respond to inference requests for some or all
models (based on TRTIS's -\-strict-readiness option explained
below). Any other status code indicates that the server is not ready
to respond to some or all inference requests.

For GRPC the :cpp:var:`GRPCService` uses the :cpp:var:`HealthRequest`
and :cpp:var:`HealthResponse` messages to implement the endpoint.

By default, the readiness endpoint returns success if the server is
responsive and all models loaded successfully. Thus, by default,
success indicates that an inference request for any model can be
handled by the server.
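
For illustration, both health endpoints can be exercised with any
HTTP client. The following sketch uses the Python requests package;
the server address (localhost:8000) is an assumption, not part of the
API::

  import requests

  base = "http://localhost:8000"  # assumed server address

  # Liveness: 200 means the server can receive and process requests.
  live = requests.get(base + "/api/health/live")
  print("live:", live.status_code == 200)

  # Readiness: with the default strict readiness behavior, 200 means
  # the server is responsive and all models loaded successfully.
  ready = requests.get(base + "/api/health/ready")
  print("ready:", ready.status_code == 200)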
For some use cases you want the readiness endpoint to return success
even if not all models are available. In this case, use the
-\-strict-readiness=false option to cause the readiness endpoint to
report success as long as the server is responsive, even if one or
more models are not available.

.. _section-api-status:

Status
------

Performing an HTTP GET to /api/status returns status information
about the server and all the models being served. Performing an HTTP
GET to /api/status/<model name> returns information about the server
and the single model specified by <model name>. The server status is
returned in the HTTP response body in either text format (the
default) or in binary format if the query parameter format=binary is
specified (for example, /api/status?format=binary).

The success or failure of the status request is indicated in the HTTP
response code and the **NV-Status** response header. The
**NV-Status** response header returns a text protobuf formatted
:cpp:var:`RequestStatus` message.

For GRPC the :cpp:var:`GRPCService` uses the :cpp:var:`StatusRequest`
and :cpp:var:`StatusResponse` messages to implement the endpoint. The
response includes a :cpp:var:`RequestStatus` message indicating
success or failure.

For either protocol the status itself is returned as a
:cpp:var:`ServerStatus` message.
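
As a sketch, the status endpoint can be queried with plain HTTP
GETs. The server address and the model name resnet50 below are
assumptions used only for illustration::

  import requests

  base = "http://localhost:8000"  # assumed server address

  # Status for the server and all served models, text format by default.
  r = requests.get(base + "/api/status")
  print(r.headers.get("NV-Status"))  # text protobuf RequestStatus
  print(r.text)                      # text protobuf ServerStatus

  # Status for a single, hypothetical model named "resnet50".
  r = requests.get(base + "/api/status/resnet50")
  print(r.status_code)

  # Request the ServerStatus in binary protobuf format instead.
  r = requests.get(base + "/api/status", params={"format": "binary"})
  server_status_bytes = r.content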
.. _section-api-inference:

Inference
---------

Performing an HTTP POST to /api/infer/<model name> performs inference
using the latest version of the model that is being made available by
the model's version policy. The latest version is the numerically
greatest version number. Performing an HTTP POST to
/api/infer/<model name>/<model version> performs inference using a
specific version of the model.

The request uses the **NV-InferRequest** header to communicate an
:cpp:var:`InferRequestHeader` message that describes the input
tensors and the requested output tensors. For example, for a resnet50
model the following **NV-InferRequest** header indicates that a
batch-size 1 request is being made with an input size of 602112 bytes
(3 * 224 * 224 * sizeof(FP32)), and that the result of the tensor
named "output" should be returned as the top-3 classification
values::

  NV-InferRequest: batch_size: 1 input { name: "input" byte_size: 602112 } output { name: "output" byte_size: 4000 cls { count: 3 } }

The input tensor values are communicated in the body of the HTTP POST
request as raw binary, in the order the inputs are listed in the
request header.

The inference results are returned in the body of the HTTP response
to the POST request. For outputs where full result tensors were
requested, the result values are communicated in the body of the
response in the order the outputs are listed in the request
header. After those, an :cpp:var:`InferResponseHeader` message is
appended to the response body. The :cpp:var:`InferResponseHeader`
message is returned in either text format (the default) or in binary
format if the query parameter format=binary is specified (for
example, /api/infer/foo?format=binary).

For example, assuming the outputs specified in the
:cpp:var:`InferResponseHeader` are, in order, "output0", "output1",
..., "outputn", the response body would contain::

  <raw binary tensor values for output0>
  <raw binary tensor values for output1>
  ...
  <raw binary tensor values for outputn>
  <text or binary encoded InferResponseHeader proto>

The success or failure of the inference request is indicated in the
HTTP response code and the **NV-Status** response header. The
**NV-Status** response header returns a text protobuf formatted
:cpp:var:`RequestStatus` message.

For GRPC the :cpp:var:`GRPCService` uses the :cpp:var:`InferRequest`
and :cpp:var:`InferResponse` messages to implement the endpoint. The
response includes a :cpp:var:`RequestStatus` message indicating
success or failure, an :cpp:var:`InferResponseHeader` message giving
response meta-data, and the raw output tensors.
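
Putting the HTTP pieces together, the following sketch issues the
resnet50 request shown in the **NV-InferRequest** example above. The
server address, model name, and zero-filled input are assumptions for
illustration::

  import numpy as np
  import requests

  base = "http://localhost:8000"  # assumed server address

  # Hypothetical resnet50 model: one FP32 input of 3 * 224 * 224 elements.
  image = np.zeros((3, 224, 224), dtype=np.float32)
  assert image.nbytes == 602112

  # The same InferRequestHeader as the NV-InferRequest example above,
  # written as single-line text-format protobuf.
  infer_request = (
      'batch_size: 1 '
      'input { name: "input" byte_size: 602112 } '
      'output { name: "output" byte_size: 4000 cls { count: 3 } }')

  # Input tensor values are sent as raw binary in the POST body, in the
  # order the inputs are listed in the request header.
  r = requests.post(
      base + "/api/infer/resnet50",
      headers={"NV-InferRequest": infer_request},
      data=image.tobytes())

  print(r.status_code, r.headers.get("NV-Status"))
  # Only classification (cls) results were requested, so the body should
  # hold just the InferResponseHeader, in text format by default.
  print(r.text)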