Inference Server API¶
The inference server exposes both HTTP and GRPC endpoints. Three endpoints with identical functionality are exposed for each protocol.
- Health: The server health API for determining server liveness and readiness.
- Status: The server status API for getting information about the server and about the models being served.
- Inference: The inference API that accepts model inputs, runs inference and returns the requested outputs.
The HTTP endpoints can be used directly as described in this section, but for most use cases, the preferred way to access the inference server is via the C++ and Python client libraries (see Client Libraries and Examples).
The GRPC endpoints can also be used via the C++ and Python client libraries (see Client Libraries and Examples), or a GRPC-generated API can be used directly as shown in the grpc_image_client.py example.
Health¶
Performing an HTTP GET to /api/health/live returns a 200 status if the server is able to receive and process requests. Any other status code indicates that the server is still initializing or has failed in some way that prevents it from processing requests.
Once the liveness endpoint indicates that the server is active, performing an HTTP GET to /api/health/ready returns a 200 status if the server is able to respond to inference requests for some or all models (based on the inference server’s --strict-readiness option explained below). Any other status code indicates that the server is not ready to respond to some or all inference requests.
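For example, a minimal Python sketch (using the requests library and assuming the server's HTTP endpoint is at localhost:8000, the default HTTP port) polls both health endpoints:

import requests

# Liveness: 200 means the server can receive and process requests.
live = requests.get("http://localhost:8000/api/health/live")
print("live:", live.status_code == 200)

# Readiness: 200 means the server can respond to inference requests
# (for some or all models, depending on --strict-readiness).
ready = requests.get("http://localhost:8000/api/health/ready")
print("ready:", ready.status_code == 200)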
For GRPC, the GRPCService uses the HealthRequest and HealthResponse messages to implement the endpoint.
By default, the readiness endpoint returns success if the server is responsive and all models loaded successfully. Thus, by default, success indicates that an inference request for any model can be handled by the server. For some use cases you want the readiness endpoint to return success even if not all models are available. In this case, use the --strict-readiness=false option so that the readiness endpoint reports success as long as the server is responsive (even if one or more models are not available).
Status¶
Performing an HTTP GET to /api/status returns status information about the server and all the models being served. Performing an HTTP GET to /api/status/<model name> returns information about the server and the single model specified by <model name>. The server status is returned in the HTTP response body in either text format (the default) or in binary format if the query parameter format=binary is specified (for example, /api/status?format=binary). The success or failure of the status request is indicated in the HTTP response code and the NV-Status response header. The NV-Status response header contains a text-format RequestStatus protobuf message.
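For example, a minimal Python sketch (again using the requests library, the default HTTP port 8000, and a hypothetical model named resnet50) that retrieves the status of a single model in binary format and checks the NV-Status header:

import requests

# Request the status of a single model; format=binary returns the ServerStatus
# protobuf in binary rather than the default text format.
response = requests.get("http://localhost:8000/api/status/resnet50",
                        params={"format": "binary"})

# Success or failure is reported both in the HTTP status code and in the
# NV-Status header, which carries a text-format RequestStatus protobuf.
print(response.status_code)
print(response.headers.get("NV-Status"))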
For GRPC, the GRPCService uses the StatusRequest and StatusResponse messages to implement the endpoint. The response includes a RequestStatus message indicating success or failure.
For either protocol the status itself is returned as a ServerStatus message.
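The following GRPC sketch is illustrative only: the Python module names (grpc_service_pb2, grpc_service_pb2_grpc), the default GRPC port 8001, and the request/response field names are assumptions based on the message names listed above rather than details given in this section.

import grpc
import grpc_service_pb2
import grpc_service_pb2_grpc

# Assumed default GRPC port.
channel = grpc.insecure_channel("localhost:8001")
stub = grpc_service_pb2_grpc.GRPCServiceStub(channel)

# Ask for the status of a single model (model_name is an assumed field name).
request = grpc_service_pb2.StatusRequest(model_name="resnet50")
response = stub.Status(request)

# The response includes a RequestStatus indicating success or failure and the
# ServerStatus itself (request_status and server_status are assumed field names).
print(response.request_status)
print(response.server_status)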
Inference¶
Performing an HTTP POST to /api/infer/<model name> performs inference using the latest version of the model that is being made available by the model’s version policy. The latest version is the numerically greatest version number. Performing an HTTP POST to /api/infer/<model name>/<model version> performs inference using a specific version of the model.
The request uses the NV-InferRequest header to communicate an InferRequestHeader message that describes the input tensors and the requested output tensors. For example, for a resnet50 model the following NV-InferRequest header indicates that a batch-size 1 request is being made with an input size of 602112 bytes (3 * 224 * 224 * sizeof(FP32)), and that the result of the tensor named “output” should be returned as the top-3 classification values:
NV-InferRequest: batch_size: 1 input { name: "input" byte_size: 602112 } output { name: "output" byte_size: 4000 cls { count: 3 } }
The input tensor values are communicated in the body of the HTTP POST request as raw binary, in the same order as the inputs are listed in the request header.
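Putting this together, a hedged Python sketch (requests and numpy, default HTTP port 8000, and the hypothetical resnet50 model from the example above) that sends the request described by that header:

import numpy as np
import requests

# Raw FP32 input for a batch-size 1 request: 3 * 224 * 224 * 4 bytes = 602112.
input_tensor = np.zeros((3, 224, 224), dtype=np.float32)

headers = {
    "NV-InferRequest":
        'batch_size: 1 '
        'input { name: "input" byte_size: 602112 } '
        'output { name: "output" byte_size: 4000 cls { count: 3 } }'
}

# The raw tensor bytes form the POST body, in the same order as the inputs are
# listed in the request header.
response = requests.post("http://localhost:8000/api/infer/resnet50",
                         headers=headers,
                         data=input_tensor.tobytes())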
The inference results are returned in the body of the HTTP response to the POST request. For outputs where full result tensors were requested, the result values are communicated in the body of the response in the same order as the outputs are listed in the request header. After those, an InferResponseHeader message is appended to the response body. The InferResponseHeader message is returned in either text format (the default) or in binary format if the query parameter format=binary is specified (for example, /api/infer/foo?format=binary).
For example, assuming the outputs specified in the InferResponseHeader are, in order, “output0”, “output1”, …, “outputn”, the response body would contain:
<raw binary tensor values for output0, if raw output was requested for output0>
<raw binary tensor values for output1, if raw output was requested for output1>
...
<raw binary tensor values for outputn, if raw output was requested for outputn>
<text or binary encoded InferResponseHeader proto>
The success or failure of the inference request is indicated in the HTTP response code and the NV-Status response header. The NV-Status response header contains a text-format RequestStatus protobuf message.
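Continuing the request sketch above, the response to that POST can be inspected as follows (only a classification result was requested, with no raw outputs, so the body holds just the InferResponseHeader):

# Success or failure is reported in the HTTP status code and in the NV-Status
# header, which carries a text-format RequestStatus protobuf.
print(response.status_code)
print(response.headers.get("NV-Status"))

# With no raw outputs requested, the body contains only the InferResponseHeader,
# in text format by default (add format=binary to get the binary encoding).
print(response.text)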
For GRPC, the GRPCService uses the InferRequest and InferResponse messages to implement the endpoint. The response includes a RequestStatus message indicating success or failure, an InferResponseHeader message giving response meta-data, and the raw output tensors.