Client Libraries and Examples

The inference server client libraries make it easy to communicate with the TensorRT Inference Server from your C++ or Python application. Using these libraries you can send either HTTP or GRPC requests to the server to check status or health and to make inference requests.

Several example applications show how to use the client libraries to perform image classification and to test performance:

  • C++ and Python versions of image_client, an example application that uses the C++ or Python client library to execute image classification models on the TensorRT Inference Server.

  • Python version of grpc_image_client, an example application that is functionally equivalent to image_client but that uses GRPC generated client code to communicate with the inference server (instead of the client library).

  • C++ version of perf_client, an example application that issues a large number of concurrent requests to the inference server to measure latency and throughput for a given model. You can use this to experiment with different model configuration settings for your models.

Getting the Client Libraries and Examples

The provided Makefile.client and Dockerfile.client can be used to build the client libraries and examples. As an alternative to building, you can download the pre-built client libraries and examples from GitHub.

Build Using Dockerfile

To build the libraries and examples, first change directory to the root of the repo and check out the release version of the branch that you want to build (or the master branch if you want to build the under-development version). The branch you use for the client build should match the version of the inference server you are using:

$ git checkout r19.04

Then, issue the following command to build the C++ client library, C++ and Python examples, and a Python wheel file for the Python client library:

$ docker build -t tensorrtserver_client -f Dockerfile.client .

You can optionally add --build-arg "PYVER=<ver>" to set the Python version that you want the Python client library built for. Supported values for <ver> are 2.7 and 3.5, with 3.5 being the default.

After the build completes, the tensorrtserver_client Docker image contains the built client libraries and examples, and is configured with all the dependencies required to run those examples within the container. The easiest way to try the examples described in the following sections is to run the client image with --net=host so that the client examples can access the inference server running in its own container (see Running The Inference Server for more information about running the inference server):

$ docker run -it --rm --net=host tensorrtserver_client

In the tensorrtserver_client image you can find the C++ library and example executables in /workspace/build, and the Python examples in /workspace/src/clients/python. A tar file containing all the library and example binaries and Python scripts is at /workspace/v<version>.clients.tar.gz.

Build Using Makefile

The actual client build is performed by Makefile.client. The build dependencies and requirements are shown in Dockerfile.client. To build without Docker you must first install those dependencies. The Makefile can also be targeted at other OSes and platforms. We welcome any updates that expand the Makefile's functionality and allow the clients to be built on additional platforms.

Download From GitHub

An alternative to running the examples within the tensorrtserver_client container is to instead download the pre-built client libraries and examples from the GitHub release page corresponding to the release you are interested in. The client libraries and examples are found in the “Assets” section of the release page in a tar file named after the version of the release, for example, v1.1.0.clients.tar.gz.

The pre-built libraries and examples can be used on an Ubuntu 16.04 host, or you can install them into the TensorRT Inference Server container to have both the clients and the server in the same container:

$ mkdir clients
$ cd clients
$ wget https://github.com/NVIDIA/tensorrt-inference-server/releases/download/<tarfile_path>
$ tar xzf <tarfile_name>

After untarring, you can find the client example binaries in bin/, the libraries in lib/, and the Python client examples and wheel file in python/.

To use the C++ libraries and examples you must install some dependencies:

$ apt-get update
$ apt-get install curl libcurl3-dev

The Python examples require that you additionally install the wheel file and some other dependencies:

$ apt-get install python3 python3-pip
$ pip3 install --user --upgrade python/tensorrtserver-*.whl numpy pillow

The C++ image_client example uses OpenCV for image manipulation, so for that example you must also install the following:

$ apt-get install libopencv-dev libopencv-core-dev

Image Classification Example Application

The image classification example that uses the C++ client API is available at src/clients/c++/image_client.cc. The Python version of the image classification client is available at src/clients/python/image_client.py.

To use image_client (or image_client.py) you must first have a running inference server that is serving one or more image classification models. The image_client application requires that the model have a single image input and produce a single classification output. If you don’t have a model repository with image classification models see Example Model Repository for instructions on how to create one.

Follow the instructions in Running The Inference Server to launch the server using the model repository. Once the server is running you can use the image_client application to send inference requests to the server. You can specify a single image or a directory holding images. Here we send a request for the resnet50_netdef model from the example model repository for an image from the qa/images directory:

$ image_client -m resnet50_netdef -s INCEPTION qa/images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.723991

The Python version of the application accepts the same command-line arguments:

$ python3 image_client.py -m resnet50_netdef -s INCEPTION qa/images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.778078556061

The image_client and image_client.py applications use the inference server client library to talk to the server. By default, image_client instructs the client library to use the HTTP protocol, but you can use the GRPC protocol by providing the -i flag. You must also use the -u flag to point at the GRPC endpoint on the inference server:

$ image_client -i grpc -u localhost:8001 -m resnet50_netdef -s INCEPTION qa/images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.723991

By default the client prints the most probable classification for the image. Use the -c flag to see more classifications:

$ image_client -m resnet50_netdef -s INCEPTION -c 3 qa/images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.723991
    968 (CUP) = 0.270953
    967 (ESPRESSO) = 0.00115996

The -b flag allows you to send a batch of images for inferencing. The image_client application will form the batch from the image or images that you specified. If the batch is bigger than the number of images then image_client will just repeat the images to fill the batch:

$ image_client -m resnet50_netdef -s INCEPTION -c 3 -b 2 qa/images/mug.jpg
Request 0, batch size 2
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.778078556061
    968 (CUP) = 0.213262036443
    967 (ESPRESSO) = 0.00293014757335
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.778078556061
    968 (CUP) = 0.213262036443
    967 (ESPRESSO) = 0.00293014757335

Provide a directory instead of a single image to perform inferencing on all images in the directory:

$ image_client -m resnet50_netdef -s INCEPTION -c 3 -b 2 qa/images
Request 0, batch size 2
Image '../qa/images/car.jpg':
    817 (SPORTS CAR) = 0.836187
    511 (CONVERTIBLE) = 0.0708251
    751 (RACER) = 0.0597549
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.723991
    968 (CUP) = 0.270953
    967 (ESPRESSO) = 0.00115996
Request 1, batch size 2
Image '../qa/images/vulture.jpeg':
    23 (VULTURE) = 0.992326
    8 (HEN) = 0.00231854
    84 (PEACOCK) = 0.00201471
Image '../qa/images/car.jpg':
    817 (SPORTS CAR) = 0.836187
    511 (CONVERTIBLE) = 0.0708251
    751 (RACER) = 0.0597549

The grpc_image_client.py application, available at src/clients/python/grpc_image_client.py, behaves the same as image_client except that, instead of using the inference server client library, it uses the GRPC-generated client library to communicate with the server.

Performance Example Application

The perf_client example application located at src/clients/c++/perf_client.cc uses the C++ client API to send concurrent requests to the server to measure latency and inferences-per-second under varying client loads.

To create each load level, the perf_client maintains a constant number of outstanding inference requests to the server. The lowest load level has a single outstanding request: when that request completes (that is, when the response is received from the server), the perf_client immediately sends another. The next load level keeps two requests outstanding: when one of them completes, the perf_client immediately sends another so that exactly two inference requests are always in flight. The next load level uses three outstanding requests, and so on.

At each load level the perf_client measures the throughput and latency over a time window, and repeats the measurements until it gets stable results. The perf_client then increases the load level and measures again. This continues until perf_client reaches either the specified maximum latency or the specified maximum concurrency.
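
As a conceptual illustration only (this is not perf_client itself), the following Python sketch shows what it means to hold a fixed number of requests in flight and measure throughput over a time window; send_inference_request is a hypothetical placeholder for a real client call:

import time
from concurrent.futures import ThreadPoolExecutor

def send_inference_request():
    """Hypothetical placeholder for one synchronous inference request."""
    time.sleep(0.01)  # stand-in for request latency

def closed_loop(concurrency, duration_sec=3.0):
    # Keep exactly `concurrency` requests outstanding: each worker issues a
    # new request as soon as its previous one completes.
    deadline = time.monotonic() + duration_sec

    def worker():
        count = 0
        while time.monotonic() < deadline:
            send_inference_request()
            count += 1
        return count

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(worker) for _ in range(concurrency)]
        total = sum(f.result() for f in futures)
    return total / duration_sec  # rough inferences/second at this load level

for concurrency in (1, 2, 3):
    print(concurrency, "outstanding:", round(closed_loop(concurrency)), "infer/sec")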

To use perf_client you must first have a running inference server that is serving one or more models. The perf_client application works with any type of model by sending random data for all input tensors and by reading and ignoring all output tensors. If you don’t have a model repository see Example Model Repository for instructions on how to create one.

Follow the instructions in Running The Inference Server to launch the inference server using the model repository.

The perf_client application has two major modes. In the first mode you specify how many concurrent outstanding inference requests you want and perf_client finds a stable latency and inferences/second for that level of concurrency. Use the -t flag to control concurrency and -v to see verbose output. The following example uses four outstanding inference requests to the inference server:

$ perf_client -m resnet50_netdef -p3000 -t4 -v
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 3000 msec

Request concurrency: 4
  Pass [1] throughput: 207 infer/sec. Avg latency: 19268 usec (std 910 usec)
  Pass [2] throughput: 206 infer/sec. Avg latency: 19362 usec (std 941 usec)
  Pass [3] throughput: 208 infer/sec. Avg latency: 19252 usec (std 841 usec)
  Client:
    Request count: 624
    Throughput: 208 infer/sec
    Avg latency: 19252 usec (standard deviation 841 usec)
    Avg HTTP time: 19224 usec (send 714 usec + response wait 18486 usec + receive 24 usec)
  Server:
    Request count: 749
    Avg request latency: 17886 usec (overhead 55 usec + queue 26 usec + compute 17805 usec)

In the second mode perf_client will generate an inferences/second vs. latency curve by increasing request concurrency until a specific latency limit or concurrency limit is reached. This mode is enabled by using the -d option and -l option to specify the latency limit, and optionally the -c option to specify a maximum concurrency limit. By default the initial concurrency value is one, but the -t option can be used to select a different starting value. The following example measures latency and inferences/second starting with request concurrency one and increasing until request concurrency equals three or average request latency exceeds 50 milliseconds:

$ perf_client -m resnet50_netdef -p3000 -d -l50 -c 3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 3000 msec
  Latency limit: 50 msec
  Concurrency limit: 3 concurrent requests

Request concurrency: 1
  Client:
    Request count: 327
    Throughput: 109 infer/sec
    Avg latency: 9191 usec (standard deviation 822 usec)
    Avg HTTP time: 9188 usec (send/recv 1007 usec + response wait 8181 usec)
  Server:
    Request count: 391
    Avg request latency: 7661 usec (overhead 90 usec + queue 68 usec + compute 7503 usec)

Request concurrency: 2
  Client:
    Request count: 521
    Throughput: 173 infer/sec
    Avg latency: 11523 usec (standard deviation 616 usec)
    Avg HTTP time: 11448 usec (send/recv 711 usec + response wait 10737 usec)
  Server:
    Request count: 629
    Avg request latency: 10018 usec (overhead 70 usec + queue 41 usec + compute 9907 usec)

Request concurrency: 3
  Client:
    Request count: 580
    Throughput: 193 infer/sec
    Avg latency: 15518 usec (standard deviation 635 usec)
    Avg HTTP time: 15487 usec (send/recv 779 usec + response wait 14708 usec)
  Server:
    Request count: 697
    Avg request latency: 14083 usec (overhead 59 usec + queue 30 usec + compute 13994 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, 109 infer/sec, latency 9191 usec
Concurrency: 2, 173 infer/sec, latency 11523 usec
Concurrency: 3, 193 infer/sec, latency 15518 usec

Use the -f option to generate a file containing CSV output of the results:

$ perf_client -m resnet50_netdef -p3000 -d -l50 -c 3 -f perf.csv

You can then import the CSV file into a spreadsheet to help visualize the latency vs inferences/second tradeoff as well as see some components of the latency. Follow these steps:

  • Open this spreadsheet

  • Make a copy from the File menu “Make a copy…”

  • Open the copy

  • Select the A2 cell

  • From the File menu select “Import…”

  • Select “Upload” and upload the file

  • Select “Replace data at selected cell” and then select the “Import data” button
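
As an alternative to the spreadsheet, the CSV file can also be inspected with a few lines of Python; this sketch makes no assumption about the particular column names that perf_client writes:

import csv

# Print each measurement row of the perf_client CSV output, keyed by
# whatever column headers the file contains.
with open("perf.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)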

Client API

The C++ client API exposes a class-based interface for querying server and model status and for performing inference. The commented interface is available at src/clients/c++/request.h and in the API Reference.

The Python client API provides similar capabilities as the C++ API. The commented interface is available at src/clients/python/__init__.py and in the API Reference.
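
As a hedged illustration of the Python API, the sketch below checks server health and requests the status of one model. The class and method names (ProtocolType, ServerHealthContext, ServerStatusContext, is_live, is_ready, get_server_status) are recalled from the v1.x client and should be verified against src/clients/python/__init__.py and the API Reference:

from tensorrtserver.api import ProtocolType, ServerHealthContext, ServerStatusContext

url = "localhost:8000"                    # HTTP endpoint of the inference server
protocol = ProtocolType.from_str("http")  # or "grpc" together with localhost:8001

# Liveness and readiness checks.
health_ctx = ServerHealthContext(url, protocol)
print("live:", health_ctx.is_live(), "ready:", health_ctx.is_ready())

# Status for a single model (here the example resnet50_netdef model).
status_ctx = ServerStatusContext(url, protocol, "resnet50_netdef")
print(status_ctx.get_server_status())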

A simple C++ example application at src/clients/c++/simple_client.cc and a Python version at src/clients/python/simple_client.py demonstrate basic client API usage.

To run the C++ version of the simple example, first build or download it as described in Getting the Client Libraries and Examples and then:

$ simple_client
0 + 1 = 1
0 - 1 = -1
1 + 1 = 2
1 - 1 = 0
2 + 1 = 3
2 - 1 = 1
...
14 - 1 = 13
15 + 1 = 16
15 - 1 = 14

To run the Python version of the simple example, first build or download it as described in Getting the Client Libraries and Examples, install the tensorrtserver wheel, and then:

$ python3 simple_client.py
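
The heart of simple_client.py is creating an InferContext and calling run(). The sketch below approximates that flow from memory for the example "simple" model (two 16-element int32 inputs whose sum and difference are returned); it is not a copy of the shipped example, and the exact signatures should be checked against src/clients/python/__init__.py:

import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

# Inference context for the example "simple" model over HTTP.
ctx = InferContext("localhost:8000", ProtocolType.from_str("http"), "simple")

input0 = np.arange(16, dtype=np.int32)  # 0..15
input1 = np.ones(16, dtype=np.int32)    # all ones

# One request with batch size 1, asking for the raw output tensors.
result = ctx.run({"INPUT0": (input0,), "INPUT1": (input1,)},
                 {"OUTPUT0": InferContext.ResultFormat.RAW,
                  "OUTPUT1": InferContext.ResultFormat.RAW},
                 1)

# result maps each output name to a list with one array per batch entry.
for a, b, s, d in zip(input0, input1, result["OUTPUT0"][0], result["OUTPUT1"][0]):
    print("%d + %d = %d" % (a, b, s))
    print("%d - %d = %d" % (a, b, d))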

String Datatype

Some frameworks support tensors where each element in the tensor is a string (see Datatypes for information on supported datatypes). For the most part, the Client API is identical for string and non-string tensors. One exception is that in the C++ API a string input tensor must be initialized with SetFromString() instead of SetRaw().

String tensors are demonstrated in the C++ example application at src/clients/c++/simple_string_client.cc and a Python version at src/clients/python/simple_string_client.py.
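
In the Python API, string tensors are typically passed as numpy object arrays whose elements are byte strings. The sketch below follows that pattern; the model name and tensor names are placeholders, and the exact conventions should be taken from simple_string_client.py rather than from this sketch:

import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

# "simple_string" and the tensor names below are placeholders; substitute the
# names used by your string-capable model.
ctx = InferContext("localhost:8000", ProtocolType.from_str("http"), "simple_string")

# String tensors are sent as object arrays of byte strings.
input0 = np.array([str(i).encode("utf-8") for i in range(16)], dtype=object)
input1 = np.array([b"1"] * 16, dtype=object)

result = ctx.run({"INPUT0": (input0,), "INPUT1": (input1,)},
                 {"OUTPUT0": InferContext.ResultFormat.RAW},
                 1)
print(result["OUTPUT0"][0])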

Client API for Stateful Models

When performing inference using a stateful model, a client must identify which inference requests belong to the same sequence and also when a sequence starts and ends.

Each sequence is identified with a correlation ID that is provided when the inference context is created (in either the Python or C++ API). It is up to the client to provide a unique correlation ID. For each sequence, the first inference request should be marked as the start of the sequence and the last inference request should be marked as the end of the sequence. Start and end are marked using the flags provided with the RunOptions in the C++ API and with the run() and async_run() methods in the Python API.

The use of the correlation ID and the start and end flags is demonstrated in the C++ example application at src/clients/c++/simple_sequence_client.cc and in a Python version at src/clients/python/simple_sequence_client.py.
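
A hedged sketch of that flow is shown below. It assumes the correlation ID is passed when the InferContext is constructed and that run() accepts sequence start/end flags, as described above; the keyword name, flag constants, model name, and tensor names are assumptions and should be replaced with what simple_sequence_client.py actually uses:

import numpy as np
from tensorrtserver.api import InferContext, InferRequestHeader, ProtocolType

# One context per sequence, identified by a client-chosen correlation ID.
# The correlation_id keyword and the flag constants are assumptions; check
# simple_sequence_client.py for the exact names.
ctx = InferContext("localhost:8001", ProtocolType.from_str("grpc"),
                   "simple_sequence", correlation_id=42)

values = [np.full(1, v, dtype=np.int32) for v in (0, 1, 2, 3)]
for i, value in enumerate(values):
    flags = 0
    if i == 0:
        flags |= InferRequestHeader.FLAG_SEQUENCE_START  # first request in the sequence
    if i == len(values) - 1:
        flags |= InferRequestHeader.FLAG_SEQUENCE_END    # last request in the sequence
    result = ctx.run({"INPUT": (value,)},
                     {"OUTPUT": InferContext.ResultFormat.RAW},
                     1, flags)
    print(result["OUTPUT"][0])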