Quickstart

The TensorRT Inference Server is available in two ways:

  • As a pre-built Docker container available from NGC.

  • As buildable source code located on GitHub.

Prerequisites

Regardless of which method you choose (starting with a pre-built container from NGC or building from source), you must perform the following prerequisite steps:

  • Ensure you have access and are logged into NGC. For step-by-step instructions, see the NGC Getting Started Guide.

  • Install Docker and nvidia-docker. For DGX users, see Preparing to use NVIDIA Containers. For users other than DGX, see the nvidia-docker installation documentation.

  • Clone the TensorRT Inference Server GitHub repo. Even if you choose to get the pre-built inference server from NGC, you need the GitHub repo for the example model repository and to build the example applications. Go to https://github.com/NVIDIA/tensorrt-inference-server, select the r<xx.yy> release branch that you are using, and then select the Clone or download drop-down button (a command-line alternative is shown below).

  • Create a model repository containing one or more models that you want the inference server to serve. An example model repository is included in the docs/examples/model_repository directory of the GitHub repo (its layout is sketched below). Before using the repository, you must fetch any missing model definition files from their public model zoos via the provided docs/examples/fetch_models.sh script:

    $ cd docs/examples
    $ ./fetch_models.sh
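
If you prefer the command line to the GitHub web interface, the repo can be cloned and the release branch checked out directly; a minimal sketch, assuming the r19.05 release branch (substitute the branch that matches your release):

$ git clone -b r19.05 https://github.com/NVIDIA/tensorrt-inference-server.git
$ cd tensorrt-inference-server

For reference, a model repository contains one directory per model; each model directory holds a config.pbtxt and one or more numbered version subdirectories containing the model files. The sketch below uses the example resnet50_netdef model; the labels file name is illustrative, and the files inside the version directory vary by platform (a Caffe2 netdef model typically provides model.netdef and init_model.netdef):

<model-repository-path>/
  resnet50_netdef/
    config.pbtxt
    resnet50_labels.txt
    1/
      model.netdef
      init_model.netdef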
    

Using A Prebuilt Docker Container

Make sure you log into NGC as described in Prerequisites before attempting the steps in this section. Use docker pull to get the TensorRT Inference Server container from NGC:

$ docker pull nvcr.io/nvidia/tensorrtserver:<xx.yy>-py3
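
For example, to pull the 19.05 release (shown only as an illustration; use the release that matches the branch you intend to work with):

$ docker pull nvcr.io/nvidia/tensorrtserver:19.05-py3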

In general, <xx.yy> is the version of the inference server that you want to pull. Once you have the container, follow these steps to run the server and the example client applications.

  1. Run the inference server.

  2. Verify that the server is running correctly.

  3. Get the example client applications.

  4. Run the image classification example.

Building From Source Code

Make sure you complete the steps in Prerequisites before attempting to build the inference server. To build the inference server from source, change to the root directory of the GitHub repo and check out the release version of the branch that you want to build (or the master branch if you want to build the under-development version):

$ git checkout r19.05

Then use docker to build:

$ docker build --pull -t tensorrtserver .
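
To confirm that the image was created, you can list it with docker images (an optional sanity check, not required by the build):

$ docker images tensorrtserver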

After the build completes, follow these steps to run the server and the example client applications.

  1. Run the inference server.

  2. Verify that the server is running correctly.

  3. Get the example client applications.

  4. Run the image classification example.

Run TensorRT Inference Server

Assuming the example model repository is available in /full/path/to/example/model/repository, use the following command to run the inference server container:

$ nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v/full/path/to/example/model/repository:/models <docker image> trtserver --model-store=/models

Where <docker image> is nvcr.io/nvidia/tensorrtserver:<xx.yy>-py3 if you pulled the inference server container from NGC, or is tensorrtserver if you built the inference server from source.
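
The three -p flags expose the inference server's default endpoints: port 8000 for HTTP requests, port 8001 for gRPC requests, and port 8002 for Prometheus metrics.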

For more information, see Running The Inference Server.

Verify Inference Server Is Running Correctly

Use the server’s Status endpoint to verify that the server and the models are ready for inference. From the host system, use curl to access the HTTP endpoint to request the server status. For example:

$ curl localhost:8000/api/status
id: "inference:0"
version: "0.6.0"
uptime_ns: 23322988571
model_status {
  key: "resnet50_netdef"
  value {
    config {
      name: "resnet50_netdef"
      platform: "caffe2_netdef"
    }
    ...
    version_status {
      key: 1
      value {
        ready_state: MODEL_READY
      }
    }
  }
}
ready_state: SERVER_READY

The ready_state field should return SERVER_READY to indicate that the inference server is online, that models are properly loaded, and that the server is ready to receive inference requests.
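
The server also provides simple health endpoints that are convenient for scripting; a minimal readiness check, assuming the default HTTP port and that your release includes the /api/health/ready endpoint:

$ curl -s -o /dev/null -w '%{http_code}\n' localhost:8000/api/health/ready

An HTTP status code of 200 indicates the server is ready to accept inference requests.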

For more information, see Checking Inference Server Status.

Getting The Client Examples

The provided Dockerfile.client can be used to build the client libraries and examples. First, change to the root directory of the repo and check out the release version of the branch that you want to build (or the master branch if you want to build the under-development version). The branch you use for the client build should match the version of the inference server you are using:

$ git checkout r19.05

Then use docker to build the C++ client library, C++ and Python examples, and a Python wheel file for the Python client library:

$ docker build -t tensorrtserver_client -f Dockerfile.client .

After the build completes, the tensorrtserver_client Docker image contains the built client libraries and examples. Run the client image with --net=host so that the client examples can reach the inference server running in its own container on the same host:

$ docker run -it --rm --net=host tensorrtserver_client
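
If you want to classify your own images rather than those bundled in the container, you can mount a host directory into the client container; a sketch, where /path/to/images is a hypothetical host directory and /tmp/host_images is an arbitrary mount point inside the container:

$ docker run -it --rm --net=host -v/path/to/images:/tmp/host_images tensorrtserver_client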

It is also possible to build the client examples without Docker, and for some platforms pre-compiled client examples are available. For more information, see Getting the Client Libraries and Examples.

Running The Image Classification Example

From within the tensorrtserver_client image, run the example image_client application to perform image classification using the example resnet50_netdef model from the example model repository.

To send an inference request to the resnet50_netdef (Caffe2) model for an image from the qa/images directory:

$ /tmp/client/bin/image_client -m resnet50_netdef -s INCEPTION images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.723991

The Python version of the application accepts the same command-line arguments:

$ python3 /tmp/client/python/image_client.py -m resnet50_netdef -s INCEPTION images/mug.jpg
Request 0, batch size 1
Image '../qa/images/mug.jpg':
    504 (COFFEE MUG) = 0.778078556061
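
Both clients accept additional command-line options; for example, the -c flag requests the top-N classifications instead of only the best match (a sketch, assuming the same model and image as above):

$ /tmp/client/bin/image_client -m resnet50_netdef -s INCEPTION -c 3 images/mug.jpg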

For more information, see Image Classification Example Application.