Testing

CI testing is not currently enabled for the open-source version of the TensorRT Inference Server. We will enable CI testing in a future update.

The qa/ directory contains a set of tests that can be run manually. Before running these tests you must first generate a test model repository containing the models that the tests require.

Generate QA Model Repository

The QA model repository contains some simple models that are used to verify the correctness of TRTIS. To generate the QA model repository:

$ cd qa/common
$ ./gen_qa_model_repository

This will generate the model repository in /tmp/qa_model_repository. The TensorRT models are created for the GPU that CUDA considers device 0 (zero). If your system has multiple GPUs, see the documentation in the script for how to target a specific GPU.
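
A common way to direct the model generation to a particular GPU is to set CUDA_VISIBLE_DEVICES before running the script. This is a general CUDA mechanism and not necessarily the approach the script itself documents, so treat the following as an illustrative sketch:

$ # Illustrative only: expose just the second GPU (device 1) to CUDA for this run
$ CUDA_VISIBLE_DEVICES=1 ./gen_qa_model_repository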

Build QA Container

Next you need to build a QA version of the TRTIS container. This container includes TRTIS, the QA tests, and all the dependencies needed to run them. You must first build the tensorrtserver_build and tensorrtserver containers as described in Building the Server, and then build the QA container:

$ docker build -t tensorrtserver_qa -f Dockerfile.QA .
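
If you have not yet built the prerequisite containers, follow the steps in Building the Server. As a rough sketch only (the --target stage name below is an assumption; that document has the authoritative commands):

$ # Assumed commands; see Building the Server for the authoritative steps
$ docker build -t tensorrtserver_build --target trtserver_build .
$ docker build -t tensorrtserver .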

Run QA Container

Now run the QA container, mounting the QA model repository into the container so that the tests can access it:

$ nvidia-docker run -it --rm -v /tmp/qa_model_repository:/models tensorrtserver_qa

Within the container the QA tests are in /opt/tensorrtserver/qa. To run a test:

$ cd <test directory>
$ ./test.sh
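
For example, assuming the container holds a test directory named L0_infer (the name here is illustrative; list /opt/tensorrtserver/qa to see the tests actually present in your container):

$ # Example test run; the directory name is illustrative
$ cd /opt/tensorrtserver/qa/L0_infer
$ ./test.sh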