..
  # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
  #
  # Redistribution and use in source and binary forms, with or without
  # modification, are permitted provided that the following conditions
  # are met:
  #  * Redistributions of source code must retain the above copyright
  #    notice, this list of conditions and the following disclaimer.
  #  * Redistributions in binary form must reproduce the above copyright
  #    notice, this list of conditions and the following disclaimer in the
  #    documentation and/or other materials provided with the distribution.
  #  * Neither the name of NVIDIA CORPORATION nor the names of its
  #    contributors may be used to endorse or promote products derived
  #    from this software without specific prior written permission.
  #
  # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
  # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
  # PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
  # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
  # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
  # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
  # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
  # OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
  # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Testing
=======

Currently there is no CI testing enabled for the open-source version
of the TensorRT Inference Server. We will enable CI testing in a
future update.

There is a set of tests in the qa/ directory that can be run manually
to provide some testing. Before running these tests, you must first
generate a test model repository containing the models needed by the
tests.

Generate QA Model Repository
----------------------------

The QA model repository contains some simple models that are used to
verify the correctness of TRTIS. To generate the QA model
repository::

  $ cd qa/common
  $ ./gen_qa_model_repository

This will generate the model repository in /tmp/qa_model_repository.
The TensorRT models will be created for the GPU on the system that
CUDA considers device 0 (zero). If you have multiple GPUs on your
system, see the documentation in the script for how to target a
specific GPU (a hedged sketch also appears at the end of this
document).

Build QA Container
------------------

Next you need to build a QA version of the TRTIS container. This
container will contain TRTIS, the QA tests, and all the dependencies
needed to run the QA tests. You must first build the
tensorrtserver_build and tensorrtserver containers as described in
:ref:`section-building-the-server`, and then build the QA container::

  $ docker build -t tensorrtserver_qa -f Dockerfile.QA .

Run QA Container
----------------

Now run the QA container and mount the QA model repository into the
container so that the tests can access it::

  $ nvidia-docker run -it --rm -v/tmp/qa_model_repository:/models tensorrtserver_qa

Within the container the QA tests are in /opt/tensorrtserver/qa. To
run a test::

  $ cd <test directory>
  $ ./test.sh
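
For example, a session might look like the following. The directory
name L0_infer is illustrative only; list the contents of
/opt/tensorrtserver/qa inside the container to see which test
directories are actually present::

  $ ls /opt/tensorrtserver/qa
  $ cd /opt/tensorrtserver/qa/L0_infer
  $ ./test.sh

Each test directory provides its own test.sh, so the same pattern
applies to any of the tests.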
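
Finally, a note on GPU selection when generating the QA model
repository: the authoritative instructions for targeting a specific
GPU are in the gen_qa_model_repository script itself, as noted above.
One common convention is the standard CUDA_VISIBLE_DEVICES
environment variable, which restricts the GPUs that CUDA can see.
Whether gen_qa_model_repository passes this variable through is an
assumption here, so verify against the script before relying on it::

  $ cd qa/common
  $ # Assumption: the script honors CUDA_VISIBLE_DEVICES; check the
  $ # script's own documentation if models are still built on device 0.
  $ CUDA_VISIBLE_DEVICES=1 ./gen_qa_model_repository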