Quickstart

To quickly get the TensorRT Inference Server (TRTIS) up and running, follow these steps. After you've seen TRTIS in action, you can revisit the rest of the User Guide to learn more about its features.

First, follow the instructions in Installing Prebuilt Containers to install the TRTIS container.
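As a minimal sketch, assuming you have Docker installed and are logged in to the NVIDIA GPU Cloud (NGC) registry, pulling the container looks like the following, where <xx.yy> stands for the release version you want (see Installing Prebuilt Containers for the exact registry path and available tags):

    $ docker pull nvcr.io/nvidia/tensorrtserver:<xx.yy>-py3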

Next, use the Example Model Repository section to create an example model repository containing a couple of models that you can serve with TRTIS.
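As a rough sketch, assuming you have cloned the TRTIS source repository, the example models are downloaded by a fetch script bundled with the documentation examples (the script name and path shown here reflect the repository layout and may differ between releases; see Example Model Repository for the authoritative steps):

    $ git clone https://github.com/NVIDIA/tensorrt-inference-server.git
    $ cd tensorrt-inference-server/docs/examples
    $ ./fetch_models.sh

After the script completes, the populated model repository directory is what you will mount into the server container in the next step.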

Now that you have a model repository, follow the instructions in Running The Inference Server to start TRTIS. Use the server’s Status endpoint to make sure the server and the models are ready for inferencing.
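A minimal sketch of both steps follows, assuming the default ports (8000 for HTTP, 8001 for gRPC, 8002 for metrics) and the --model-store flag used by the trtserver executable; replace <xx.yy> with your container version and /full/path/to/model_repository with the absolute path to the repository you created above:

    $ nvidia-docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 \
        -v/full/path/to/model_repository:/models \
        nvcr.io/nvidia/tensorrtserver:<xx.yy>-py3 \
        trtserver --model-store=/models

Once the server is running, query the Status endpoint from another shell:

    $ curl localhost:8000/api/status

The returned status should show each model in the repository in the READY state before you attempt any inference requests.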

Finally, build and run the example image-client application to perform image classification using TRTIS.
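Once the client examples are built (see the client documentation for build instructions), a typical invocation looks like the following. The model name (resnet50_netdef), the -s INCEPTION scaling option, and the sample image path are illustrative values drawn from the example model repository setup and may differ in your environment:

    $ image_client -m resnet50_netdef -s INCEPTION images/mug.jpg

The client preprocesses the image, sends an inference request to TRTIS, and prints the top classification result returned by the model.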