Step #3: Client Application

Now that the Triton Inference Server is up and running, the final step of this lab is to write a client application that sends inference requests to the server and receives sentiment analysis results for the submitted text. In this part of the lab you will work through the Client Jupyter Notebook, familiarize yourself with the Triton client libraries reviewed in the Triton Inference Server Overview, and see how inference requests are sent to the Triton Inference Server.

Open and run through the Sentiment Analysis Client Jupyter Notebook to send inference requests to the Triton Inference Server started in Step #2: Start the Triton Inference Server.
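For orientation before opening the notebook, here is a minimal sketch of an HTTP inference request using the tritonclient Python library. The model name (sentiment_analysis), tensor names (TEXT, SENTIMENT), shape, and datatype below are illustrative assumptions; the notebook uses the actual names defined in the deployed model's configuration.

```python
# Minimal sketch of a Triton HTTP inference request.
# Model name, tensor names, shape, and datatype are hypothetical;
# substitute the values from your model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton Inference Server started in Step #2
# (default HTTP port is 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Package the input text as a BYTES tensor of shape [1, 1]
# (assumes the model accepts raw text, e.g. via a preprocessing ensemble).
text = np.array([["This lab was a great introduction to Triton!"]], dtype=object)
inputs = [httpclient.InferInput("TEXT", list(text.shape), "BYTES")]
inputs[0].set_data_from_numpy(text)

# Request the (hypothetical) sentiment output tensor.
outputs = [httpclient.InferRequestedOutput("SENTIMENT")]

# Send the inference request and read back the result as a NumPy array.
response = client.infer(model_name="sentiment_analysis", inputs=inputs, outputs=outputs)
print(response.as_numpy("SENTIMENT"))
```

The notebook follows the same pattern: create a client pointed at the server, build InferInput objects from your data, and call infer() to retrieve the model's outputs.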
