Step #2: Start the Triton Inference Server

Important

Before you begin, shut down the kernel in JupyterLab, which you can reach via the Training Jupyter Notebook link in the left navigation pane.

shut-down-kernel.png

Open the VM Console using the VM Console link in the left navigation pane, then start the Triton Inference Server by running the startup script below.

sh ~/triton-startup.sh

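The lab provides this script ready-made, and its exact contents are not shown here. As a rough, hypothetical sketch, a typical Triton launch looks like the following; the container image tag, port mappings, and model-repository path are assumptions, not the actual contents of `~/triton-startup.sh`:

```shell
# Hypothetical sketch of what a Triton startup script might contain.
# The image tag, ports, and model-repository path are assumptions.
# Default Triton ports: 8000 = HTTP, 8001 = gRPC, 8002 = metrics.
docker run --rm --gpus=all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v "$HOME/models:/models" \
  nvcr.io/nvidia/tritonserver:22.12-py3 \
  tritonserver --model-repository=/models
```

The `--model-repository` flag points Triton at a directory of model subdirectories; on startup, the server logs a table listing each model it found and its load status.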
Within the console output, notice that the Triton model repository contains the mobilenet_classifier model saved from the Jupyter Notebook, and that its status is Ready.

mobilenet-classifier.png
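Beyond reading the startup log, you can confirm readiness from the VM console with Triton's standard HTTP/REST health endpoints, assuming the server is listening on its default HTTP port 8000:

```shell
# Server-level readiness: prints HTTP 200 when Triton is ready.
curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/health/ready

# Model-level readiness for the model saved from the notebook:
# prints 200 once mobilenet_classifier is loaded and ready.
curl -s -o /dev/null -w "%{http_code}\n" \
  localhost:8000/v2/models/mobilenet_classifier/ready
```

A non-200 response (or a connection error) means the server or model is not yet ready, which is useful when scripting against the server rather than watching the console output.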

Important

Below are the credentials for the VM in case sudo access is required.

  • Username: temp

  • Password: launchpad!

© Copyright 2022-2023, NVIDIA. Last updated on Jan 10, 2023.