NVIDIA TAO Toolkit v3.0
NVIDIA TAO Release tlt.30

TLT Quick Start Guide

This page provides a quick start guide for installing and running TLT.

Hardware

The following system configuration is recommended to achieve reasonable training performance with TLT and the supported models provided:

  • 32 GB system RAM

  • 32 GB of GPU RAM

  • 8 core CPU

  • 1 NVIDIA GPU

  • 100 GB of SSD space

TLT is supported on A100, V100 and RTX 30x0 GPUs.

Software Requirements

Software                    Version
------------------------    --------
Ubuntu 18.04 LTS            18.04
python                      >=3.6.9
docker-ce                   >19.03.5
docker-API                  1.40
nvidia-container-toolkit    >1.3.0-1
nvidia-container-runtime    3.4.0-1
nvidia-docker2              2.5.0-1
nvidia-driver               >455
python-pip                  >21.06
nvidia-pyindex

Installing the Pre-requisites

The tlt launcher is a Python3-only package, capable of running on Python 3.6.9 or 3.7.
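
Before proceeding, you can quickly confirm that a suitable Python 3 interpreter and pip are available (an optional sanity check, not part of the official steps):

python3 --version   # should report 3.6.9 or later
pip3 --version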

  1. Install docker-ce by following the official instructions.

    Once you have installed docker-ce, follow the post-installation steps to ensure that docker can be run without sudo (see the sketch after this list).

  2. Install nvidia-container-toolkit by following the install-guide.

  3. Get an NGC account and API key:

    1. Go to NGC and click the Transfer Learning Toolkit container in the Catalog tab. This message is displayed: “Sign in to access the PULL feature of this repository”.

    2. Enter your Email address and click Next, or click Create an Account.

    3. Choose your organization when prompted for Organization/Team.

    4. Click Sign In.

  4. Log in to the NGC docker registry (nvcr.io) using the command docker login nvcr.io and enter the following credentials:

    Username: "$oauthtoken"
    Password: "YOUR_NGC_API_KEY"


    where YOUR_NGC_API_KEY corresponds to the key you generated from step 3.
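
As a point of reference, the Docker post-installation step from step 1 and the registry login from step 4 typically look like the following (the group commands come from the official Docker post-installation guide; the CUDA image tag is only an example used to verify GPU access):

# Allow running docker without sudo (official Docker post-installation steps)
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

# Optional check that the NVIDIA container runtime can see the GPU
# (the image tag is an example; any CUDA base image works)
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

# Log in to the NGC registry with the credentials from step 4
docker login nvcr.io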

Note

DeepStream 5.0 - NVIDIA SDK for IVA inference is recommended.


Installing TLT

The Transfer Learning Toolkit (TLT) is a Python pip package that is hosted on the NVIDIA PyIndex. The package uses the Docker REST API under the hood to interact with the NGC Docker registry to pull and instantiate the underlying docker containers. You must have an NGC account and an API key associated with your account. See the Installation Prerequisites section for details on creating an NGC account and obtaining an API key.

  1. Create a new virtualenv using virtualenvwrapper.

    You may follow the instructions in this link to set up a Python virtualenv using a virtualenvwrapper.

    Once you have followed the instructions to install virtualenv and virtualenvwrapper, set the Python version in the virtualenv. This can be done in either of the following ways:

    • Defining the environment variable VIRTUALENVWRAPPER_PYTHON. This variable should point to the path where the python3 binary is installed on your local machine. You can also add it to your .bashrc or .bash_profile to set your Python virtualenv by default.


      export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3


    • Setting the path to the python3 binary when creating your virtualenv using virtualenvwrapper:


      mkvirtualenv launcher -p /path/to/your/python3


    Once you have logged into the virtualenv, the command prompt should show the name of your virtual environment:

    (launcher) py-3.6.9 desktop:

    When you are done with your session, you may deactivate your virtualenv using the deactivate command:

    deactivate


    You may re-instantiate this virtualenv later using the workon command:

    workon launcher

  2. Install the tlt launcher Python package called nvidia-tlt.

    pip3 install nvidia-pyindex
    pip3 install nvidia-tlt

    Note

    The nvidia-tlt package is hosted on the nvidia-pyindex, which must be installed as a prerequisite before installing nvidia-tlt.

    If you have an older version of the nvidia-tlt launcher installed, you may upgrade to the latest version by running the following command:


    pip3 install --upgrade nvidia-tlt

  3. Invoke the entrypoints using the tlt command.


    tlt --help

    The sample output of the above command is:

    usage: tlt [-h]
               {list,stop,info,augment,bpnet,classification,detectnet_v2,dssd,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,
                heartratenet,intent_slot_classification,lprnet,mask_rcnn,punctuation_and_capitalization,question_answering,
                retinanet,speech_to_text,ssd,text_classification,tlt-converter,token_classification,unet,yolo_v3,yolo_v4}
               ...

    Launcher for TLT

    optional arguments:
      -h, --help            show this help message and exit

    tasks:
      {list,stop,info,augment,bpnet,classification,detectnet_v2,dssd,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,heartratenet,
       intent_slot_classification,lprnet,mask_rcnn,punctuation_and_capitalization,question_answering,retinanet,speech_to_text,
       ssd,text_classification,tlt-converter,token_classification,unet,yolo_v3,yolo_v4}

    Note that under tasks you can see all the launcher-invokable tasks. The following tasks are specific to the TLT launcher itself and help with handling the launched commands (see the sketch after this list):

    • list

    • stop

    • info
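
    As a reference, a minimal sketch of these housekeeping tasks (the exact options may vary across launcher versions; run tlt <task> --help to confirm):

    tlt list         # show the TLT task containers currently running on this machine
    tlt info         # print information about the launcher and the underlying docker images
    tlt stop --help  # view the options for stopping a running task container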

Running the Transfer Learning Toolkit

Information about the TLT launcher CLI and details on using it to run TLT-supported tasks are captured in the TLT Launcher section.
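
As an illustration, tasks are typically invoked through the launcher in the form tlt <task> <sub-task> <args>. A minimal sketch for detectnet_v2 is shown below; the spec path, results directory, and $KEY value are placeholder assumptions, and the exact arguments for each task are documented in that section:

# List the sub-tasks and arguments supported by a given network, e.g. detectnet_v2
tlt detectnet_v2 --help

# A typical training invocation; the paths refer to directories mounted into the
# task container (-e experiment spec, -r results directory, -k encryption key)
tlt detectnet_v2 train -e /workspace/specs/train.txt -r /workspace/results -k $KEY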

Use the examples

Example Jupyter notebooks for all the tasks that are supported in TLT are available in NGC resources. TLT provides sample workflows for Computer Vision and Conversational AI.

Computer Vision

All the samples for the supported computer vision tasks are hosted on NGC under the TLT Computer Vision Samples. To run the available examples, download this sample resource by using the following commands:

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip
unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && rm -rf tlt_cv_samples_v1.1.0.zip && cd ./tlt_cv_samples_v1.1.0

Conversational AI

The TLT Conversational AI package provides several end-to-end sample workflows for training conversational AI models using TLT and subsequently deploying them to Riva. You can find these samples at:

Conversational AI Task             Jupyter Notebooks
Speech to Text                     Speech to Text Notebook
Speech to Text Citrinet            Speech to Text Citrinet Notebook
Question Answering                 Question Answering Notebook
Text Classification                Text Classification Notebook
Token Classification               Token Classification Notebook
Punctuation and Capitalization     Punctuation Capitalization Notebook
Intent and Slot Classification     Intent Slot Classification Notebook

You can download these resources by using the NGC CLI command available on the NGC resource page. Once you download the respective tutorial resource, you may instantiate the Jupyter notebook server.
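
As a reference, a sketch of pulling one of these tutorial resources with the NGC CLI (the organization, resource name, and version are placeholders; the exact values are listed on the corresponding NGC resource page):

# Download the tutorial resource; the CLI creates a sub-directory named after the
# resource and version under the destination path
ngc registry resource download-version <org>/<resource_name>:<version> --dest .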

pip3 install jupyter
jupyter notebook --ip 0.0.0.0 --allow-root --port 8888

Copy and paste the link produced from this command into your browser to access the notebook. The /workspace/examples folder will contain a demo notebook. Feel free to use any free port available to host the notebook if port 8888 is unavailable.

Downloading the Models

The Transfer Learning Toolkit Docker gives you access to a repository of pretrained models that can serve as a starting point when training deep neural networks. These models are hosted on NGC. To download the models, download and install the NGC CLI. More information about the NGC Catalog CLI is available here. Once you have installed the CLI, follow the instructions below to configure the NGC CLI and download the models.

Configure the NGC API key

Using the NGC API Key obtained in the Installation Prerequisites section, configure the NGC CLI by executing this command and following the prompts:


ngc config set


Get a list of models

Use this command to get a list of models that are hosted in the NGC model registry:


ngc registry model list <model_glob_string>

For the computer vision models, here is an example of using this command:


ngc registry model list nvidia/tlt_pretrained_*

Note

All our classification models have names based on this template: nvidia/tlt_pretrained_classification:<template>.
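
For instance, to see every available variant of the pretrained classification model, the glob can also be applied to the version field (a hedged example; adjust the pattern to any model family):

ngc registry model list nvidia/tlt_pretrained_classification:*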

To view all the conversational AI models, you may use the following command:


ngc registry model list nvidia/tlt-riva/*


Download a model

Use this command to download the model you have chosen from the NGC model registry:


ngc registry model download-version <ORG/model_name:version> -dest <path_to_download_dir>

For example, use this command to download the resnet 18 classification model to the $USER_EXPERIMENT_DIR directory:


ngc registry model download-version nvidia/tlt_pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Downloaded 82.41 MB in 9s, Download speed: 9.14 MB/s
----------------------------------------------------
Transfer id: tlt_iva_classification_resnet18_v1 Download status: Completed.
Downloaded local path: /workspace/tlt-experiments/pretrained_resnet18/tlt_resnet18_classification_v1
Total files downloaded: 2
Total downloaded size: 82.41 MB
Started at: 2019-07-16 01:29:53.028400
Completed at: 2019-07-16 01:30:02.053016
Duration taken: 9s seconds

Install the Python Virtual Environment

Follow these instructions to install the virtualenv with Python 3.6.9. Once virtualenvwrapper is set up, set the version of python to be used in the virtual env by using the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running the following:


export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

where x >= 6 and <= 8.

Use the following command to instantiate a virtual environment:


mkvirtualenv launcher -p /path/to/bin/python3.x

where x >= 6 and <= 8.

Download Jupyter Notebook

TLT provides sample notebooks that walk through a prescribed TLT workflow. These samples are hosted on NGC as a resource and can be downloaded from NGC by executing the commands below.

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip
unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && rm -rf tlt_cv_samples_v1.1.0.zip && cd ./tlt_cv_samples_v1.1.0

The models and their corresponding sample notebooks are listed below.

Model Name                     Jupyter Notebook
VehicleTypeNet                 classification/classification.ipynb
VehicleMakeNet                 classification/classification.ipynb
TrafficCamNet                  detectnet_v2/detectnet_v2.ipynb
PeopleSegNet                   mask_rcnn/mask_rcnn.ipynb
PeopleNet                      detectnet_v2/detectnet_v2.ipynb
License Plate Recognition      lprnet/lprnet.ipynb
License Plate Detection        detectnet_v2/detectnet_v2.ipynb
Heart Rate Estimation          heartratenet/heartratenet.ipynb
Gesture Recognition            gesturenet/gesturenet.ipynb
Gaze Estimation                gazenet/gazenet.ipynb
Facial Landmark                fpenet/fpenet.ipynb
FaceDetectIR                   detectnet_v2/detectnet_v2.ipynb
FaceDetect                     facenet/facenet.ipynb
Emotion Recognition            emotionnet/emotionnet.ipynb
DashCamNet                     detectnet_v2/detectnet_v2.ipynb
BodyPoseNet                    bpnet/bpnet.ipynb

Open model architecture:

Open model architecture        Jupyter notebook
DetectNet_v2                   detectnet_v2/detectnet_v2.ipynb
FasterRCNN                     faster_rcnn/faster_rcnn.ipynb
YOLOV3                         yolo_v3/yolo_v3.ipynb
YOLOV4                         yolo_v4/yolo_v4.ipynb
SSD                            ssd/ssd.ipynb
DSSD                           dssd/dssd.ipynb
RetinaNet                      retinanet/retinanet.ipynb
MaskRCNN                       mask_rcnn/mask_rcnn.ipynb
UNET                           unet/unet_isbi.ipynb
Classification                 classification/classification.ipynb

Start Jupyter Notebook

Once the notebook samples are downloaded, you may start the notebook using the following command:


jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

Open an internet browser on localhost and navigate to the following URL:


http://0.0.0.0:8888

Note

If you want to run the notebook from a remote server, please follow these steps.
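
One common approach (a sketch only; the linked steps are authoritative) is to tunnel the notebook port over SSH from your local machine:

# Forward local port 8888 to port 8888 on the remote server (hostname is a placeholder)
ssh -L 8888:localhost:8888 <user>@<remote_server>

# Then open http://localhost:8888 in a local browser and paste the token printed
# by the jupyter notebook command running on the server.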


1. Train the Model

Follow the Notebook instructions to train the model.

You may now deploy these models to DeepStream by following the instructions here.
