NVIDIA TAO v5.5.0

Beginners

For those who are new to AI, deep neural network training can be daunting. To help with this, TAO provides an easy-to-use command-line interface, the TAO CLI Launcher, for interacting with and running TAO workflows.

The TAO Launcher is a lightweight, Python-based command-line interface that acts as a front end for TAO containers built on top of PyTorch, TensorFlow, and TensorRT. The CLI abstracts away which network actions are implemented in which container: when you run a command, the appropriate container is launched automatically based on the model you plan to use.
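For example, once the launcher is installed (as described below), a single command like the following starts a training run, and the launcher pulls and starts the container that implements that network. This is an illustrative sketch only; the network name and spec-file path are placeholders.

# Illustrative sketch: the launcher resolves "model dino train" to the TAO
# container that implements DINO and runs the task inside it.
# Replace <path/to/experiment_spec.yaml> with your own spec file.
tao model dino train -e <path/to/experiment_spec.yaml>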

After you have downloaded the getting started resource using the instructions for the package content, you can get started with the launcher as follows:

  1. Installing the prerequisite software required to run the CLI

  2. Setting up your Python environment

  3. Installing the TAO launcher

  4. Running a sample launcher TAO tutorial notebook

The TAO Launcher is strictly a Python3 package, capable of running on Python versions >= 3.10.
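You can confirm that the interpreter you plan to use meets this requirement before proceeding:

# Print the Python version of the default python3 interpreter.
python3 --version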

  1. Install docker-ce by following the official instructions.

    Note

    After you have installed docker-ce, follow the post-installation steps to ensure that Docker can be run without sudo. The launcher requires that the docker CLI be available without superuser (sudo) privileges. A sketch of these post-installation steps is provided after this list.

  2. Install nvidia-container-toolkit by following the install-guide.

  3. Get an NGC account and API key:

    1. Go to NGC and click the TAO container in the Catalog tab. This message will be displayed: “Sign in to access the PULL feature of this repository”.

    2. Enter your Email address and click Next, or click Create an Account.

    3. Choose your organization when prompted for Organization/Team.

    4. Click Sign In.

  4. Log in to the NGC Docker registry (nvcr.io) using the command:


    docker login nvcr.io

    Then, enter the following credentials:


    a. Username: "$oauthtoken"
    b. Password: "YOUR_NGC_API_KEY"


    where YOUR_NGC_API_KEY corresponds to the API key you generated in step 3.
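If Docker still requires sudo on your machine, the commands below are a sketch of the official post-installation steps mentioned in the note under step 1; they assume the default docker group name.

sudo groupadd docker            # no-op if the group already exists
sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # apply the new group membership in this shell
docker run hello-world          # verify that docker runs without sudo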

Note

DeepStream 7.0, the NVIDIA SDK for streaming inference, is the recommended way to deploy TAO trained and fine-tuned models.

We recommend setting up a Python environment using Miniconda. The following instructions show how to set up a Python conda environment.

  1. Follow the instructions in this link to set up a Conda environment using Miniconda.

  2. After you have installed miniconda, create a new environment and set the Python version to 3.10.


    conda create -n launcher python=3.10


  3. Activate the conda environment that you have just created.


    conda activate launcher


  4. Verify that the command prompt shows the name of your Conda environment.


    (launcher) py-3.10 desktop:


  5. Set the notebooks to have the same Python kernel as the virtual environment:


    python -m pip install ipykernel
    python -m ipykernel install --user --name launcher --display-name "launcher"
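
    To confirm that the kernel was registered, you can list the kernels Jupyter knows about; an entry named launcher should appear. This check assumes the jupyter command installed alongside ipykernel is on your PATH.

    jupyter kernelspec list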


When you are done with your session, you can deactivate your conda environment using the deactivate command:


conda deactivate

You can reactivate this conda environment later using the following command:


conda activate launcher


The getting started package includes a quick start script that installs the TAO launcher CLI package and its Python dependencies, along with the NGC CLI, which is required to pull models and interact with the model, Docker, and resource registries on NVIDIA GPU Cloud (NGC).

  1. Install the CLI launcher and the NGC CLI using the quick start script downloaded with the getting_started NGC package from the package content.


    bash setup/quickstart_launcher.sh --install

  2. Update the launcher to the latest version of TAO by running the following command:


    bash setup/quickstart_launcher.sh --upgrade

  3. Invoke the entry points using the tao command:


    tao --help


    The following is sample output of the above command:


    usage: tao [-h] {list,stop,info,dataset,deploy,model} ...

    Launcher for TAO

    optional arguments:
      -h, --help            show this help message and exit

    task_groups:
      {list,stop,info,dataset,deploy,model}

    Under task_groups, you can see all the tasks that the launcher can invoke. The following tasks handle commands launched through the TAO Launcher (a short usage sketch follows this list):

    • list

    • stop

    • info
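
    As a quick sketch of these housekeeping tasks, the commands below list the TAO containers currently managed by the launcher and stop one of them. The container ID is a placeholder, and the exact flags may vary by version, so check tao stop --help.

    tao list                        # show containers started by the launcher
    tao stop --container_id <id>    # stop a specific container (placeholder ID)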

Note

When you install the TAO Launcher into the native Python3 environment of your host machine, rather than the recommended virtual environment, you may get an error stating that the tao binary wasn't found. This means the path to the tao binary installed by pip has not been added to the PATH environment variable on your machine. To fix this, run the following command:


export PATH=$PATH:~/.local/bin
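
If you want this change to persist across shell sessions, you can append the same line to your shell startup file; the snippet below assumes bash and ~/.bashrc.

# Persist the PATH change for future bash sessions.
echo 'export PATH=$PATH:~/.local/bin' >> ~/.bashrc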

Now that you have installed the TAO launcher and the tutorial notebooks from GitHub, you can get started by running a sample tutorial notebook.

The following command allows you to invoke a sample notebook:


jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

Open a web browser on the machine running the notebook server and navigate to the following URL:


http://0.0.0.0:8888

Note

If you want to run the notebook from a remote server, make the notebook port reachable from your local machine first; one common approach is sketched below.
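
A minimal sketch, assuming you reach the server over SSH: forward the notebook port to your workstation, then browse to http://localhost:8888. The user and host names below are placeholders.

# Forward the remote Jupyter port 8888 to localhost:8888 on your workstation.
ssh -L 8888:localhost:8888 <user>@<remote-server>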

Execute the cells in the notebook to train a model using TAO.

You may choose any one of the sample notebooks below to get started with TAO. As a starting point, NVIDIA recommends running the GroundingDINO open vocabulary detection notebook.

The tables below provide a list of the TAO tutorial notebooks.

Purpose-Built Pre-Trained Models

The following is a list of purpose-built pre-trained models mapped with their corresponding samples.

Model Name | Jupyter Notebook | Description
ActionRecognitionNet | notebooks/tao_launcher_starter_kit/actionrecognitionnet/actionrecognitionnet.ipynb | Sample notebook to train and optimize an Action Recognition model on the HMDB51 dataset
PoseClassificationNet | notebooks/tao_launcher_starter_kit/pose_classification_net/poseclassificationnet.ipynb | Sample notebook to train and optimize a Pose Classification network on the Market-1501 dataset
PointPillars | notebooks/tao_launcher_starter_kit/pointpillars/pointpillars.ipynb | Sample notebook to train, prune, and optimize a 3-D Object Detection model on the KITTI point cloud dataset
ReIdentificationNet | notebooks/tao_launcher_starter_kit/re_identification_net/reidentificationnet_resnet.ipynb | Sample notebook to train and optimize a Re-Identification network on the Market-1501 dataset
ReIdentificationNet Transformer | notebooks/tao_launcher_starter_kit/re_identification_net/reidentificationnet_swin.ipynb | Sample notebook to train and optimize a Re-Identification Transformer network on the Market-1501 dataset
OCDNet | notebooks/tao_launcher_starter_kit/ocdnet/ocdnet.ipynb | Sample notebook to train, prune, and optimize an optical character detection model on the ICDAR2015 dataset
OCRNet | notebooks/tao_launcher_starter_kit/ocrnet/ocrnet.ipynb | Sample notebook to train, prune, and optimize an optical character recognition model on the ICDAR2015 dataset
Optical Inspection | notebooks/tao_launcher_starter_kit/optical_inspection/OpticalInspection.ipynb | Sample notebook to train and optimize a Siamese model for optical inspection of PCB components on a custom dataset
Retail object recognition | notebooks/tao_launcher_starter_kit/metric_learning_recogntition/metric_learning_recogntition.ipynb | Sample notebook to train and optimize a metric learning recognition model on the Retail Product Checkout dataset
VisualChangeNet-Classification | notebooks/tao_launcher_starter_kit/visual_changenet/visual_changenet_classification.ipynb | Sample notebook to train and optimize a VisualChangeNet model for optical inspection of PCB components on a custom dataset
CenterPose | notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb | Sample notebook to train and optimize a CenterPose model for estimating object pose on the Google Objectron dataset

Open Model Architectures

Network Architecture | Jupyter Notebook | Description
Classification (TF2) | notebooks/tao_launcher_starter_kit/classification_tf2/classification.ipynb | Sample notebook to train, prune, and optimize an EfficientNet-b0 image classification model on a Cats/Dogs dataset
EfficientDet (TF2) | notebooks/tao_launcher_starter_kit/efficientdet_tf2/efficientdet.ipynb | Sample notebook to train, prune, and optimize an EfficientDet-D0 object detection model on a COCO dataset
PointPillars | notebooks/tao_launcher_starter_kit/pointpillars/pointpillars.ipynb | Sample notebook to train, prune, and optimize a 3-D Object Detection model on a KITTI point cloud dataset
Deformable DETR | notebooks/tao_launcher_starter_kit/deformable_detr/deformable_detr.ipynb | Sample notebook to train and optimize a ResNet-50 Deformable-DETR model on a COCO dataset
DINO | notebooks/tao_launcher_starter_kit/dino/dino.ipynb | Sample notebook to train and optimize a ResNet-50 DINO model on a COCO dataset
SegFormer | notebooks/tao_launcher_starter_kit/segformer/segformer.ipynb | Sample notebook to train and optimize a MIT-B5 SegFormer semantic segmentation model on the ISBI dataset
Classification (PyT) | notebooks/tao_launcher_starter_kit/classification_pyt/classification.ipynb | Sample notebook to train and optimize a FAN-based image classification model on a Cats/Dogs dataset
VisualChangeNet-Segmentation | notebooks/tao_launcher_starter_kit/visual_changenet/visual_changenet_segmentation.ipynb | Sample notebook to train and optimize a VisualChangeNet model on the LEVIR-CD dataset for segmentation change detection
CenterPose | notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb | Sample notebook to train and optimize a CenterPose model on the Google Objectron dataset for object pose estimation

The TAO Docker gives you access to a repository of pretrained models that can serve as a starting point when training deep neural networks. These models are hosted on NGC. Follow these steps to download the models:

Note

If you installed the TAO launcher via the quickstart_launcher.sh script, the NGC CLI is installed with the TAO launcher. You may skip step 1 below.

  1. Download the NGC CLI and install it. More information about the NGC Catalog CLI is available here.

  2. Follow the instructions below to configure the NGC CLI and download the models.
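
    To configure the NGC CLI, you can run its interactive setup, which prompts for the API key you generated earlier along with your org/team and output format:

    ngc config set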

Listing All Available Models

Use this command to get a list of models that are hosted in the NGC model registry:


ngc registry model list <model_glob_string>

Here is an example of using this command for the computer vision models:


ngc registry model list nvidia/tao/pretrained_*

This command returns a list of the pretrained backbones available for different tasks:


Name | Repository | Latest Version | Application | Framework | Precision | Last Modified | Permission
TAO Pretrained EfficientDet | nvidia/tao/pretrained_efficientdet | efficientnet_b2 | Object Detection | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
Pre-trained DINO ImageNet weights | nvidia/tao/pretrained_dino_imagenet | gcvit_large_imagenet22k_384 | Object Detection | TAO Toolkit | FP32 | Oct 16, 2023 | unlocked
TAO Pretrained DetectNet V2 | nvidia/tao/pretrained_detectnet_v2 | resnet34 | Object Detection | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
Pre-trained DINO NvImageNet weights | nvidia/tao/pretrained_dino_nvimagenet | resnet50 | Object Detection | TAO Toolkit | FP32 | Oct 16, 2023 | unlocked
DINO | nvidia/tao/pretrained_dino_coco | dino_fan_large_trainable_v1.0 | Object Detection | TAO Toolkit | FP32 | Feb 03, 2024 | unlocked
TAO Pretrained Classification | nvidia/tao/pretrained_classification | cspdarknet_tiny | Classification | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
TAO Pretrained Object Detection | nvidia/tao/pretrained_object_detection | cspdarknet_tiny | Other | Other | FP32 | May 03, 2024 | unlocked
TAO Pretrained EfficientDet-TF2 | nvidia/tao/pretrained_efficientdet_tf2 | efficientnet_b0 | OBJECT_DETECTION | TransferLearningToolkit | FP32 | Dec 13, 2022 | unlocked
Pretrained Mask Auto Label | nvidia/tao/pretrained_mask_auto_label | vit-base | Semantic Segmentation | Transfer Learning Toolkit | FP32 | Jul 27, 2023 | unlocked

Note

All TAO classification models have names based on this template: nvidia/tao/pretrained_classification:<template>.

To view the full list of models, use the following command:


ngc registry model list nvidia/tao/*

Downloading a Model

Use this command to download the model you have chosen from the NGC model registry:


ngc registry model download-version <org/team/model_name:version> --dest <path_to_download_dir>

For example, use this command to download the ResNet-18 classification model to the $USER_EXPERIMENT_DIR directory:


ngc registry model download-version nvidia/tao/pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18


Downloaded 82.41 MB in 9s, Download speed: 9.14 MB/s
----------------------------------------------------
Transfer id: pretrained_classification_vresnet18 Download status: Completed.
Downloaded local path: /workspace/tao-experiments/pretrained_resnet18/
Total files downloaded: 2
Total downloaded size: 82.41 MB
Started at: 2019-07-16 01:29:53.028400
Completed at: 2019-07-16 01:30:02.053016
Duration taken: 9s seconds

After training is complete, follow these instructions to deploy a computer vision model to DeepStream.
