Running TAO via the Launcher CLI#
For those who are new to AI, deep neural network training can be daunting. To help with this, TAO provides an easy-to-use command line interface—the TAO CLI Launcher—to interact with and run TAO workflows.
The TAO Launcher is a lightweight, Python-based command-line interface. It acts as a front end for TAO containers built on top of PyTorch, TensorFlow, and NVIDIA® TensorRT™, and it abstracts away which network actions are implemented in which container: when you run a command, the launcher automatically starts the appropriate container based on the model you plan to use.
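For example, the sketch below shows how the same tao front end dispatches different tasks to different containers. The network name ("dino") and the spec-file paths are placeholders for illustration, not commands from this guide:

# Illustrative sketch: the launcher picks the right container for each task.
tao model dino train -e /path/to/train_spec.yaml             # runs in the PyTorch-based training container
tao deploy dino gen_trt_engine -e /path/to/deploy_spec.yaml  # runs in the TensorRT-based deploy container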
After you have downloaded the getting started resource using the instructions for the package content, you can get started with the launcher:
1. Install the prerequisite software required to run the CLI.
2. Set up your Python environment.
3. Install the TAO Launcher.
For more details on the TAO Launcher, refer to the TAO Launcher documentation.
Installing the Prerequisite Software#
The following sections describe the prerequisites and steps to run the TAO Launcher.
Software Prerequisites#
Software | Version | Comment
---|---|---
Ubuntu LTS | 22.04 |
Python | ≥3.10 | Not needed if you use TAO API
docker-ce | >19.03.5 | Not needed if you use TAO API
docker-API | 1.40 | Not needed if you use TAO API
nvidia-container-toolkit | >1.3.0-1 | Not needed if you use TAO API
nvidia-container-runtime | 3.4.0-1 | Not needed if you use TAO API
nvidia-docker2 | 2.5.0-1 | Not needed if you use TAO API
nvidia-driver | >550.xx | Not needed if you use TAO API
python-pip | >21.06 | Not needed if you use TAO API
The TAO Launcher is strictly a Python3 package, capable of running on Python versions >= 3.10.
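Before installing the launcher, you can check most of these prerequisites from a terminal. This is a minimal sketch using standard commands; compare the reported versions against the table above:

lsb_release -a      # Ubuntu release (should be 22.04 LTS)
python3 --version   # should be 3.10 or later
docker --version    # should be newer than 19.03.5
nvidia-smi          # reports the installed NVIDIA driver version (should be newer than 550.xx)
pip3 --version      # should be newer than 21.06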
Installation Process#
1. Install docker-ce by following the official instructions.

   Note

   After you have installed docker-ce, follow the post-installation steps to ensure that you can run Docker without sudo. The launcher requires the docker CLI to be available without superuser (sudo) privileges. (A verification sketch follows these steps.)

2. Install nvidia-container-toolkit by following the installation guide.

3. Get an NGC account and API key:

   a. Go to NGC and click the TAO container in the Catalog tab. NGC displays the message "Sign in to access the PULL feature of this repository."
   b. Enter your email address and click Next, or click Create an Account.
   c. Choose your organization when prompted for Organization/Team.
   d. Click Sign In.

4. Log in to the NGC Docker registry (nvcr.io) using the command:

   docker login nvcr.io

   Then enter these credentials:

   a. Username: "$oauthtoken"
   b. Password: "YOUR_NGC_API_KEY"

   where YOUR_NGC_API_KEY is the key you generated in step 3.
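As a quick sanity check after completing these steps, you can verify that Docker runs without sudo, that containers can access your GPU, and that your NGC login works. This is a sketch; the CUDA base image tag is only an example:

docker run --rm hello-world    # should succeed without sudo after the post-installation steps
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi   # the GPU should be visible inside the container
docker login nvcr.io           # username: $oauthtoken, password: your NGC API key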
Note
For DeepStream 7.0, we recommend the NVIDIA SDK for streaming inference to deploy TAO-trained and fine-tuned models.
Setting up Your Python Environment#
We recommend setting up a Python environment using Miniconda. The following instructions show how to set up a Python conda environment.

1. Follow the instructions on the Miniconda information page to set up a conda environment using Miniconda.

2. After you have installed miniconda, create a new environment and set the Python version to 3.10:

   conda create -n launcher python=3.10

3. Activate the conda environment that you have just created:

   conda activate launcher

4. Verify that the command prompt shows the name of your conda environment:

   (launcher) py-3.10 desktop:

5. Set the notebooks to have the same Python kernel as the virtual environment (a verification sketch appears at the end of this section):

   python -m pip install ipykernel
   python -m ipykernel install --user --name launcher --display-name "launcher"

When you are done with your session, you can deactivate your conda environment using the deactivate command:

conda deactivate

You may reactivate this conda environment by entering this command:

conda activate launcher
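To confirm that the environment and the notebook kernel were registered correctly, you can run the following minimal checks (standard conda and Jupyter commands):

conda activate launcher
python --version          # should report Python 3.10.x
jupyter kernelspec list   # the "launcher" kernel should appear in the list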
Installing the TAO Launcher#
The getting started package includes a quick start script that installs the TAO Launcher CLI package and its Python dependencies, along with the NGC CLI, which is required to pull models and interact with the model, Docker, and resource registries on the NVIDIA GPU Cloud.
1. Install the CLI launcher and the NGC CLI using the quick start script downloaded with the getting_started NGC package from the package content:

   bash setup/quickstart_launcher.sh --install

2. Run this command to update the launcher to the latest version of TAO:

   bash setup/quickstart_launcher.sh --upgrade

3. Invoke the entry points using the tao command:

   tao --help

   The tao --help command displays output that looks like this:

   usage: tao [-h] {list,stop,info,dataset,deploy,model} ...

   Launcher for TAO

   optional arguments:
     -h, --help            show this help message and exit

   task_groups:
     {list,stop,info,dataset,deploy,model}
Under task_groups, you can see all the launcher-invokable tasks. The tasks that handle launched commands using the TAO Launcher are:

- list
- stop
- info
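For example, the sketch below uses these tasks to inspect and manage launcher-run containers; the container ID is a placeholder:

tao info --verbose                      # show details about the launcher configuration
tao list                                # list TAO containers started by the launcher
tao stop --container_id <container_id>  # stop a running TAO container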
Note
If you install the TAO Launcher to the native Python3 kernel of your host machine, as opposed to the recommended route of using a virtual environment, you may get an error stating that the tao binary wasn't found. This means that the path to the tao binary installed by pip has not been added to the PATH environment variable on your local machine. To fix this, run the following command:

export PATH=$PATH:~/.local/bin
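To make this change persist across shell sessions, you can append it to your shell profile (a sketch assuming bash):

echo 'export PATH=$PATH:~/.local/bin' >> ~/.bashrc
source ~/.bashrc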
Running a Sample TAO Notebook#
Now that you have installed the TAO Launcher and downloaded the tutorial notebooks from GitHub, you can get started by running a sample tutorial notebook.
You can run this command to invoke a sample notebook:
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
Open a web browser on your local machine and navigate to this URL:

http://0.0.0.0:8888
Note
If you want to run the notebook from a remote server, follow these instructions from DigitalOcean.
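A common approach for remote servers is SSH port forwarding: start the notebook server on the remote machine, then tunnel its port to your workstation. The user and host names below are placeholders:

ssh -L 8888:localhost:8888 <user>@<remote-host>   # run on your local machine
# then browse to http://localhost:8888 locally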
Execute the cells in the notebook to train a model using TAO.
You may choose any one of the sample notebooks below to get started with TAO. As a starting point, we recommend running the GroundingDINO open vocabulary detection notebook.
The tables below list the TAO tutorial notebooks.
Purpose-Built Pre-Trained Models#
Below is a list of purpose-built pre-trained models and their corresponding samples.
Model Name | Jupyter Notebook | Purpose of Notebook
---|---|---
ActionRecognitionNet | notebooks/tao_launcher_starter_kit/action_recognition_net/actionrecognitionnet.ipynb | Train and optimize an Action Recognition model on the HMDB51 dataset.
PoseClassificationNet | notebooks/tao_launcher_starter_kit/pose_classification_net/poseclassificationnet.ipynb | Train and optimize a Pose Classification network on the Market-1501 dataset.
PointPillars | notebooks/tao_launcher_starter_kit/pointpillars/pointpillars.ipynb | Train, prune, and optimize a 3-D Object Detection model on the KITTI point cloud dataset.
ReIdentificationNet | notebooks/tao_launcher_starter_kit/re_identification_net/reidentificationnet_resnet.ipynb | Train and optimize a Re-Identification network on the Market-1501 dataset.
ReIdentificationNet Transformer | notebooks/tao_launcher_starter_kit/re_identification_net/reidentificationnet_swin.ipynb | Train and optimize a Re-Identification Transformer network on the Market-1501 dataset.
OCDNet | | Train, prune, and optimize an optical character detection model on the ICDAR2015 dataset.
OCRNet | | Train, prune, and optimize an optical character recognition model on the ICDAR2015 dataset.
Optical Inspection | notebooks/tao_launcher_starter_kit/optical_inspection/OpticalInspection.ipynb | Train and optimize a Siamese model for optical inspection of PCB components on a custom dataset.
Retail object recognition | notebooks/tao_launcher_starter_kit/metric_learning_recogntition/metric_learning_recogntition.ipynb | Train and optimize a metric learning recognition model on the Retail Product Checkout Dataset.
VisualChangeNet-Classification | notebooks/tao_launcher_starter_kit/visual_changenet/visual_changenet_classification.ipynb |
CenterPose | notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb | Train and optimize a CenterPose model for estimating object pose on the Google Objectron dataset.
Open Model Architectures#
Network Architecture | Jupyter Notebook | Purpose of Notebook
---|---|---
Classification (TF2) | notebooks/tao_launcher_starter_kit/classification_tf2/classification.ipynb | Train, prune, and optimize an EfficientNet-B0 image classification model on a Cats/Dogs dataset.
EfficientDet (TF2) | notebooks/tao_launcher_starter_kit/efficientdet_tf2/efficientdet.ipynb | Train, prune, and optimize an EfficientDet-D0 object detection model on a COCO dataset.
PointPillars | notebooks/tao_launcher_starter_kit/pointpillars/pointpillars.ipynb | Train, prune, and optimize a 3-D Object Detection model on a KITTI point cloud dataset.
Deformable DETR | notebooks/tao_launcher_starter_kit/deformable_detr/deformable_detr.ipynb | Train and optimize a ResNet-50 Deformable-DETR model on a COCO dataset.
DINO | | Train and optimize a ResNet-50 DINO model on a COCO dataset.
SegFormer | notebooks/tao_launcher_starter_kit/segformer/segformer.ipynb | Train and optimize an MIT-B5 SegFormer semantic segmentation model on the ISBI dataset.
Classification (PyT) | notebooks/tao_launcher_starter_kit/classification_pyt/classification.ipynb | Train and optimize a FAN-based image classification model on a Cats/Dogs dataset.
VisualChangeNet-Segmentation | notebooks/tao_launcher_starter_kit/visual_changenet/visual_changenet_segmentation.ipynb | Train and optimize a VisualChangeNet model on the LEVIR-CD dataset for segmentation change detection.
CenterPose | notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb | Train and optimize a CenterPose model on the Google Objectron dataset for object pose estimation.
Downloading the Models#
The TAO Docker gives you access to a repository of pretrained models that can serve as a starting point when training deep neural networks. These models are hosted on NGC. Follow these steps to download the models:
Download the NGC CLI and install it. More information about the NGC Catalog CLI is available from Welcome to NGC CLI Docs in the NGC documentation.
Note
If you installed the TAO launcher via the quickstart_launcher.sh script, the NGC CLI was installed along with it, and you may skip this step.
Follow the instructions below to configure the NGC CLI and download the models.
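The NGC CLI is configured with the ngc config set command, which interactively prompts for your API key, output format, org, and team:

ngc config set
# paste your NGC API key and choose your org/team when prompted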
Listing all Available Models#
Enter this command to get a list of models that are hosted in the NGC model registry:
ngc registry model list <model_glob_string>
Enter this command to list the computer vision models:
ngc registry model list nvidia/tao/pretrained_*
The command displays a list of pretrained backbones available for different tasks, which looks like this:
Name | Repository | Latest Version | Application | Framework | Precision | Last Modified | Permission
---|---|---|---|---|---|---|---
TAO Pretrained EfficientDet | nvidia/tao/pretrained_efficientdet | efficientnet_b2 | Object Detection | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
Pre-trained DINO ImageNet weights | nvidia/tao/pretrained_dino_imagenet | gcvit_large_imagenet22k_384 | Object Detection | TAO Toolkit | FP32 | Oct 16, 2023 | unlocked
TAO Pretrained DetectNet V2 | nvidia/tao/pretrained_detectnet_v2 | resnet34 | Object Detection | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
Pre-trained DINO NvImageNet weights | nvidia/tao/pretrained_dino_nvimagenet | resnet50 | Object Detection | TAO Toolkit | FP32 | Oct 16, 2023 | unlocked
DINO | nvidia/tao/pretrained_dino_coco | dino_fan_large_trainable_v1.0 | Object Detection | TAO Toolkit | FP32 | Feb 03, 2024 | unlocked
TAO Pretrained Classification | nvidia/tao/pretrained_classification | cspdarknet_tiny | Classification | Transfer Learning Toolkit | FP32 | Apr 04, 2023 | unlocked
TAO Pretrained Object Detection | nvidia/tao/pretrained_object_detection | cspdarknet_tiny | Other | Other | FP32 | May 03, 2024 | unlocked
TAO Pretrained EfficientDet-TF2 | nvidia/tao/pretrained_efficientdet_tf2 | efficientnet_b0 | OBJECT_DETECTION | TransferLearningToolkit | FP32 | Dec 13, 2022 | unlocked
Pretrained Mask Auto Label | nvidia/tao/pretrained_mask_auto_label | vit-base | Semantic Segmentation | Transfer Learning Toolkit | FP32 | Jul 27, 2023 | unlocked
Note
All TAO classification models have names based on this template: nvidia/tao/pretrained_classification:<template>.

Enter this command to view the full list of models:
ngc registry model list nvidia/tao/*
Downloading a Model#
Enter this command to download the model you have chosen from the NGC model registry:
ngc registry model download-version <org/team/model_name:version> -dest <path_to_download_dir>
For example, this command downloads the ResNet-18 classification model to the directory $USER_EXPERIMENT_DIR/pretrained_resnet18:

ngc registry model download-version nvidia/tao/pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18
The command displays output like this:
Downloaded 82.41 MB in 9s, Download speed: 9.14 MB/s
----------------------------------------------------
Transfer id: pretrained_classification_vresnet18
Download status: Completed.
Downloaded local path: /workspace/tao-experiments/pretrained_resnet18/
Total files downloaded: 2
Total downloaded size: 82.41 MB
Started at: 2019-07-16 01:29:53.028400
Completed at: 2019-07-16 01:30:02.053016
Duration taken: 9s seconds
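You can confirm the download by listing the destination directory reported in the output (the path follows the example above):

ls -lR $USER_EXPERIMENT_DIR/pretrained_resnet18/
# the downloaded model files should appear here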
After training is complete, follow these instructions for Integrating TAO Models into DeepStream to deploy a computer vision model to DeepStream.