Migrating to TAO Toolkit
NVIDIA Transfer Learning Toolkit has now been renamed to NVIDIA TAO Toolkit. TAO Toolkit provides several new features over TLT 3.0 and TLT 2.0:
- Unified command line tool to launch commands
- Multiple Docker setup
- Conversational AI applications
  - Support for training n-gram Language Models
- CV features
  - New training applications
  - New feature extractor backbones
  - New purpose-built models
- Integration with the DeepStream and Riva inference platforms
When migrating from TLT 3.0 to TAO Toolkit, if you had previously installed the `nvidia-tlt` package in your virtualenv, make sure to uninstall it before installing the `nvidia-tao` CLI package.
You may do this by running the following commands:
pip3 uninstall nvidia-tlt
pip3 install nvidia-tao
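To confirm the switch, you can check which launcher package is present and that the `tao` entry point resolves. A quick sanity check, assuming a standard pip environment (exact output depends on your setup):

```
# The old launcher should no longer be installed
pip3 show nvidia-tlt   # typically reports "Package(s) not found: nvidia-tlt"

# The new launcher should be installed and expose the `tao` entry point
pip3 show nvidia-tao
tao --help
```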
With TAO Toolkit, the following commands from TLT v2.0 and TLT v3.0 have been deprecated and are now mapped as shown:
| Version Comparison | TAO Toolkit 3.0-21.08 | TLT v3.0 | TLT v2.0 |
|---|---|---|---|
| Command mapping | `tao <task> <sub-task> <args>` | `tlt <task> <sub-task> <args>` | `tlt-<sub-task> <task> <args>` (e.g. `tlt-train`, `tlt-evaluate`, `tlt-prune`, `tlt-infer`, `tlt-export`) |
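As a concrete illustration of this mapping, here is the same hypothetical DetectNet_v2 training run expressed in each interface; the spec file path, results directory, and `$KEY` encryption key are placeholders:

```
# TLT v2.0: hyphenated commands, run inside the TLT container
tlt-train detectnet_v2 -e /workspace/specs/train.txt -r /workspace/results -k $KEY

# TLT v3.0: single `tlt` launcher entry point
tlt detectnet_v2 train -e /workspace/specs/train.txt -r /workspace/results -k $KEY

# TAO Toolkit 3.0-21.08: same structure, entry point renamed to `tao`
tao detectnet_v2 train -e /workspace/specs/train.txt -r /workspace/results -k $KEY
```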
The following table shows some of the key differences between TAO Toolkit 3.0-21.08, TLT v3.0, and TLT v2.0.
| Version Comparison | TAO Toolkit 3.0-21.08 | TLT v3.0 | TLT v2.0 |
|---|---|---|---|
| Interface difference | Users run the commands via the TAO launcher Python package | Users run the commands via the TLT launcher Python package | Users interact with the commands inside Docker |
| Steps to run | Install the `nvidia-tao` Python package, then invoke each task as `tao <task> <sub-task>`; the launcher pulls and runs the appropriate Docker container for you | Install the `nvidia-tlt` Python package, then invoke each task as `tlt <task> <sub-task>`; the launcher pulls and runs the appropriate Docker container for you | Pull and start the TLT Docker container manually, then run the `tlt-*` commands inside it |
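Because the TAO launcher runs the actual work inside a container, it maps your local directories into that container through a mounts file (by default `~/.tao_mounts.json`). A minimal sketch, assuming your experiments live under `/home/<user>/tao-experiments`; the paths are placeholders for your own layout:

```
# Write a minimal launcher mounts file (paths below are placeholders)
cat > ~/.tao_mounts.json <<'EOF'
{
    "Mounts": [
        {
            "source": "/home/<user>/tao-experiments",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/home/<user>/specs",
            "destination": "/workspace/specs"
        }
    ]
}
EOF
```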
Data preparation for SSD / DSSD / YOLOv3 / RetinaNet is slightly different between TLT v2.0 and v3.0. In TLT v2.0, you have to generate TFRecords (and possibly resize your images); neither step is required in TLT v3.0. These networks in TLT v3.0 directly take original images and KITTI labels as input, and if image resizing is needed, the data loader handles it automatically.
If you already prepared data for TLT v2.0 training, you don’t need to process it further for TAO Toolkit training. Instead, you only need to provide the label directory path in the spec file, and training should run smoothly for TLT v3.0 and later.
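For reference, here is a minimal sketch of the dataset portion of an SSD / DSSD / YOLOv3 / RetinaNet spec in TLT v3.0 / TAO Toolkit, assuming KITTI-formatted images and labels; the directory paths are illustrative placeholders for your own dataset layout:

```
# Dataset section of the training spec (paths are illustrative only)
dataset_config {
  data_sources: {
    image_directory_path: "/workspace/tao-experiments/data/training/image_2"
    label_directory_path: "/workspace/tao-experiments/data/training/label_2"
  }
  validation_data_sources: {
    image_directory_path: "/workspace/tao-experiments/data/val/image_2"
    label_directory_path: "/workspace/tao-experiments/data/val/label_2"
  }
}
```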