NVIDIA TAO Toolkit v5.2.0

Overview

TAO Toolkit supports image classification; object detection architectures including YOLOv3, YOLOv4, YOLOv4-tiny, FasterRCNN, SSD, DSSD, RetinaNet, EfficientDet, and DetectNet_v2; an instance segmentation architecture, MaskRCNN; and a semantic segmentation architecture, UNet. In addition, TAO Toolkit supports 18 classification backbones. For a complete list of the network and backbone combinations that are supported by TAO Toolkit, see the matrix below:

Image Classification Object Detection Instance Segmentation Semantic Segmentation
Backbone DetectNet_V2 FasterRCNN SSD YOLOv3 RetinaNet DSSD YOLOv4 YOLOv4-tiny EfficientDet MaskRCNN UNet
ResNet 10/18/34/50/101 Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
VGG 16/19 Yes Yes Yes Yes Yes Yes Yes Yes Yes
GoogLeNet Yes Yes Yes Yes Yes Yes Yes Yes
MobileNet V1/V2 Yes Yes Yes Yes Yes Yes Yes Yes
SqueezeNet Yes Yes Yes Yes Yes Yes Yes
DarkNet 19/53 Yes Yes Yes Yes Yes Yes Yes Yes
CSPDarkNet 19/53 Yes Yes
CSPDarkNet-tiny Yes Yes
EfficientNet B0 Yes Yes Yes Yes Yes Yes Yes Yes
EfficientNet B1 Yes Yes Yes
EfficientNet B2 Yes Yes
EfficientNet B3 Yes
EfficientNet B4 Yes Yes
EfficientNet B5 Yes

The TAO Toolkit container includes Jupyter notebooks and the spec files needed to train any of these network combinations. Pre-trained weights for each backbone are provided on NGC; they are trained on the Open Images dataset and provide a strong starting point for transfer learning on your own dataset.
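
For example, assuming you have the NGC CLI installed and configured, the weights for a chosen backbone can be pulled from the command line. The model path and version tag below are illustrative placeholders; copy the exact values from the model card you select:

    # Download a pre-trained backbone from NGC (path and tag are placeholders; see the model card)
    ngc registry model download-version "nvidia/tao/pretrained_object_detection:resnet18" --dest ./pretrained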

To get started, first choose the type of model that you want to train, then go to the appropriate model card on NGC and choose one of the supported backbones.

Model to train          NGC model card
YOLOv3                  TAO object detection
YOLOv4                  TAO object detection
YOLOv4-tiny             TAO object detection
SSD                     TAO object detection
FasterRCNN              TAO object detection
RetinaNet               TAO object detection
DSSD                    TAO object detection
DetectNet_v2            TAO DetectNet_v2 detection
MaskRCNN                TAO instance segmentation
Image Classification    TAO image classification
UNet                    TAO semantic segmentation
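
If you prefer the command line to the NGC web pages, the NGC CLI can also list the published TAO pre-trained models and their version tags; the wildcard pattern below is an assumption, so adjust it to the card you are interested in:

    # List TAO pre-trained model cards available on NGC (pattern is illustrative)
    ngc registry model list "nvidia/tao/pretrained_*"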

Once you have picked the appropriate pre-trained model, follow the TAO workflow to fine-tune it on your dataset and export a model adapted to your use case. The TAO Workflow sections walk you through all the steps of training.
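
As a rough sketch of that workflow, the commands below use the TAO launcher with the DetectNet_v2 entry point. The spec file, paths, and encryption key are placeholders, and the exact sub-commands and arguments for your network are documented in its TAO Workflow chapter:

    # Train with your dataset and the pre-trained backbone referenced in the spec file
    tao model detectnet_v2 train -e specs/detectnet_v2_train.txt -r results/ -k $KEY
    # Evaluate the tuned model on your validation split
    tao model detectnet_v2 evaluate -e specs/detectnet_v2_train.txt -m results/weights/model.hdf5 -k $KEY
    # Export the tuned model for deployment
    tao model detectnet_v2 export -e specs/detectnet_v2_train.txt -m results/weights/model.hdf5 -k $KEY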

You can deploy most trained models on any edge device using DeepStream and TensorRT. See the Integrating TAO models into DeepStream chapter for deployment instructions.
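
For instance, if your network exports to ONNX, you can build a TensorRT engine on the target device with trtexec before referencing it in your DeepStream configuration; models that export to the encrypted .etlt format take a different path, so see that chapter for the model-specific instructions. The file paths below are placeholders:

    # Build a TensorRT engine from an exported ONNX model (paths are placeholders)
    trtexec --onnx=results/export/model.onnx --saveEngine=model.engine --fp16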

We also have a reference application for deployment with Triton. See the Integrating TAO CV models with Triton Inference server chapter for Triton instructions.
