NVIDIA TAO Toolkit v30.2205
Object Detection
DetectNet_v2
Data Input for Object Detection
Pre-processing the Dataset
Creating a Configuration File
Training the Model
Evaluating the Model
Using Inference on the Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
FasterRCNN
Preparing the Input Data Structure
Creating a Configuration File
Training the Model
Evaluating the Model
Running Inference on the Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
YOLOv3
Preparing the Input Data Structure
Creating a Configuration File
Generate the Anchor Shape
Training the Model
Evaluating the Model
Running Inference on a YOLOv3 Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
YOLOv4
Preparing the Input Data Structure
Creating a Configuration File
Generate the Anchor Shape
Training the Model
Evaluating the Model
Running Inference on a YOLOv4 Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
YOLOv4-tiny
Preparing the Input Data Structure
Creating a Configuration File
Generate the Anchor Shape
Training the Model
Evaluating the Model
Running Inference on a YOLOv4-tiny Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
SSD
Data Input for Object Detection
Pre-processing the Dataset
Creating a Configuration File
Training the Model
Evaluating the Model
Running Inference on the Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
DSSD
Data Input for Object Detection
Pre-processing the Dataset
Creating a Configuration File
Training the Model
Evaluating the Model
Running Inference on the Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
RetinaNet
Data Input for Object Detection
Pre-processing the Dataset
Creating a Configuration File
Training the Model
Evaluating the Model
Running Inference on a RetinaNet Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
EfficientDet
Data Input for EfficientDet
Pre-processing the Dataset
Creating a Configuration File
Training the Model
Evaluating the Model
Running Inference with an EfficientDet Model
Pruning the Model
Re-training the Pruned Model
Exporting the Model
Deploying to DeepStream
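Each model family above follows the same TAO Toolkit workflow: prepare the dataset, write an experiment spec file, then train, evaluate, prune, retrain, export, and deploy to DeepStream. As an illustration only, the sketch below drives a DetectNet_v2 model through those stages with the `tao` launcher CLI from Python; the spec-file path, results directory, encryption key, and pruning threshold are placeholder values, not defaults taken from this documentation. See the per-model pages above for the authoritative spec schema and command options.

```python
# Hedged sketch of the common TAO launcher workflow (DetectNet_v2 shown).
# KEY, SPEC, RESULTS, and the -pth value are placeholders; consult the
# DetectNet_v2 pages listed above for the spec-file contents and defaults.
import subprocess

KEY = "tlt_encode"                                 # placeholder model key
SPEC = "/workspace/specs/detectnet_v2_train.txt"   # placeholder experiment spec
RESULTS = "/workspace/results"                     # placeholder output directory


def run(args):
    """Invoke one `tao` launcher subcommand and fail fast on errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)


# 1. Train on the prepared (TFRecord-converted) dataset.
run(["tao", "detectnet_v2", "train",
     "-e", SPEC, "-r", f"{RESULTS}/unpruned", "-k", KEY])

# 2. Evaluate the trained model on the validation split defined in the spec.
run(["tao", "detectnet_v2", "evaluate",
     "-e", SPEC, "-m", f"{RESULTS}/unpruned/weights/model.tlt", "-k", KEY])

# 3. Prune to shrink the model, then retrain to recover accuracy.
#    Retraining reuses the train subcommand with a spec whose
#    pretrained_model_file points at the pruned .tlt file.
run(["tao", "detectnet_v2", "prune",
     "-m", f"{RESULTS}/unpruned/weights/model.tlt",
     "-o", f"{RESULTS}/pruned/model_pruned.tlt",
     "-pth", "0.01", "-k", KEY])
run(["tao", "detectnet_v2", "train",
     "-e", SPEC, "-r", f"{RESULTS}/retrained", "-k", KEY])

# 4. Export to .etlt for deployment with DeepStream.
run(["tao", "detectnet_v2", "export",
     "-m", f"{RESULTS}/retrained/weights/model.tlt",
     "-o", f"{RESULTS}/export/model.etlt", "-k", KEY])
```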
© Copyright 2022, NVIDIA.
Last updated on Dec 13, 2022.