Version 4.0

Introduction

  • Overview
    • TAO Toolkit Architecture
    • TAO Computer Vision Workflow Overview
      • Data Augmentation
      • Training
      • Evaluation
      • Pruning
      • Re-training
      • Export
    • TAO Conversational AI Workflow Overview
    • Model Pruning
    • Learning Resources
  • TAO Toolkit Quick Start Guide
    • Requirements
      • Hardware
      • Software Requirements
      • Package Content
    • Running TAO Toolkit
      • Launcher CLI
        • Installing the Pre-requisites
        • Installing TAO Launcher
      • Running from container
      • Running TAO Toolkit APIs
      • Running from Python wheels
    • Run Sample Jupyter Notebooks
      • Computer Vision
      • Conversational AI
    • Downloading the Models
      • Listing all available models
      • Downloading a model
  • TAO Toolkit Launcher
    • Running the launcher
    • Handling launched processes
    • Useful Environment variables
  • Migrating from older TLT to TAO Toolkit
  • Migrating from TAO Toolkit 3.x to TAO Toolkit 4.0
    • Container Mapping
  • TAO Model Export and INT8 Calibration Changes
  • Working With the Containers
    • Invoking the Containers Directly
    • Running Multi-Node Training
    • Running without Elevated User Privileges

Model Zoo

  • Overview
    • Computer Vision Model Zoo
      • Purpose-built models
      • Performance Metrics
      • General purpose computer vision models
      • Computer Vision Feature Summary
    • Conversational AI
  • Computer Vision Model Zoo
    • PeopleNet
      • Training algorithm
      • Intended use case
    • PeopleNet Transformer
      • Training algorithm
      • Intended use case
    • TrafficCamNet
      • Training algorithm
      • Intended Use Case
    • DashCamNet
      • Training algorithm
      • Intended use case
    • LPDNet
      • Training Algorithm
      • Intended Use Case
    • LPRNet
      • Training algorithm
      • Intended use case
    • VehicleTypeNet
      • Training Algorithm
      • Intended Use
    • VehicleMakeNet
      • Training Algorithm
      • Intended Use
    • PeopleSegNet
      • Training Algorithm
      • Intended Use
    • PeopleSemSegNet
      • Training Algorithm
      • Intended Use
    • FaceDetect-IR
      • Training algorithm
      • Intended use case
    • FaceDetect
      • Training algorithm
      • Intended use case
    • Gaze Estimation
      • Training algorithm
      • Intended use case
    • Emotion Classification
      • Training algorithm
      • Intended use case
    • HeartRate Estimation
      • Training algorithm
      • Intended use case
    • Facial Landmarks Estimation
      • Model Architecture
      • Training Algorithm
      • References
      • Intended Use
    • Gesture Recognition
      • Model Overview
      • Model Architecture
      • Training Algorithm
      • Reference
      • Intended Use
    • Body Pose Estimation
      • Model Architecture
      • Training algorithm
      • Reference
      • Intended use case
    • CitySemSegFormer
      • Training Algorithm
      • Intended Use
    • ReIdentificationNet
      • Training algorithm
      • Intended use case
    • Retail Object Detection
      • Training algorithm
      • Intended Use case
    • Retail Object Recognition
      • Training algorithm
      • Intended Use case
    • Open Images
      • Overview
        • Training
        • Deployment
      • Open Images Pre-trained Image Classification
        • Supported Backbones
      • Open Images Pre-trained Object Detection
        • Supported Backbones
      • Open Images Pre-trained DetectNet_v2
        • Supported Backbones
      • Open Images Pre-trained EfficientDet
        • Supported Backbones
      • Open Images Pre-trained Instance Segmentation
        • Supported Backbones
      • Open Images Pre-trained Semantic Segmentation
        • Supported Backbones
  • Conversational AI Model Zoo

Running TAO Toolkit in the Cloud

  • Running TAO Toolkit in the Cloud
  • Running TAO Toolkit on an AWS VM
    • Pre-Requisites
    • Setting up an AWS EC2 instance
    • Installing the Pre-Requisites for TAO Toolkit in the VM
    • Download and run the test samples
  • Running TAO Toolkit on Google Cloud Platform
    • Setting up a Linux VM Instance
    • Using the VM
    • Setting up the VM and Enabling GPUs
    • Installing the Pre-requisites for TAO Toolkit
    • Downloading and Running Test Samples
  • Running TAO Toolkit on an Azure VM
    • Setting up an Azure VM
    • Installing the Pre-Requisites for TAO Toolkit in the VM
    • Downloading and Running the Test Samples
  • Running TAO Toolkit on Google Colab
    • Pre-Requisites
    • Launching Notebooks with Google Colab
      • General-Purpose Computer Vision Models
      • Purpose-Built Computer Vision Models
      • Conversational AI Models
      • TAO Pre-trained Models (Inference Only)
    • Utility scripts to obtain a subset of data
      • To obtain a subset for KITTI
      • To obtain a subset for COCO
    • Steps to Locate Files in a Colab Notebook
    • Notes
  • Running TAO Toolkit on an EKS
  • Running TAO Toolkit on an AKS

TAO Toolkit API

  • Overview
  • Setup
    • Bare-Metal Setup
      • Hardware
        • Minimum Requirements
      • Software
        • OS Support
        • Deployment Steps
    • AWS EKS Setup
      • Pre-Requisites
        • AWS Account
        • IAM User
        • S3 Bucket
      • Software
        • Deployment Steps
    • Azure AKS Setup
      • Software
        • Connect to AKS Cluster
        • Configuring Kubernetes Pods to Access GPU Resources
      • VNet Setup
      • Software Setup
        • Install NGINX Ingress Controller
        • NFS Server
        • Azure NFS Server
        • VM-Based NFS Server
        • Storage Provisioner
        • Image Pull Secret for nvcr.io
  • Deployment
  • REST API
    • User Authentication
    • API Specs
    • Examples
  • Remote Client
    • Installation
    • Storage Topology
    • CLI Specs
    • Examples
  • API Reference
  • Action Specs
    • classification
      • evaluate
      • export
      • inference
      • train
      • convert
      • prune
      • retrain
    • detectnet_v2
      • convert
      • evaluate
      • export
      • inference
      • prune
      • train
      • retrain
    • dssd
      • evaluate
      • inference
      • train
      • prune
      • export
      • retrain
      • convert
    • efficientdet
      • convert
      • evaluate
      • export
      • inference
      • prune
      • train
      • retrain
    • faster_rcnn
      • export
      • prune
      • train
      • inference
      • convert
      • evaluate
      • retrain
    • semantic_segmentation
      • convert
    • instance_segmentation
      • convert
    • lprnet
      • evaluate
      • export
      • inference
      • train
      • convert
    • mask_rcnn
      • export
      • inference
      • prune
      • train
      • retrain
      • convert
      • evaluate
    • multitask_classification
      • export
      • train
      • retrain
      • prune
      • evaluate
      • convert
      • inference
    • object_detection
      • augment
      • convert_efficientdet
      • kmeans
      • convert__kitti
      • convert__coco
      • convert_and_index__kitti
      • convert_and_index__coco
    • retinanet
      • evaluate
      • export
      • inference
      • prune
      • train
      • retrain
      • convert
    • spectro_gen
      • dataset_convert
      • export
      • finetune
      • infer
      • infer_onnx
      • train
    • speech
      • pitch_stats
      • convert
    • ssd
      • evaluate
      • inference
      • train
      • retrain
      • prune
      • export
      • convert
    • unet
      • export
      • prune
      • train
      • convert
      • retrain
      • evaluate
      • inference
    • vocoder
      • export
      • finetune
      • infer
      • infer_onnx
      • train
      • dataset_convert
    • yolo_v3
      • convert
      • evaluate
      • export
      • inference
      • prune
      • train
      • retrain
    • yolo_v4
      • convert
      • evaluate
      • export
      • inference
      • prune
      • train
      • retrain
    • yolo_v4_tiny
      • train
      • prune
      • export
      • convert
      • evaluate
      • inference
      • retrain

AutoML

  • AutoML
    • AutoML Supported Applications and Models
      • Object Detection
      • Image Segmentation
      • Classification
      • Special Use Case Models
    • Prerequisites
    • AutoML Notebooks
    • Getting Started
      • [Mandatory] User Inputs
      • [Optional] User Inputs
      • [Optional] AutoML Algorithm-Specific Parameters
    • AutoML Outcomes
      • Resultant Files
      • Results of AutoML experiments
    • AutoML Algorithm Explanation
      • Bayesian Optimization
      • Hyperband
    • Hyperband Parameter Auto-adjustment Mechanism

CV Applications

  • Offline Data Augmentation
    • Object Detection
      • Configuring the Augmentor
        • Spatial Augmentation Config
        • Color Augmentation Config
        • Dataloader
        • Blur Config
      • Running the Augmentor Tool
  • Optimizing the Training Pipeline
    • Quantization Aware Training
    • Automatic Mixed Precision
  • Visualizing Training
    • Enabling TensorBoard during Training
    • Visualizing using TensorBoard
      • Installing TensorBoard
      • Invoking TensorBoard
    • Additional Resources
  • Data Annotation Format
    • Image Classification Format
    • Object Detection – KITTI Format
      • Label Files
      • Sequence Mapping File
    • Object Detection – COCO Format
    • Instance Segmentation – COCO format
    • Semantic Segmentation - PNG Mask Format
      • Semantic Segmentation Mask Format
        • Color/RGB Input Image Type
        • Grayscale Input Image Type
      • Image and Mask Loading Format
        • Segformer
        • UNet
    • Gesture Recognition – Custom Format
      • Label Format
    • Heart Rate Estimation – Custom Format
    • EmotionNet, FPENet, GazeNet – JSON Label Data Format
    • BodyposeNet – COCO Format
      • Label Files
  • Image Classification (TF1)
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File - Specification File for Classification
      • Model Config
        • BatchNormalization Parameters
        • Activation functions
      • Eval Config
      • Training Config
        • Learning Rate Scheduler
        • Optimizer
    • Training the model
      • Required Arguments
      • Optional Arguments
      • Input Requirement
      • Sample Usage
      • Model parallelism
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required arguments
      • Optional arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
      • Using the Prune Command
    • Re-training the Pruned Model
    • Exporting the model
      • Required Arguments
      • Optional Arguments
      • Sample usage
    • TensorRT Engine Generation, Validation, and INT8 Calibration
    • Deploying to DeepStream
  • Image Classification (TF2)
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File - Specification File for Classification
      • Model Config
        • Dataset Parameters
        • Augmentation Parameters
      • Evaluation Config
      • Training Config
        • Learning Rate Scheduler
        • Optimizer Config
        • BatchNorm Config
      • Pruning Config
      • Export Config
    • Training the model
      • Required Arguments
      • Optional Arguments
      • Input Requirement
      • Sample Usage
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required arguments
      • Optional arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
      • Using the Prune Command
    • Re-training the Pruned Model
    • Exporting the model
      • Required Arguments
      • Optional Arguments
      • Sample Usage
    • TensorRT Engine Generation, Validation, and INT8 Calibration
    • Deploying to DeepStream
  • Object Detection
    • DetectNet_v2
      • Data Input for Object Detection
      • Pre-processing the Dataset
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Model Config
        • BBox Ground Truth Generator
        • Post-Processor
        • Cost Function
        • Trainer
        • Augmentation Module
        • Configuring the Evaluator
        • Dataloader
        • Specification File for Inference
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Using Inference on the Model
        • Required Parameters
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Generating an INT8 tensorfile Using the calibration_tensorfile Command
        • Exporting the DetectNet_v2 Model
        • QAT Export Mode Required Arguments
        • Sample usage for the export sub-task
        • Generating a Template DeepStream Config File
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • FasterRCNN
      • Preparing the Input Data Structure
        • Required Arguments
        • Optional Arguments
      • Creating a Configuration File
        • Dataset
        • Data augmentation
        • Model architecture
        • Training configurations
        • Inference configurations
        • Evaluation configurations
      • Training the model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
        • Using a Pretrained Model
        • Re-training a pruned model
        • Resuming an interrupted training
        • Input shape: static and dynamic
        • Model parallelism
      • Evaluating the model
        • Required Arguments
        • Optional Arguments
        • Evaluation Metrics
      • Running inference on the model
        • Required Arguments
        • Optional Arguments
      • Pruning the model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Retraining the pruned model
      • Exporting the model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • QAT Export Mode Required Arguments
        • Sample Usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • YOLOv3
      • Preparing the Input Data Structure
        • Required Arguments
        • Optional Arguments
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • YOLOv3 Config
      • Generate the Anchor Shape
        • Required Arguments
        • Optional Arguments
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on a YOLOv3 Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • QAT Export Mode Required Arguments
        • Sample Usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • YOLOv4
      • Preparing the Input Data Structure
        • Required Arguments
        • Optional Arguments
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • Class Weighting Config
        • YOLOv4 Config
      • Generate anchor shape
        • Required Arguments
        • Optional Arguments
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on a YOLOv4 Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • QAT Export Mode Required Arguments
        • Sample usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • YOLOv4-tiny
      • Preparing the Input Data Structure
        • Required Arguments
        • Optional Arguments
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • YOLOv4 Config
      • Generate anchor shape
        • Required Arguments
        • Optional Arguments
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on a YOLOv4-tiny Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • QAT Export Mode Required Arguments
        • Sample usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • SSD
      • Data Input for Object Detection
      • Pre-processing the Dataset
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • SSD config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting command
        • QAT Export Mode Required Arguments
        • Exporting a Model
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • DSSD
      • Data Input for Object Detection
      • Pre-processing the Dataset
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • DSSD Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting command
        • QAT Export Mode Required Arguments
        • Exporting a Model
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • RetinaNet
      • Data Input for Object Detection
      • Pre-processing the Dataset
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • Class Weighting Config
        • RetinaNet Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Running Inference on a RetinaNet Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the RetinaNet Model
        • QAT Export Mode Required Arguments
        • Sample usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • DeformableDETR
      • Data Input for DeformableDETR
        • Sharding the Data
      • Creating an Experiment Spec File
        • model_config
        • train_config
        • dataset_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Sample Usage
      • Running Inference with a DeformableDETR Model
        • Required Arguments
        • Sample Usage
      • Exporting the Model
        • Required Arguments
        • Sample Usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • EfficientDet (TF1)
      • Data Input for EfficientDet
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • Dataset Config
        • Model Config
        • Augmentation Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Running Inference with an EfficientDet Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • Exporting the EfficientDet Model
        • Sample usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • EfficientDet (TF2)
      • Data Input for EfficientDet
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • Inference Config
        • Dataset Config
        • Model Config
        • Augmentation Config
        • Pruning Config
        • Export Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Running Inference with an EfficientDet Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • Exporting the EfficientDet Model
        • Sample usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
  • Instance Segmentation
    • Data Input for Instance Segmentation
    • MaskRCNN
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • MaskRCNN Config
        • Data Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the MaskRCNN Model
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
  • Semantic Segmentation
    • UNET
      • Data Input for Semantic Segmentation
      • Creating a Configuration File
        • Model Config
        • Training
      • COCO to UNet Dataset Format Converter
        • Sample Usage of the COCO to UNet Format Dataset Converter Tool
        • Required Arguments
        • Optional Arguments
        • Dataset
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Using Inference on the Model
        • Required Parameters
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the UNet Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample Usage for the Export Subtask
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
    • SegFormer
      • Data Input for SegFormer
      • Creating Training Experiment Spec File
        • Configuration for Custom Dataset
      • train_config
        • sf_optim
      • exp_config
      • model_config
      • dataset_config
        • augmentation_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Creating Testing Experiment Spec File
      • Evaluating the model
        • Required Arguments
        • Optional Argument
      • Running Inference on the Model
        • Required Arguments
        • Optional Argument
      • Exporting the Model
        • Required Arguments
        • Sample Usage
      • TensorRT Engine Generation, Validation, and INT8 Calibration
      • Deploying to DeepStream
  • Gaze Estimation
    • Gaze Estimation
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Specification File
        • Trainer/Evaluator
        • Model
        • Loss
        • Optimizer
        • Dataloader
        • Augmentation
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Run Inference on the Model
        • Required Parameters
        • Sample usage for the inference sub-task
        • Exporting the GazeNet Model
        • Sample usage for the export sub-task
        • Deploying to DeepStream 6.0
  • Emotion Classification
    • Emotion Classification
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Specification File
        • Trainer
        • Model
        • Loss
        • Optimizer
        • Dataloader
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Run Inference on the Model
        • Required Parameters
        • Sample usage for the inference sub-task
        • Exporting the EmotionNet Model
        • Sample usage for the export sub-task
        • Deploying to DeepStream 6.0
  • HeartRate Estimation
    • Heart Rate Estimation
      • Data Input for Heart Rate Estimation
      • Creating a Configuration File to Generate TFRecords
      • Generating TFRecords
        • Required Arguments
        • Sample Usage
      • Creating a Configuration File to Train and Evaluate Heart Rate Network
        • Dataloader
        • Model
        • Loss
        • Optimizer
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Deploying to DeepStream 6.0
  • Facial Landmarks Estimation
    • Facial Landmarks Estimation
      • Facial Landmarks
      • Dataset Preparation
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Spec file
        • Trainer Config
        • Model Config
        • Loss Config
        • Dataloader Config
        • Optimizer Config
        • Complete Sample Experiment Spec File
      • Training the model
        • Sample Usage of the Train tool
      • Evaluating the model
        • Sample Usage of the Evaluate tool
      • Inference of the model
        • Sample Usage of the Inference tool
      • Exporting the model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Sample Usage of the Export tool
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Deploying to DeepStream 6.0
  • Gesture Recognition
    • Gesture Recognition
      • Pre-processing the Dataset
        • Dataset Extraction Config
        • Dataset Experiment Config
        • Sample Usage of the Dataset Converter Tool
        • Required Arguments
        • Sample Usage
      • Creating a Configuration File
        • Trainer Config
        • Model Config
        • Evaluator Config
      • Training the Model
        • Required Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Sample Usage
      • Running Inference on the Model
        • Required Arguments
        • Sample Usage
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Sample Usage of the Export tool
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Deploying to DeepStream 6.0
  • Body Pose Estimation
    • Body Pose Estimation
      • Data Input for BodyPoseNet
      • Dataset Preparation
        • Create a Configuration File for the Dataset Converter
      • Generate Tfrecords and Masks
        • Required Arguments
        • Optional Arguments
      • Create a Train Experiment Configuration File
        • Trainer
        • Model
        • Loss
        • Optimizer
        • Dataloader
        • Augmentation Module
        • Label Processor Module
      • Train the Model
        • Required Arguments
        • Optional Arguments
      • Create an Inference Specification File (for Evaluation and Inference)
      • Run Inference on the Model
        • Required Parameters
        • Optional Parameters
      • Evaluate the Model
        • Required Arguments
        • Optional Arguments
      • Prune the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-train the Pruned Model
      • Export the Model
        • Choose Network Input Resolution for Deployment
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Export the BodyPoseNet Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample usage for the export sub-task
        • Evaluate the exported TRT Model
        • Export the Deployable BodyPoseNet Model
      • Generate TRT Engine using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
        • Example usage for BodyPoseNet
        • Deploying to DeepStream 6.0
  • Multitask Image Classification
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File - Specification File for Multitask Classification
      • Model Config
        • BatchNormalization Parameters
        • Activation functions
      • Training Config
      • Dataset Config
    • Training the model
      • Required Arguments
      • Optional Arguments
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Generating Confusion Matrix
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required arguments
      • Optional arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
    • Re-training the Pruned Model
    • Exporting the model
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
    • TensorRT Engine Generation, Validation, and INT8 Calibration
    • Deploying to DeepStream
  • Character Recognition
    • LPRNet
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • lpr_config
        • training_config
        • eval_config
        • augmentation_config
        • dataset_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Evaluating the model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the LPRNet Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Deploying the Model
        • Using tao-converter
        • Deploying the LPRNet in the DeepStream sample
  • ActionRecognitionNet
    • Preparing the Dataset
    • Creating an Experiment Spec File
      • model_config
      • train_config
        • optim
      • dataset_config
        • augmentation_config
    • Training the Model
      • Required Arguments
      • Optional Arguments
    • Evaluating the model
      • Required Arguments
      • Optional Arguments
    • Running Inference on the Model
      • Required Arguments
      • Optional Arguments
    • Exporting the Model
      • Required Arguments
      • Optional Arguments
    • Deploying the Model
      • Deploying the ActionRecognitionNet in the DeepStream sample
      • Running ActionRecognitionNet Inference on the Stand-Alone Sample
        • Using tao-converter
        • Usage of inference sample
  • Re-Identification
    • ReIdentificationNet
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • model_config
        • train_config
        • dataset_config
        • re_ranking_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Evaluating the model
        • Required Arguments
        • Optional Argument
      • Running Inference on the Model
        • Required Arguments
        • Optional Argument
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Deploying the Model
        • Running ReIdentificationNet Inference on the Triton Sample
  • Pose Classification
    • PoseClassificationNet
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • model_config
        • train_config
        • dataset_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Evaluating the model
        • Required Arguments
        • Optional Argument
      • Running Inference on the Model
        • Required Arguments
        • Optional Argument
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Converting the Pose Data
        • Required Arguments
        • Optional Arguments
      • Deploying the Model
        • Running PoseClassificationNet Inference on the Triton Sample

MLOps Integration

  • TAO Toolkit MLOps Integration
  • TAO Toolkit WandB Integration
    • Quick Start
      • Setting up a Weights & Biases account
      • Acquiring a Weights & Biases API key
      • Install the wandb library
      • Log in to the wandb client in the TAO Toolkit Container
    • Configuring the wandb element in the training spec
    • Visualization output
  • TAO Toolkit ClearML Integration
    • Quick Start
      • Setting up a ClearML Account
      • Acquiring ClearML API Credentials
      • Install the clearml Library
      • Log in to the ClearML Client in the TAO Toolkit Container
    • Configuring the clearml Element in the Training Spec
    • Visualization Output

Conversational AI Applications

  • ASR
    • Speech Recognition
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Training Process Configs
        • Dataset Config
        • Model Configs
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Troubleshooting
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Evaluation Procedure
        • Troubleshooting
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Procedure
        • Troubleshooting
      • Using Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Troubleshooting
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
    • Speech Recognition With CitriNet
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Training Process Configs
        • Dataset Configs
        • Model Configs
      • Subword Tokenization with the Tokenizer
        • Required Arguments
        • Creating a config file for tokenizer
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Troubleshooting
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Evaluation Procedure
        • Troubleshooting
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Procedure
        • Troubleshooting
      • Using Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Troubleshooting
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
    • Speech Recognition With Conformer
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Training Process Configs
        • Dataset Configs
        • Model Configs
      • Subword Tokenization with the Tokenizer
        • Required Arguments
        • Creating a config file for the Tokenizer
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Troubleshooting
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Evaluation Procedure
        • Troubleshooting
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Procedure
        • Troubleshooting
      • Using Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Troubleshooting
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
  • Natural Language Processing
    • Joint Intent and Slot Classification
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
        • Fine-tuning Procedure
      • Model Evaluation
        • Required Arguments for Evaluation
        • Evaluation Procedure
      • Model Inference
        • Required Arguments for Inference
        • Inference Procedure
      • Model Export
        • Required Arguments for Export
      • Model Deployment
    • Punctuation and Capitalization
      • Introduction
      • Downloading Sample Spec Files
        • Download Spec Required Arguments
      • Data Input for Punctuation and Capitalization Model
      • Data Format
      • Pre-processing the Dataset
        • Convert Dataset Required Arguments
        • Convert Dataset Optional Arguments
        • Download and Convert Tatoeba Dataset Required Arguments
        • Optional Arguments
      • Training a Punctuation and Capitalization model
        • Required Arguments for Training
        • Optional Arguments
        • Important Parameters
      • Fine-tuning a Model on a Different Dataset
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Evaluating a Trained Model
        • Required Arguments for Evaluation
        • Optional Arguments
      • Running Inference using a Trained Model
        • Required Arguments for Inference
        • Optional Arguments
      • Model Export
        • Required Arguments for Export
        • Optional Arguments
    • Question Answering
      • Introduction
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
        • Fine-tuning Procedure
      • Model Evaluation
        • Required Arguments for Evaluation
        • Evaluation Procedure
      • Model Inference
        • Required Arguments for Inference
        • Inference Procedure
      • Model Export
        • Required Arguments for Export
      • Model Deployment
    • Text Classification
      • Introduction
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Training Suggestions
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Model Evaluation
        • Required Arguments for Evaluation
      • Model Inference
        • Required Arguments for Inference
      • Model Export
        • Required Arguments for Export
    • Token Classification (Named Entity Recognition)
      • Introduction
      • Downloading Sample Spec Files
        • Download Spec Required Arguments
      • Data Input for Token Classification Model
      • Dataset Conversion
        • Convert Dataset Required Arguments
        • Convert Dataset Optional Arguments
      • Training a Token Classification Model
        • Required Arguments for Training
        • Optional Arguments
        • Important Parameters
      • Fine-tuning a Model on a Different Dataset
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Evaluating a Trained Model
        • Required Arguments for Evaluation
        • Optional Arguments for Evaluation
      • Running Inference using a Trained Model
        • Required Arguments for Inference
        • Optional Arguments
      • Model Export
        • Required Arguments for Export
        • Optional Arguments for Export
  • Language Models
    • N-Gram Language Model
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
        • Fine-tuning Procedure
      • Model Evaluation
        • Required Arguments for Evaluation
        • Evaluation Procedure
      • Model Inference
        • Required Arguments for Inference
        • Inference Procedure
      • Model Export
        • Required Arguments for Export
      • Model Deployment
  • Speech Synthesis (Text-to-Speech)
    • Overview
    • Spectrogram Generator
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Configuring the Trainer
        • Configuring the model
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Current Limitations
      • Running Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Current Limitations
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Pitch Statistics
        • Required Arguments
        • Manifest Creation
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
    • Vocoder
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Configuring the Trainer
        • Configuring the model
        • Configuring the dataset
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Current Limitations
      • Running Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Current Limitations
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Dataset
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File

Point Cloud Applications

  • 3D Object Detection
    • PointPillars
      • Preparing the Dataset
        • Converting The Dataset
      • Creating an Experiment Spec File
        • Class Names
        • Dataset
        • Model Architecture
        • Training Process
        • Evaluation
        • Inference
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Evaluating the model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the PointPillars Model
        • Required Arguments
        • Optional Arguments
      • Pruning and Retraining a PointPillars Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Deploying the Model
        • Using tao-converter

Deploying to Inference SDKs

  • Integrating TAO CV Models with Triton Inference Server
  • Integrating Conversational AI Models into Riva
  • TAO Converter
  • Integrating TAO Models into DeepStream
    • Installation Prerequisites
    • Deployment Files
    • Sample Application
      • Pre-trained models - License Plate Detection (LPDNet) and Recognition (LPRNet)
        • Download the Repository
        • Download the Models
        • Convert the Models to TRT Engine
        • Build and Run
      • Pre-trained models - PeopleNet, TrafficCamNet, DashCamNet, FaceDetect-IR, VehicleMakeNet, VehicleTypeNet, PeopleSegNet, PeopleSemSegNet
        • PeopleNet
        • TrafficCamNet
        • DashCamNet + VehicleMakeNet + VehicleTypeNet
        • FaceDetect-IR
        • PeopleSegNet
        • PeopleSemSegNet
      • Pre-trained models - BodyPoseNet, EmotionNet, FPENet, GazeNet, GestureNet, HeartRateNet
      • General purpose CV model architecture - Classification, Object Detection, and Segmentation
  • Image classification
    • Deploying to DeepStream for Classification TF1/TF2
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a Classification Model
  • Multitask Classification
    • Deploying to DeepStream for Multitask Classification
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the Model with DeepStream
        • Integrating a Multitask Image Classification Model
  • Object Detection
    • Deploying to DeepStream for DetectNet_v2
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for Deformable DETR
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a Deformable DETR Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for DSSD
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model to DeepStream
        • Integrating a DSSD Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for EfficientDet
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating an EfficientDet Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for FasterRCNN
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model to DeepStream
        • Integrating a FasterRCNN Model
    • Deploying to DeepStream for RetinaNet
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a RetinaNet Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for SSD
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model to DeepStream
        • Integrating an SSD Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for YOLOv3
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model to DeepStream
        • Integrating a YOLOv3 Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for YOLOv4
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a YOLOv4 Model
      • Label File
      • DeepStream Configuration File
    • Deploying to DeepStream for YOLOv4_tiny
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a YOLOv4-tiny Model
      • Label File
      • DeepStream Configuration File
  • Instance Segmentation
    • Deploying to DeepStream for MaskRCNN
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a MaskRCNN Model
      • Label File
      • DeepStream Configuration File
  • Semantic Segmentation
    • Deploying to DeepStream for UNet
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Label File
      • Integrating the model with DeepStream
      • DeepStream Configuration File
    • Deploying to DeepStream for SegFormer
      • TensorRT Open Source Software (OSS)
        • TensorRT OSS on x86
        • TensorRT OSS on Jetson (ARM64)
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Label File
      • Integrating the model with DeepStream
      • DeepStream Configuration File

Deploying with TAO Deploy

  • TAO Deploy Overview
    • Running TAO Deploy with the Launcher
  • TAO Deploy Installation
    • Invoking the TAO Deploy Container Directly
    • Installing TAO Deploy through wheel
      • Installing TAO Deploy on Google Colab
      • Installing TAO Deploy on a Jetson Platform
  • Classification (TF1) with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Classification (TF2) with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Export Config
      • Required Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Deformable DETR with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • trt_config
      • Required Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • DetectNet_v2 with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • DSSD with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • EfficientDet (TF1) with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • EfficientDet (TF2) with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Export Config
      • Required Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Faster RCNN with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • LPRNet with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Mask RCNN with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Multitask Image Classification with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • RetinaNet with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • SSD with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • Segformer with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • trt_config
      • Required Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • UNet with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • YOLOv3 with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • YOLOv4 with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
  • YOLOv4-tiny with TAO Deploy
    • Converting .etlt File into TensorRT Engine
      • Required Arguments
      • Optional Arguments
      • INT8 Engine Generation Required Arguments
      • INT8 Engine Generation Optional Arguments
      • Sample Usage
    • Running Evaluation through TensorRT Engine
      • Required Arguments
      • Sample Usage
    • Running Inference through TensorRT Engine
      • Required Arguments
      • Sample Usage
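
Every "<network> with TAO Deploy" entry above follows the same three-step pattern: generate a TensorRT engine from the exported .etlt file, then evaluate and run inference through that engine. The sketch below illustrates the pattern using DetectNet_v2; the exact flag set differs per network (see each "Required Arguments" subsection), and the flags, paths, and $KEY shown here are illustrative assumptions, not a definitive invocation.

    # Hedged sketch of the common TAO Deploy pattern (DetectNet_v2 shown).
    # Flags, paths, and $KEY are illustrative assumptions; consult the
    # "Required Arguments" subsection of each network for the exact set.

    # 1. Build a TensorRT engine from the exported .etlt model
    tao detectnet_v2 gen_trt_engine \
        -m /workspace/models/model.etlt \
        -k $KEY \
        --engine_file /workspace/engines/model.engine

    # 2. Evaluate, then run inference, through the generated engine
    tao detectnet_v2 evaluate -e /workspace/specs/spec.txt \
        -m /workspace/engines/model.engine -r /workspace/results
    tao detectnet_v2 inference -e /workspace/specs/spec.txt \
        -m /workspace/engines/model.engine -r /workspace/results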

Bring Your Own Model (BYOM)

  • BYOM Converter
    • Installing the TAO BYOM Converter for TF1 Classification and UNet
      • Software Requirements
      • Installing through pip
    • Installing the TAO BYOM Converter for TF2 Classification
      • Software Requirements
      • Installing through pip
    • Preparing the ONNX Model
    • Running the TAO BYOM Converter
      • Required Arguments
      • Optional Arguments
    • Examples of Converting Open-Source Models through TAO BYOM
    • Supported ONNX nodes in TAO BYOM
  • BYOM Image Classification
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File for Classification
    • Training the model
      • Required Arguments
      • Optional Arguments
      • Input Requirement
      • Sample Usage
      • Model Parallelism
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required Arguments
      • Optional Arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
      • Using the Prune Command
    • Re-training the Pruned Model
    • Exporting the Model
      • INT8 Mode Overview
      • FP16/FP32 Model
      • Exporting the BYOM Model
        • Required Arguments
        • Optional Arguments
      • INT8 Export Mode Required Arguments
      • INT8 Export Optional Arguments
      • Exporting a Model
    • Deploying to DeepStream
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Integrating the model with DeepStream
        • Integrating a Classification Model
  • BYOM UNET
    • Data Input for Semantic Segmentation
    • Creating a Configuration File
      • Model Config
    • Training the Model
      • Required Arguments
      • Optional Arguments
      • Input Requirement
      • Sample Usage
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
      • Using the Prune Command
    • Re-training the Pruned Model
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
      • Sample Usage
    • Running Inference on the Model
      • Required Parameters
    • Exporting the Model
      • INT8 Mode Overview
      • FP16/FP32 Model
      • Exporting the BYOM UNet Model
        • Required Arguments
        • Optional Arguments
      • INT8 Export Mode Required Arguments
      • INT8 Export Optional Arguments
      • Sample Usage for the Export Subtask
    • Deploying to DeepStream
      • Generating an Engine Using tao-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tao-converter
      • Label File
      • DeepStream Configuration File
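
The BYOM workflow above starts from a plain ONNX model and converts it into a TAO-consumable model before the usual train/prune/retrain/export cycle. A minimal sketch of the converter invocation follows; the argument letters mirror the "Required Arguments" subsection, while the file names, output model name, and $KEY are placeholders.

    # Hedged sketch: convert an open-source ONNX model for use with TAO.
    # The ONNX file, results directory, model name, and $KEY are placeholders.
    tao_byom -m resnet18.onnx \
             -r /workspace/byom_results \
             -n resnet18_byom \
             -k $KEY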

More Information

  • Release Notes
    • Version list
    • TAO Toolkit 4.0.0
      • Key Features
      • Compute Stack
        • TF 1.15.5 Container
        • TF 2.9.1 Container
        • PyTorch Container
        • Deploy Container
      • Model Updates
        • Computer Vision
        • Conversational AI
      • Known Issues/Limitations
    • TAO Toolkit 3.0-22.05
      • Key Features
      • Compute Stack
        • TF 1.15.4 Container
        • TF 1.15.5 Container
        • PyTorch Container
        • Language Model Container
      • Model Updates
        • Computer Vision
        • Conversational AI
      • Pre-trained Models
      • Known Issues/Limitations
    • TAO Toolkit 3.0-22.02
      • Key Features
      • Known Issues/Limitations
    • TAO Toolkit 3.0-21.11
      • Key Features
      • Known Issues/Limitations
      • Resolved Issues
      • Deprecated Features
      • Release Contents
    • TAO Toolkit 3.0-21.08
      • Key Features
      • Known Issues/Limitations
  • Frequently Asked Questions
    • TLT to TAO Toolkit rename
    • Model support
    • Pruning
    • Model Export and Deployment
    • Training
  • Troubleshooting Guide
    • NGC
    • TAO Launcher
    • MaskRCNN
    • DetectNet_v2
    • Natural Language Processing
  • Support Information
  • Acknowledgements
    • nitime
    • OpenSSL
    • JsonCpp
    • Python
    • libcurl
    • OpenCV
    • zlib
    • TensorFlow
    • Keras
    • PyTorch
    • ssd_keras
    • Yamale
    • PyCUDA
    • protobuf
    • onnx
    • PIL
    • PyYAML
    • addict
    • argcomplete
    • boto3
    • cryptography
    • docker
    • dockerpty
    • gRPC
    • h5py
    • jupyter
    • numba
    • numpy
    • pandas
    • posix_ipc
    • prettytable
    • arrow
    • PyJWT
    • requests
    • retrying
    • seaborn
    • scikit-image
    • scikit-learn
    • semver
    • Shapely
    • simplejson
    • six
    • python-tabulate
    • toposort
    • tqdm
    • uplink
    • xmltodict
    • recordclass
    • cocoapi
    • mpi4py
    • Open MPI
    • lazy_object_proxy
    • onnxruntime
    • pytorch-lightning
    • KenLM
    • Eigen
    • google/automl
    • open-mmlab/OpenPCDet
    • VainF/Torch-Pruning
    • gmalivenko/onnx2keras
    • open-mmlab/mmskeleton

Character Recognition

  • LPRNet
    • Preparing the Dataset
    • Creating an Experiment Spec File
    • Training the Model
    • Evaluating the Model
    • Running Inference on the LPRNet Model
    • Exporting the Model
    • Deploying the Model
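
Taken together, the pages above cover the standard TAO task flow for LPRNet. A hedged sketch of that flow via the TAO Launcher is shown below; the spec file, model paths, image directory, and $KEY are placeholders rather than required values.

    # Hedged sketch of the LPRNet task flow via the TAO Launcher.
    # Spec file, model paths, image directory, and $KEY are placeholders.
    tao lprnet train     -e specs/lprnet_spec.txt -r results -k $KEY
    tao lprnet evaluate  -e specs/lprnet_spec.txt -m results/weights/lprnet.tlt -k $KEY
    tao lprnet inference -e specs/lprnet_spec.txt -m results/weights/lprnet.tlt \
                         -i test_images -k $KEY
    tao lprnet export    -e specs/lprnet_spec.txt -m results/weights/lprnet.tlt -k $KEY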