Transfer Learning Toolkit 3.0

Introduction

  • Overview
    • Pre-trained Models
    • TLT Computer Vision Workflow Overview
    • TLT Conversational AI Workflow Overview
  • TLT Quick Start Guide
    • Requirements
      • Hardware Requirements
      • Software Requirements
      • Installing the Pre-requisites
      • Installing TLT
      • Running the Transfer Learning Toolkit
        • Use the examples
      • Downloading the Models
        • Configure the NGC API key
        • Get a list of models
        • Download a model
    • Training with Jupyter Notebook
      • Install the Python Virtual Environment
      • Download Jupyter Notebook
      • Open Model Architecture
      • Start Jupyter Notebook
      • Train the Model
  • TLT Launcher
    • Running the launcher
    • Handling launched processes
    • Useful Environment variables
  • Migrating to TLT 3.0

CV Model Zoo

  • PeopleNet
    • Training algorithm
    • Intended use case
  • TrafficCamNet
    • Training algorithm
    • Intended Use Case
  • DashCamNet
    • Training algorithm
    • Intended use case
  • LPDNet
    • Training algorithm
    • Intended use case
  • LPRNet
    • Training algorithm
    • Intended use case
  • VehicleTypeNet
    • Training Algorithm
    • Intended Use
  • VehicleMakeNet
    • Training Algorithm
    • Intended Use
  • PeopleSegNet
    • Training Algorithm
    • Intended Use
  • PeopleSemSegNet
    • Training Algorithm
    • Intended Use
  • FaceDetect-IR
    • Training algorithm
    • Intended use case
  • FaceDetect
    • Training algorithm
    • Intended use case
  • Gaze Estimation
    • Training algorithm
    • Intended use case
  • Emotion Classification
    • Training algorithm
    • Intended use case
  • HeartRate Estimation
    • Training algorithm
    • Intended use case
  • Facial Landmarks Estimation
    • Model Architecture
    • Training Algorithm
    • References
    • Intended Use
  • Gesture Recognition
    • Model Overview
    • Model Architecture
    • Training Algorithm
    • Reference
    • Intended Use
  • Body Pose Estimation
    • Model Architecture
    • Training algorithm
    • Reference
    • Intended use case
  • Open Images
    • Overview
      • Training
      • Deployment
    • Open Images Pre-trained Image Classification
      • Supported Backbones
    • Open Images Pre-trained Object Detection
      • Supported Backbones
    • Open Images Pre-trained DetectNet_v2
      • Supported Backbones
    • Open Images Pre-trained Instance Segmentation
      • Supported Backbones
    • Open Images Pre-trained Semantic Segmentation
      • Supported Backbones

Conv AI Model Zoo

  • Conversational AI Model Zoo

Running TLT in the Cloud

  • Running TLT in the Cloud
  • Running TLT on an AWS VM
    • Pre-Requisites
    • Setting up an AWS EC2 instance
    • Installing the Pre-Requisites for TLT in the VM
    • Download and run the test samples
  • Running TLT on Google Cloud Platform
    • Setting up a Linux VM Instance
    • Using the VM
    • Setting up the VM and Enabling GPUs
    • Installing the Pre-requisites for TLT
    • Downloading and Running Test Samples

CV Applications

  • Offline Data Augmentation
    • Object Detection
      • Configuring the Augmentor
        • Spatial Augmentation Config
        • Color Augmentation Config
        • Dataloader
        • Blur Config
      • Running the Augmentor Tool
  • Optimizing the Training Pipeline
    • Quantization Aware Training
    • Automatic Mixed Precision
  • Data Annotation Format
    • Image Classification Format
    • Object Detection – KITTI Format
      • Label Files
      • Sequence Mapping File
    • Instance Segmentation – COCO format
    • Semantic Segmentation – UNet Format
      • Structured Images and Masks Folders
      • Image and Mask Text files
    • Gesture Recognition – Custom Format
      • Label Format
    • Heart Rate Estimation – Custom Format
    • EmotionNet, FPENET, GazeNet – JSON Label Data Format
    • BodyposeNet – COCO Format
      • Label Files
  • Image Classification
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File - Specification File for Classification
      • Model Config
        • BatchNormalization Parameters
        • Activation functions
      • Eval Config
      • Training Config
        • Learning Rate Scheduler
    • Training the model
      • Required Arguments
      • Optional Arguments
      • Input Requirement
      • Sample Usage
      • Model parallelism
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required arguments
      • Optional arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
      • Using the Prune Command
    • Re-training the Pruned Model
    • Exporting the model
      • INT8 Mode Overview
        • Required Arguments
        • Optional Arguments
      • FP16/FP32 Model
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • INT8 Export Mode Required Arguments
      • INT8 Export Optional Arguments
      • Exporting a Model
    • Deploying to DeepStream
      • Generating an Engine Using tlt-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tlt-converter
      • Integrating the model to DeepStream
        • Integrating a Classification Model
  • Object Detection
    • DetectNet_v2
      • Data Input for Object Detection
      • Pre-processing the Dataset
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating a Configuration File
        • Model Config
        • BBox Ground Truth Generator
        • Post-Processor
        • Cost Function
        • Trainer
        • Augmentation Module
        • Configuring the Evaluator
        • Dataloader
        • Specification File for Inference
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Using Inference on the Model
        • Required Parameters
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Generating an INT8 tensorfile Using the calibration_tensorfile Command
        • Exporting the DetectNet_v2 Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample usage for the export sub-task
        • Generating a Template DeepStream Config File
      • Deploying to DeepStream
        • Generating an Engine Using tlt-converter
        • Label File
        • DeepStream Configuration File
    • FasterRCNN
      • Preparing the Input Data Structure
        • Required Arguments
        • Optional Arguments
      • Creating an experiment spec file - Specification file for FasterRCNN
        • Dataset
        • Data augmentation
        • Model architecture
        • Training configurations
        • Inference configurations
        • Evaluation configurations
      • Training the model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
        • Using a Pretrained Model
        • Re-training a pruned model
        • Resuming an interrupted training
        • Input shape: static and dynamic
        • Model parallelism
      • Evaluating the model
        • Required Arguments
        • Optional Arguments
        • Evaluation Metrics
        • Two Modes for Evaluation
      • Running inference on the model
        • Required Arguments
        • Optional Arguments
        • Two Modes for Inference
      • Pruning the model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Retraining the pruned model
      • Exporting the model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Exporting a Model
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model to DeepStream
    • YOLOv3
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • YOLOv3 Config
      • Generate the Anchor Shape
        • Required Arguments
        • Optional Arguments
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on a YOLOv3 Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample Usage
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model to DeepStream
        • Label File
        • DeepStream Configuration File
    • YOLOv4
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • YOLOv4 Config
      • Generate anchor shape
        • Required Arguments
        • Optional Arguments
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on a YOLOv4 Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample usage
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model with DeepStream
        • Label File
        • DeepStream Configuration File
    • SSD
      • Data Input for Object Detection
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • SSD config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting command
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Exporting a Model
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model to DeepStream
        • Label File
        • DeepStream Configuration File
    • DSSD
      • Data Input for Object Detection
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • DSSD Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting command
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Exporting a Model
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model to DeepStream
        • Label File
        • DeepStream Configuration File
    • RetinaNet
      • Creating a Configuration File
        • Training Config
        • Evaluation Config
        • NMS Config
        • Augmentation Config
        • Dataset Config
        • RetinaNet Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Running Inference on a RetinaNet Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the RetinaNet Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample usage
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model to DeepStream
        • Label File
        • DeepStream Configuration File
  • Instance Segmentation
    • Data Input for Instance Segmentation
    • MaskRCNN
      • Creating a Configuration File
        • MaskRCNN Config
        • Data Config
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
      • Re-training the Pruned Model
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the MaskRCNN Model
      • Deploying to DeepStream
        • TensorRT Open Source Software (OSS)
        • Generating an Engine Using tlt-converter
        • Integrating the model with DeepStream
        • Label File
        • DeepStream Configuration File
  • Semantic Segmentation
    • UNET
      • Data Input for Semantic Segmentation
      • Creating a Configuration File
        • Model Config
        • Training
        • Dataset
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Input Requirement
        • Sample Usage
      • Pruning the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-training the Pruned Model
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Using Inference on the Model
        • Required Parameters
      • Exporting the Model
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Exporting the UNet Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample Usage for the Export Subtask
      • Deploying to DeepStream
        • Generating an Engine Using tlt-converter
        • Label File
        • DeepStream Configuration File
  • Gaze Estimation
    • Gaze Estimation
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Specification File
        • Trainer/Evaluator
        • Model
        • Loss
        • Optimizer
        • Dataloader
        • Augmentation
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Run Inference on the Model
        • Required Parameters
        • Sample usage for the inference sub-task
        • Exporting the GazeNet Model
        • Sample usage for the export sub-task
      • Deploying to the TLT CV Inference Pipeline
  • Emotion Classification
    • Emotion Classification
      • Pre-processing the Dataset
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Specification File
        • Trainer
        • Model
        • Loss
        • Optimizer
        • Dataloader
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Run Inference on the Model
        • Required Parameters
        • Sample usage for the inference sub-task
        • Exporting the EmotionNet Model
        • Sample usage for the export sub-task
      • Deploying to the TLT CV Inference Pipeline
  • HeartRate Estimation
    • Heart Rate Estimation
      • Data Input for Heart Rate Estimation
      • Creating a Configuration File to Generate TFRecords
      • Generating TFRecords
        • Required Arguments
        • Sample Usage
      • Creating a Configuration File to Train and Evaluate Heart Rate Network
        • Dataloader
        • Model
        • Loss
        • Optimizer
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
  • Facial Landmarks Estimation
    • Facial Landmarks Estimation
      • Facial Landmarks
      • Dataset Preparation
        • Configuration File for Dataset Converter
        • Sample Usage of the Dataset Converter Tool
      • Creating an Experiment Spec file
        • Trainer Config
        • Model Config
        • Loss Config
        • Dataloader Config
        • Optimizer Config
        • Complete Sample Experiment Spec File
      • Training the model
        • Sample Usage of the Train tool
      • Evaluating the model
        • Sample Usage of the Evaluate tool
      • Inference of the model
        • Sample Usage of the Inference tool
      • Exporting the model
        • Sample Usage of the Export tool
      • Deploying to the TLT CV Inference Pipeline
  • Gesture Recognition
    • Gesture Recognition
      • Pre-processing the Dataset
        • Dataset Extraction Config
        • Dataset Experiment Config
        • Sample Usage of the Dataset Converter Tool
        • Required Arguments
        • Sample Usage
      • Creating a Configuration File
        • Trainer Config
        • Model Config
        • Evaluator Config
      • Training the Model
        • Required Arguments
        • Sample Usage
      • Evaluating the Model
        • Required Arguments
        • Sample Usage
      • Running Inference on the Model
        • Required Arguments
        • Sample Usage
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
        • Sample Usage
      • Deploying to the TLT CV Inference Pipeline
  • Body Pose Estimation
    • Body Pose Estimation
      • Data Input for BodyPoseNet
      • Dataset Preparation
        • Create a Configuration File for the Dataset Converter
      • Generate TFRecords and Masks
        • Required Arguments
        • Optional Arguments
      • Create a Train Experiment Configuration File
        • Trainer
        • Model
        • Loss
        • Optimizer
        • Dataloader
        • Augmentation Module
        • Label Processor Module
      • Train the Model
        • Required Arguments
        • Optional Arguments
      • Create an Inference Specification File (for Evaluation and Inference)
      • Run Inference on the Model
        • Required Parameters
        • Optional Parameters
      • Evaluate the Model
        • Required Arguments
        • Optional Arguments
      • Prune the Model
        • Required Arguments
        • Optional Arguments
        • Using the Prune Command
      • Re-train the Pruned Model
      • Export the Model
        • Choose Network Input Resolution for Deployment
        • INT8 Mode Overview
        • FP16/FP32 Model
        • Export the BodyPoseNet Model
        • INT8 Export Mode Required Arguments
        • INT8 Export Optional Arguments
        • Sample usage for the export sub-task
        • Evaluate the exported TRT Model
        • Export the Deployable BodyPoseNet Model
      • Generate TRT Engine using tlt-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tlt-converter
        • Example usage for BodyPoseNet
      • Deploying to the TLT CV Inference Pipeline
  • Multitask Image Classification
    • Preparing the Input Data Structure
    • Creating an Experiment Spec File - Specification File for Multitask Classification
      • Model Config
        • BatchNormalization Parameters
        • Activation functions
      • Training Config
      • Dataset Config
    • Training the model
      • Required Arguments
      • Optional Arguments
    • Evaluating the Model
      • Required Arguments
      • Optional Arguments
    • Generating Confusion Matrix
      • Required Arguments
      • Optional Arguments
    • Running Inference on a Model
      • Required arguments
      • Optional arguments
    • Pruning the Model
      • Required Arguments
      • Optional Arguments
    • Re-training the Pruned Model
    • Exporting the model
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • INT8 Export Mode Required Arguments
      • INT8 Export Optional Arguments
    • Deploying to DeepStream
      • Generating an Engine Using tlt-converter
        • Instructions for x86
        • Instructions for Jetson
        • Using the tlt-converter
      • Integrating the Model to DeepStream
        • Integrating a Multitask Image Classification Model
  • Character Recognition
    • LPRNet
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • lpr_config
        • training_config
        • eval_config
        • augmentation_config
        • dataset_config
      • Training the Model
        • Required Arguments
        • Optional Arguments
      • Evaluating the model
        • Required Arguments
        • Optional Arguments
      • Running Inference on the LPRNet Model
        • Required Arguments
        • Optional Arguments
      • Exporting the Model
        • Required Arguments
        • Optional Arguments
      • Deploying the Model
        • Using tlt-converter
        • Deploying the LPRNet in the DeepStream sample

Conversational AI Applications

  • ASR
    • Speech Recognition
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Training Process Configs
        • Dataset Configs
        • Model Configs
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Troubleshooting
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Evaluation Procedure
        • Troubleshooting
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Procedure
        • Troubleshooting
      • Using Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Troubleshooting
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
    • Speech Recognition With CitriNet
      • Downloading Sample Spec Files
        • Required Arguments
      • Preparing the Dataset
      • Creating an Experiment Spec File
        • Training Process Configs
        • Dataset Configs
        • Model Configs
      • Subword Tokenization with the Tokenizer
        • Required Arguments
        • Creating a config file for tokenizer
      • Training the Model
        • Required Arguments
        • Optional Arguments
        • Training Procedure
        • Troubleshooting
      • Evaluating the Model
        • Required Arguments
        • Optional Arguments
        • Evaluation Procedure
        • Troubleshooting
      • Fine-Tuning the Model
        • Required Arguments
        • Optional Arguments
        • Fine-Tuning Procedure
        • Troubleshooting
      • Using Inference on a Model
        • Required Arguments
        • Optional Arguments
        • Inference Procedure
        • Troubleshooting
      • Model Export
        • Required Arguments
        • Optional Arguments
        • Export Spec File
  • Natural Language Processing
    • Joint Intent and Slot Classification
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
        • Fine-tuning Procedure
      • Model Evaluation
        • Required Arguments for Evaluation
        • Evaluation Procedure
      • Model Inference
        • Required Arguments for Inference
        • Inference Procedure
      • Model Export
        • Required Arguments for Export
      • Model Deployment
    • Punctuation and Capitalization
      • Introduction
      • Downloading Sample Spec Files
        • Download Spec Required Arguments
      • Data Input for Punctuation and Capitalization Model
      • Data Format
      • Pre-processing the Dataset
        • Convert Dataset Required Arguments
        • Convert Dataset Optional Arguments
        • Download and Convert Tatoeba Dataset Required Arguments
        • Optional Arguments
      • Training a Punctuation and Capitalization model
        • Required Arguments for Training
        • Optional Arguments
        • Important Parameters
      • Fine-tuning a Model on a Different Dataset
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Evaluating a Trained Model
        • Required Arguments for Evaluation
        • Optional Arguments
      • Running Inference using a Trained Model
        • Required Arguments for Inference
        • Optional Arguments
      • Model Export
        • Required Arguments for Export
        • Optional Arguments
    • Question Answering
      • Introduction
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
        • Fine-tuning Procedure
      • Model Evaluation
        • Required Arguments for Evaluation
        • Evaluation Procedure
      • Model Inference
        • Required Arguments for Inference
        • Inference Procedure
      • Model Export
        • Required Arguments for Export
      • Model Deployment
    • Text Classification
      • Introduction
      • Downloading Sample Spec files
      • Data Format
      • Dataset Conversion
      • Model Training
        • Required Arguments for Training
        • Optional Arguments
        • Training Procedure
      • Training Suggestions
      • Model Fine-tuning
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Model Evaluation
        • Required Arguments for Evaluation
      • Model Inference
        • Required Arguments for Inference
      • Model Export
        • Required Arguments for Export
    • Token Classification (Named Entity Recognition)
      • Introduction
      • Downloading Sample Spec Files
        • Download Spec Required Arguments
      • Data Input for Token Classification Model
      • Dataset Conversion
        • Convert Dataset Required Arguments
        • Convert Dataset Optional Arguments
      • Training a Token Classification Model
        • Required Arguments for Training
        • Optional Arguments
        • Important Parameters
      • Fine-tuning a Model on a Different Dataset
        • Required Arguments for Fine-tuning
        • Optional Arguments
      • Evaluating a Trained Model
        • Required Arguments for Evaluation
        • Optional Arguments for Evaluation
      • Running Inference using a Trained Model
        • Required Arguments for Inference
        • Optional Arguments
      • Model Export
        • Required Arguments for Export
        • Optional Arguments for Export

Deploying to Inference SDKs

  • Integrating TLT CV Models with the Inference Pipeline
    • Overview
    • Requirements and Installation
      • Hardware Requirements
        • Minimum
        • Recommended
      • Software Requirements
      • Installation Prerequisites
      • Installation
        • Configure the NGC API key
        • Download the TLT CV Inference Pipeline Quick Start
    • TLT CV Inference Pipeline Quick Start Scripts
      • Configuration
      • Initialization
      • Launching the Server and Client Containers
      • Stopping
      • Cleaning
      • Deploying TLT Models
        • Body Pose Estimation
        • Emotion
        • Face Detect (Pruned and Quantized)
        • Face Detect (Pruned)
        • Facial Landmarks
        • Gaze
        • Gesture
        • Heart Rate
    • Running and Building Sample Applications
      • TLT CV Inference Pipelines
      • Running the Body Pose Estimation Sample
        • Body Pose Configuration
        • Body Pose API Usage
      • Running the Emotion Classification Sample
        • Emotion API Usage
      • Running the Face Detection Sample
        • Face Detection API Usage
      • Running the Facial Landmarks Estimation Sample
        • Facial Landmarks API Usage
      • Running the Gaze Estimation Sample
        • Gaze Estimation API Usage
      • Running the Gesture Classification Sample
        • Gesture Classification API Usage
      • Running the Heart Rate Estimation Sample
        • Heart Rate Estimation API Usage
      • Building the Sample Applications
  • Integrating TLT CV Models with Triton Inference Server
  • TensorRT
    • TensorRT Open Source Software
    • Installing the TLT-Converter
      • Installing on an x86 platform
      • Installing on a Jetson platform
    • Running the TLT converter
      • Using the tlt-converter
        • Required Arguments
        • Optional Arguments
        • INT8 Mode Arguments
  • Integrating Conversational AI Models into Jarvis
  • Integrating TLT Models into DeepStream
    • Installation Prerequisites
    • Deployment Files
    • Sample Application
      • Pre-trained models - License Plate Detection (LPDNet) and Recognition (LPRNet)
        • Download the Repository
        • Download the Models
        • Convert the Models to TRT Engine
        • Build and Run
      • Pre-trained models - PeopleNet, TrafficCamNet, DashCamNet, FaceDetectIR, VehicleMakeNet, VehicleTypeNet, PeopleSegNet, PeopleSemSegNet
        • PeopleNet
        • TrafficCamNet
        • DashCamNet + VehicleMakeNet + VehicleTypeNet
        • FaceDetectIR
        • PeopleSegNet
        • PeopleSemSegNet
      • General purpose CV model architecture - Classification, Object detection and Segmentation

More Information

  • Release Notes
    • Transfer Learning Toolkit V3.0
      • Key Features
      • Contents
      • Software Requirements
      • Hardware Requirements
      • Known Issues
      • Resolved Issues
  • Frequently Asked Questions
    • Model support
    • Pruning
    • Model Export and Deployment
    • Training
  • Troubleshooting Guide
    • NGC
    • TLT Launcher
    • MaskRCNN
    • DetectNet_v2
  • Support Information
  • Acknowledgements
    • nitime
    • OpenSSL
    • JsonCpp
    • Python
    • libcurl
    • OpenCV
    • zlib
    • TensorFlow
    • Keras
    • PyTorch
    • ssd_keras
    • Yamale
    • PyCUDA
    • protobuf
    • onnx
    • PIL
    • PyYAML
    • addict
    • argcomplete
    • boto3
    • cryptography
    • docker
    • dockerpty
    • gRPC
    • h5py
    • jupyter
    • numba
    • numpy
    • pandas
    • posix_ipc
    • prettytable
    • arrow
    • PyJWT
    • requests
    • retrying
    • seaborn
    • scikit-image
    • scikit-learn
    • semver
    • Shapely
    • simplejson
    • six
    • python-tabulate
    • toposort
    • tqdm
    • uplink
    • xmltodict
    • recordclass
    • cocoapi
    • mpi4py
    • Open MPI
    • lazy_object_proxy
    • onnxruntime
    • pytorch-lightning

Instance Segmentation

  • Data Input for Instance Segmentation
  • MaskRCNN
    • Creating a Configuration File
    • Training the Model
    • Evaluating the Model
    • Pruning the Model
    • Re-training the Pruned Model
    • Running Inference on the Model
    • Exporting the Model
    • Deploying to DeepStream
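
The entries above outline the end-to-end MaskRCNN workflow: configure, train, evaluate, prune, re-train, run inference, export, and deploy. As a rough orientation only, the sketch below shows how those steps map onto TLT launcher invocations. The mask_rcnn sub-commands exist in TLT 3.0, but the spec paths, model filenames, encryption key, and exact flags shown here are illustrative assumptions; the authoritative arguments for each sub-task are documented in the MaskRCNN sections listed above.

    # Illustrative MaskRCNN workflow with the TLT launcher.
    # Paths, filenames, $KEY, and threshold values are placeholders;
    # verify flag names against the MaskRCNN sub-task documentation.
    tlt mask_rcnn train     -e /workspace/specs/maskrcnn_train.txt -d /workspace/results -k $KEY --gpus 1
    tlt mask_rcnn evaluate  -e /workspace/specs/maskrcnn_train.txt -m /workspace/results/model.step-25000.tlt -k $KEY
    tlt mask_rcnn prune     -m /workspace/results/model.step-25000.tlt -o /workspace/results/model_pruned.tlt -k $KEY -pth 0.5
    # Re-train by pointing the training spec at the pruned model, then run inference and export:
    tlt mask_rcnn inference -i /workspace/test_images -o /workspace/annotated -e /workspace/specs/maskrcnn_train.txt -m /workspace/results_retrain/model.step-25000.tlt -k $KEY
    tlt mask_rcnn export    -m /workspace/results_retrain/model.step-25000.tlt -e /workspace/specs/maskrcnn_train.txt -o /workspace/export/model.etlt -k $KEY

The exported .etlt file is what the DeepStream integration step consumes, optionally after conversion to a TensorRT engine with tlt-converter as described in the "Deploying to DeepStream" section.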
