Frequently Asked Questions
This document answers questions and issues that you might encounter when using TAO Toolkit.
What is happening?
NVIDIA Transfer Learning Toolkit is being renamed to NVIDIA TAO Toolkit.
How do I get the latest TAO toolkit?
To get the latest TAO Toolkit, install the nvidia-tao package from the NVIDIA PyIndex. To install, run the following commands:
pip3 uninstall nvidia-tlt  # Only required if you previously installed nvidia-tlt in your virtual environment
pip3 install nvidia-tao
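After installation, you can verify that the launcher is working and list the tasks it supports:
tao info --verbose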
Are the model and file extension naming conventions changing?
No, all models generated by TAO Toolkit are still .tlt files. For deployment to TensorRT-based inference SDKs, the models are still called .etlt and .riva files for Computer Vision and Conversational AI models, respectively.
What are purpose-built models? Can I deploy them in production?
Purpose-built models are highly accurate models trained for applications in smart cities, retail, healthcare, and other domains. These are production-quality models, trained on very large proprietary datasets for best accuracy and performance.
Are all the models free to use and distribute?
Yes, all models are free to use and distribute. For the exact terms for purpose-built models, please read the models EULA.
Do I need to re-train the purpose-built models or can I deploy them as is from NGC?
Purpose-built models can be deployed as-is using the “pruned” version from the model card, but they can also be re-trained to better adapt to your environment. For re-training, use the “unpruned” version from the model card.
Instead of NVIDIA provided pre-trained models, can I use TAO Toolkit with my own or any open source pre-trained models?
No, third-party pre-trained models are not supported by TAO Toolkit. Only NVIDIA pre-trained models from NGC are currently supported, and these can be retrained with your custom data.
Is YOLOv3 supported in TAO Toolkit?
Yes, YOLOv3 is supported in TAO Toolkit.
How do I determine the pruning threshold for my model?
The threshold is set to 0.1 by default. Each threshold value results in a different portion of the weights being pruned, which is reported at the end of the pruning process. A common practice is to prune with increasing threshold values, starting from 0.1 or 0.05. A larger threshold prunes more weights/channels, making it harder to restore accuracy or mAP.
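As a sketch of that practice (the -pth flag name is an assumption based on the TAO prune CLI; confirm with detectnet_v2 prune --help), a threshold sweep might look like the following:
# Prune at increasing thresholds; compare the pruning ratio reported at the end of each run
for PTH in 0.05 0.1 0.2; do
  detectnet_v2 prune -m /workspace/results/model.tlt \
                     -o /workspace/results/model_pruned_${PTH}.tlt \
                     -k $KEY \
                     -pth ${PTH}
done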
Is pruning performed automatically or are there hyperparameters that I need to set to prune my model?
There are multiple parameters for pruning:
- normalizer: the method used to normalize weights; the default is max.
- equalization_criterion: the method used to merge weights from different branches of element-wise or depth-wise layers; the default is union.
- pruning_granularity: the granularity at which channels are pruned.
- min_num_filters: the minimum number of channels that pruning must retain.
- excluded_layers: layers to exclude from pruning.
- pruning_threshold: the most important option. It sets the threshold for pruning and is used together with the normalizer. This threshold is common to all layers.
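For reference, these options map to flags on the prune command. The following is a minimal sketch with flag spellings assumed from the TAO CLI; verify them against the --help output of your task:
# Assumed flags: -pth (pruning_threshold), -n (normalizer), -eq (equalization_criterion),
# -nf (min_num_filters), -el (excluded_layers)
detectnet_v2 prune -m /workspace/results/model.tlt \
                   -o /workspace/results/model_pruned.tlt \
                   -k $KEY \
                   -pth 0.1 -n max -eq union -nf 16 -el conv1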
What is the model output format? How can I use this model for deployment?
TAO Toolkit can generate two output formats: .etlt files and TensorRT engine files. The .etlt files can be used for DeepStream deployment; see usage examples at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps. TensorRT engine files can also be used with DeepStream, or deployed separately with TensorRT. See the Deployment with DeepStream chapter to learn about the different deployment options.
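For example, the following sketch exports a .etlt file and generates a TensorRT engine in the same step (the --data_type and --engine_file flags are assumptions; confirm with detectnet_v2 export --help):
# Export the encrypted .etlt model and also build a TensorRT engine for the local GPU
detectnet_v2 export -m /workspace/results/model.tlt \
                    -k $KEY \
                    -o /workspace/results/model.etlt \
                    --data_type fp16 \
                    --engine_file /workspace/results/model.engine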
What is the model export key and why is it required?
The model export key is used to encrypt the trained Keras/UFF model files to .tlt/.etlt format to protect your proprietary IP. The same export key is used to decrypt the .etlt model in DeepStream applications.
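For instance, when deploying with DeepStream’s nvinfer plugin, the encrypted model and its key are supplied together in the inference configuration. An illustrative excerpt (see the DeepStream documentation for the full set of properties):
[property]
tlt-encoded-model=/path/to/workspace/results/model.etlt
tlt-model-key=<your model export key>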
How do I deploy models trained with TAO to DeepStream?
Please see the TAO Quick Start Guide and https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.
Will this model only work with DeepStream? Can I deploy the model without DeepStream?
Deployment to DeepStream is the recommended path for TAO models. Note that the models can also be deployed outside of DeepStream using TensorRT, but users will need to handle image pre-processing and post-processing of the output tensors after inference themselves.
Is it possible to export a custom trained .tlt (or .etlt) model to a conventional TensorFlow (TF) frozen inference graph (.pb) to make inferences with traditional TF tools?
No, this is currently not supported.
Is there a dependency of batch size on the accuracy of the model? How should I choose the appropriate batch size for my training?
As a common practice, a small batch size or a single GPU is preferred for a small dataset, while a large batch size or multiple GPUs are preferred for a large dataset.
I am seeing lower accuracy with multi-GPU vs. single GPU. Can multi-GPU training affect the accuracy of the model? How do I improve the accuracy in multi-GPU training?
To improve accuracy in a multi-GPU environment, the learning-rate parameters need to be increased, for example max_learning_rate. Multi-GPU training is preferred only when the training dataset is large.
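For DetectNet_v2, for example, these parameters live under learning_rate in the training_config section of the spec file. An illustrative excerpt with placeholder values:
training_config {
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-6
      max_learning_rate: 5e-4   # consider raising this when scaling to more GPUs
      soft_start: 0.1
      annealing: 0.7
    }
  }
}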
How do I balance the weight between classes if the dataset has significantly more samples for one class than another?
To account for the imbalance, increase the class_weight for classes with fewer samples. You can also try disabling enable_autoweighting; in this case, initial_weight is used to control the cov/regression weighting. It is important to keep the number of samples of different classes balanced, which helps improve mAP.
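For DetectNet_v2, these knobs live in the cost_function_config section of the training spec. A sketch with illustrative values:
cost_function_config {
  target_classes {
    name: "pedestrian"
    class_weight: 4.0             # raise for classes with fewer samples
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0         # used when enable_autoweighting is disabled
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  enable_autoweighting: false     # disable to control weighting via initial_weight
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
}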
How do I save checkpoints in TAO Toolkit?
The train command for every DNN supports saving checkpoints by default. By default, checkpoints are saved every 10 epochs. For DetectNet_v2, the interval at which checkpoints are saved is configured using the checkpoint_interval parameter in the training_config section of the DetectNet_v2 training configuration file.
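An illustrative training_config excerpt (values are placeholders):
training_config {
  batch_size_per_gpu: 4
  num_epochs: 120
  checkpoint_interval: 10   # save a checkpoint every 10 epochs
}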
In DetectNet_v2, are there any parameters that can help improve AP (average precision) on small objects during training?
The following parameters can help you improve AP on smaller objects:
- Increase num_layers of the ResNet backbone.
- Increase class_weight for the small-object classes.
- Increase the coverage_radius_x and coverage_radius_y parameters of the bbox_rasterizer_config section for the small-object class.
- Decrease minimum_detection_ground_truth_overlap.
- Lower minimum_height to cover more small objects for evaluation.
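As an illustration, the last two knobs live in the evaluation_config section of the DetectNet_v2 spec. A sketch with placeholder values:
evaluation_config {
  minimum_detection_ground_truth_overlap {
    key: "pedestrian"
    value: 0.5              # decrease to count looser matches as valid detections
  }
  evaluation_box_config {
    key: "pedestrian"
    value {
      minimum_height: 10    # lower to include more small objects in evaluation
      maximum_height: 9999
      minimum_width: 10
      maximum_width: 9999
    }
  }
}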
Why do I get this error when running tasks in conversational AI?
pytorch_lightning.utilities.exceptions.MisconfigurationException: you restored a checkpoint with current_epoch=10 but the Trainer(max_epochs=1)
After you have already trained a model for a number of epochs, you cannot continue training by setting the number of epochs (max_epochs) to a value lower than the number of epochs already trained for.
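For example, if the checkpoint above was trained for 10 epochs, resume with max_epochs greater than 10. A hypothetical invocation (the Hydra-style trainer.max_epochs override is an assumption based on the NeMo-based conversational AI tasks; check your task's documentation for the exact syntax):
# Resume training with a max_epochs value larger than the epochs already completed
speech_to_text train -e /path/to/spec.yaml \
                     -r /path/to/results \
                     -k $KEY \
                     trainer.max_epochs=20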
Can I run TAO Toolkit on systems without elevated user privileges?
Running TAO Toolkit via the TAO Toolkit Launcher requires the user to have docker-ce installed, since the launcher interacts with the Docker service on the local host to run the commands. Installing Docker requires elevated user privileges to run as root. If you don’t have elevated user privileges on your compute machine, you may run TAO Toolkit using Singularity. This requires you to bypass the TAO launcher and interact directly with the component dockers. For information on which tasks are implemented in which dockers, run the tao info --verbose command. Once you have derived the task-to-docker mapping, you may run the tasks by following the steps below.
Pull the required docker using the following Singularity command:
singularity pull tao-toolkit-tf:v3.21.08-py3.sif docker://nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
For this command to work, the latest version of Singularity must be installed.
Instantiate the docker using the following command:
singularity run --nv -B /path/to/workspace:/path/to/workspace tao-toolkit-tf:v3.21.08-py3.sif
Run the commands inside the container without the tao prefix. For example, to run a detectnet_v2 training in the tao-toolkit-tf container, the command would be as follows:
detectnet_v2 train -e /path/to/workspace/specs/file.txt \
-k $KEY \
-r /path/to/workspace/results \
-n name_of_final_model \
--gpus $NUM_GPUS
Can I run TAO Toolkit without network access?
Please see https://github.com/NVIDIA-AI-IOT/tao_toolkit_recipes/blob/main/tao_training_without_network/Guide.