Step #2: Optimize Model With TAO – Prune, Quantize, TensorRT Conversion

Throughout this lab, you will use the following link from the left-hand navigation pane:

  • Jupyter Notebook

The key objective of this lab is to familiarize yourself with TAO optimizations such as model pruning and quantization. You will compare the inference performance of the optimized model against that of the unoptimized model.

Open and run through the YOLO optimization notebook: click the Jupyter Notebook link in the left-hand navigation pane, then open and run lab2-yolo_optimization.ipynb in the tutorial folder.

Notebook steps:

  1. Import the saved data from Step 1, including the trained model, and run inference on the model.

  2. Export the trained model in FP32 format. TAO can export in FP32, FP16, or INT8. The result is a model in .etlt format, which must then be built into a TensorRT (TRT) engine file to run inference (see the export and engine-build sketches after this list).

  3. Run inference to get baseline numbers. Compare the inference time of the unoptimized model with that of the exported FP32 model.

  4. Prune the model to reduce its size and accelerate inference (see the pruning sketch after this list). Pruning removes parameters that contribute little to the model's output, reducing its size without compromising its integrity.

  5. Retrain the pruned model to recover lost accuracy (see the retraining sketch after this list). Once the model has been pruned, accuracy may decrease slightly because some previously useful weights were removed; retraining on the same dataset recovers it.

  6. Run inference on the pruned model and compare the numbers with the baseline (see the evaluation sketch after this list). You should observe the same accuracy with a significantly smaller model.
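
The sketches below illustrate the TAO CLI commands behind the notebook steps. They assume the yolo_v4 entrypoint, a model key in $KEY, and illustrative spec and path names; the notebook's own cells are authoritative, so treat every flag value and file name here as an assumption. First, the FP32 export from step 2:

    # Sketch: export the trained .tlt model to .etlt in FP32.
    # Entrypoint, spec file, paths, and $KEY are assumptions.
    tao yolo_v4 export \
        -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/yolov4_resnet18.tlt \
        -e $SPECS_DIR/yolo_v4_train_resnet18_kitti.txt \
        -k $KEY \
        -o $USER_EXPERIMENT_DIR/export/yolov4_resnet18.etlt \
        --data_type fp32

Switching --data_type to fp16 or int8 selects the other precisions mentioned in step 2; INT8 additionally requires calibration data.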
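
The exported .etlt file must then be compiled into a TensorRT engine before inference, as noted in step 2. A minimal engine-build sketch with the tao converter utility follows; the input dimensions (-d) and output node name (-o) are assumptions that depend on the network and dataset:

    # Sketch: build an FP32 TensorRT engine from the exported .etlt file.
    # Input dims, output node, and paths are illustrative assumptions.
    tao converter \
        -k $KEY \
        -d 3,384,1248 \
        -o BatchedNMS \
        -t fp32 \
        -e $USER_EXPERIMENT_DIR/export/trt.engine \
        $USER_EXPERIMENT_DIR/export/yolov4_resnet18.etlt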
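
Pruning (step 4) is a single command. In this sketch the -pth threshold of 0.1 is an illustrative value, not the notebook's setting; larger thresholds remove more parameters at greater risk to accuracy:

    # Sketch: prune the trained model; threshold and paths are assumptions.
    tao yolo_v4 prune \
        -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/yolov4_resnet18.tlt \
        -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/yolov4_resnet18_pruned.tlt \
        -eq intersection \
        -pth 0.1 \
        -k $KEY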
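
Retraining (step 5) reuses the train command with a retraining spec whose pretrained-model entry points at the pruned .tlt file. The spec and directory names here are assumptions:

    # Sketch: retrain the pruned model to recover accuracy.
    # The spec file must reference the pruned .tlt as its pretrained model.
    tao yolo_v4 train \
        -e $SPECS_DIR/yolo_v4_retrain_resnet18_kitti.txt \
        -r $USER_EXPERIMENT_DIR/experiment_dir_retrain \
        -k $KEY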
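
Finally, the comparison in step 6 can be made with the evaluate subcommand against the retrained weights, again with assumed file names:

    # Sketch: evaluate the retrained pruned model and compare its accuracy
    # and inference time with the unpruned baseline from step 3.
    tao yolo_v4 evaluate \
        -e $SPECS_DIR/yolo_v4_retrain_resnet18_kitti.txt \
        -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/yolov4_resnet18_pruned.tlt \
        -k $KEY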
