Object Detection
- DetectNet_v2
  - Data Input for Object Detection
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference on the Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- FasterRCNN
- YOLOv3
  - Preparing the Input Data Structure
  - Creating a Configuration File
  - Generating the Anchor Shape
  - Training the Model
  - Evaluating the Model
  - Running Inference on a YOLOv3 Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- YOLOv4
  - Preparing the Input Data Structure
  - Creating a Configuration File
  - Generating the Anchor Shape
  - Training the Model
  - Evaluating the Model
  - Running Inference on a YOLOv4 Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- YOLOv4-tiny
  - Preparing the Input Data Structure
  - Creating a Configuration File
  - Generating the Anchor Shape
  - Training the Model
  - Evaluating the Model
  - Running Inference on a YOLOv4-tiny Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- SSD
  - Data Input for Object Detection
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference on the Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- DSSD
  - Data Input for Object Detection
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference on the Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- RetinaNet
  - Data Input for Object Detection
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference on a RetinaNet Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- DeformableDETR
- EfficientDet (TF1)
  - Data Input for EfficientDet
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference with an EfficientDet Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream
- EfficientDet (TF2)
  - Data Input for EfficientDet
  - Pre-processing the Dataset
  - Creating a Configuration File
  - Training the Model
  - Evaluating the Model
  - Running Inference with an EfficientDet Model
  - Pruning the Model
  - Re-training the Pruned Model
  - Exporting the Model
  - TensorRT Engine Generation, Validation, and INT8 Calibration
  - Deploying to DeepStream