Getting Started With TLT¶
1. Prerequisites¶
Install virtualenv with python 3.6.9:
To set up the Python virtual environment, please follow these instructions.
Once virtualenvwrapper is set up, set the version of Python to be used in the virtual environment via the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running
export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x
where 6 <= x <= 8.
Instantiate a virtual environment using the command below:
mkvirtualenv launcher -p /path/to/bin/python3.x
where 6 <= x <= 8.
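Putting the pieces together, the snippet below is a minimal sketch of the full environment setup. It assumes virtualenvwrapper is installed via pip and that Python 3.6 lives at /usr/bin/python3.6; the location of virtualenvwrapper.sh varies by system.
pip3 install virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.6
source /usr/local/bin/virtualenvwrapper.sh   # may instead live under ~/.local/bin
mkvirtualenv launcher -p /usr/bin/python3.6
workon launcher   # re-activate the "launcher" environment in later sessions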
2. Download Jupyter Notebook¶
TLT provides sample notebooks that walk through the prescribed TLT workflow. These samples are hosted on NGC as a resource and can be downloaded by executing the commands below.
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.0.2/zip -O tlt_cv_samples_v1.0.2.zip
unzip -u tlt_cv_samples_v1.0.2.zip -d ./tlt_cv_samples_v1.0.2 && rm -rf tlt_cv_samples_v1.0.2.zip && cd ./tlt_cv_samples_v1.0.2
The purpose-built models and their corresponding sample notebooks are listed below.
Model Name | Jupyter Notebook
---|---
VehicleTypeNet | classification/classification.ipynb
VehicleMakeNet | classification/classification.ipynb
TrafficCamNet | detectnet_v2/detectnet_v2.ipynb
PeopleSegNet | mask_rcnn/mask_rcnn.ipynb
PeopleNet | detectnet_v2/detectnet_v2.ipynb
License Plate Recognition | lprnet/lprnet.ipynb
License Plate Detection | detectnet_v2/detectnet_v2.ipynb
Heart Rate Estimation | heartratenet/heartratenet.ipynb
Gesture Recognition | gesturenet/gesturenet.ipynb
Gaze Estimation | gazenet/gazenet.ipynb
Facial Landmark | fpenet/fpenet.ipynb
FaceDetectIR | detectnet_v2/detectnet_v2.ipynb
FaceDetect | facenet/facenet.ipynb
Emotion Recognition | emotionnet/emotionnet.ipynb
DashCamNet | detectnet_v2/detectnet_v2.ipynb
Open model architecture:¶
Open model architecture | Jupyter notebook
---|---
DetectNet_v2 | detectnet_v2/detectnet_v2.ipynb
FasterRCNN | faster_rcnn/faster_rcnn.ipynb
YOLOV3 | yolo_v3/yolo_v3.ipynb
YOLOV4 | yolo_v4/yolo_v4.ipynb
SSD | ssd/ssd.ipynb
DSSD | dssd/dssd.ipynb
RetinaNet | retinanet/retinanet.ipynb
MaskRCNN | mask_rcnn/mask_rcnn.ipynb
UNET | unet/unet_isbi.ipynb
Classification | classification/classification.ipynb
3. Start Jupyter Notebook¶
Once the notebook samples are downloaded, you can start the notebook server using the command below:
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
Open a browser on the local machine and navigate to the URL below:
http://0.0.0.0:8888
Note
If you want to run the notebook from a remote server, follow these steps; a minimal SSH port-forwarding sketch is shown below.
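The following is a minimal sketch for reaching the notebook server from another machine over an SSH tunnel. The server address <user>@<server-ip> is a placeholder; substitute your own.
# On the remote server:
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root --no-browser
# On your local machine, forward local port 8888 to the server:
ssh -N -L 8888:localhost:8888 <user>@<server-ip>
# Then open http://localhost:8888 in your local browser.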
4. Train the Model¶
Follow the Notebook instructions to train the model.
DeepStream - TLT Integration¶
This page describes how to integrate TLT models with DeepStream.
Prerequisites¶
Software Installation¶
Jetpack 4.4 for Jetson devices
sudo nvpmodel -m 0           # select power mode 0 (the maximum-performance mode on most Jetson devices)
sudo /usr/bin/jetson_clocks  # lock the clocks at their maximum frequency for the current power mode
Install DeepStream.
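As a quick sanity check that DeepStream is installed and on the PATH, you can query its version (the exact output format varies by release):
deepstream-app --version-all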
Deployment files¶
To run a TLT model with DeepStream, the following files are required for each model:
Deepstream_custom.c: Application main file.
nvdsinfer_customparser_$(MODEL)_tlt: A custom parser for the model's inference end nodes.
Models: TLT models downloaded from NGC.
Model configuration files: DeepStream inference (nvinfer) configuration files.
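As an illustration only, these pieces are typically arranged alongside each other as shown below; the directory and file names here are hypothetical placeholders, not the layout shipped with any particular release.
deepstream_tlt_app/
    deepstream_custom.c                    # application main file
    nvdsinfer_customparser_$(MODEL)_tlt/   # custom parser for the inference end nodes
    models/$(MODEL)/model.etlt             # TLT model downloaded from NGC
    configs/pgie_$(MODEL)_config.txt       # DeepStream nvinfer configuration file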
Sample Application : License Plate Detection and Recognition¶
Steps to run the application:
Download Repository¶
cd /opt/nvidia/deepstream/deepstream-5.0/sources
wget https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/archive/master.zip
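The downloaded archive needs to be extracted before building. A minimal sketch, assuming GitHub's usual <repo>-master naming for the extracted directory:
unzip master.zip   # extracts to a <repo>-master/ directory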
Download Models¶
cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt
mkdir -p /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPD
cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPD
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_pruned.etlt
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_lpd_cal.bin
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_lpd_label.txt
mkdir -p /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lprnet/versions/deployable_v1.0/files/us_lprnet_baseline18_deployable.etlt
echo > labels_us.txt
Convert to TRT Engine¶
# -k: model encryption key; -p: dynamic-shape optimization profile for the image_input
#     tensor (min, optimal, and max shapes); -t fp16: build a half-precision engine;
#     -e: output TensorRT engine file
./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine
Build¶
Build Parser
cd /opt/nvidia/deepstream/deepstream-5.0/sources/ds-lpr-sample/nvinfer_custom_lpr_parser
make
cp libnvdsinfer_custom_impl_lpr.so /opt/nvidia/deepstream/deepstream-5.0/lib
Build Application
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/ds-lpr-sample/lpr-test-sample
make
Update the paths in model configuration files¶
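For example, the nvinfer configuration for the LPR model should point at the engine and label file created above. The snippet below is a hedged sketch: the keys shown (model-engine-file, labelfile-path, tlt-model-key) are standard nvinfer properties, but the exact config file name and the relative paths depend on where the sample keeps its files.
model-engine-file=../models/LP/LPR/lpr_us_onnx_b16.engine
labelfile-path=../models/LP/LPR/labels_us.txt
tlt-model-key=nvidia_tlt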
Run¶
./lpr-test-app [language mode: 1-us 2-chinese] [sink mode: 1-output as 264 stream file 2-no output 3-display on screen] \
    [0:ROI disable | 1:ROI enable] [input mp4 file path and name] ... [input mp4 file path and name] [output 264 file path and name]
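For instance, a hypothetical invocation that recognizes US plates, writes an H.264 output file, and leaves ROI disabled (the input clip shown ships with the DeepStream samples; the output file name is a placeholder):
./lpr-test-app 1 1 0 /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 output.264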
Resources¶
PeopleNet¶
## Download Model:
mkdir -p $HOME/peoplenet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt \
-O $HOME/peoplenet/resnet34_peoplenet_pruned.etlt
## Run Application
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
-w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
deepstream-app -c deepstream_app_source1_peoplenet.txt
TrafficCamNet¶
## Download Model:
mkdir -p $HOME/trafficcamnet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt \
-O $HOME/trafficcamnet/resnet18_trafficcamnet_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt \
-O $HOME/trafficcamnet/trafficnet_int8.txt
## Run Application
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
-w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
deepstream-app -c deepstream_app_source1_trafficcamnet.txt
DashCamNet¶
## Download Model:
mkdir -p $HOME/dashcamnet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_dashcamnet/versions/pruned_v1.0/files/resnet18_dashcamnet_pruned.etlt \
-O $HOME/dashcamnet/resnet18_dashcamnet_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_dashcamnet/versions/pruned_v1.0/files/dashcamnet_int8.txt \
-O $HOME/dashcamnet/dashcamnet_int8.txt
mkdir -p $HOME/vehiclemakenet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehiclemakenet/versions/pruned_v1.0/files/resnet18_vehiclemakenet_pruned.etlt \
-O $HOME/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehiclemakenet/versions/pruned_v1.0/files/vehiclemakenet_int8.txt \
-O $HOME/vehiclemakenet/vehiclemakenet_int8.txt
mkdir -p $HOME/vehicletypenet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehicletypenet/versions/pruned_v1.0/files/resnet18_vehicletypenet_pruned.etlt \
-O $HOME/vehicletypenet/resnet18_vehicletypenet_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehicletypenet/versions/pruned_v1.0/files/vehicletypenet_int8.txt \
-O $HOME/vehicletypenet/vehicletypenet_int8.txt
## Run Application
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
-w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
FaceDetectIR¶
## Download Model
mkdir -p $HOME/facedetectir && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/resnet18_facedetectir_pruned.etlt \
-O $HOME/facedetectir/resnet18_facedetectir_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/facedetectir_int8.txt \
-O $HOME/facedetectir/facedetectir_int8.txt
## Run Application
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
-w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
deepstream-app -c deepstream_app_source1_facedetectir.txt