Getting Started With TLT
================================================

1. Prerequisites
---------------------

* Install virtualenv with Python 3.6.9: To set up the Python virtual environment, follow these `instructions`_. Once virtualenvwrapper is set up, set the version of Python to be used in the virtual env with the :code:`VIRTUALENVWRAPPER_PYTHON` variable:

  .. code:: shell

     export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

  where x >= 6 and <= 8.

  .. _instructions: https://virtualenvwrapper.readthedocs.io/en/latest/install.html

* Instantiate a virtual environment using the command below; a sketch for installing the TLT launcher into this environment follows the list.

  .. code:: shell

     mkvirtualenv launcher -p /path/to/bin/python3.x

  where x >= 6 and <= 8.
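The sample notebooks in the next step are driven through the TLT launcher, which should be installed inside this environment. A minimal sketch, assuming the launcher is published as the :code:`nvidia-tlt` package on NVIDIA's pip index:

.. code-block:: shell

    workon launcher              # activate the environment created above
    pip3 install nvidia-pyindex  # register NVIDIA's pip package index
    pip3 install nvidia-tlt      # install the TLT launcher CLI
    tlt --help                   # sanity check: the launcher should list its tasks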
2. Download Jupyter Notebook
---------------------------------

TLT provides sample notebooks that walk through the prescribed TLT workflow. These samples are hosted on NGC as a resource and can be downloaded by executing the commands below:

.. code-block:: shell

    wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.0.2/zip -O tlt_cv_samples_v1.0.2.zip
    unzip -u tlt_cv_samples_v1.0.2.zip -d ./tlt_cv_samples_v1.0.2 && rm -rf tlt_cv_samples_v1.0.2.zip && cd ./tlt_cv_samples_v1.0.2

The purpose-built models and their corresponding sample notebooks are listed below.

+---------------------------+---------------------------------------+
| **Model Name**            | **Jupyter Notebook**                  |
+===========================+=======================================+
| VehicleTypeNet            | classification/classification.ipynb  |
+---------------------------+---------------------------------------+
| VehicleMakeNet            | classification/classification.ipynb  |
+---------------------------+---------------------------------------+
| TrafficCamNet             | detectnet_v2/detectnet_v2.ipynb       |
+---------------------------+---------------------------------------+
| PeopleSegNet              | mask_rcnn/mask_rcnn.ipynb             |
+---------------------------+---------------------------------------+
| PeopleNet                 | detectnet_v2/detectnet_v2.ipynb       |
+---------------------------+---------------------------------------+
| License Plate Recognition | lprnet/lprnet.ipynb                   |
+---------------------------+---------------------------------------+
| License Plate Detection   | detectnet_v2/detectnet_v2.ipynb       |
+---------------------------+---------------------------------------+
| Heart Rate Estimation     | heartratenet/heartratenet.ipynb       |
+---------------------------+---------------------------------------+
| Gesture Recognition       | gesturenet/gesturenet.ipynb           |
+---------------------------+---------------------------------------+
| Gaze Estimation           | gazenet/gazenet.ipynb                 |
+---------------------------+---------------------------------------+
| Facial Landmark           | fpenet/fpenet.ipynb                   |
+---------------------------+---------------------------------------+
| FaceDetectIR              | detectnet_v2/detectnet_v2.ipynb       |
+---------------------------+---------------------------------------+
| FaceDetect                | facenet/facenet.ipynb                 |
+---------------------------+---------------------------------------+
| Emotion Recognition       | emotionnet/emotionnet.ipynb           |
+---------------------------+---------------------------------------+
| DashCamNet                | detectnet_v2/detectnet_v2.ipynb       |
+---------------------------+---------------------------------------+

Open model architecture:
^^^^^^^^^^^^^^^^^^^^^^^^

+-------------------------+--------------------------------------+
| Open model architecture | Jupyter Notebook                     |
+=========================+======================================+
| DetectNet_v2            | detectnet_v2/detectnet_v2.ipynb      |
+-------------------------+--------------------------------------+
| FasterRCNN              | faster_rcnn/faster_rcnn.ipynb        |
+-------------------------+--------------------------------------+
| YOLOV3                  | yolo_v3/yolo_v3.ipynb                |
+-------------------------+--------------------------------------+
| YOLOV4                  | yolo_v4/yolo_v4.ipynb                |
+-------------------------+--------------------------------------+
| SSD                     | ssd/ssd.ipynb                        |
+-------------------------+--------------------------------------+
| DSSD                    | dssd/dssd.ipynb                      |
+-------------------------+--------------------------------------+
| RetinaNet               | retinanet/retinanet.ipynb            |
+-------------------------+--------------------------------------+
| MaskRCNN                | mask_rcnn/mask_rcnn.ipynb            |
+-------------------------+--------------------------------------+
| UNET                    | unet/unet_isbi.ipynb                 |
+-------------------------+--------------------------------------+
| Classification          | classification/classification.ipynb  |
+-------------------------+--------------------------------------+

3. Start Jupyter Notebook
------------------------------------------

Once the notebook samples are downloaded, you may start the notebook using the command below:

.. code-block:: shell

    jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

* Open an internet browser on localhost and navigate to the following URL:

  .. code-block:: shell

      http://0.0.0.0:8888

.. note::

    If you want to run the notebook from a remote server, follow these `steps`_.

.. _steps: https://docs.anaconda.com/anaconda/user-guide/tasks/remote-jupyter-notebook/

4. Train the Model
----------------------

Follow the notebook instructions to train the model.

Deepstream - TLT Integration
===============================

This page describes how to integrate TLT models with Deepstream.

.. _deepstream_tlt:

Prerequisites
-------------

Software Installation
^^^^^^^^^^^^^^^^^^^^^^

* Jetpack 4.4 for Jetson devices.

  .. note::

      For Jetson devices, manually increase the `Jetson Power mode`_ and maximize performance further by using the `Jetson Clocks mode`_. The following commands perform this:

      .. code-block:: shell

          sudo nvpmodel -m 0
          sudo /usr/bin/jetson_clocks

  .. _Jetson Power mode: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fpower_management_jetson_xavier.html%23wwpID0E0NO0HA
  .. _Jetson Clocks mode: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fpower_management_jetson_xavier.html%23wwpID0E0SB0HA

* Install `Deepstream`_.

.. _Deepstream: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html

Deployment files
----------------------------

To run a TLT model with Deepstream, the following files are required for each model; a sketch of the inference configuration follows this list.

* :code:`Deepstream_custom.c`: The application main file.
* :code:`nvdsinfer_customparser_$(MODEL)_tlt`: A custom parser function for the inference end nodes.
* Models: TLT models from NGC.
* Model configuration files: Deepstream inference configuration files.
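The exact configuration varies per network, but the TLT-specific keys in a Deepstream nvinfer configuration file follow a common pattern. Below is a minimal, hypothetical sketch: the paths, the :code:`tlt-model-key` value, and the parser symbol name are placeholders, so use the values documented on each model's NGC page and in the sample configs shipped with Deepstream.

.. code-block:: shell

    # Sketch only: writes a skeletal nvinfer config for a TLT-encoded model.
    # All paths and the model key below are placeholders, not shipped values.
    cat > config_infer_primary_sketch.txt <<'EOF'
    [property]
    gpu-id=0
    # The encrypted .etlt model and the key it was exported with
    tlt-encoded-model=resnet18_trafficcamnet_pruned.etlt
    tlt-model-key=<model key from NGC>
    # Calibration cache (INT8 only) and label file
    int8-calib-file=trafficnet_int8.txt
    labelfile-path=labels.txt
    # Custom parser for the inference end nodes and the library providing it
    parse-bbox-func-name=NvDsInferParseCustomTLT
    custom-lib-path=libnvdsinfer_customparser_tlt.so
    EOF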
Sample Application: License Plate Detection and Recognition
---------------------------------------------------------------------

Steps to run the application:

Download Repository
^^^^^^^^^^^^^^^^^^^^^

.. TODO: Add repository link here (@vpraveen)

.. code:: shell

    cd /opt/nvidia/deepstream/deepstream-5.0/sources
    wget https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/archive/master.zip

Download Models
^^^^^^^^^^^^^^^^^

.. code-block:: shell

    cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt

    mkdir -p /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPD
    cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPD
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_pruned.etlt
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_lpd_cal.bin
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lpdnet/versions/pruned_v1.0/files/usa_lpd_label.txt

    mkdir -p /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
    cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_lprnet/versions/deployable_v1.0/files/us_lprnet_baseline18_deployable.etlt
    echo > labels_us.txt

Convert to TRT Engine
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: shell

    ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
        ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine

Build
^^^^^^^^^

.. code-block:: shell

    # Build the custom parser
    cd /opt/nvidia/deepstream/deepstream-5.0/sources/ds-lpr-sample/nvinfer_custom_lpr_parser
    make
    cp libnvdsinfer_custom_impl_lpr.so /opt/nvidia/deepstream/deepstream-5.0/lib

    # Build the application
    cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/ds-lpr-sample/lpr-test-sample
    make

Update the paths in model configuration files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Edit the Deepstream inference configuration files so that the model, calibration-cache, engine, and label-file paths match the locations used in the download and conversion steps above.

Run
^^^^^

.. code-block:: shell

    ./lpr-test-app [language mode:1-us 2-chinese] \
                   [sink mode:1-output as 264 stream file 2-no output 3-display on screen] \
                   [0:ROI disable|1:ROI enable] \
                   [input mp4 file path and name] ... [input mp4 file path and name] \
                   [output 264 file path and name]
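As a concrete illustration, the hypothetical invocation below runs the US model on a single input file, writes an H.264 elementary stream, and leaves ROI processing disabled. The input clip is the sample video shipped with Deepstream; any H.264 mp4 can be substituted.

.. code-block:: shell

    ./lpr-test-app 1 1 0 \
        /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 \
        lpr_output_us.264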
Resources
-----------------------------

* :ref:`Deepstream-TLT PeopleNet model Deployment`
* :ref:`Deepstream-TLT TrafficCamNet model Deployment`
* :ref:`Deepstream-TLT DashCamNet model deployment`
* :ref:`Deepstream-TLT FaceDetectIR model deployment`
* `Deepstream-TLT apps`_

.. _Deepstream-TLT apps: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

.. _deepstream_peoplenet:

PeopleNet
^^^^^^^^^^^^^^^^^^

.. code-block:: shell

    ## Download Model:
    mkdir -p $HOME/peoplenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt \
        -O $HOME/peoplenet/resnet34_peoplenet_pruned.etlt

    ## Run Application
    xhost +
    docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
        -w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models \
        nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
        deepstream-app -c deepstream_app_source1_peoplenet.txt

.. _deepstream_trafficcamnet:

TrafficCamNet
^^^^^^^^^^^^^^^^

.. code-block:: shell

    ## Download Model:
    mkdir -p $HOME/trafficcamnet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt \
        -O $HOME/trafficcamnet/resnet18_trafficcamnet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt \
        -O $HOME/trafficcamnet/trafficnet_int8.txt

    ## Run Application
    xhost +
    docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
        -w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models \
        nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
        deepstream-app -c deepstream_app_source1_trafficcamnet.txt

.. _deepstream_dashcamnet:

DashCamNet
^^^^^^^^^^^^^^^

.. code-block:: shell

    ## Download Models:
    mkdir -p $HOME/dashcamnet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_dashcamnet/versions/pruned_v1.0/files/resnet18_dashcamnet_pruned.etlt \
        -O $HOME/dashcamnet/resnet18_dashcamnet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_dashcamnet/versions/pruned_v1.0/files/dashcamnet_int8.txt \
        -O $HOME/dashcamnet/dashcamnet_int8.txt
    mkdir -p $HOME/vehiclemakenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehiclemakenet/versions/pruned_v1.0/files/resnet18_vehiclemakenet_pruned.etlt \
        -O $HOME/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehiclemakenet/versions/pruned_v1.0/files/vehiclemakenet_int8.txt \
        -O $HOME/vehiclemakenet/vehiclemakenet_int8.txt
    mkdir -p $HOME/vehicletypenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehicletypenet/versions/pruned_v1.0/files/resnet18_vehicletypenet_pruned.etlt \
        -O $HOME/vehicletypenet/resnet18_vehicletypenet_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_vehicletypenet/versions/pruned_v1.0/files/vehicletypenet_int8.txt \
        -O $HOME/vehicletypenet/vehicletypenet_int8.txt

    ## Run Application
    xhost +
    docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
        -w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models \
        nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
        deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt

.. _deepstream_facedetection:

FaceDetectIR
^^^^^^^^^^^^^

.. code-block:: shell

    ## Download Model:
    mkdir -p $HOME/facedetectir && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/resnet18_facedetectir_pruned.etlt \
        -O $HOME/facedetectir/resnet18_facedetectir_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/facedetectir_int8.txt \
        -O $HOME/facedetectir/facedetectir_int8.txt

    ## Run Application
    xhost +
    docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        -v $HOME:/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models \
        -w /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models \
        nvcr.io/nvidia/deepstream:5.0.1-20.09-samples \
        deepstream-app -c deepstream_app_source1_facedetectir.txt
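The four deployments above repeat the same download pattern, so the fetch step can be factored into a small helper. A minimal sketch: the :code:`download_tlt_model` function is hypothetical, but the URL scheme is the one used throughout this page.

.. code-block:: shell

    # Hypothetical helper: fetch one file of a pruned TLT model from NGC,
    # mirroring the wget pattern used in the sections above.
    download_tlt_model() {
        local model=$1 file=$2
        mkdir -p "$HOME/$model"
        wget "https://api.ngc.nvidia.com/v2/models/nvidia/tlt_${model}/versions/pruned_v1.0/files/${file}" \
            -O "$HOME/$model/$file"
    }

    # Example: the FaceDetectIR model and its INT8 calibration cache
    download_tlt_model facedetectir resnet18_facedetectir_pruned.etlt
    download_tlt_model facedetectir facedetectir_int8.txt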