Performance
=============

DeepStream applications are benchmarked across various NVIDIA TAO Toolkit and open source models. The measured performance represents end-to-end performance of the entire video analytics application, covering video capture and decode, pre-processing, batching, inference, and post-processing to generate metadata. Output rendering is turned off to achieve peak inference performance. For information on disabling output rendering, see the :doc:`DS_ref_app_deepstream` chapter.

TAO Toolkit pre-trained models
-------------------------------

The `TAO Toolkit`_ provides a set of pre-trained models, listed in the table below. If one of these models satisfies your requirements, start with it. They can be used for various applications in smart cities or smart spaces. If your application is beyond the scope of these models, you can re-train one of the popular model architectures using the TAO Toolkit. The second table shows the expected performance of a few other TAO Toolkit models.

The table below shows the end-to-end performance of highly accurate pre-trained models from the TAO Toolkit. All models are available on NGC. These models are natively integrated with DeepStream, and the instructions to run them are in ``/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/``.

.. csv-table:: Performance - pretrained models
   :file: ../text/tables/DS_performance_TLT_pretrained.csv
   :widths: 16, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
   :header-rows: 1

All the models in the table above can run solely on the DLA. This frees valuable GPU resources to run more complex models. TAO Toolkit also supports training on popular detection and segmentation architectures. To learn more about how to train with TAO Toolkit, refer to the `TAO Toolkit documentation`_. These models are natively integrated with DeepStream. The reference models are available to download from `GitHub`_.
.. csv-table:: Performance - pretrained models - detection and segmentation
   :file: ../text/tables/DS_performance_TLT_pretrained_2.csv
   :widths: 10, 5, 5, 5, 5, 5, 5, 2, 5, 5, 5
   :header-rows: 1

.. note::

   * The `FasterRCNN` model will not run efficiently on the DLA because multiple layers are not supported on the DLA.
   * All inference on Jetson Nano is done using `FP16` precision.

DeepStream reference model and tracker
---------------------------------------

DeepStream SDK ships with a reference `DetectNet_v2-ResNet10` model and three `ResNet18` classifier models. Detailed instructions to run these models with DeepStream are provided in the next section. The table below shows the performance of these models along with various trackers. DeepStream provides three reference trackers: `IoU`, `KLT`, and `NvDCF`. For more information about trackers, see the :doc:`DS_plugin_gst-nvtracker` section.

.. csv-table:: Performance - DeepStream reference models
   :file: ../text/tables/DS_performance_Deepstream_model_tracker.csv
   :widths: 16, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
   :header-rows: 1

.. note::

   * \* - A performance bottleneck has been identified and will be fixed in a future release.
   * All inference is done using `INT8` precision except on Jetson Nano™, where it is `FP16`.
   * Running inference simultaneously on multiple models is not supported on the DLA. You can run only one model at a time on the DLA.

To achieve the peak performance shown in the tables above, make sure the devices are properly cooled. For T4, make sure you use a server that meets the thermal and airflow requirements. Along with the hardware setup, a few other options in the config file need to be set to achieve the published performance. Make the required changes to one of the config files from the DeepStream SDK to replicate the peak performance.

**Turn off output rendering, OSD, and tiler**

OSD (on-screen display) is used to display bounding boxes, masks, and labels on the screen.
If output rendering is disabled, drawing bounding boxes is not required unless the output needs to be streamed over RTSP or saved to disk. The tiler is used to display the output in an `NxM` tiled grid; it is not needed if rendering is disabled. Output rendering, OSD, and tiler consume a percentage of compute resources, which can reduce inference performance. To disable OSD, tiled display, and the output sink, make the following changes in the DeepStream config file:

* To disable OSD, change ``enable`` to 0::

    [osd]
    enable=0

* To disable tiling, change ``enable`` to 0::

    [tiled-display]
    enable=0

* To turn off output rendering, change the sink to fakesink::

    [sink0]
    enable=1
    #Type - 1=FakeSink 2=EglSink 3=File
    type=1
    sync=0

DeepStream reference model
----------------------------

Data center GPU - GA100
~~~~~~~~~~~~~~~~~~~~~~~

This section describes configuration and settings for the DeepStream SDK on the NVIDIA data center GPU GA100.

System Configuration
^^^^^^^^^^^^^^^^^^^^^^

The system configuration for the DeepStream SDK is listed below:

.. csv-table:: GA100 system configuration
   :file: ../text/tables/DS_performance_Ampere_system_configuration.csv
   :widths: 30, 40
   :header-rows: 1

Application Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Config file**: ``source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt``

Change the following items in the config file:

* The inference resolution of the primary GIE is specified in ``samples/models/Primary_detector/resnet10.prototxt``. Change the `dim` to ``480x272``.
* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `IoU` tracker.

The application configuration for the DeepStream SDK is listed below:
.. csv-table:: GA100 application configuration
   :file: ../text/tables/DS_performance_Ampere_application_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

The table below shows the achieved performance of the DeepStream SDK under the specified system and application configuration:

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        158                      4.5%             46.08%
H.264        91                       2.83%            28.69%
===========  =======================  ===============  ===============

Data center GPU - T4
~~~~~~~~~~~~~~~~~~~~~~~

This section describes configuration and settings for the DeepStream SDK on the NVIDIA data center GPU T4.

System Configuration
^^^^^^^^^^^^^^^^^^^^^^

The system configuration for the DeepStream SDK is listed below:

.. csv-table:: T4 system configuration
   :file: ../text/tables/DS_performance_Tesla_system_configuration.csv
   :widths: 30, 40
   :header-rows: 1

Application Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Config file**: ``source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt``

Change the following items in the config file:

* The inference resolution of the primary GIE is specified in ``samples/models/Primary_detector/resnet10.prototxt``. Change the `dim` to ``480x272``.
* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `IoU` tracker.

The application configuration for the DeepStream SDK is listed below:
.. csv-table:: T4 application configuration
   :file: ../text/tables/DS_performance_Tesla_application_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

The table below shows the achieved performance of the DeepStream SDK under the specified system and application configuration:

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        64                       8% to 10%        58%
H.264        39                       5%               31%
===========  =======================  ===============  ===============

Jetson
~~~~~~~

This section describes configuration and settings for the DeepStream SDK on NVIDIA Jetson™ platforms. JetPack 4.5.1 is used for software installation.

System Configuration
^^^^^^^^^^^^^^^^^^^^^^^

For the performance test:

1. Max power mode is enabled::

      $ sudo nvpmodel -m 0

2. The GPU clocks are stepped to maximum::

      $ sudo jetson_clocks

For information about supported power modes, see the "Supported Modes and Power Efficiency" section in the power management topics of the `NVIDIA Tegra Linux Driver Package Development Guide`, e.g., "Power Management for Jetson AGX Xavier Devices."

Jetson Nano
^^^^^^^^^^^^^

**Config file**: ``source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt``

Change the following items in the config file:

* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `KLT` tracker and change the tracker resolution to ``480x272``.

The following tables describe performance results for the NVIDIA Jetson Nano.

.. csv-table:: Jetson Nano application configuration
   :file: ../text/tables/DS_performance_Jetson_Nano_app_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        8                        39%              67%
H.264        8                        39%              65%
===========  =======================  ===============  ===============

Jetson AGX Xavier
^^^^^^^^^^^^^^^^^^^

**Config file**: ``source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt``

Change the following items in the config file:

* The inference resolution of the primary GIE is specified in ``samples/models/Primary_detector/resnet10.prototxt``. Change the `dim` to ``480x272``.
* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `IoU` tracker.

The following tables describe performance results for the NVIDIA Jetson AGX Xavier™.

.. csv-table:: Jetson AGX Xavier Pipeline Configuration (``deepstream-app``)
   :file: ../text/tables/DS_performance_Jetson_AGX_Xavier_app_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        45                       22%              95%
H.264        32                       19%              71%
===========  =======================  ===============  ===============

Jetson NX
^^^^^^^^^^^

**Config file**: ``source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt``

Change the following items in the config file:

* The inference resolution of the primary GIE is specified in ``samples/models/Primary_detector/resnet10.prototxt``. Change the `dim` to ``480x272``.
* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `IoU` tracker.

The following tables describe performance results for the NVIDIA Jetson NX™.
.. csv-table:: Jetson NX Pipeline Configuration (``deepstream-app``)
   :file: ../text/tables/DS_performance_Jetson_NX_app_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        23                       55%              93%
H.264        16                       45%              65%
===========  =======================  ===============  ===============

Jetson TX2
^^^^^^^^^^^^

**Config file**: ``source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt``

Change the following items in the config file:

* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `KLT` tracker and change the tracker resolution to ``480x272``.

The following tables describe performance results for the Jetson™ TX2.

.. csv-table:: Jetson TX2 Pipeline Configuration (``deepstream-app``)
   :file: ../text/tables/DS_performance_Jetson_TX2_app_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        15                       35%              47%
H.264        14                       34%              43%
===========  =======================  ===============  ===============

Jetson TX1
^^^^^^^^^^^^

**Config file**: ``source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt``

Change the following items in the config file:

* Change the batch size under ``streammux`` and ``primary-gie`` to match the number of streams.
* Disable tiled display and rendering using the instructions above.
* Enable the `KLT` tracker and change the tracker resolution to ``480x272``.

The following tables describe performance results for the Jetson™ TX1.
.. csv-table:: Jetson TX1 Pipeline Configuration (``deepstream-app``)
   :file: ../text/tables/DS_performance_Jetson_TX1_app_configuration.csv
   :widths: 30, 40
   :header-rows: 1

**Achieved Performance**

===========  =======================  ===============  ===============
Stream type  No. of streams @ 30 FPS  CPU Utilization  GPU Utilization
===========  =======================  ===============  ===============
H.265        13                       56%              49%
H.264        10                       43%              43%
===========  =======================  ===============  ===============

Running applications using DLA
---------------------------------

Jetson AGX Xavier and Jetson NX have two DLA engines. DeepStream supports inferencing on the GPU and the DLAs in parallel, either in separate processes or in a single process. You need three separate config sets, configured to run on the GPU, DLA0, and DLA1:

* Separate processes: When the GPU and DLA are run in separate processes, set the environment variable ``CUDA_DEVICE_MAX_CONNECTIONS`` to ``1`` in the terminal where the DLA config is run.
* Single process: The DeepStream reference application supports multiple configs in the same process. To run the DLAs and GPU in the same process, set the environment variable ``CUDA_DEVICE_MAX_CONNECTIONS`` to ``32``::

    $ deepstream-app -c <gpu_config> -c <dla0_config> -c <dla1_config>
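As a concrete sketch of the two modes described above, the commands below show one way to launch the pipelines. The config file names (``gpu_config.txt``, ``dla0_config.txt``, ``dla1_config.txt``) are hypothetical placeholders for the three config sets you prepared; substitute your own files.

```shell
# Separate processes: one terminal per pipeline.
# The GPU pipeline needs no special environment.
deepstream-app -c gpu_config.txt

# In the terminals running the DLA configs, limit CUDA connections to 1.
CUDA_DEVICE_MAX_CONNECTIONS=1 deepstream-app -c dla0_config.txt
CUDA_DEVICE_MAX_CONNECTIONS=1 deepstream-app -c dla1_config.txt

# Single process: all three pipelines in one deepstream-app instance,
# with CUDA_DEVICE_MAX_CONNECTIONS raised to 32.
CUDA_DEVICE_MAX_CONNECTIONS=32 deepstream-app \
    -c gpu_config.txt -c dla0_config.txt -c dla1_config.txt
```

Setting the environment variable on the command line scopes it to that one process, which is convenient when the GPU and DLA pipelines need different values at the same time.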