Troubleshooting

If you run into trouble while using DeepStream, consider the following solutions. If you don’t find answers below, post your questions on the DeepStream developer forum.

You are migrating from DeepStream 4.0+ to DeepStream 5.0

Solution:

You must clean up the DeepStream 4.0 libraries and binaries. Use one of the following procedures to clean up:

  • For dGPU: To remove DeepStream 4.0 or later installations (a command-line sketch follows this list):

  1. Open the uninstall.sh file located in /opt/nvidia/deepstream/deepstream/

  2. Set PREV_DS_VER to 4.0

  3. Run the script as sudo ./uninstall.sh

  • For Jetson: Flash the target device with the latest release of JetPack.
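
As a rough sketch, the dGPU steps above amount to the following (any text editor can be used in place of vi to set the variable):

$ cd /opt/nvidia/deepstream/deepstream/
$ sudo vi uninstall.sh        # set PREV_DS_VER to 4.0 inside the script
$ sudo ./uninstall.sh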

“NvDsBatchMeta not found for input buffer” error while running DeepStream pipeline

Solution:

The Gst-nvstreammux plugin is not in the pipeline. Starting with DeepStream 4.0, Gst-nvstreammux is a required plugin. This is an example pipeline:

Gst-nvv4l2decoder → Gst-nvstreammux → Gst-nvinfer → Gst-nvtracker → Gst-nvmultistreamtiler → Gst-nvvideoconvert → Gst-nvosd → Gst-nveglglessink
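
As a minimal sketch of such a pipeline with gst-launch-1.0, assuming a 720p H.264 elementary stream and a valid Gst-nvinfer configuration file (on Jetson, insert nvegltransform before nveglglessink; nvdsosd is the on-screen display element):

$ gst-launch-1.0 filesrc location=<file>.h264 ! h264parse ! nvv4l2decoder ! \
      mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
      nvinfer config-file-path=<nvinfer-config>.txt ! \
      nvvideoconvert ! nvdsosd ! nveglglessink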

The DeepStream reference application fails to launch, or any plugin fails to load

Solution:

Try clearing the GStreamer cache by running the command:

$ rm -rf ${HOME}/.cache/gstreamer-1.0

Also run this command if there is an issue with loading any of the plugins. Warnings or errors for failing plugins are displayed on the terminal.

$ gst-inspect-1.0

Then run this command to find missing dependencies:

$ ldd <plugin>.so

Where <plugin> is the name of the plugin that failed to load.
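
For example, assuming a dGPU installation where the DeepStream plugin libraries live under /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/, checking the nvinfer plugin for unresolved dependencies might look like this:

$ ldd /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so | grep "not found"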

Application fails to run when the neural network is changed

Solution:

Be sure that the network parameters are updated for the corresponding [GIE] group in the configuration file (e.g. source30_720p_dec_infer-resnet_tiled_display_int8.txt). Also be sure that the Gst-nvinfer plugin’s configuration file is updated accordingly. When the model is changed, make sure that the application is not using old engine files.
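
For example, a Gst-nvinfer configuration file referenced by the [primary-gie] group typically contains entries like the following (the paths shown are illustrative sample paths); make sure they point to the new model, and delete or rename any stale engine file so that a new engine is generated:

[property]
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
labelfile-path=../../models/Primary_Detector/labels.txt
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine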

The DeepStream application is running slowly (Jetson only)

Solution:

Ensure that the Jetson clocks are set high by running these commands:

$ sudo nvpmodel -m <mode>    # for MAX perf and power, <mode> is 0
$ sudo jetson_clocks

The DeepStream application is running slowly

Solution 1:

One of the plugins in the pipeline may be running slowly. You can measure the latency of each plugin in the pipeline to determine whether one of them is slow.

  • To enable frame latency measurement, run this command on the console:

    $ export NVDS_ENABLE_LATENCY_MEASUREMENT=1

  • To enable latency measurement for all plugins, run this command on the console:

    $ export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

Solution 2: (dGPU only)

Ensure that your GPU card is in the PCI slot with the greatest bus width.

Solution 3:

In the configuration file’s [streammux] group, set batched-push-timeout to (1/max_fps).
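
The batched-push-timeout property is specified in microseconds; for example, for 30-fps sources, 1/max_fps is roughly 33333 µs:

[streammux]
batched-push-timeout=33333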

Solution 4:

In the configuration file’s [streammux] group, set width and height to the stream’s resolution.
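
For example, assuming 1080p input streams:

[streammux]
width=1920
height=1080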

Solution 5:

For RTSP streaming input, in the configuration file’s [streammux] group, set live-source=1. Also make sure that all [sink#] groups have the sync property set to 0.
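
For example (the sink group number depends on your configuration):

[streammux]
live-source=1

[sink0]
sync=0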

Solution 6:

If secondary inferencing is enabled, try increasing batch-size in the configuration file’s [secondary-gie#] group in case the number of objects to be inferred is greater than the batch-size setting.
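
For example, if up to 16 objects are expected per batch for the first secondary classifier (the group name and value are illustrative):

[secondary-gie0]
batch-size=16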

Solution 7:

On Jetson, use Gst-nvoverlaysink instead of Gst-nveglglessink as nveglglessink requires GPU utilization.

Solution 8:

If the GPU is bottlenecking performance, try increasing the interval at which the primary detector infers on input frames by modifying the interval property of the [primary-gie] group in the application configuration, or the interval property in the Gst-nvinfer configuration file.
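
For example, interval=2 skips two frames between inference calls, so the primary detector runs on every third frame (the value is illustrative):

[primary-gie]
interval=2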

Solution 9:

If the elements in the pipeline are being starved for buffers (you can check whether CPU/GPU utilization is low), try increasing the number of buffers allocated by the decoder by setting the num-extra-surfaces property of the [source#] group in the application configuration or the num-extra-surfaces property of the Gst-nvv4l2decoder element.
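
For example (the value is illustrative; increase it gradually until the pipeline is no longer starved):

[source0]
num-extra-surfaces=5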

Solution 10:

If you are running the application inside docker or on the console and it delivers low FPS, set qos=0 in the configuration file’s [sink0] group. The issue is caused by the initial load: with qos set to 1, the property’s default value in the [sink0] group, decodebin starts dropping frames.

On NVIDIA Jetson Nano™, deepstream-segmentation-test starts as expected, but crashes after a few minutes, rebooting the system

Solution:

NVIDIA recommends that you power the Jetson module through the DC power connector when running this app. USB adapters may not be able to handle the transients.

Errors occur when deepstream-app is run with a number of streams greater than 100

For example: (deepstream-app:15751): GStreamer-CRITICAL **: 19:25:29.810: gst_poll_write_control: assertion 'set != NULL' failed.

Solution:

Run this command on the console:

$ ulimit -Sn 4096

Then run deepstream-app again.

Errors occur when deepstream-app fails to load plugin Gst-nvinferserver on dGPU only

For example: (deepstream-app:16632): GStreamer-WARNING **: 13:13:31.201: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory.

This is a harmless warning indicating that DeepStream’s nvinferserver plugin cannot be used because the Triton Inference Server is not installed. It occurs only on x86 (dGPU) platforms; Jetson platforms should not have this problem, since Triton is installed automatically by the DeepStream package.

Solution 1:

Ignore this message if you do not need Triton support. Otherwise, see Solutions 2 and 3.

Solution 2:

Pull the deepstream-triton docker image and start the container. Then retry deepstream-app to launch the Triton models.
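
For example (the image tag is illustrative; check NGC for the tag that matches your DeepStream release):

$ docker pull nvcr.io/nvidia/deepstream:5.0-20.07-triton
$ docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
      nvcr.io/nvidia/deepstream:5.0-20.07-triton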

Solution 3:

Build the Triton Server library from source (https://github.com/NVIDIA/triton-inference-server/releases/tag/v1.12.0) and fix the dynamic linking problem manually.

TensorFlow models are running into OOM (Out-Of-Memory) problems

This problem might be observed as CUDA_ERROR_OUT_OF_MEMORY, a core dump, the application getting killed, or similar issues once GPU memory has been set up by the TensorFlow component.

Solution:

Tune the tf_gpu_memory_fraction parameter in the config file (e.g. config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt) to a proper value. For more details, see samples/configs/deepstream-app-trtis/README.
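
The parameter appears in the Triton backend section of the nvinferserver configuration (protobuf text format); lowering it caps how much GPU memory TensorFlow may reserve. For example (0.4 is an illustrative value; see the README for recommended ranges):

tf_gpu_memory_fraction: 0.4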

NvDCF tracker parameter tuning

Ghost bbox lingering

A tracker is not terminated even after the target disappears from the scene, resulting in a ghost bbox. This ghost bbox may linger as the tracker learns more background information.

Solution:

To mitigate this issue, you may make the termination policy more aggressive by increasing minTrackerConfidence and/or minTrackingConfidenceDuringInactive. It is recommended to set the PGIE interval to 0, set minTrackingConfidenceDuringInactive: 99, and try adjusting minTrackerConfidence first. After that, you can adjust the PGIE interval while fine-tuning the two tracker parameters.
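
For example, in the NvDCF tracker configuration file (e.g. tracker_config.yml), following the recommendation above (0.7 is an illustrative starting value to adjust):

minTrackerConfidence: 0.7
minTrackingConfidenceDuringInactive: 99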

BBox flickering

Solution:

If the bbox flickering is observed in the video output, it may be because the value for minTrackerConfidence and/or minTrackingConfidenceDuringInactive is set too low. You may gradually increase the value for those parameters to mitigate the issue.

Although the real-time video output may have bbox flickering, if the tracking IDs are well-maintained over time, then you can enable the past-frame data configuration (i.e., useBufferedOutput: 1 in NvDCF config file and enable-past-frame=1 in deepstream-app config file) to retrieve the missing outputs in the middle for post-processing and analysis.
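
For example (useBufferedOutput belongs in the NvDCF config file, e.g. tracker_config.yml; enable-past-frame belongs in the [tracker] group of the deepstream-app config and requires batch processing to be enabled):

# NvDCF config file
useBufferedOutput: 1

# deepstream-app config file
[tracker]
enable-batch-process=1
enable-past-frame=1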

Frequent tracking ID changes although there are no nearby objects

Solution:

It is highly likely that the tracker cannot detect the target from the correlation response map. It is recommended to start with a lower minimum qualification for the target: set minTrackerConfidence to a relatively low value like 0.5. Also, in case the state estimator is enabled, the prediction may not be accurate enough. Consider tuning the state estimator parameters based on the expected motion dynamics.

Frequent tracking ID switches to the nearby objects

Solution:

Make the data association policy stricter by increasing the minimum qualifications such as:

  • minMatchingScore4SizeSimilarity
  • minMatchingScore4Iou
  • minMatchingScore4VisualSimilarity

Consider enabling instance awareness so that the correlation filters are learned more discriminatively against nearby objects.
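
For example, in the NvDCF configuration file (values are illustrative; increase them gradually from the defaults in your config):

minMatchingScore4SizeSimilarity: 0.6
minMatchingScore4Iou: 0.2
minMatchingScore4VisualSimilarity: 0.7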