Troubleshooting

If you run into trouble while using DeepStream, consider the following solutions.
You are migrating from DeepStream 4.0+ to DeepStream 5.0.
Solution: You must clean up the DeepStream 4.0 libraries and binaries. Use one of these methods to clean up:
For dGPU:
To remove DeepStream 4.0 or later installations:
1. Open the uninstall.sh file, which is present in /opt/nvidia/deepstream/deepstream/
2. Set PREV_DS_VER as 4.0
3. Run the script as sudo ./uninstall.sh
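For example, a minimal sketch of the dGPU cleanup (use any editor to set the variable):
$ cd /opt/nvidia/deepstream/deepstream/
$ sudo vi uninstall.sh        # set PREV_DS_VER as 4.0
$ sudo ./uninstall.sh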
For Jetson: Flash the target device with the latest release of JetPack.
“NvDsBatchMeta not found for input buffer” error while running DeepStream pipeline.
Solution: The Gst-nvstreammux plugin is not in the pipeline. Starting with DeepStream 4.0, Gst-nvstreammux is a required plugin.
This is an example pipeline:
Gst-nvv4l2decoder → Gst-nvstreammux → Gst-nvinfer → Gst-nvtracker → Gst-nvmultistreamtiler → Gst-nvvideoconvert → Gst-nvosd → Gst-nveglglessink
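A minimal gst-launch-1.0 sketch of such a pipeline, assuming a local H.264 elementary stream and an existing primary Gst-nvinfer configuration (file names are illustrative; the tracker and tiler are omitted for brevity):
$ gst-launch-1.0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_primary.txt ! \
    nvvideoconvert ! nvdsosd ! nveglglessink \
    filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0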
The DeepStream reference application fails to launch, or any plugin fails to load.
Solution: Try clearing the GStreamer cache by running the command:
$ rm -rf ${HOME}/.cache/gstreamer-1.0
Also run this command if there is an issue with loading any of the plugins. Warnings or errors for failing plugins are displayed on the terminal.
$ gst-inspect-1.0
Then run this command to find missing dependencies:
$ ldd <plugin>.so
Where <plugin> is the name of the plugin that failed to load.
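For example, to check the DeepStream inference plugin for unresolved dependencies (the path shown is the default dGPU plugin location; adjust it for your platform):
$ ldd /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so | grep "not found"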
Application fails to run when the neural network is changed.
Solution: Be sure that the network parameters are updated for the corresponding [GIE] group in the configuration file (e.g. source30_720p_dec_infer-resnet_tiled_display_int8.txt). Also be sure that the Gst-nvinfer plugin’s configuration file is updated accordingly.
When the model is changed, make sure that the application is not using old engine files.
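For example, a sketch of the application-configuration keys that typically need to stay in sync when the network changes (file names are illustrative):
[primary-gie]
# Point config-file at the updated Gst-nvinfer configuration for the new network
config-file=config_infer_primary_new_model.txt
# Update or remove model-engine-file so a stale engine built for the old network is not reused
model-engine-file=new_model_b4_gpu0_int8.engine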
The DeepStream application is running slowly (Jetson only).
Solution: Ensure that Jetson clocks are set high by running these commands:
$ sudo nvpmodel -m <mode>
Where <mode> is the power mode; for MAX performance and power, the mode is 0.
$ sudo jetson_clocks
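You can verify the active power mode afterwards:
$ sudo nvpmodel -q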
The DeepStream application is running slowly.
Solution 1: One of the plugins in the pipeline may be running slowly.
You can measure the latency of each plugin in the pipeline to determine whether one of them is slow.
To enable frame latency measurement, run this command on the console:
$ export NVDS_ENABLE_LATENCY_MEASUREMENT=1
To enable latency for all plugins, run this command on the console:
$ export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
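Run the application from the same shell so it inherits these variables; per-frame latency (and per-component latency, with the second variable) is then printed to the console. For example, with the reference configuration mentioned above:
$ deepstream-app -c source30_720p_dec_infer-resnet_tiled_display_int8.txt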
Solution 2 (dGPU only): Ensure that your GPU card is in the PCI slot with the greatest bus width.
Solution 3: In the configuration file’s [streammux] group, set batched-push-timeout to (1/max_fps).
Solution 4: In the configuration file’s [streammux] group, set width and height to the stream’s resolution.
Solution 5: For RTSP streaming input, in the configuration file’s [streammux] group, set live-source=1. Also make sure that all [sink#] groups have the sync property set to 0.
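A minimal sketch combining Solutions 3-5 for a 1920x1080, 30-fps live RTSP source (values are illustrative):
[streammux]
live-source=1
# batched-push-timeout is in microseconds; 1/30 s is approximately 33333
batched-push-timeout=33333
width=1920
height=1080

[sink0]
sync=0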
Solution 6: If secondary inferencing is enabled, try increasing batch-size in the configuration file’s [secondary-gie#] group if the number of objects to be inferred exceeds the batch-size setting.
Solution 7: On Jetson, use Gst-nvoverlaysink instead of Gst-nveglglessink as nveglglessink requires GPU utilization.
Solution 8: If the GPU is bottlenecking performance, try increasing the interval at which the primary detector infers on input frames. You can do this by modifying the interval property of the [primary-gie] group in the application configuration, or the interval property of the Gst-nvinfer configuration file.
Solution 9: If the elements in the pipeline are getting starved for buffers (you can check if CPU/GPU utilization is low), try increasing the number of buffers allocated by the decoder by setting the num-extra-surfaces property of the [source#] group in the application or the num-extra-surfaces property of Gst-nvv4l2decoder element.
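A sketch of the two application-configuration properties from Solutions 8 and 9 (values are illustrative):
[primary-gie]
# Skip 2 frames between inference calls, i.e. infer on every third frame
interval=2

[source0]
# Allocate extra decoder output buffers beyond the default
num-extra-surfaces=5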
Solution 10: If you are running the application inside Docker or on the console and it delivers low FPS, set qos=0 in the configuration file’s [sink0] group.
The issue is caused by the initial load: with qos set to 1 (the property’s default value in the [sink0] group), decodebin starts dropping frames.
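For example, in the application configuration:
[sink0]
qos=0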
On NVIDIA Jetson Nano™, deepstream-segmentation-test starts as expected, but crashes after a few minutes. The system reboots.
Solution: NVIDIA recommends that you power the Jetson module through the DC power connector when running this app. USB adapters may not be able to handle the transients.
Errors occur when deepstream-app is run with a number of streams greater than 100.
For example:
(deepstream-app:15751): GStreamer-CRITICAL **: 19:25:29.810: gst_poll_write_control: assertion 'set != NULL' failed.
Solution: Run this command on the console:
$ ulimit -Sn 4096
Then run deepstream-app again.
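Note that the raised limit applies only to the shell in which the command was run; you can verify the current soft limit with:
$ ulimit -Sn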
Errors occur when deepstream-app fails to load the Gst-nvinferserver plugin (dGPU only).
For example:
(deepstream-app:16632): GStreamer-WARNING **: 13:13:31.201: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory.
This is a harmless warning indicating that DeepStream’s nvinferserver plugin cannot be used because the Triton Inference Server is not installed. It occurs only on x86 (dGPU) platforms; Jetson platforms should not have this problem because the DeepStream package installs Triton automatically.
Solution 1: Ignore this message if you do not need Triton support. Otherwise, see Solution 2 or 3.
Solution 2: Pull the deepstream-triton Docker image and start the container, then run deepstream-app again to launch Triton models.
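A sketch of Solution 2 (the image tag is a placeholder; pick the -triton tag that matches your DeepStream release on NGC, and the run command assumes the NVIDIA Container Toolkit and X11 forwarding are set up):
$ docker pull nvcr.io/nvidia/deepstream:<release>-triton
$ docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
    nvcr.io/nvidia/deepstream:<release>-triton
Then, inside the container, run deepstream-app with a Gst-nvinferserver configuration.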
Solution 3: Build the Triton Server library from source (https://github.com/NVIDIA/triton-inference-server/releases/tag/v1.12.0) and fix the dynamic-link problem manually.
TensorFlow models are running into an OOM (Out-Of-Memory) problem.
This problem might be observed as ‘CUDA_ERROR_OUT_OF_MEMORY’, a core dump, the application being killed, or similar issues once GPU memory has been set up by the TensorFlow component.
Solution: Tune the tf_gpu_memory_fraction parameter in the config file (e.g. config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt) to a proper value. For more details, see:
samples/configs/deepstream-app-trtis/README
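For example, to locate the parameter in the sample configuration and lower it (the value shown is illustrative; smaller fractions leave more GPU memory for other models and components, and the exact syntax follows the sample file and README above):
$ grep -n tf_gpu_memory_fraction samples/configs/deepstream-app-trtis/config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt
Then set, for example:
tf_gpu_memory_fraction: 0.3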