Known Issues

  • Multi-stream summarization with chat enabled in the /summarize API call can result in Neo4j errors if the number of streams is greater than 4. Neo4j Enterprise Edition is required for multi-stream processing with more than 4 streams. Summarization with chat disabled works for any number of streams.

  • The audio processing and CV pipelines currently do not work together. Only one of the two can be used at a time.

  • Models are trained on specific data and use cases, so they might give incorrect results when tested on other inputs.

  • VLM model accuracy: The timestamps returned are sometimes inaccurate, and the model can hallucinate for certain questions. Prompt tuning might be required.

  • Summarization accuracy: Summarization accuracy is heavily dependent on VLM accuracy. In addition, the default configs are tuned for the warehouse use case. Users can supply custom VLM and summarization prompts to the /summarize API, as sketched below.
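
    For example, a minimal sketch of passing custom prompts in a /summarize request using Python and the requests library. The field names (prompt, caption_summarization_prompt, summary_aggregation_prompt), the port, and the placeholder values are assumptions based on a default deployment; check the API reference for your release.

      import requests

      # Assumed VSS backend endpoint; adjust host/port for your deployment.
      VSS_URL = "http://localhost:8100/summarize"

      payload = {
          # Hypothetical values: the file ID comes from the /files upload endpoint.
          "id": "<uploaded-file-id>",
          "model": "<vlm-model-name>",
          # Custom VLM prompt tuned for your own use case instead of the
          # default warehouse prompt.
          "prompt": "Describe notable events in the scene.",
          # Custom CA-RAG summarization prompts.
          "caption_summarization_prompt": "Combine the captions into a concise event log.",
          "summary_aggregation_prompt": "Aggregate the chunk summaries into a final report.",
      }

      resp = requests.post(VSS_URL, json=payload, timeout=600)
      resp.raise_for_status()
      print(resp.json())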

  • The following harmless warning might be seen during VSS application execution. It can be safely ignored.

    • GLib (gthread-posix.c): Unexpected error from C library during ‘pthread_setspecific’: Invalid argument. Aborting

  • Due to a browser limitation, loading multiple Gradio sessions in the same browser may cause the sessions to get stuck or appear slow.

  • Guardrails might not reject some prompts that are expected to be rejected. This can happen when the prompt is relevant in other contexts, or when the topics in the prompt are not configured to be rejected. You can try tuning the guardrails configuration if required.

  • OpenAI connection errors or 429 (Too Many Requests) errors might sometimes be seen if too many requests are sent to the GPT-4V or GPT-4o VLMs. This can be due to low TPM/RPM limits on the OpenAI account; a retry sketch follows below.
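
    If these errors persist, a generic client-side mitigation is to retry with exponential backoff. A minimal sketch (not part of VSS; the URL, headers, and payload are placeholders):

      import time

      import requests

      def post_with_backoff(url, headers, payload, max_retries=5):
          """POST with retries on 429 responses and connection errors."""
          delay = 1.0
          for _ in range(max_retries):
              try:
                  resp = requests.post(url, headers=headers, json=payload, timeout=120)
                  if resp.status_code != 429:
                      resp.raise_for_status()
                      return resp
              except requests.ConnectionError:
                  pass  # transient connection error; fall through to retry
              time.sleep(delay)
              delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
          raise RuntimeError("Exhausted retries (rate limited or unreachable)")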

  • CA-RAG summarization might show a truncated summary response. This is due to the max_tokens setting; try increasing its value in the CA-RAG config file, as sketched below.
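
    For example, a sketch of the relevant section, assuming the default ca_rag_config.yaml layout (the exact keys and values may differ across releases):

      summarization:
        llm:
          model: "meta/llama-3.1-70b-instruct"  # example; keep your configured model
          max_tokens: 4096                      # increase if summaries are truncated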

  • Helm deployment: The VSS deployment pod fails with Error: (LLM call Exception: llm-nim-svc)

    In spite of having an init container that waits for the LLM pod to come up, the VSS deployment can error out for an unknown reason, as shown below.

    2024-11-27 17:51:44,763 ERROR Failed to load VIA stream handler - LLM Call Exception: HTTPConnectionPool(host='llm-nim-svc', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2c9d0ad6c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
    

    If this happens, wait an additional few minutes; a pod restart fixes the issue.

    Users can monitor this using:

      sudo watch microk8s kubectl get pod

  • The CV pipeline is currently supported only for video files and live streams. Images are not supported.

  • Deleting RTSP streams can sometimes hang. This is because rtspsrc indefinitely retries TCP transport after a UDP timeout when the timeout property is set; once the TCP link is established, pipeline teardown hangs. This is a GStreamer issue: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1570. As a workaround, export VSS_RTSP_TIMEOUT=0 as shown below. This disables TCP transport after a UDP timeout, but it can cause streaming to not work at all on a poor network.
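
    For example, set the variable in the environment before starting VSS:

      export VSS_RTSP_TIMEOUT=0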

  • Some video encoding formats, such as the H.264 High 4:4:4 Predictive profile (Hi444pp), are not supported. For such videos the summarization request will fail and a “No summary was generated for given request” message will be shown. In the VSS container logs, you will see an error similar to the one below:

    Error String : Feature not supported on this GPU
    Error Code : 801
    
  • If you see “An internal error occurred” as the output, the likely cause is that initialization failed because the VSS container failed to start. Check the logs for more details.

    One common reason is that VSS is not able to connect to the embedding or reranking service.

  • In “IMAGE FILE SUMMARIZATION & Q&A”, when one of the samples is selected, VSS produces a summary with multiple timestamps, even though only one image is selected.

    The default VLM and CA-RAG prompts are written for video/multi-image input.

    More info about setting these prompts in the UI: Image Summarization, Q&A, and Alerts.

    Users need to tune the prompts for single-image input to avoid this issue.

    More info about tuning these prompts: Tuning Prompts.

  • Rarely, a Guardrails failure might cause VSS to stop. In such cases, either disable Guardrails or build a custom VSS container with the following changes:

    diff --git a/src/vss-engine/src/via_stream_handler.py b/src/vss-engine/src/via_stream_handler.py
    index e4fb893..6fed288 100644
    --- a/src/vss-engine/src/via_stream_handler.py
    +++ b/src/vss-engine/src/via_stream_handler.py
    @@ -1265,7 +1265,8 @@ class ViaStreamHandler:
                                 )
                             except Exception as e:
                                 logger.error("Error in guardrails: %s", str(e))
    -                            self.stop(True)
    +                            with self._lock:
    +                                self._LLMRailsPool.append(rails)
                                 raise Exception("Guardrails failed")
                             # Return the rails to the pool
                             with self._lock:
    @@ -1506,7 +1507,8 @@ class ViaStreamHandler:
                         response = rails.generate(messages=[{"role": "user", "content": query}])
                     except Exception as e:
                         logger.error("Error in guardrails: %s", str(e))
    -                    self.stop(True)
    +                    with self._lock:
    +                        self._LLMRailsPool.append(rails)
                         raise Exception("Guardrails failed")
                     # Return the rails to the pool
                     with self._lock:
    @@ -1904,7 +1906,8 @@ class ViaStreamHandler:
                         response = rails.generate(messages=[{"role": "user", "content": query}])
                     except Exception as e:
                         logger.error("Error in guardrails: %s", str(e))
    -                    self.stop(True)
    +                    with self._lock:
    +                        self._LLMRailsPool.append(rails)
                         raise Exception("Guardrails failed")
                     # Return the rails to the pool
                     with self._lock:
    
  • Rarely, issuing a high number of file summarization requests back to back over a long period of time might cause VSS to stop.