This module provides the APIs to initialize, query, and release the NVIDIA proprietary camera blindness detection neural network, ClearSightNet. The relevant data structures are:
dwClearSightNetParams
: allows users to specify the network precision and the processor type on which to run inference. Supported processors: GPU (default) and DLA (only on DDPX). DLA inference works only with FP16 precision.

dwBlindnessDetectorParams
: encapsulates the following parameters:

dwBlindnessDetectorParams.clearSightNetHandle
: handle to ClearSightNet

dwBlindnessDetectorParams.temporalFilterWindow
: temporal filter window (the number of frames over which the blindness ratio is filtered)

dwBlindnessDetectorParams.numRegionsX
: number of sub-regions in the X direction

dwBlindnessDetectorParams.numRegionsY
: number of sub-regions in the Y direction

dwBlindnessDetectorParams.regionDividersX
: sub-region dividers (as fractions of the image width) in the X direction

dwBlindnessDetectorParams.regionDividersY
: sub-region dividers (as fractions of the image height) in the Y direction

dwBlindnessDetectionOutput
: instances of this structure receive the processed network inference output: an overlay RGBA mask and blindness ratios.

To run ClearSightNet, users must use the ClearSightNet Interface and Camera Blindness Detection Interface APIs. The ClearSightNet Interface API loads and prepares the network for inference. The Camera Blindness Detection Interface API uses the loaded network to run inference on input images and returns the processed output.
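To illustrate how the sub-region parameters fit together, the sketch below fills in an evenly spaced divider grid. The struct here is a simplified stand-in for `dwBlindnessDetectorParams` (the real definition, including the ClearSightNet handle type and the divider array sizes, comes from the DriveWorks headers), and the helper function is our own; only the field names follow the list above.

```c
#include <assert.h>

/* Simplified stand-in for dwBlindnessDetectorParams; the real struct is
 * defined by the DriveWorks headers. MAX_REGION_DIVIDERS is an assumption
 * made for this sketch. */
#define MAX_REGION_DIVIDERS 8

typedef struct {
    void *clearSightNetHandle;     /* handle to ClearSightNet (opaque here)   */
    unsigned temporalFilterWindow; /* frames over which ratios are filtered   */
    unsigned numRegionsX;          /* sub-regions along X                     */
    unsigned numRegionsY;          /* sub-regions along Y                     */
    float regionDividersX[MAX_REGION_DIVIDERS]; /* fractions of image width  */
    float regionDividersY[MAX_REGION_DIVIDERS]; /* fractions of image height */
} BlindnessDetectorParamsSketch;

/* Configure a numX x numY grid of sub-regions with evenly spaced dividers.
 * N regions along an axis need N-1 interior dividers, each expressed as a
 * fraction of the image dimension. */
static void setUniformRegions(BlindnessDetectorParamsSketch *p,
                              unsigned numX, unsigned numY)
{
    p->numRegionsX = numX;
    p->numRegionsY = numY;
    for (unsigned i = 0; i + 1 < numX; ++i)
        p->regionDividersX[i] = (float)(i + 1) / (float)numX;
    for (unsigned j = 0; j + 1 < numY; ++j)
        p->regionDividersY[j] = (float)(j + 1) / (float)numY;
}
```

Non-uniform grids are also possible by writing the divider fractions directly, e.g. to make the center sub-region wider than the edges.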
The default model is intended to work on all surround cameras.
ClearSightNet consumes RCCB frames with a resolution of 480x240 pixels from AR0231 cameras (revision >= 4).
ClearSightNet outputs intermediate signals that feed the Camera Blindness Detection Interface pipeline, which returns the following:
dwBlindnessDetectionOutput.blindnessRatio
: a value between 0 and 1 indicating the fraction of the overall input image that is blinded or compromised

dwBlindnessDetectionOutput.fullBlindRatio
: a value between 0 and 1 indicating the fraction of the overall input image that is fully blinded or blocked

dwBlindnessDetectionOutput.partBlindRatio
: a value between 0 and 1 indicating the fraction of the overall input image that is partially blinded or blurred

dwBlindnessDetectionOutput.mask
: an RGBA mask that can be overlaid on the input image to visualize blocked (R channel), blurred (G channel), and sky (B channel) regions

dwBlindnessDetectionOutput.numRegionsX
: number of sub-regions in the X direction

dwBlindnessDetectionOutput.numRegionsY
: number of sub-regions in the Y direction

dwBlindnessDetectionOutput.regionDividersX
: sub-region dividers (in input image coordinates) in the X direction

dwBlindnessDetectionOutput.regionDividersY
: sub-region dividers (in input image coordinates) in the Y direction

dwBlindnessDetectionOutput.regionBlindnessRatio
: values between 0 and 1 indicating the fraction of each sub-region that is blinded or compromised

dwBlindnessDetectionOutput.regionFullBlindRatio
: values between 0 and 1 indicating the fraction of each sub-region that is fully blinded or blocked

dwBlindnessDetectionOutput.regionPartBlindRatio
: values between 0 and 1 indicating the fraction of each sub-region that is partially blinded
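A typical consumer of this output scans the per-region ratios and raises an alarm when any sub-region is too blinded. The sketch below uses a simplified stand-in for `dwBlindnessDetectionOutput` (the real struct comes from the DriveWorks headers), and it assumes the per-region ratios are laid out row-major with one entry per sub-region; the helper function and threshold are ours.

```c
#include <assert.h>

/* Simplified stand-in for dwBlindnessDetectionOutput; only the fields used
 * here are mirrored. MAX_REGIONS and the row-major layout of
 * regionBlindnessRatio are assumptions made for this sketch. */
#define MAX_REGIONS 64

typedef struct {
    float blindnessRatio;                    /* overall blinded fraction      */
    unsigned numRegionsX;                    /* sub-regions along X           */
    unsigned numRegionsY;                    /* sub-regions along Y           */
    float regionBlindnessRatio[MAX_REGIONS]; /* per-region blinded fractions  */
} BlindnessOutputSketch;

/* Return the index of the most-blinded sub-region whose ratio exceeds the
 * given alarm threshold, or -1 if every sub-region is below it. */
static int worstRegion(const BlindnessOutputSketch *out, float threshold)
{
    int worst = -1;
    float worstRatio = threshold;
    unsigned n = out->numRegionsX * out->numRegionsY;
    for (unsigned i = 0; i < n; ++i) {
        if (out->regionBlindnessRatio[i] > worstRatio) {
            worstRatio = out->regionBlindnessRatio[i];
            worst = (int)i;
        }
    }
    return worst;
}
```

The returned index can be mapped back to an image rectangle via the `regionDividersX` / `regionDividersY` coordinates, e.g. to highlight the offending sub-region using the overlay mask.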