# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.

@page clearsightnet_mainsection ClearSightNet
This module provides the APIs to initialize, query, and release the NVIDIA proprietary camera blindness detection neural network: **ClearSightNet**. The relevant member data structures are:
- `dwClearSightNetParams`: allows users to specify the network precision and the processor type on which to run inference. Supported processors: GPU (default) and DLA (only on DDPX). DLA inference is supported only with FP16 precision.
- `dwBlindnessDetectorParams`: encapsulates the following parameters:
  1. `dwBlindnessDetectorParams.clearSightNetHandle`: handle to ClearSightNet
  2. `dwBlindnessDetectorParams.temporalFilterWindow`: temporal filter window (number of frames over which the blindness ratio is filtered)
  3. `dwBlindnessDetectorParams.numRegionsX`: number of sub-regions in the X direction
  4. `dwBlindnessDetectorParams.numRegionsY`: number of sub-regions in the Y direction
  5. `dwBlindnessDetectorParams.regionDividersX`: sub-region dividers (as fractions of the image width) in the X direction
  6. `dwBlindnessDetectorParams.regionDividersY`: sub-region dividers (as fractions of the image height) in the Y direction
- `dwBlindnessDetectionOutput`: instances of this structure are used to get the processed network inference output: the overlay RGBA mask and the blindness ratios
To run ClearSightNet, use the @ref clearsightnet_group and @ref blindness_detector_group APIs. The @ref clearsightnet_group API loads and prepares the network for inference. The @ref blindness_detector_group API uses the loaded network to run inference on input images and returns the processed output.
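The `temporalFilterWindow` parameter described above smooths the blindness ratio over a number of frames. The exact filter DriveWorks applies is not documented here; as a hedged illustration, a plain moving average over the last `window` frames could look like this:

```c
#include <stddef.h>

/* Hypothetical moving-average filter over the last `window` frames.
 * This only illustrates what a "temporal filter window" could mean;
 * the actual filter inside the blindness detector may differ. */
typedef struct {
    float samples[32]; /* ring buffer; requires 1 <= window <= 32 */
    size_t window;     /* number of frames to average over */
    size_t count;      /* samples seen so far (saturates at window) */
    size_t head;       /* next write position (oldest sample when full) */
    float sum;         /* running sum of buffered samples */
} TemporalFilter;

/* Push one frame's raw blindness ratio; returns the filtered ratio. */
float temporal_filter_push(TemporalFilter *f, float blindnessRatio)
{
    if (f->count == f->window) {
        f->sum -= f->samples[f->head]; /* evict the oldest sample */
    } else {
        f->count++;
    }
    f->samples[f->head] = blindnessRatio;
    f->sum += blindnessRatio;
    f->head = (f->head + 1) % f->window;
    return f->sum / (float)f->count;
}
```

Feeding each frame's raw ratio through `temporal_filter_push` yields a value smoothed over the last `window` frames, which damps single-frame spikes in the reported blindness.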
The default model should work on all supported (i.e., surround) cameras.
## Input

ClearSightNet consumes RCCB frames with a resolution of 480x240 pixels from AR0231 cameras (revision >= 4).
## Output

ClearSightNet outputs intermediate signals that feed the @ref blindness_detector_group pipeline, which returns the following:
1. `dwBlindnessDetectionOutput.blindnessRatio`: a value between 0 and 1 indicating the fraction of the overall input image that is blinded or compromised.
2. `dwBlindnessDetectionOutput.fullBlindRatio`: a value between 0 and 1 indicating the fraction of the overall input image that is fully blinded or blocked.
3. `dwBlindnessDetectionOutput.partBlindRatio`: a value between 0 and 1 indicating the fraction of the overall input image that is partially blinded or blurred.
4. `dwBlindnessDetectionOutput.mask`: an RGBA mask that can be overlaid on the input image to visualize blocked (R channel) or blurred (G channel) regions.
5. `dwBlindnessDetectionOutput.numRegionsX`: number of sub-regions in the X direction
6. `dwBlindnessDetectionOutput.numRegionsY`: number of sub-regions in the Y direction
7. `dwBlindnessDetectionOutput.regionDividersX`: sub-region dividers (in input image coordinates) in the X direction
8. `dwBlindnessDetectionOutput.regionDividersY`: sub-region dividers (in input image coordinates) in the Y direction
9. `dwBlindnessDetectionOutput.regionBlindnessRatio`: values between 0 and 1 indicating the fraction of each sub-region that is blinded or compromised
10. `dwBlindnessDetectionOutput.regionFullBlindRatio`: values between 0 and 1 indicating the fraction of each sub-region that is fully blinded or blocked
11. `dwBlindnessDetectionOutput.regionPartBlindRatio`: values between 0 and 1 indicating the fraction of each sub-region that is partially blinded
## Relevant Tutorials

- @ref clearsightnet_usecase1
## APIs

- @ref clearsightnet_group
- @ref blindness_detector_group