The Traffic Light Classification sample demonstrates how to use the NVIDIA® proprietary LightNet deep neural network (DNN) to perform traffic light classification. It detects the state of the traffic lights facing the ego car. LightNet currently supports RCB images only; RGBA images are not supported.
This sample shows a simple implementation of traffic light classification built around the NVIDIA LightNet DNN. For more information on the LightNet DNN and how to customize it for your applications, consult your NVIDIA sales or business representative.
The image datasets used to train LightNet were captured by a Sekonix SF3325 camera module with an AR0231 RCCB sensor. The camera is mounted high up, at the rear-view mirror position. Demo videos are captured at 2.3 MP and down-sampled to 960 x 604.
To achieve the best traffic light detection performance, NVIDIA recommends adopting a similar camera setup and aligning the video center vertically with the horizon before recording new videos.
The LightNet DNN is trained to support any of the following six camera configurations:
./sample_light_classifier --rig=[path/to/rig/file] --liveCam=[0|1]
where
--rig=[path/to/rig/file]
    Rig file containing all information about vehicle sensors and calibration.
    Default value with video: path/to/data/samples/waitcondition/rig.json
    Default value with live camera: path/to/data/samples/waitcondition/live_cam_rig.json

--liveCam=[0|1]
    Selects a live camera (1) or a video file (0). Has no effect on x86.
    Must be set to 1 when passing a rig with a live camera setup.
    Default value: 0
To run the sample on the default video:

./sample_light_classifier

To run the sample with a live camera:

./sample_light_classifier --liveCam=1
The sample creates a window, displays a video, and overlays bounding boxes on traffic light objects. The state of each traffic light is displayed as text above its bounding box. The color of each bounding box represents the status of the traffic light, as follows:
For more information, see LightNet.