DriveWorks SDK Reference
3.5.78 Release
For Test and Development only

Traffic Light Classification Sample (LightNet)
Note
SW Release Applicability: This sample is available in NVIDIA DRIVE Software releases.

Description

The Traffic Light Classification sample demonstrates how to use the NVIDIA® proprietary LightNet deep neural network (DNN) to perform traffic light classification. It detects the state of the traffic lights facing the ego car. LightNet currently supports RCB images; RGBA images are not supported.

This sample shows a simple implementation of traffic light classification built around the NVIDIA LightNet DNN. For more information on the LightNet DNN and how to customize it for your applications, consult your NVIDIA sales or business representative.

Sensor Details

The image datasets used to train LightNet have been captured by a View Sekonix Camera Module (SF3325) with an AR0231 RCCB sensor. The camera is mounted high, at the rear-view mirror position. Demo videos are captured at 2.3 MP and down-sampled to 960 x 604.

To achieve the best traffic light detection performance, NVIDIA recommends adopting a similar camera setup and aligning the video center vertically with the horizon before recording new videos.

Limitations

Warning
Currently, the LightNet DNN has limitations that could affect its performance:
  • It is optimized for daytime, clear-weather data. As a result, it does not perform well in dark or rainy conditions.
  • It is trained on data collected in the United States. As a result, it may have reduced accuracy in other locales.

The LightNet DNN is trained to support either of the following camera configurations:

  • Front camera location with a 60° field of view
  • Front camera location with a 120° field of view

Running the Sample

./sample_light_classifier --rig=[path/to/rig/file]
                          --liveCam=[0|1]

where

--rig=[path/to/rig/file]
    Rig file containing all information about vehicle sensors and calibration.
    Default value with video: path/to/data/samples/waitcondition/rig.json
    Default value with live camera: path/to/data/samples/waitcondition/live_cam_rig.json

--liveCam=[0|1]
    Use a live camera (1) or a video file (0). Has no effect on x86.
    Must be set to 1 when passing a rig with a live camera setup.
    Default value: 0

To run the sample on Linux

./sample_light_classifier

To run the sample on a camera on NVIDIA DRIVE platforms

./sample_light_classifier --liveCam=1
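
Combining the two options, an invocation with a custom live-camera rig might look like the following sketch (the rig path here is illustrative, not a file shipped with the SDK):

```shell
# Run on a DRIVE platform with a custom live-camera rig.
# --liveCam=1 is required whenever the rig describes a live camera setup.
./sample_light_classifier --rig=/path/to/my_live_cam_rig.json --liveCam=1
```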

Output

The sample creates a window, displays a video, and overlays bounding boxes for traffic light objects. The state of each traffic light is displayed as text above its bounding box. The color of the bounding box indicates the state of the traffic light, as follows:

  • Green: Green_Arrow_Traffic_Light, Green_Solid_Traffic_Light, Green_Arrow_Green_Solid_Traffic_Light
  • Red: Red_Arrow_Traffic_Light, Red_Solid_Traffic_Light, Red_Arrow_Red_Solid_Traffic_Light
  • White: Stateless or non-facing Traffic Light, Red_Arrow_Green_Solid_Traffic_Light, Green_Arrow_Red_Solid_Traffic_Light
  • Yellow: Yellow_Arrow_Traffic_Light, Yellow_Solid_Traffic_Light
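
The color scheme above can be sketched as a simple lookup. This is an illustrative helper only; the label strings follow the list above, but the function and its use are not part of the DriveWorks API:

```python
# Hypothetical sketch: map LightNet state labels to overlay colors,
# following the color scheme described in the list above.
GREEN_STATES = {
    "Green_Arrow_Traffic_Light",
    "Green_Solid_Traffic_Light",
    "Green_Arrow_Green_Solid_Traffic_Light",
}
RED_STATES = {
    "Red_Arrow_Traffic_Light",
    "Red_Solid_Traffic_Light",
    "Red_Arrow_Red_Solid_Traffic_Light",
}
YELLOW_STATES = {
    "Yellow_Arrow_Traffic_Light",
    "Yellow_Solid_Traffic_Light",
}

def box_color(state: str) -> str:
    """Return the bounding-box color for a classified traffic-light state."""
    if state in GREEN_STATES:
        return "green"
    if state in RED_STATES:
        return "red"
    if state in YELLOW_STATES:
        return "yellow"
    # Stateless, non-facing, and mixed red/green states render white.
    return "white"
```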
Figure: sample_trafficlight_classification.png — Wait Conditions Classification Sample

Additional Information

For more information, see LightNet.