DriveWorks SDK Reference
3.0.4260 Release
For Test and Development only

Traffic Light Classification Sample (LightNet)
Note
SW Release Applicability: This sample is available in NVIDIA DRIVE Software releases.

Description

The Traffic Light Classification sample demonstrates how to use the NVIDIA® proprietary LightNet deep neural network (DNN) to perform traffic light classification. It detects the state of the traffic lights facing the ego car. LightNet currently supports RCB images. RGBA images are not supported.

This sample shows a simple implementation of traffic light classification built around the NVIDIA LightNet DNN. For more information on the LightNet DNN and how to customize it for your applications, consult your NVIDIA sales or business representative.
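
At a high level, the sample runs the same loop on every frame: read an RCB frame, run LightNet inference to obtain traffic light bounding boxes with state labels, and overlay the boxes color-coded by state. The sketch below only illustrates that flow; every type and function in it is a placeholder, and none of it is the DriveWorks or LightNet API.

// Self-contained sketch of the per-frame flow described above. All names here
// (TrafficLight, classifyFrame, etc.) are illustrative placeholders, NOT the
// DriveWorks LightNet API used by the actual sample.
#include <iostream>
#include <string>
#include <vector>

struct TrafficLight {
    float x, y, w, h;     // bounding box in image coordinates
    std::string state;    // e.g. "Red_Solid_Traffic_Light"
};

// Stand-in for LightNet inference: the real sample feeds an RCB frame to the
// DNN and receives classified traffic light detections back.
std::vector<TrafficLight> classifyFrame(int /*frameIndex*/)
{
    return {{100.0f, 50.0f, 20.0f, 40.0f, "Red_Solid_Traffic_Light"}};
}

int main()
{
    for (int frame = 0; frame < 3; ++frame)            // per-frame loop
    {
        for (const TrafficLight& tl : classifyFrame(frame))
        {
            // The real sample draws a colored box; here we just print the result.
            std::cout << "frame " << frame << ": " << tl.state
                      << " at (" << tl.x << ", " << tl.y << ")\n";
        }
    }
    return 0;
}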

Sensor Details

The image datasets used to train LightNet have been captured by a View Sekonix Camera Module (SF3325) with an AR0231 RCCB sensor. The camera is mounted high up, at the rear-view mirror position. Demo videos are captured at 2.3 MP and down-sampled to 960 x 604.

To achieve the best traffic light detection performance, NVIDIA recommends adopting a similar camera setup and aligning the vertical center of the video with the horizon before recording new videos.

Limitations

Warning
Currently, the LightNet DNN has limitations that could affect its performance:
  • It is optimized for daytime, clear-weather data. As a result, it does not perform well in dark or rainy conditions.
  • It is trained on data collected in the United States. As a result, it may have reduced accuracy in other locales.

The LightNet DNN is trained to support the following camera configurations:

  • Front camera location with a 60° field of view
  • Front camera location with a 120° field of view

Running the Sample

./sample_light_classifier --input-type=[video|camera]
                          --video=[path/to/video]
                          --camera-type=[camera]
                          --camera-group=[a|b|c|d]
                          --slave=[0|1]
                          --camera-index=[0|1|2|3]
                          --precision=[fp16|fp32]
                          --useCudaGraph=[0|1]
                          --dla=[0|1]
                          --dlaEngineNo=[0|1]

where

--input-type=[video|camera]
        Defines whether the input comes from a live camera or from a recorded video.
        Live camera input is supported only on NVIDIA DRIVE™ platforms;
        it is not supported on Linux (x86 architecture) host systems.
        Default value: video

--video=[path/to/video]
        Specifies the absolute or relative path of a raw, lraw, or h264 recording.
        Only applicable if --input-type=video.
        Default value: path/to/data/samples/raw/rccb.raw

--camera-type=[camera]
        Specifies a supported AR0231 RCCB sensor.
        Only applicable if --input-type=camera.
        Default value: ar0231-rccb-bae-sf3324

--camera-group=[a|b|c|d]
        Specifies the camera group to which the camera is connected.
        Only applicable if --input-type=camera.
        Default value: b

--slave=[0|1]
        Setting this parameter to 1 when running the sample on Xavier B accesses the camera
        on Xavier A.
        Applicable only when --input-type=camera.
        Default value: 0

--camera-index=[0|1|2|3]
        Specifies the camera index within the camera group.
        Default value: 0

--precision=[fp16|fp32]
        Specifies the precision for the LightNet model.
        Default value: fp32

--useCudaGraph=[0|1]
        Setting this parameter to 1 runs LightNet DNN inference with CUDA Graphs, if the hardware supports it.
        Default value: 0

--dla=[0|1]
        Setting this parameter to 1 runs inference on the DLA.
        Default value: 0

--dlaEngineNo=[integer]
        Specifies the DLA engine to run LightNet.
        Applicable only when --dla=1.
        Default value: 0
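
For example, the options above can be combined to run inference in FP16 on DLA engine 0 against the default video (the flag values here are only illustrative):

./sample_light_classifier --precision=fp16 --dla=1 --dlaEngineNo=0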

To run the sample on Linux

./sample_light_classifier --video=<video file.raw>

or

./sample_light_classifier --video=<video file.lraw>

or

./sample_light_classifier --video=<video file.h264>

To run the sample on a camera on NVIDIA DRIVE platforms

./sample_light_classifier --input-type=camera --camera-type=<rccb camera type> --camera-group=<camera group> --camera-index=<camera idx on camera group>

where <rccb camera type> is one of the following:

  • ar0231-rccb-bae-sf3324
  • ar0231-rccb-bae-sf3325
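
For example, with an SF3325 module connected to port 0 of camera group a (adjust the group and index to match your setup):

./sample_light_classifier --input-type=camera --camera-type=ar0231-rccb-bae-sf3325 --camera-group=a --camera-index=0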

Output

The sample creates a window, displays the video, and overlays bounding boxes on detected traffic light objects. The state of each traffic light is displayed as text above its bounding box, and the color of the bounding box represents the status of the traffic light, as follows (see the sketch after this list):

  • Green: Green_Arrow_Traffic_Light, Green_Solid_Traffic_Light, Green_Arrow_Green_Solid_Traffic_Light
  • Red: Red_Arrow_Traffic_Light, Red_Solid_Traffic_Light, Red_Arrow_Red_Solid_Traffic_Light
  • White: Stateless or non-facing Traffic Light, Red_Arrow_Green_Solid_Traffic_Light, Green_Arrow_Red_Solid_Traffic_Light
  • Yellow: Yellow_Arrow_Traffic_Light, Yellow_Solid_Traffic_Light
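
The mapping above can be expressed as a small lookup on the state label. The snippet below is a hypothetical, standalone illustration; colorForState and the returned color names are placeholders and are not taken from the sample's source.

// Illustrative mapping from LightNet state labels to the overlay colors listed
// above. The label strings follow this page; the function name and color
// strings are placeholders, not part of the sample's actual implementation.
#include <iostream>
#include <string>

std::string colorForState(const std::string& state)
{
    // Mixed red/green combinations and stateless lights fall through to white.
    if (state.rfind("Green_", 0) == 0 && state.find("Red_") == std::string::npos)
        return "green";
    if (state.rfind("Red_", 0) == 0 && state.find("Green_") == std::string::npos)
        return "red";
    if (state.rfind("Yellow_", 0) == 0)
        return "yellow";
    return "white";
}

int main()
{
    std::cout << colorForState("Green_Solid_Traffic_Light") << "\n";            // green
    std::cout << colorForState("Red_Arrow_Green_Solid_Traffic_Light") << "\n";  // white
    std::cout << colorForState("Yellow_Arrow_Traffic_Light") << "\n";           // yellow
    return 0;
}
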
Figure: sample_trafficlight_classification.png (sample output with classified traffic light bounding boxes)

Additional Information

For more information, see LightNet.