DriveWorks SDK Reference
3.5.78 Release
For Test and Development only

Traffic Sign Classification Sample (SignNet)
SW Release Applicability: This sample is available in NVIDIA DRIVE Software releases.


The Traffic Sign Classification sample demonstrates how to use the NVIDIA® proprietary SignNet deep neural network (DNN) to perform traffic sign classification. It outputs the class of the traffic signs from images captured by the ego car.

SignNet currently supports RCB images; RGBA images are not supported. SignNet models currently cover three geographical regions. For the United States (US), two models are provided: US_V2 and US_V4. For the European Union (EU), there are also two models: EU_V3 and EU_V4. For Japan, a single model is provided: JP_V1. Within the EU and US regions, newer model versions expand the number of supported traffic sign classes, and the V4 models additionally add HWISP support.

The default model for the sample app is US_V2. To use a European traffic sign model, the Japan model, or the US model version 4, you must explicitly select it with the corresponding command line parameter when running the sample.

This sample shows a simple implementation of traffic sign classification built around the NVIDIA SignNet DNN. Classification is performed by first detecting traffic signs with the NVIDIA DriveNet DNN and then classifying the resulting image crops with the SignNet DNN. No tracking of traffic signs is applied, so some flickering of detections may be noticeable. For more information on the SignNet DNN and how to customize it for your applications, consult your NVIDIA sales or business representative.
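The two-stage flow described above (detect signs, then classify each crop) can be sketched as follows. This is an illustrative sketch only; `detect_signs` and `classify_crop` are hypothetical stand-ins, not the DriveWorks API, and a real application would invoke the DriveNet and SignNet DNNs instead:

```python
# Illustrative two-stage sign pipeline: detect bounding boxes, then
# classify each crop. The stand-in functions below are NOT DriveWorks
# APIs; they only mimic the shape of the data flow.

def detect_signs(frame):
    # Stand-in for the DriveNet detection stage: returns (x, y, w, h) boxes.
    return [(100, 40, 24, 24), (300, 60, 18, 18)]

def classify_crop(crop):
    # Stand-in for the SignNet classification stage: returns a class label.
    return "speed_limit_50"

def classify_frame(frame):
    """Detect signs in one frame and classify each detected crop."""
    results = []
    for (x, y, w, h) in detect_signs(frame):
        # Crop the detected region out of the frame (frame as rows of pixels).
        crop = [row[x:x + w] for row in frame[y:y + h]]
        results.append(((x, y, w, h), classify_crop(crop)))
    # No tracking is applied across frames, which is why per-frame
    # detections may appear to flicker in the sample's output.
    return results
```

Because each frame is processed independently, a sign missed by the detector in one frame simply disappears from the output until it is detected again.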

Sensor Details

The image datasets used to train SignNet were captured by a Sekonix camera module (SF3325) with an AR0231 RCCB sensor, mounted on a rig on top of the vehicle. Demo videos are captured at 2.3 MP and downsampled to 960 x 604. Eight cameras were used to collect the training data for the provided SignNet models. The following list shows the mounting position and field of view (FOV) of each camera:

  • Center Front 60° FOV
  • Center Front 120° FOV
  • Center Front 30° FOV
  • Center Right 120° FOV
  • Center Left 120° FOV
  • Rear Left 120° FOV
  • Rear Center 120° FOV
  • Rear Center 60° FOV

To achieve the best traffic sign detection performance, NVIDIA recommends adopting a camera setup similar to one or more of the cameras listed above and aligning the video center vertically with the horizon before recording new videos.


Currently, the SignNet DNN has limitations that could affect its performance:
  • It was trained mostly on bright daylight, overcast, twilight, and non-rain visibility conditions. Training data for artificial light, nighttime, and rainy-weather conditions was limited; thus, classifier performance may suffer in rain or in constrained illumination.
  • The classification performance of SignNet depends on the size of the traffic signs detected in an image frame. Good classification performance is observed when the height of a detected traffic sign is 20 pixels or more; predictions for very small signs may be unreliable.
  • The provided SignNet models were trained on data collected in the United States, Japan, and the countries comprising the European Union. As a result, SignNet models may not be suitable for other geographical regions. However, the EU model may be appropriate for other countries that have adopted the Vienna Convention on traffic signs, but the specific sign classes available in those countries should be reviewed on a case-by-case basis against those available in the provided model.
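The 20-pixel guideline above can be enforced as a simple post-filter on detections before they are passed to the classifier. This is an illustrative sketch, not part of the sample's actual code:

```python
# Minimum detection height (in pixels) below which SignNet predictions
# may be unreliable, per the documented guideline.
MIN_SIGN_HEIGHT_PX = 20

def filter_reliable_detections(boxes, min_height=MIN_SIGN_HEIGHT_PX):
    """Keep only (x, y, w, h) boxes tall enough for reliable classification."""
    return [b for b in boxes if b[3] >= min_height]
```

Dropping undersized crops before classification avoids spending inference time on predictions that are likely to be wrong anyway.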

Even though the SignNet DNN was trained with data from cameras pointing in various directions on the sensor rig (see the list above), it is recommended to use it with one of the following directional and FOV setups:

  • Center-front camera location with a 60° FOV.
  • Center-front camera location with a 120° FOV.

Running the Sample

The command line for the sample is:

./sample_sign_classifier --rig=[path/to/rig/file]


--rig=[path/to/rig/file]
    Rig file containing all information about vehicle sensors and calibration.
    Default value with video: path/to/data/samples/waitcondition/rig.json
    Default value with live camera: path/to/data/samples/waitcondition/live_cam_rig.json

--liveCam=[0|1]
    Use a live camera or a video file. Has no effect on x86.
    Must be set to 1 if passing in a rig with a live camera setup.
    To switch the mode, pass `--liveCam=0/1` as the argument.
    Default value: 0

To run the sample on Linux


To run the sample on a video with the European sign classifier version 3:

./sample_sign_classifier --model=EU_V3

To run the sample with a live camera on NVIDIA DRIVE platforms:

./sample_sign_classifier --liveCam=1


The sample creates a window, displays a video, and overlays bounding boxes for detected traffic signs. The class of each sign is displayed as a text label above its bounding box.
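Placing the class label above the bounding box, as the sample's overlay does, amounts to simple anchor arithmetic. The sketch below is illustrative only, with a hypothetical `label_anchor` helper rather than the sample's actual rendering code:

```python
def label_anchor(box, label_height=14):
    """Return the (x, y) text anchor just above an (x, y, w, h) box.

    The y coordinate is clamped to 0 so labels for boxes near the top
    of the image stay inside the frame.
    """
    x, y, w, h = box
    return (x, max(0, y - label_height))
```

The clamp matters for signs detected at the very top of the frame, where naively subtracting the label height would push the text off-screen.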

The following table describes the models provided as part of the package. Follow the hyperlinks to the full list of classes supported by each model.

Model Name         Option Value  Region                    Classes    Description
SignNet US v2.0    US_V2         United States of America  312 / 273  Advanced USA model with expanded sign coverage.
TrafficSign_US_v4  US_V4         United States of America  312 / 273  Advanced USA model with expanded sign coverage and HWISP support.
signnet_EU_v3_0    EU_V3         European Union            242 / 232  Advanced EU model with expanded sign coverage.
TrafficSign_EU_v4  EU_V4         European Union            242 / 232  Advanced EU model with expanded sign coverage and HWISP support.
SignNet JP v1.0    JP_V1         Japan                     184 / 151  Japan sign model.

Note: the EU SignNet models may also be appropriate for classifying road signs from non-EU countries that follow the Vienna Convention.


Additional Information

For more information, see SignNet.