The Visual ChangeNet-Classification model detects defective PCB (Printed Circuit Board) components given component-level images of PCBs. The inputs are a “golden” (reference) image and an image of the PCB component under inspection; the output is a binary classification label denoting ‘defect’ or ‘no-defect’.
Visual ChangeNet is a state-of-the-art transformer-based change detection model built on a Siamese network, a class of neural network architectures containing two or more identical subnetworks. In TAO, Visual ChangeNet takes two images as input, with the goal of either classifying or segmenting the change between the “golden” (reference) image and the “test” image. TAO supports the FAN backbone network for both Visual ChangeNet architectures: Visual ChangeNet-Segmentation, which segments the change between the two input images, and Visual ChangeNet-Classification, which classifies it.
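To make the Siamese idea concrete, the following is a minimal, hypothetical sketch (toy NumPy weights, not the actual FAN backbone or TAO implementation): one shared encoder embeds both images, and a classifier operates on the difference between the two embeddings to produce the binary ‘defect’/‘no-defect’ label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real network (hypothetical, for illustration only).
W_enc = rng.standard_normal((64, 8))   # shared "backbone" weights
w_cls = rng.standard_normal(8)         # binary classifier weights

def encode(image_flat):
    # The same weights are applied to both inputs -- the defining
    # property of a Siamese network.
    return np.tanh(image_flat @ W_enc)

def classify_change(golden, test):
    # Compare the embeddings of the golden and test images; classify
    # the magnitude of the change between them.
    diff = np.abs(encode(golden) - encode(test))
    score = diff @ w_cls
    return "defect" if score > 0.0 else "no-defect"

golden = rng.standard_normal(64)
# An identical test image yields a zero difference vector, so the
# score is exactly 0 and the pair is classified "no-defect".
print(classify_change(golden, golden.copy()))
```

Because the two subnetworks share weights, identical inputs always map to identical embeddings, which is what makes the difference vector a meaningful change signal.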
The following sample images show PCB components with and without defects. Each component was captured under four LED illuminations (Solder, Uniform, LowAngle, and White), and the four captures are concatenated into a 2 x 2 grid for display. The left grid shows the four illumination captures for the “golden” image; the right grid shows the captures of the “test” image under the same lighting conditions.
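The 2 x 2 tiling described above can be sketched as follows. This is a hypothetical NumPy example (the function name `make_grid` and the 32 x 32 image size are assumptions for illustration, not part of the TAO tooling): the four illumination captures are placed side by side in rows, and the rows are stacked.

```python
import numpy as np

def make_grid(solder, uniform, low_angle, white):
    # Top row: Solder | Uniform; bottom row: LowAngle | White.
    top = np.concatenate([solder, uniform], axis=1)      # side by side
    bottom = np.concatenate([low_angle, white], axis=1)
    return np.concatenate([top, bottom], axis=0)         # stacked rows

# Four dummy 32 x 32 RGB captures, filled with distinct values so the
# quadrants are distinguishable in the result.
h, w = 32, 32
captures = [np.full((h, w, 3), i, dtype=np.uint8) for i in range(4)]
grid = make_grid(*captures)
print(grid.shape)  # (64, 64, 3)
```

The golden and test grids built this way can then be shown next to each other for visual comparison.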
Missing Component Defect
The training algorithm optimizes the Visual ChangeNet network to differentiate defective samples from good ones by comparing them against a golden reference. This model was trained using the Visual ChangeNet-Classification training app in TAO Toolkit v5.1.