LPRNet recognizes the characters in a license plate from a cropped RGB license plate image. Two pretrained LPRNet models are delivered: one trained on an NVIDIA-owned US license plate dataset and the other trained on a Chinese license plate dataset.
The model is a sequence classification model with a ResNet backbone. It takes the cropped plate image as input and produces a sequence of character probabilities as output.
The training algorithm optimizes the network to minimize the connectionist temporal classification (CTC) loss between the ground-truth character sequence of a license plate and the predicted character sequence. At inference time, the license plate string is decoded from the model's sequence output using best-path (greedy) decoding.
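Best-path (greedy) decoding can be sketched as follows: take the most likely class at each timestep, collapse consecutive repeats, and drop the CTC blank token. This is a minimal illustration, not LPRNet's actual implementation; the blank index (0 here) and the alphabet are assumptions for the example.

```python
def greedy_ctc_decode(logits, alphabet, blank=0):
    """Best-path CTC decoding: argmax per timestep, collapse repeats, drop blanks.

    `logits` is a list of per-timestep probability (or score) lists over
    `1 + len(alphabet)` classes, where index `blank` is the CTC blank token
    (an assumption for this sketch) and index i+1 maps to alphabet[i].
    """
    # Most likely class index at each timestep (the "best path").
    best_path = [max(range(len(step)), key=step.__getitem__) for step in logits]
    decoded = []
    prev = None
    for idx in best_path:
        # CTC collapse rule: skip repeats of the previous symbol and blanks.
        if idx != prev and idx != blank:
            decoded.append(alphabet[idx - 1])  # shift by 1 for the blank at index 0
        prev = idx
    return "".join(decoded)

# Toy example: 5 timesteps over 4 classes (blank, 'A', 'B', 'C').
logits = [
    [0.1, 0.8, 0.05, 0.05],   # 'A'
    [0.1, 0.8, 0.05, 0.05],   # 'A' again -> collapsed with the previous step
    [0.9, 0.03, 0.03, 0.04],  # blank -> dropped
    [0.1, 0.05, 0.8, 0.05],   # 'B'
    [0.1, 0.05, 0.05, 0.8],   # 'C'
]
print(greedy_ctc_decode(logits, "ABC"))  # -> "ABC"
```

Greedy decoding is the cheapest CTC decoder; it picks the single most likely path rather than summing over all paths that collapse to the same string, which is why it is also called best-path decoding.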
The pretrained LPRNet models were trained using the NVIDIA LPRNet training app in TLT v3.0.
The primary intended use case for this model is recognizing the license plate number from a cropped RGB license plate image.
The datasheet for the model is captured in its model card hosted at NGC.