ReidentificationNet Transformer

The ReidentificationNet Transformer model generates embeddings to identify objects captured in different scenes.

The model uses a Swin Transformer backbone that takes cropped images of objects as input and produces feature embeddings as output.
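The following is a minimal sketch of this input-to-embedding flow, not the TAO implementation itself. It assumes the `timm` library; the backbone variant, input resolution, and normalization step are illustrative choices, not the model's actual configuration.

```python
import torch
import timm

# num_classes=0 makes timm return pooled backbone features instead of class logits.
backbone = timm.create_model("swin_tiny_patch4_window7_224",
                             pretrained=False, num_classes=0)
backbone.eval()

# A single cropped object image, resized to the backbone's expected resolution.
crop = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    embedding = backbone(crop)                                    # shape: (1, feature_dim)
    embedding = torch.nn.functional.normalize(embedding, dim=1)   # unit length for matching

print(embedding.shape)
```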

The training algorithm optimizes the network to minimize a combination of triplet, center, and cross-entropy losses.
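As a rough illustration of how these three objectives can be combined, the sketch below sums them with scalar weights. The weights, margin, and the simple center-loss formulation are assumptions for illustration only, not the exact TAO training recipe.

```python
import torch
import torch.nn.functional as F

def combined_loss(embeddings, logits, labels, centers,
                  anchor_idx, pos_idx, neg_idx,
                  w_ce=1.0, w_triplet=1.0, w_center=0.0005, margin=0.3):
    # Cross-entropy (identity classification) loss on the classifier logits.
    ce = F.cross_entropy(logits, labels)

    # Triplet loss on pre-selected anchor / positive / negative embeddings.
    triplet = F.triplet_margin_loss(
        embeddings[anchor_idx], embeddings[pos_idx], embeddings[neg_idx],
        margin=margin)

    # Center loss: pull each embedding toward the center of its class.
    center = ((embeddings - centers[labels]) ** 2).sum(dim=1).mean()

    return w_ce * ce + w_triplet * triplet + w_center * center
```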

The primary use case for this model is generating an embedding for an object and then performing similarity matching across embeddings from different scenes.
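A minimal sketch of this matching step is shown below: a query embedding from one scene is compared against gallery embeddings from another scene using cosine similarity. The embedding dimension and similarity threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

query = F.normalize(torch.rand(1, 768), dim=1)      # embedding from scene A
gallery = F.normalize(torch.rand(50, 768), dim=1)   # embeddings from scene B

# Cosine similarity between the query and every gallery embedding.
scores = query @ gallery.t()                         # shape: (1, 50)
best_score, best_match = scores.max(dim=1)

# Treat it as the same object only if the similarity clears a threshold.
is_same_object = best_score.item() > 0.7
print(best_match.item(), best_score.item(), is_same_object)
```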

The datasheet for this model is hosted with its NGC model card.
