Performance

Training Accuracy Results

Training Accuracy: NVIDIA DGX SuperPOD (8 nodes x 8 A100 80GB GPUs) for the CLIP B/32 Model

We followed the training recipe from the OpenCLIP blog to verify our training pipeline. Our results are shown in the table below:

| Framework | Dataset               | Model Name | Batch Size | Samples Seen | ImageNet Top-1 |
|-----------|-----------------------|------------|------------|--------------|----------------|
| OpenCLIP  | LAION 400M            | B/32       | 32k        | 12B          | 62.90%         |
| NeMo      | Our Multimodal Blend* | B/32       | 32k        | 12B          | 60.13%         |
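
The ImageNet Top-1 numbers above are zero-shot classification accuracies. As a reference for how such a number can be measured, below is a minimal sketch using the open_clip API. The `laion400m_e32` checkpoint tag, the prompt template, and the data path are assumptions, and the torchvision `ImageNet` class requires a locally prepared copy of the validation set.

```python
import torch
import open_clip
from torch.utils.data import DataLoader
from torchvision.datasets import ImageNet

# Load released OpenCLIP B/32 weights (the checkpoint tag is an assumption;
# see open_clip.list_pretrained() for the tags available in your version).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion400m_e32"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval().cuda()

# ImageNet validation set; "/data/imagenet" is a placeholder path.
dataset = ImageNet("/data/imagenet", split="val", transform=preprocess)
loader = DataLoader(dataset, batch_size=256, num_workers=8)

# Build a zero-shot classifier: one text embedding per class name.
prompts = [f"a photo of a {names[0]}" for names in dataset.classes]
with torch.no_grad():
    text_features = model.encode_text(tokenizer(prompts).cuda())
    text_features /= text_features.norm(dim=-1, keepdim=True)

# Top-1 accuracy = fraction of images whose nearest text embedding
# (by cosine similarity) is their ground-truth class.
correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        image_features = model.encode_image(images.cuda())
        image_features /= image_features.norm(dim=-1, keepdim=True)
        pred = (image_features @ text_features.T).argmax(dim=-1).cpu()
        correct += (pred == labels).sum().item()
        total += labels.numel()

print(f"ImageNet zero-shot top-1: {correct / total:.2%}")
```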

Note

Our multimodal dataset originates from Common Crawl with custom filtering and contains 670M image-caption pairs. We attribute the final accuracy difference to the dataset, as LAION 400M is filtered with CLIP scores. To ensure our implementation is consistent with OpenCLIP, we trained OpenCLIP on our dataset and found that the loss curve and validation accuracy were nearly identical to NeMo's CLIP.
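
For completeness, below is a hedged sketch of how such an OpenCLIP comparison run can be launched. The module path, flags, shard paths, and hyperparameters are assumptions based on the public open_clip training script and the blog recipe, not the exact values used for the runs in the table; adjust them to your installed version and data layout.

```python
# Sketch of launching the OpenCLIP reference run on a custom webdataset.
# Module path and flag names follow the public open_clip training script
# and may differ across versions; all paths and values are placeholders.
import subprocess

cmd = [
    "torchrun", "--nproc_per_node=8",      # single node shown; the full
    "-m", "open_clip_train.main",          # recipe used 8 nodes x 8 GPUs
    "--model", "ViT-B-32",
    "--train-data", "/data/blend/{00000..66999}.tar",  # hypothetical shards
    "--dataset-type", "webdataset",
    "--train-num-samples", "670000000",    # one pass over the 670M-pair blend
    "--epochs", "18",                      # ~12B samples seen in total
    "--batch-size", "512",                 # per GPU; 512 x 64 GPUs = 32k global
    "--precision", "amp",
    "--workers", "8",
]
subprocess.run(cmd, check=True)
```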