Vision-Language Foundation

Humans naturally process information through multiple senses, such as sight and sound. Multimodal learning similarly aims to build models that handle several data types, such as images, text, and audio. Vision-language models, such as OpenAI's CLIP, are a rapidly growing class of these models. They excel at aligning image and text features, image captioning, and visual question answering, and their ability to generalize zero-shot to tasks they were not explicitly trained on makes them useful in many practical settings. Please refer to the NeMo Framework User Guide for Multimodal Models for detailed support information.
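
The central idea behind CLIP-style models is that images and text are encoded into a shared embedding space, so an image can be matched against arbitrary text labels without task-specific training. The sketch below illustrates zero-shot image classification using the Hugging Face `transformers` CLIP implementation rather than NeMo's own API; the checkpoint name, example image URL, and candidate labels are illustrative assumptions, not part of this guide.

```python
# Minimal sketch of CLIP-style zero-shot image classification.
# Uses the Hugging Face `transformers` CLIP API (not NeMo's interface);
# the model name, image URL, and labels are assumptions for illustration.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Example image and candidate text labels (assumed for illustration).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode both modalities into the shared embedding space and compare them.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax gives label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the text labels are just prompts at inference time, the same encoder pair can be reused for new classification tasks by changing the label strings, which is what "generalizing without task-specific training" means in practice.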
