Vision-Language Model (VLM) Fine-Tuning#
TAO supports fine-tuning of Vision-Language Models (VLMs), which can understand and process both visual and textual information. VLM fine-tuning enables you to adapt pretrained multimodal models to your specific use cases, such as video understanding, visual question answering, and multimodal content generation.
The VLM fine-tuning pipeline in TAO is designed to work with state-of-the-art vision-language models and provides:
Multimodal understanding: Trains models that can process both visual (images/videos) and textual inputs.
Flexible data formats: Supports various annotation formats, including the LLaVA conversation format (see the example after this list).
AutoML integration: Automates hyperparameter optimization to maximize model performance.
Scalable training: Supports multi-GPU and distributed training.
API-first approach: Provides complete workflow management through the TAO Toolkit API.
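For reference, a LLaVA-style annotation pairs each image or video with a list of conversation turns. The Python sketch below builds one such entry; the field names (`id`, `video`, `conversations`) and the `<video>` placeholder follow the common LLaVA convention and are illustrative only, so check the TAO dataset documentation for the exact schema it expects.

```python
import json

# Illustrative LLaVA-style annotation entry (field names follow the common
# LLaVA convention; the exact schema accepted by TAO may differ).
annotation = [
    {
        "id": "sample_0001",
        "video": "videos/sample_0001.mp4",  # path relative to the dataset root
        "conversations": [
            {"from": "human", "value": "<video>\nDescribe what happens in this clip."},
            {"from": "gpt", "value": "A forklift lifts a pallet and moves it to the loading dock."},
        ],
    }
]

with open("annotations.json", "w") as f:
    json.dump(annotation, f, indent=2)
```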
Note
VLM fine-tuning is currently available only through the TAO Toolkit API and tao-client interfaces.
There is no launcher-based interface for VLM models.
Supported Models#
TAO currently supports the following VLM architectures:
Cosmos-Reason: A state-of-the-art video-language model for video understanding tasks.
Key Features#
Multimodal data processing: Handles datasets containing both visual content (images and videos) and corresponding text annotations.
Advanced training techniques: Leverages techniques such as LoRA (Low-Rank Adaptation) for efficient fine-tuning of large models (see the sketch after this list).
AutoML support: Automatically optimizes hyperparameters, including learning rates, batch sizes, and training policies.
Cloud integration: Works seamlessly with cloud storage services (AWS, Azure) for dataset management and result storage.
Inference microservices: Deploys trained models as persistent microservices for fast, repeated inference without model reloading overhead. Refer to Inference Microservices for details.
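As background on LoRA: the pretrained weight matrix W stays frozen, and only a low-rank update scaled as W + (alpha / r) * B A is trained. The PyTorch snippet below is a minimal sketch of that idea, not TAO's internal implementation; the class and parameter names are made up for illustration.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: training starts at W
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Because only the small A and B matrices receive gradients, optimizer state and activation memory stay manageable even when fine-tuning multi-billion-parameter VLMs.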
Getting Started#
To get started with VLM fine-tuning:
Set up environment: Follow the TAO Toolkit API setup guide to prepare your environment.
Prepare data: Organize your multimodal dataset in a supported format.
Configure training: Use the API to configure your training parameters and AutoML settings (an illustrative API sketch follows these steps).
Monitor training: Track training progress and metrics through the API.
Deploy models: Use Inference Microservices for model deployment.
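The end-to-end flow maps naturally onto calls against the TAO Toolkit API. The sketch below is purely illustrative: every endpoint path, field name, and value is a placeholder and not the actual TAO API schema; refer to the TAO Toolkit API reference (or use tao-client) for the real routes and payloads.

```python
"""Hypothetical sketch of a VLM fine-tuning workflow driven over HTTP.

All endpoint paths, payload fields, and identifiers below are placeholders;
consult the TAO Toolkit API reference for the actual routes and schemas.
"""
import requests

BASE_URL = "http://<tao-api-host>:<port>/api/v1"  # placeholder address
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credentials

# 1. Register the multimodal dataset (hypothetical endpoint and fields).
dataset = requests.post(
    f"{BASE_URL}/datasets",
    headers=HEADERS,
    json={"type": "vlm", "format": "llava", "cloud_path": "s3://my-bucket/vlm-data"},
).json()

# 2. Create a fine-tuning experiment and attach the dataset (hypothetical).
experiment = requests.post(
    f"{BASE_URL}/experiments",
    headers=HEADERS,
    json={"network_arch": "cosmos-reason", "train_datasets": [dataset["id"]]},
).json()

# 3. Launch training with AutoML enabled (hypothetical payload).
job = requests.post(
    f"{BASE_URL}/experiments/{experiment['id']}/jobs",
    headers=HEADERS,
    json={"action": "train", "automl_enabled": True},
).json()

# 4. Poll job status to monitor training progress (hypothetical endpoint).
status = requests.get(
    f"{BASE_URL}/experiments/{experiment['id']}/jobs/{job['id']}",
    headers=HEADERS,
).json()
print(status)
```

Once training completes, the resulting checkpoint can be served through Inference Microservices as described above, avoiding model reload overhead between requests.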