# Gemma 3 VL / Gemma 3n
Gemma 3 VL is Google's multimodal extension of Gemma 3; it accepts interleaved image and text inputs for tasks such as image captioning and visual question answering. Gemma 3n is a next-generation, efficiency-focused variant.
| Task | Image-Text-to-Text |
|---|---|
| Architecture | `Gemma3ForConditionalGeneration` |
| Parameters | 4B – 27B |
| HF Org | google |
## Available Models

- Gemma 3 27B IT (VL)
- Gemma 3 4B IT (VL)
- Gemma 3n 4B (VL)
## Architecture

`Gemma3ForConditionalGeneration`
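The VL checkpoints load through this class in Hugging Face Transformers, so a minimal inference pass can be sketched as follows (assuming a Transformers version with Gemma 3 support, access to the gated `google/gemma-3-4b-it` checkpoint, and a placeholder image URL):

```python
# Minimal VL inference sketch. Assumptions: a Transformers version with
# Gemma 3 support, access to the gated google/gemma-3-4b-it checkpoint,
# and a placeholder image URL.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-4b-it"
processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma 3 uses a chat template; images are passed as structured content items.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/receipt.png"},  # placeholder
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens.
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```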
## Example HF Models

| Model | HF ID |
|---|---|
| Gemma 3 4B IT | `google/gemma-3-4b-it` |
| Gemma 3 27B IT | `google/gemma-3-27b-it` |
## Example Recipes

| Dataset | Description |
|---|---|
| cord-v2 | SFT – Gemma 3 4B VL on CORD-v2 |
| cord-v2 | LoRA – Gemma 3 4B VL on CORD-v2 |
| cord-v2 | SFT – Gemma 3 4B VL with MegatronFSDP |
| MedPix-VQA | SFT – Gemma 3 4B VL on MedPix |
| MedPix-VQA | SFT – Gemma 3n 4B VL on MedPix |
| MedPix-VQA | LoRA – Gemma 3n 4B VL on MedPix |
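The CORD-v2 recipes fine-tune on receipt images paired with structured JSON ground truth. To see what a sample looks like, here is a short sketch, assuming the `naver-clova-ix/cord-v2` copy of the dataset on the Hugging Face Hub:

```python
# Peek at CORD-v2: each sample pairs a receipt image with a JSON
# ground-truth parse serialized as a string.
# Assumption: the dataset is fetched from the naver-clova-ix/cord-v2 Hub repo.
from datasets import load_dataset

ds = load_dataset("naver-clova-ix/cord-v2", split="train")
sample = ds[0]
print(sample["image"].size)           # PIL image of a receipt
print(sample["ground_truth"][:200])   # JSON string with the target parse
```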
## Try with NeMo AutoModel

1. Install NeMo AutoModel (full instructions in the Installation Guide; a quick import-check sketch follows these steps):

```bash
pip install nemo-automodel
```
2. Clone the repository to get the example recipes:

```bash
git clone https://github.com/NVIDIA-NeMo/Automodel.git
cd Automodel
```

3. Run the recipe from inside the repository, setting `--nproc-per-node` to the number of GPUs on your node:

```bash
automodel --nproc-per-node=8 examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2.yaml
```
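To verify the installation from step 1, an import check is enough (a sketch: the import name `nemo_automodel` is assumed to match the pip package name):

```python
# Post-install sanity check. Assumption: the pip package nemo-automodel
# exposes a module named nemo_automodel.
import nemo_automodel
print("NeMo AutoModel is importable")
```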
### Run with Docker

1. Pull the container and mount a checkpoint directory:

```bash
docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
  nvcr.io/nvidia/nemo-automodel:26.02.00
```
2. Navigate to the AutoModel directory (where the recipes live):

```bash
cd /opt/Automodel
```

3. Run the recipe:

```bash
automodel --nproc-per-node=8 examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2.yaml
```
See the Installation Guide and VLM Fine-Tuning Guide.
## Fine-Tuning
See the Gemma 3 & Gemma 3n Fine-Tuning Guide for detailed instructions on dataset preparation, configuration, and multi-GPU training.
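Several of the recipes above use LoRA rather than full SFT. As a rough illustration of what a LoRA setup does to the model, here is a sketch using the generic Hugging Face PEFT API (not NeMo AutoModel's own recipe format; the target module names are assumptions):

```python
# Rough illustration of LoRA on Gemma 3 VL via the generic Hugging Face
# PEFT API. This is NOT NeMo AutoModel's recipe format; the target module
# names below are assumed names for the attention projections.
from peft import LoraConfig, get_peft_model
from transformers import Gemma3ForConditionalGeneration

model = Gemma3ForConditionalGeneration.from_pretrained("google/gemma-3-4b-it")
lora_cfg = LoraConfig(
    r=16,                      # adapter rank
    lora_alpha=32,             # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```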