Gemma 4#
Gemma 4 is Google's next-generation multimodal Gemma family, supporting image-text inputs with a Mixture-of-Experts (MoE) language backbone at larger scales. NeMo AutoModel replaces the HF-native dense matmul over experts with the NeMo GroupedExperts backend, enabling Expert Parallelism (EP) via the standard MoE parallelizer.
| Task | Image-Text-to-Text |
|---|---|
| Architecture | `Gemma4ForConditionalGeneration` |
| Parameters | 2B – 31B (dense) · 26B-A4B (MoE) |
| HF Org | |
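The grouped-experts idea mentioned above can be sketched in a few lines of PyTorch. The snippet below is a toy illustration, not the NeMo GroupedExperts backend: the function names, shapes, and padding scheme are all assumptions. It contrasts a per-expert dense loop with a grouped formulation that batches every expert's GEMM into a single `bmm` call; with expert weights materialized as one `[E, ...]` tensor, EP then amounts (roughly) to sharding that leading expert dimension across ranks.

```python
# Illustrative sketch only -- NOT the NeMo AutoModel implementation.
import torch

def moe_per_expert_loop(x, gate_logits, w1, w2, top_k=2):
    """Reference top-k MoE: route tokens, then run each expert's GEMM separately."""
    num_experts = w1.shape[0]
    scores = torch.softmax(gate_logits, dim=-1)            # [T, E]
    topk_scores, topk_idx = scores.topk(top_k, dim=-1)     # [T, k]
    out = torch.zeros_like(x)
    for e in range(num_experts):                           # E small GEMMs
        tok, slot = (topk_idx == e).nonzero(as_tuple=True)
        if tok.numel() == 0:
            continue
        h = torch.relu(x[tok] @ w1[e]) @ w2[e]
        out.index_add_(0, tok, h * topk_scores[tok, slot, None])
    return out

def moe_grouped(x, gate_logits, w1, w2, top_k=2):
    """Grouped formulation: sort token copies by expert, pad each expert's
    group to a common capacity, and batch all expert GEMMs into one bmm."""
    E, (T, D) = w1.shape[0], x.shape
    scores = torch.softmax(gate_logits, dim=-1)
    topk_scores, topk_idx = scores.topk(top_k, dim=-1)     # [T, k]
    flat_exp = topk_idx.reshape(-1)                        # [T*k] expert ids
    flat_tok = torch.arange(T).repeat_interleave(top_k)    # [T*k] token ids
    order = flat_exp.argsort()                             # group tokens by expert
    counts = torch.bincount(flat_exp, minlength=E)         # tokens per expert
    pos = torch.cat([torch.arange(int(c)) for c in counts])  # slot within group
    buf = x.new_zeros(E, int(counts.max()), D)             # padded [E, cap, D]
    buf[flat_exp[order], pos] = x[flat_tok[order]]
    h = torch.bmm(torch.relu(torch.bmm(buf, w1)), w2)      # two batched GEMMs total
    out = torch.zeros_like(x)
    out.index_add_(0, flat_tok[order],
                   h[flat_exp[order], pos] * topk_scores.reshape(-1)[order, None])
    return out

# Quick equivalence check with random weights.
T, D, H, E = 8, 16, 32, 4
x, g = torch.randn(T, D), torch.randn(T, E)
w1, w2 = torch.randn(E, D, H), torch.randn(E, H, D)
assert torch.allclose(moe_per_expert_loop(x, g, w1, w2),
                      moe_grouped(x, g, w1, w2), atol=1e-4)
```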
Available Models#
- Gemma 4 E2B IT (VL, dense)
- Gemma 4 E4B IT (VL, dense, kv-shared layers; see the sketch after this list)
- Gemma 4 31B IT (VL, dense)
- Gemma 4 26B-A4B IT (VL, MoE)
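The "kv-shared layers" tag on the E4B variant refers to attention layers that reuse the keys and values projected by an earlier layer instead of computing and caching their own, which shrinks the KV cache in a deep stack. Below is a minimal single-head PyTorch sketch of the idea; the class and argument names are illustrative assumptions, not the Gemma 4 implementation.

```python
# Toy sketch of per-layer KV sharing -- names are hypothetical.
import torch
import torch.nn.functional as F

class ToySharedKVAttention(torch.nn.Module):
    """Single-head attention; layers flagged shares_kv reuse another layer's K/V."""
    def __init__(self, dim, shares_kv=False):
        super().__init__()
        self.q_proj = torch.nn.Linear(dim, dim)
        self.shares_kv = shares_kv
        if not shares_kv:  # only KV-producing layers own K/V projections
            self.k_proj = torch.nn.Linear(dim, dim)
            self.v_proj = torch.nn.Linear(dim, dim)

    def forward(self, x, shared_kv=None):
        q = self.q_proj(x)
        if self.shares_kv:
            k, v = shared_kv  # reuse an earlier layer's K/V (and its cache)
        else:
            k, v = self.k_proj(x), self.v_proj(x)
        return F.scaled_dot_product_attention(q, k, v), (k, v)

# Layers 0 and 1 compute their own K/V; layer 2 reuses layer 1's, so at
# inference it adds no new entries to the KV cache.
layers = [ToySharedKVAttention(64), ToySharedKVAttention(64),
          ToySharedKVAttention(64, shares_kv=True)]
x, kv = torch.randn(2, 10, 64), None
for layer in layers:
    x, new_kv = layer(x, kv)
    if not layer.shares_kv:
        kv = new_kv
```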
Architecture#
`Gemma4ForConditionalGeneration`
Example HF Models#
| Model | HF ID |
|---|---|
| Gemma 4 E2B IT | |
| Gemma 4 E4B IT | |
| Gemma 4 31B IT | |
| Gemma 4 26B-A4B IT (MoE) | |
Example Recipes#
| Recipe | Description |
|---|---|
| SFT – Gemma 4 E2B on MedPix | |
| LoRA – Gemma 4 E2B on MedPix | |
| SFT – Gemma 4 E4B on MedPix | |
| LoRA – Gemma 4 E4B on MedPix | |
| SFT – Gemma 4 31B on MedPix | |
| LoRA – Gemma 4 31B on MedPix | |
| SFT – Gemma 4 31B with TP=4 | |
| SFT – Gemma 4 31B with TP=4, PP=2 | |
| SFT – Gemma 4 31B with TP=4, PP=4 (multi-node) | |
| SFT – Gemma 4 26B-A4B MoE on MedPix | |
| LoRA – Gemma 4 26B-A4B MoE on MedPix | |
Try with NeMo AutoModel#
1. Install (full instructions):

   ```bash
   pip install nemo-automodel
   ```

2. Clone the repo to get the example recipes:

   ```bash
   git clone https://github.com/NVIDIA-NeMo/Automodel.git
   cd Automodel
   ```

3. Run the recipe from inside the repo:

   ```bash
   automodel --nproc-per-node=8 examples/vlm_finetune/gemma4/gemma4_4b.yaml
   ```
Run with Docker#

1. Pull the container and mount a checkpoint directory:

   ```bash
   docker run --gpus all -it --rm \
     --shm-size=8g \
     -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
     nvcr.io/nvidia/nemo-automodel:26.02.00
   ```

2. Navigate to the AutoModel directory (where the recipes are):

   ```bash
   cd /opt/Automodel
   ```

3. Run the recipe:

   ```bash
   automodel --nproc-per-node=8 examples/vlm_finetune/gemma4/gemma4_4b.yaml
   ```
See the Installation Guide and VLM Fine-Tuning Guide.
Fine-Tuning#
See the VLM Fine-Tuning Guide.