Important

You are viewing the NeMo 2.0 documentation. This release introduces significant changes to the API and a new library, NeMo Run. We are currently porting all features from NeMo 1.0 to 2.0. For documentation on previous versions or features not yet available in 2.0, please refer to the NeMo 24.07 documentation.

Gemma#

Released in February 2024, Google’s Gemma is an open model based on the work done to create Google’s Gemini family of models (see the Gemini v1.5 report). It adopts the transformer decoder framework while adding multi-query attention, RoPE, GeGLU activations, and more. Gemma is offered in 2B and 7B variants, providing powerful models at reasonable sizes. More information is available in Google’s release blog.

Subsequently released in April 2024, CodeGemma joins the Gemma family with a specialization in code understanding and generation.

Note

Currently, Gemma does not support cuDNN fused attention. The recipes disable cuDNN attention and use Flash Attention instead.

We provide pre-defined recipes for finetuning Gemma models using NeMo 2.0 and NeMo-Run. Supported models include Gemma 1 (2B, 7B) and Gemma 2 (9B, 27B). These recipes configure a run.Partial for one of the nemo.collections.llm API functions introduced in NeMo 2.0. The recipes are hosted in gemma_2b and gemma_7b.

NeMo 2.0 Finetuning Recipes#

Note

The finetuning recipes use the SquadDataModule for the data argument. You can replace the SquadDataModule with your custom dataset.

To import the Hugging Face model and convert it to NeMo 2.0 format, run the following command (this only needs to be done once):

from nemo.collections import llm
llm.import_ckpt(model=llm.GemmaModel(llm.GemmaConfig2B()), source='hf://google/gemma-2b')

By default, the non-instruct version of the model is loaded. To load a different model, set finetune.resume.restore_config.path=nemo://<hf model id> or finetune.resume.restore_config.path=<local model path> in the recipe.
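For instance, a sketch of pointing the recipe at a different checkpoint in Python. The attribute path below is assumed from the override option named above, and the instruct model id is used purely as an example; adjust both to your setup:

```python
# Assumed Python equivalent of the restore_config.path override (not verified
# against every NeMo version); "google/gemma-2b-it" is an example model id.
recipe.resume.restore_config.path = "nemo://google/gemma-2b-it"

# Or point at a locally converted checkpoint instead:
# recipe.resume.restore_config.path = "/path/to/converted/model"
```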

We provide an example below on how to invoke the default recipe and override the data argument:

from nemo.collections import llm

recipe = llm.gemma_2b.finetune_recipe(
    name="gemma_2b_finetuning",
    dir=f"/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='lora',  # 'lora', 'none'
    packed_sequence=False,
)

# # To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     gbs=gbs,
#     mbs=mbs,
#     seq_length=recipe.model.config.seq_length,
# )
# recipe.data = dataloader

By default, the finetuning recipe will run LoRA finetuning with LoRA applied to all linear layers in the language model. To finetune the entire model without LoRA, set peft_scheme='none' in the recipe argument.
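For example, the same recipe call switches to full-parameter finetuning by changing only the PEFT argument (the name and directory below are placeholders):

```python
# Config fragment: identical to the LoRA recipe above except for peft_scheme.
recipe = llm.gemma_2b.finetune_recipe(
    name="gemma_2b_full_finetune",
    dir="/path/to/checkpoints",  # placeholder path
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='none',  # full finetuning instead of LoRA
)
```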

To finetune with sequence packing for a higher throughput, set packed_sequence=True. Note that you may need to tune the global batch size in order to achieve similar convergence.
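A back-of-envelope sketch (with assumed numbers) of why the global batch size may need retuning: with packed sequences, each "sample" in the batch carries several raw examples, so the effective number of examples per optimizer step grows.

```python
# All numbers below are illustrative assumptions, not NeMo defaults.
seq_length = 4096        # packed sequence length
avg_example_len = 512    # average raw example length after tokenization
gbs = 8                  # global batch size, counted in packed sequences

examples_per_pack = seq_length // avg_example_len
effective_examples = gbs * examples_per_pack

print(examples_per_pack)   # 8 raw examples fit in one packed sequence
print(effective_examples)  # 64 examples per step, 8x the unpacked count
```

To keep the effective examples per step comparable to an unpacked run, you would shrink the global batch size by roughly the packing factor.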

Note

The configuration in the recipes is done using the NeMo-Run run.Config and run.Partial configuration objects. Please review the NeMo-Run documentation to learn more about its configuration and execution system.

Once you have your final configuration ready, you can execute it on any of the NeMo-Run supported executors. The simplest is the local executor, which simply runs the finetuning locally in a separate process. You can use it as follows:

import nemo_run as run

run.run(recipe, executor=run.LocalExecutor())

Alternatively, you can run it directly in the same Python process as follows:

run.run(recipe, direct=True)

Recipe    | Status
--------- | ------
Gemma 2B  | Yes
Gemma 7B  | Yes