nemo_automodel.components.checkpoint.addons#
Module Contents#
Classes#
| Class | Description |
|---|---|
| `CheckpointAddon` | Optional hooks that run around backend IO (used for PEFT and consolidated HF metadata). |
| `ConsolidatedHFAddon` | Addon that writes consolidated Hugging Face metadata alongside sharded weights. |
| `PeftAddon` | Addon that writes PEFT-specific metadata and tokenizer alongside adapter weights. |
Functions#
| Function | Description |
|---|---|
| `_get_hf_peft_config` | Get the minimal PEFT config in the format expected by Hugging Face. |
| `_get_automodel_peft_metadata` | Get the PEFT metadata in the format expected by Automodel. |
| `_extract_target_modules` | Extract the target modules from the model used by LoRA/PEFT layers. |
API#
- class nemo_automodel.components.checkpoint.addons.CheckpointAddon#
Bases:
typing.Protocol
Optional hooks that run around backend IO (used for PEFT and consolidated HF metadata).
- pre_save(**kwargs) → None#
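A minimal sketch of an object satisfying this protocol. `LoggingAddon` and the directory path are hypothetical; the only requirement assumed here is the `pre_save(**kwargs) → None` hook documented above.

```python
from nemo_automodel.components.checkpoint.addons import CheckpointAddon


class LoggingAddon:
    """Hypothetical addon: logs where checkpoint artifacts are about to be written."""

    def pre_save(self, **kwargs) -> None:
        # Backends pass backend-specific kwargs (e.g. model_state, consolidated_path).
        target = kwargs.get("consolidated_path") or kwargs.get("model_path")
        print(f"pre_save hook called, target directory: {target}")


# Structural typing: any object exposing pre_save(**kwargs) -> None qualifies.
addon: CheckpointAddon = LoggingAddon()
addon.pre_save(model_path="/tmp/checkpoints/step_100")
```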
- class nemo_automodel.components.checkpoint.addons.ConsolidatedHFAddon#
Addon that writes consolidated Hugging Face metadata alongside sharded weights.
On rank 0, this saves `config.json`, `generation_config.json`, and tokenizer artifacts into the provided consolidated directory, then synchronizes ranks.
- pre_save(**kwargs) → None#
Pre-save hook to emit consolidated HF artifacts.
Expected kwargs:
- model_state (ModelState): Wrapper holding the model parts.
- consolidated_path (str): Target directory for consolidated artifacts.
- tokenizer (PreTrainedTokenizerBase | None): Optional tokenizer to save.
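A hedged usage sketch of the hook. `write_consolidated_metadata` is a hypothetical helper, and `model_state` is assumed to be the ModelState wrapper already held by the surrounding checkpointing code.

```python
from nemo_automodel.components.checkpoint.addons import ConsolidatedHFAddon


def write_consolidated_metadata(model_state, output_dir: str, tokenizer=None) -> None:
    """Hypothetical helper: emit consolidated HF metadata before sharded weights are written."""
    addon = ConsolidatedHFAddon()
    addon.pre_save(
        model_state=model_state,       # ModelState wrapper holding the model parts
        consolidated_path=output_dir,  # target directory for config.json, generation_config.json, ...
        tokenizer=tokenizer,           # PreTrainedTokenizerBase or None
    )
```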
- class nemo_automodel.components.checkpoint.addons.PeftAddon#
Addon that writes PEFT-specific metadata and tokenizer alongside adapter weights.
On rank 0, this saves `adapter_config.json`, `automodel_peft_config.json`, and the tokenizer (if provided), then synchronizes all ranks.
- pre_save(**kwargs) → None#
Pre-save hook to emit PEFT artifacts.
Expected kwargs:
- model_path (str): Directory in which to save PEFT files.
- tokenizer (PreTrainedTokenizerBase | None): Optional tokenizer to save.
- model_state (ModelState): Wrapper holding the model parts.
- peft_config (PeftConfig): PEFT configuration for serialization.
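A sketch of calling the hook with the kwargs listed above. The helper function, the LoRA hyperparameters, and the adapter directory are illustrative; `model_state` is assumed to come from the checkpointing flow.

```python
from peft import LoraConfig
from nemo_automodel.components.checkpoint.addons import PeftAddon


def write_peft_metadata(model_state, adapter_dir: str, tokenizer=None) -> None:
    """Hypothetical helper: emit adapter_config.json and related PEFT artifacts."""
    peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    PeftAddon().pre_save(
        model_path=adapter_dir,   # directory that will hold the adapter files
        tokenizer=tokenizer,      # optional tokenizer to save alongside the adapter
        model_state=model_state,  # ModelState wrapper holding the model parts
        peft_config=peft_config,  # PEFT configuration to serialize
    )
```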
- nemo_automodel.components.checkpoint.addons._get_hf_peft_config(
  peft_config: peft.PeftConfig,
  model_state: nemo_automodel.components.checkpoint.stateful_wrappers.ModelState,
  ) → dict#
Get the minimal PEFT config in the format expected by Hugging Face.
- Parameters:
peft_config – Source PEFT configuration.
model_state – Model wrapper used to infer target modules and model task.
- Returns:
A dictionary containing the minimal HF-compatible PEFT configuration (e.g., task type, LoRA rank/alpha, and discovered target modules).
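A hedged sketch of how this private helper might be invoked. `build_adapter_config` is a hypothetical wrapper, the LoRA hyperparameters are illustrative, and `model_state` is assumed to be supplied by the surrounding checkpoint flow.

```python
from peft import LoraConfig
from nemo_automodel.components.checkpoint.addons import _get_hf_peft_config


def build_adapter_config(model_state) -> dict:
    """Hypothetical wrapper: derive the minimal HF-compatible adapter config."""
    peft_config = LoraConfig(r=8, lora_alpha=16)
    # Target modules and the task type are inferred from model_state rather than hard-coded here.
    return _get_hf_peft_config(peft_config, model_state)
```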
- nemo_automodel.components.checkpoint.addons._get_automodel_peft_metadata(peft_config: peft.PeftConfig) → dict#
Get the PEFT metadata in the format expected by Automodel.
- Parameters:
peft_config – Source PEFT configuration.
- Returns:
A dict containing Automodel-specific PEFT metadata fields filtered from the full PEFT configuration.
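An illustrative call, assuming a LoRA configuration built with Hugging Face peft; the exact keys in the returned dict depend on which fields Automodel filters from the configuration.

```python
from peft import LoraConfig
from nemo_automodel.components.checkpoint.addons import _get_automodel_peft_metadata

peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
metadata = _get_automodel_peft_metadata(peft_config)
print(metadata)  # Automodel-specific subset of the PEFT configuration fields
```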
- nemo_automodel.components.checkpoint.addons._extract_target_modules(model: torch.nn.Module) → list[str]#
Extract the target modules from the model used by LoRA/PEFT layers.
Note: When torch.compile is used, module names get prefixed with `_orig_mod.`. This function strips those prefixes to get the original module names.
- Parameters:
model – The model whose named modules are scanned.
- Returns:
A sorted list of unique module name prefixes that contain LoRA layers.
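A sketch assuming the helper recognizes LoRA layers added via Hugging Face peft; the tiny GPT-2 checkpoint and the `c_attn` target module are only illustrative.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM
from nemo_automodel.components.checkpoint.addons import _extract_target_modules

# Wrap a small HF model with LoRA adapters, then discover which
# module-name prefixes carry LoRA layers.
base = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
lora_model = get_peft_model(base, LoraConfig(r=4, lora_alpha=8, target_modules=["c_attn"]))

# Returns the sorted, de-duplicated module-name prefixes containing LoRA layers;
# any `_orig_mod.` prefix introduced by torch.compile would be stripped first.
print(_extract_target_modules(lora_model))
```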