nemo_export.utils.lora_converter

Module Contents

Functions

replace_number_add_offset

rename_qkv_keys

reformat_module_names_to_hf

convert_lora_weights_to_canonical

Converts NeMo-style (fused) LoRA weights to canonical (unfused) LoRA weights.

convert_lora_nemo_to_canonical

API

nemo_export.utils.lora_converter.replace_number_add_offset(key, offset_value)
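The name suggests this helper shifts the layer index embedded in a dot-separated weight key by a fixed offset. A minimal illustrative sketch of that behavior (an assumption based on the name, not the actual implementation):

```python
import re

def replace_number_add_offset(key: str, offset_value: int) -> str:
    # Illustrative sketch: add offset_value to the numeric layer index in a
    # dot-separated key, e.g. "model.layers.3.attn" -> "model.layers.5.attn"
    # with offset_value=2.
    if offset_value == 0:
        return key
    return re.sub(
        r"\.(\d+)\.",
        lambda m: f".{int(m.group(1)) + offset_value}.",
        key,
    )
```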
nemo_export.utils.lora_converter.rename_qkv_keys(key)
nemo_export.utils.lora_converter.reformat_module_names_to_hf(
tensors: Dict[str, torch.Tensor],
) → Tuple[Dict[str, torch.Tensor], List[str]]
nemo_export.utils.lora_converter.convert_lora_weights_to_canonical(
config: Dict[str, Any],
lora_weights: Dict[str, torch.Tensor],
) → Dict[str, torch.Tensor]

Converts NeMo-style (fused) LoRA weights to canonical (unfused) LoRA weights.

Namely, it unfuses the QKV adapter layers and the H-to-4H adapter layers.

Returns:

The new LoRA weights with unfused layers.

Return type:

Dict[str, torch.Tensor]
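To illustrate the unfusing step: a fused QKV adapter's output (B) matrix stacks the Q, K, and V rows, so it can be split row-wise, while the input (A) matrix is shared and simply duplicated. A conceptual sketch, assuming equal Q/K/V sizes (i.e. no grouped-query attention) and hypothetical tensor layouts; the real function works on the full checkpoint dictionary:

```python
import torch

def unfuse_qkv_lora(
    lora_a: torch.Tensor,  # [rank, hidden]: shared input projection (A)
    lora_b: torch.Tensor,  # [3 * proj_size, rank]: fused Q/K/V output projection (B)
) -> dict:
    # Split the fused B matrix row-wise into Q, K, and V blocks; each
    # unfused adapter reuses a copy of the shared A matrix.
    q_b, k_b, v_b = torch.chunk(lora_b, 3, dim=0)
    return {
        "q": (lora_a.clone(), q_b),
        "k": (lora_a.clone(), k_b),
        "v": (lora_a.clone(), v_b),
    }
```

Because the Q/K/V blocks are contiguous row slices, concatenating the three unfused B matrices recovers the original fused weight exactly.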

nemo_export.utils.lora_converter.convert_lora_nemo_to_canonical(
lora_nemo,
save_path,
hf_format=False,
donor_hf_config=None,
)