nemo_microservices.types.shared_params.lora_finetuning_data#
Module Contents#
Classes#
API#
- class nemo_microservices.types.shared_params.lora_finetuning_data.LoraFinetuningData#
Bases:
typing_extensions.TypedDict
- alpha: typing_extensions.Required[int]#
A scaling factor that controls how much influence the LoRA adaptations have on the base model’s behavior. The alpha parameter should typically be set to dim or 0.5 * dim, since the actual scaling applied in the training loop is alpha / dim.
- apply_lora_to_mlp: typing_extensions.Required[bool]#
Controls whether to adapt the model’s feed-forward neural network layers using LoRA.
- apply_lora_to_output: typing_extensions.Required[bool]#
Controls whether to adapt the model’s final output layer using LoRA.
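The fields above can be sketched with a minimal local stand-in for the TypedDict (illustrative only; the real class lives in nemo_microservices.types.shared_params, and `dim` here is a hypothetical LoRA rank chosen for the example):

```python
try:
    from typing_extensions import Required, TypedDict
except ImportError:  # Python 3.11+ ships these in the stdlib
    from typing import Required, TypedDict


# Illustrative stand-in mirroring LoraFinetuningData's required keys.
class LoraFinetuningData(TypedDict):
    alpha: Required[int]
    apply_lora_to_mlp: Required[bool]
    apply_lora_to_output: Required[bool]


# With alpha set equal to the LoRA rank (dim), the effective scaling
# alpha / dim applied in the training loop is 1.0.
dim = 16
data: LoraFinetuningData = {
    "alpha": dim,
    "apply_lora_to_mlp": True,
    "apply_lora_to_output": False,
}
print(data["alpha"] / dim)  # effective LoRA scaling factor
```

Because a TypedDict is a plain dict at runtime, this value can be passed wherever the API expects the shared-params shape; static type checkers will flag any missing required key.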