nemo_microservices.types.shared.lora_finetuning_data#

Module Contents#

Classes#

API#

class nemo_microservices.types.shared.lora_finetuning_data.LoraFinetuningData(/, **data: Any)#

Bases: nemo_microservices._models.BaseModel

alpha: int#


A scaling factor that controls how much influence the LoRA adaptations have on the base model’s behavior. Because the actual scaling applied in the training loop is alpha / dim, alpha should typically be set to dim (a scaling of 1.0) or 0.5 * dim (a scaling of 0.5).
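The scaling relationship above can be sketched in plain Python. This is an illustration of the alpha / dim arithmetic only, not part of the SDK; the values for alpha and dim are hypothetical.

```python
# Illustration of the LoRA scaling described above: the effective multiplier
# applied to the LoRA update in the training loop is alpha / dim.
# dim (the LoRA rank) and alpha are hypothetical example values.
def lora_scaling(alpha: int, dim: int) -> float:
    """Effective LoRA scaling factor."""
    return alpha / dim

dim = 16
print(lora_scaling(alpha=dim, dim=dim))       # alpha = dim        -> 1.0
print(lora_scaling(alpha=dim // 2, dim=dim))  # alpha = 0.5 * dim  -> 0.5
```

Setting alpha equal to dim keeps the adapter update at full strength; halving alpha halves its influence on the base model.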

apply_lora_to_mlp: bool#


Controls whether to adapt the model’s feed-forward neural network layers using LoRA.

apply_lora_to_output: bool#


Controls whether to adapt the model’s final output layer using LoRA.
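Taken together, the attributes above describe a small configuration payload. The sketch below uses a plain dataclass as a hypothetical stand-in to show the field shapes; the real LoraFinetuningData is a pydantic model (it subclasses nemo_microservices._models.BaseModel) and may define additional fields not covered in this section.

```python
from dataclasses import dataclass

# Hypothetical stand-in for LoraFinetuningData, illustrating only the
# three fields documented in this section.
@dataclass
class LoraFinetuningDataSketch:
    alpha: int                  # scaling factor; effective scale is alpha / dim
    apply_lora_to_mlp: bool     # adapt the feed-forward (MLP) layers
    apply_lora_to_output: bool  # adapt the final output layer

cfg = LoraFinetuningDataSketch(
    alpha=16,
    apply_lora_to_mlp=True,
    apply_lora_to_output=False,
)
print(cfg.alpha)  # -> 16
```

With the real SDK class, these same keyword arguments would be passed to the model constructor (`LoraFinetuningData(**data)`), with pydantic handling validation.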