nemo_automodel.components.utils.model_utils#
Module Contents#
Functions#
- `_get_forward_signature` – Best-effort retrieval of the `model.forward` signature.
- `_supports_logits_to_keep` – Check if the model supports logits_to_keep.
- `_supports_seq_lens` – Check if the model’s forward() accepts seq_lens.
- `filter_forward_kwargs` – Drop kwargs that `model.forward` does not accept.
- `_get_logical_numel` – Return the logical number of elements for a parameter, accounting for quantized (packed) storage.
- `_get_model_param_stats` – Get the number of trainable parameters and the L2 norm of the model.
- `skip_random_init` – Context manager to skip random weight initialization when loading pretrained models.
- `resolve_trust_remote_code` – Whitelist NVIDIA models to allow remote code execution.
- `count_model_parameters` – Count total and trainable parameters. Safe to call on meta-device models.
- `print_trainable_parameters` – Print the number of trainable parameters in the model.
- `_freeze_module_by_attribute_and_patterns` – Helper function to freeze parameters by attribute name and name patterns.
- `apply_parameter_freezing` – Apply parameter freezing based on configuration.
- `freeze_unused_kv_sharing_params` – Freeze dead K/V parameters in KV-shared layers.
- `cast_mixed_dtype_params_to_bf16` – Cast fp32 parameters and buffers to bf16 for FSDP2 compatibility.
- `squeeze_input_for_thd` – Squeeze batch dimension and prepare inputs for THD (total, hidden, depth) format.
- `init_empty_weights` – A context manager under which models are initialized with all parameters on the specified device.
Data#
API#
- nemo_automodel.components.utils.model_utils.logger#
‘getLogger(…)’
- nemo_automodel.components.utils.model_utils._get_forward_signature(model: torch.nn.Module)#
Best-effort retrieval of the `model.forward` signature.
- nemo_automodel.components.utils.model_utils._supports_logits_to_keep(model: torch.nn.Module) bool#
Check if the model supports logits_to_keep.
- Parameters:
model (nn.Module) – The model to check.
- Returns:
True if the model supports logits_to_keep, False otherwise.
- Return type:
bool
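The check itself can be sketched with the standard-library `inspect` module. This is an illustrative re-implementation under the assumption that the real helper inspects `forward`'s signature; the toy `WithKeep`/`WithoutKeep` classes are stand-ins, not real models:

```python
import inspect


def supports_logits_to_keep(model) -> bool:
    # A model "supports" logits_to_keep if its forward() declares the
    # parameter explicitly; an uninspectable signature counts as "no".
    try:
        sig = inspect.signature(model.forward)
    except (TypeError, ValueError):
        return False
    return "logits_to_keep" in sig.parameters


class WithKeep:
    def forward(self, input_ids, logits_to_keep=0):
        return input_ids


class WithoutKeep:
    def forward(self, input_ids):
        return input_ids
```

Note that `inspect.signature` on a bound method already drops `self`, so only the real call parameters are checked.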
- nemo_automodel.components.utils.model_utils._supports_seq_lens(model: torch.nn.Module) bool#
Check if the model’s forward() accepts seq_lens.
Returns True if:
- forward() has an explicit `seq_lens` parameter, OR
- forward() has `**kwargs` (so it won’t crash if seq_lens is passed)
Returns False otherwise (passing seq_lens would cause an “unexpected kwarg” error).
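The two-branch rule above can be sketched with `inspect`; this is a minimal stand-in assumed to match the real helper only in spirit, and the toy classes below are hypothetical:

```python
import inspect


def supports_seq_lens(model) -> bool:
    # True if forward() names seq_lens explicitly, or if a bare **kwargs
    # would silently absorb it; False means passing seq_lens would raise.
    try:
        params = inspect.signature(model.forward).parameters
    except (TypeError, ValueError):
        return False
    if "seq_lens" in params:
        return True
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())


class Explicit:
    def forward(self, input_ids, seq_lens=None):
        return input_ids


class Absorbing:
    def forward(self, input_ids, **kwargs):
        return input_ids


class Strict:
    def forward(self, input_ids):
        return input_ids
```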
- nemo_automodel.components.utils.model_utils.filter_forward_kwargs(model: torch.nn.Module, kwargs: dict) dict#
Drop kwargs that `model.forward` does not accept.
If the model exposes `**kwargs` or its signature cannot be inspected, the input kwargs are returned unchanged. The original dict is never mutated.
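A minimal sketch of that contract, assuming the real helper filters against `inspect.signature` (the `Strict` model is hypothetical): a fresh dict is built so the caller's dict is never mutated, and everything passes through when `**kwargs` (or an uninspectable signature) makes filtering unnecessary or impossible.

```python
import inspect


def filter_forward_kwargs(model, kwargs: dict) -> dict:
    try:
        params = inspect.signature(model.forward).parameters
    except (TypeError, ValueError):
        return kwargs  # signature not inspectable: pass through unchanged
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return kwargs  # **kwargs accepts everything
    # Build a new dict; the caller's dict is never mutated.
    return {k: v for k, v in kwargs.items() if k in params}


class Strict:
    def forward(self, input_ids, attention_mask=None):
        return input_ids
```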
- nemo_automodel.components.utils.model_utils._get_logical_numel(param) int#
Return the logical number of elements for a parameter, accounting for quantized (packed) storage.
For bitsandbytes 4-bit params (Params4bit), the physical tensor packs multiple values per byte. We recover the logical count from the original shape stored in param.quant_state.
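The shape recovery can be sketched as follows; the `quant_state` / `.shape` attribute names mirror bitsandbytes' `Params4bit`, but this fallback logic is an assumption, not the exact implementation:

```python
import math


def get_logical_numel(param) -> int:
    # Params4bit packs two 4-bit values per byte, so the physical numel()
    # undercounts; the logical shape is recovered from param.quant_state.
    quant_state = getattr(param, "quant_state", None)
    shape = getattr(quant_state, "shape", None) if quant_state is not None else None
    if shape is not None:
        return math.prod(shape)
    return param.numel()  # ordinary (unpacked) parameter
```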
- nemo_automodel.components.utils.model_utils._get_model_param_stats(model: torch.nn.Module)#
Get the number of trainable parameters and the L2 norm of the model.
- Parameters:
model – Model to analyze
- Returns:
total_params (int), trainable_params (int), local_sq_norm (float)
- Return type:
tuple
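A sketch of those stats; the squared L2 norm is accumulated locally per rank so it can later be all-reduced and square-rooted globally (that aggregation step is an assumption, as is returning a plain tuple):

```python
import torch
import torch.nn as nn


def model_param_stats(model: nn.Module):
    # total/trainable element counts plus the local squared L2 norm.
    total, trainable, local_sq_norm = 0, 0, 0.0
    for p in model.parameters():
        total += p.numel()
        if p.requires_grad:
            trainable += p.numel()
            local_sq_norm += p.detach().float().pow(2).sum().item()
    return total, trainable, local_sq_norm
```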
- nemo_automodel.components.utils.model_utils.skip_random_init()#
Context manager to skip random weight initialization when loading pretrained models.
- nemo_automodel.components.utils.model_utils.resolve_trust_remote_code(pretrained_model_name_or_path)#
Whitelist NVIDIA models to allow remote code execution.
- Parameters:
pretrained_model_name_or_path (str) – The name or path of the pretrained model.
- Returns:
True if the model should be loaded with trust_remote_code, False otherwise.
- Return type:
bool
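A hypothetical sketch of such a whitelist: Hub IDs look like `org/model`, so trusting the NVIDIA org reduces to a prefix check. The real rule in `model_utils` may well be stricter (exact model IDs, local-path handling), so treat this as an illustration only:

```python
def resolve_trust_remote_code(pretrained_model_name_or_path: str) -> bool:
    # Assumed whitelist policy: only models published under the "nvidia"
    # Hub organization are trusted to run remote code.
    return pretrained_model_name_or_path.lower().startswith("nvidia/")
```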
- nemo_automodel.components.utils.model_utils.count_model_parameters(model: torch.nn.Module) tuple[int, int]#
Count total and trainable parameters. Safe to call on meta-device models.
- Parameters:
model – Model to analyze
- Returns:
trainable_params (int), total_params (int)
- Return type:
tuple[int, int]
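Why this is meta-device safe: `numel()` only reads shape metadata, never storage, so the count works even when parameters hold no real data. A minimal sketch (the returned order here is an assumption):

```python
import torch
import torch.nn as nn


def count_model_parameters(model: nn.Module) -> tuple[int, int]:
    # Shape metadata exists even on the "meta" device, so no storage is
    # ever touched while counting.
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable
```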
- nemo_automodel.components.utils.model_utils.print_trainable_parameters(model: torch.nn.Module) tuple[int, int]#
Print the number of trainable parameters in the model.
- Parameters:
model – Model to analyze
- Returns:
trainable_params (int), total_params (int)
- Return type:
tuple[int, int]
- nemo_automodel.components.utils.model_utils._freeze_module_by_attribute_and_patterns(
- model,
- attribute_name,
- name_patterns,
Helper function to freeze parameters by attribute name and name patterns.
- Parameters:
model – The model to apply freezing to.
attribute_name – Name of the model attribute to freeze (e.g., ‘vision_tower’).
name_patterns – List of patterns to match in module names.
- nemo_automodel.components.utils.model_utils.apply_parameter_freezing(model, freeze_config)#
Apply parameter freezing based on configuration.
- Parameters:
model – The model to apply freezing to.
freeze_config – Configuration dict specifying what to freeze.
freeze_config can contain:
- freeze_vision_tower: bool (default True)
- freeze_audio_tower: bool (default False)
- freeze_language_model: bool (default False)
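A sketch using the defaults listed above. The attribute names (`vision_tower`, `audio_tower`, `language_model`) mirror common Hugging Face multimodal models but are assumptions about the lookup, not the real function's exact logic:

```python
import torch.nn as nn


def apply_parameter_freezing(model: nn.Module, freeze_config: dict) -> None:
    # Map each config flag (with its documented default) to the model
    # attribute it governs, then freeze the matching submodule if present.
    targets = {
        "vision_tower": freeze_config.get("freeze_vision_tower", True),
        "audio_tower": freeze_config.get("freeze_audio_tower", False),
        "language_model": freeze_config.get("freeze_language_model", False),
    }
    for attr, freeze in targets.items():
        module = getattr(model, attr, None)
        if freeze and module is not None:
            for p in module.parameters():
                p.requires_grad = False
```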
- nemo_automodel.components.utils.model_utils.freeze_unused_kv_sharing_params(model)#
Freeze dead K/V parameters in KV-shared layers.
Models like Gemma4 E2B/E4B use KV-sharing, where the last N layers reuse key/value states from earlier layers. The `k_proj`, `v_proj`, `k_norm`, and `v_norm` modules still exist in those shared layers but are never used during forward. Their parameters therefore receive no gradients, yet the optimizer still tracks them. On checkpoint resume, the distributed checkpoint framework expects optimizer state for every parameter the optimizer was created with, but zero-gradient params may have been excluded from the saved state, causing a `RuntimeError`.
Calling this function before optimizer creation sets `requires_grad=False` on the dead parameters so the optimizer never tracks them, keeping save and load consistent.
- Parameters:
model – The model (or pipeline-parallel model part).
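The freezing step can be sketched as below, under an assumed `layers.<i>.` naming scheme; the real helper likely detects the KV-shared layers from the model config rather than taking indices explicitly:

```python
import torch.nn as nn

# Module suffixes that exist in KV-shared layers but never run in forward.
DEAD_KV_SUFFIXES = ("k_proj", "v_proj", "k_norm", "v_norm")


def freeze_dead_kv_params(model: nn.Module, shared_layer_indices) -> None:
    # Freeze k/v projection and norm params only in the layers that reuse
    # KV states, so the optimizer never tracks these dead parameters.
    for name, param in model.named_parameters():
        in_shared = any(f"layers.{i}." in name for i in shared_layer_indices)
        if in_shared and any(s in name for s in DEAD_KV_SUFFIXES):
            param.requires_grad = False
```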
- nemo_automodel.components.utils.model_utils.cast_mixed_dtype_params_to_bf16(model)#
Cast fp32 parameters and buffers to bf16 for FSDP2 compatibility.
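A minimal sketch of the cast, assuming the in-place `.data` rewrite shown here: only float32 tensors are downcast, so integer buffers and already-low-precision weights are untouched (FSDP2 wants a uniform floating dtype across the parameters it flattens together).

```python
import torch
import torch.nn as nn


def cast_fp32_to_bf16(model: nn.Module) -> None:
    # Walk parameters and buffers; rewrite only the fp32 ones as bf16.
    for tensor in list(model.parameters()) + list(model.buffers()):
        if tensor.dtype is torch.float32:
            tensor.data = tensor.data.to(torch.bfloat16)
```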
- nemo_automodel.components.utils.model_utils.squeeze_input_for_thd(
- input_ids,
- position_ids,
- padding_mask,
- attn_kwargs,
- seqlens_padding_value=-1000,
Squeeze batch dimension and prepare inputs for THD (total, hidden, depth) format.
This function removes the batch dimension from input tensors and processes attention kwargs for use with Transformer Engine’s THD format. It’s typically used when the batch has already been converted to THD format (with batch_size=1 as a placeholder dimension) and that dimension needs to be removed.
The function performs three key operations:
Removes the batch dimension (dim 0) from input tensors
Filters out padding values from cumulative sequence length tensors
Converts max_seqlen from tensor to scalar if needed
- Parameters:
input_ids (torch.Tensor) – Input token IDs with shape [1, total_tokens] or [1, total_tokens, hidden_dim]. The first dimension will be squeezed.
position_ids (torch.Tensor) – Position IDs with shape [1, total_tokens]. The first dimension will be squeezed.
padding_mask (torch.Tensor) – Padding mask with shape [1, total_tokens]. The first dimension will be squeezed.
attn_kwargs (dict) –
Dictionary of attention-related tensors. May contain:
cu_seqlens: Cumulative sequence lengths [1, num_seqs+1]
cu_seqlens_padded: Cumulative padded sequence lengths [1, num_seqs+1]
max_seqlen: Maximum sequence length (tensor or int)
Other attention parameters (will be squeezed if tensors)
seqlens_padding_value (int) – Sentinel value used to indicate padding in cu_seqlens and cu_seqlens_padded tensors. These values will be filtered out. Default: -1000.
- Returns:
A tuple containing:
- input_ids (torch.Tensor): Input IDs with batch dimension removed, [total_tokens] or [total_tokens, hidden_dim]
- position_ids (torch.Tensor): Position IDs with batch dimension removed, [total_tokens]
- padding_mask (torch.Tensor): Padding mask with batch dimension removed, [total_tokens]
- attn_kwargs (dict): Updated attention kwargs with batch dimensions removed from all tensor values, padding values filtered from cu_seqlens and cu_seqlens_padded, and max_seqlen converted to a scalar if it was a tensor
- Return type:
tuple
.. rubric:: Example

>>> input_ids = torch.tensor([[1, 2, 3, 4, 5]])  # [1, 5]
>>> position_ids = torch.tensor([[0, 1, 2, 3, 4]])  # [1, 5]
>>> padding_mask = torch.tensor([[False, False, False, False, False]])  # [1, 5]
>>> attn_kwargs = {
...     'cu_seqlens': torch.tensor([[0, 3, 5, -1000]]),  # [1, 4] with padding
...     'cu_seqlens_padded': torch.tensor([[0, 3, 5, -1000]]),
...     'max_seqlen': torch.tensor([3])
... }
>>> ids, pos, mask, kwargs = squeeze_input_for_thd(
...     input_ids, position_ids, padding_mask, attn_kwargs
... )
>>> ids.shape
torch.Size([5])
>>> kwargs['cu_seqlens']  # Padding value filtered out
tensor([0, 3, 5])
>>> kwargs['max_seqlen']  # Converted to scalar
3
.. note::
This function modifies attn_kwargs in-place. If you need to preserve the original dictionary, pass a copy.
- nemo_automodel.components.utils.model_utils.init_empty_weights()#
A context manager under which models are initialized with all parameters on the specified device.
- Parameters:
device (`torch.device`) – Device to initialize all parameters on.
Example:

>>> import torch.nn as nn
>>> from nemo_automodel.components.utils.model_utils import init_empty_weights
>>> with init_empty_weights():
...     tst = nn.Linear(100, 100)  # on `cuda` device