nemo_automodel.components.models.minimax_m2.layers#

Module Contents#

Classes#

MiniMaxM2Attention

MiniMax-M2 attention with optional Q/K RMSNorm and partial RoPE.

API#

class nemo_automodel.components.models.minimax_m2.layers.MiniMaxM2Attention(
config: Any,
backend: nemo_automodel.components.models.common.BackendConfig,
)#

Bases: torch.nn.Module

MiniMax-M2 attention with optional Q/K RMSNorm and partial RoPE.
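For orientation, the sketch below illustrates the general idea of "partial RoPE": rotary position embeddings are applied only to the first rotary_dim channels of each head, while the remaining channels pass through unrotated. This is a minimal, illustrative helper assuming complex-valued freqs_cis (a common rotary-embedding convention); the names apply_partial_rope and rotary_dim are not part of this module's API.

```python
import torch


def apply_partial_rope(x: torch.Tensor, freqs_cis: torch.Tensor, rotary_dim: int) -> torch.Tensor:
    """Rotate only the first `rotary_dim` channels of each head; pass the rest through.

    Assumed shapes (illustrative): x is [batch, seq, n_heads, head_dim] and
    freqs_cis is a complex tensor of shape [seq, rotary_dim // 2].
    """
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    # View the rotated slice as complex pairs and multiply by the precomputed phases.
    x_c = torch.view_as_complex(x_rot.float().reshape(*x_rot.shape[:-1], -1, 2))
    x_c = x_c * freqs_cis.view(1, x_c.shape[1], 1, x_c.shape[-1])
    x_rot = torch.view_as_real(x_c).flatten(-2).type_as(x)
    # Unrotated channels are concatenated back unchanged.
    return torch.cat([x_rot, x_pass], dim=-1)
```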

Initialization

forward(
x: torch.Tensor,
*,
freqs_cis: torch.Tensor,
attention_mask: torch.Tensor | None = None,
**attn_kwargs: Any,
) → torch.Tensor#
init_weights(buffer_device: torch.device, init_std: float = 0.02)#
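A minimal usage sketch, assuming the model's configuration object (passed as config: Any) and a BackendConfig constructed with defaults; the placeholder values for config, x, and freqs_cis are illustrative and depend on the surrounding model code:

```python
import torch

from nemo_automodel.components.models.common import BackendConfig
from nemo_automodel.components.models.minimax_m2.layers import MiniMaxM2Attention

config = ...  # placeholder: the MiniMax-M2 model config expected by this layer

attn = MiniMaxM2Attention(config=config, backend=BackendConfig())

# Materialize parameters with the documented initializer.
attn.init_weights(buffer_device=torch.device("cuda"), init_std=0.02)

x = ...          # placeholder: hidden states, e.g. [batch, seq, hidden_size]
freqs_cis = ...  # placeholder: precomputed rotary phases for the sequence

out = attn(x, freqs_cis=freqs_cis, attention_mask=None)
```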