nemo_automodel.components.models.glm4_moe.layers#
Module Contents#
Classes#
Glm4MoeAttention | GLM4 MoE attention with optional query/key per-head RMSNorm + partial RoPE.
API#
- class nemo_automodel.components.models.glm4_moe.layers.Glm4MoeAttention(
- config: transformers.models.glm4_moe.configuration_glm4_moe.Glm4MoeConfig,
- backend: nemo_automodel.components.moe.utils.BackendConfig,
- )
Bases: torch.nn.Module
GLM4 MoE attention with optional query/key per-head RMSNorm + partial RoPE.
Key differences from Qwen3 MoE:
- Optional QK normalization (controlled by use_qk_norm)
- Partial rotary embeddings (controlled by partial_rotary_factor); both options are illustrated in the sketch after this list
- o_proj bias is False (unlike Qwen3, which has a configurable attention_bias)
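The two GLM4-specific options can be pictured with a small, self-contained sketch. This is illustrative only, not the module's implementation: the rotate-half convention, the `apply_partial_rope` helper, and the use of `nn.RMSNorm` over head_dim are assumptions.

```python
# Illustrative sketch (not the library implementation): per-head QK RMSNorm
# followed by RoPE applied only to the first `rotary_dim` channels of each head.
import torch
from torch import nn


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Standard "rotate half" helper used by many RoPE implementations.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_partial_rope(q, k, cos, sin, partial_rotary_factor: float = 0.5):
    # Only the leading `rotary_dim` channels of each head are rotated;
    # the remaining channels pass through unchanged.
    head_dim = q.shape[-1]
    rotary_dim = int(head_dim * partial_rotary_factor)
    q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
    k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
    q_rot = q_rot * cos + rotate_half(q_rot) * sin
    k_rot = k_rot * cos + rotate_half(k_rot) * sin
    return torch.cat([q_rot, q_pass], dim=-1), torch.cat([k_rot, k_pass], dim=-1)


B, S, n_heads, head_dim = 2, 16, 8, 64
q = torch.randn(B, S, n_heads, head_dim)
k = torch.randn(B, S, n_heads, head_dim)

# Optional per-head QK RMSNorm (what `use_qk_norm` toggles), normalizing over head_dim.
qk_norm = nn.RMSNorm(head_dim)
q, k = qk_norm(q), qk_norm(k)

# cos/sin broadcast over batch and heads; only the rotary slice is covered.
rotary_dim = int(head_dim * 0.5)
inv_freq = 1.0 / (10000 ** (torch.arange(0, rotary_dim, 2).float() / rotary_dim))
angles = torch.outer(torch.arange(S, dtype=torch.float32), inv_freq)
cos = torch.cat([angles.cos(), angles.cos()], dim=-1)[None, :, None, :]  # [1, S, 1, rotary_dim]
sin = torch.cat([angles.sin(), angles.sin()], dim=-1)[None, :, None, :]

q, k = apply_partial_rope(q, k, cos, sin, partial_rotary_factor=0.5)
print(q.shape, k.shape)  # torch.Size([2, 16, 8, 64]) for both
```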
Shapes:
- Input: x -> [B, S, H]
- Projections: q -> [B, S, n_heads, head_dim]; k/v -> [B, S, n_kv_heads, head_dim], repeated to n_heads via groups (see the GQA expansion sketch below)
- Output: [B, S, H]
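The "repeated to n_heads via groups" step is the standard grouped-query-attention (GQA) expansion, where each KV head serves n_heads // n_kv_heads query heads. A minimal sketch, assuming a repeat_interleave-based expansion (the actual attention backend may fold this step into its kernel):

```python
# Minimal GQA expansion sketch: each KV head is shared by n_heads // n_kv_heads query heads.
import torch

B, S, n_heads, n_kv_heads, head_dim = 2, 16, 8, 2, 64
k = torch.randn(B, S, n_kv_heads, head_dim)
v = torch.randn(B, S, n_kv_heads, head_dim)

groups = n_heads // n_kv_heads          # query heads per KV head
k = k.repeat_interleave(groups, dim=2)  # [B, S, n_heads, head_dim]
v = v.repeat_interleave(groups, dim=2)
print(k.shape, v.shape)  # torch.Size([2, 16, 8, 64]) for both
```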
Initialization
- forward(
- x: torch.Tensor,
- *,
- freqs_cis: torch.Tensor,
- attention_mask: torch.Tensor | None = None,
- **attn_kwargs: Any,
- )
- init_weights(buffer_device: torch.device, init_std: float = 0.02)#
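A hedged end-to-end usage sketch. The constructor and method calls follow the signatures listed above, but the default-constructed BackendConfig, the freqs_cis layout, and the attribute names read off the config are assumptions; treat this as a shape-level illustration rather than a verified recipe.

```python
import torch
from transformers.models.glm4_moe.configuration_glm4_moe import Glm4MoeConfig

from nemo_automodel.components.models.glm4_moe.layers import Glm4MoeAttention
from nemo_automodel.components.moe.utils import BackendConfig

config = Glm4MoeConfig()   # default GLM4 MoE hyperparameters
backend = BackendConfig()  # assumption: a default-constructed backend config is acceptable
attn = Glm4MoeAttention(config, backend)

# Initialize parameters/buffers on the target device with std 0.02.
attn.init_weights(buffer_device=torch.device("cpu"), init_std=0.02)

B, S, H = 2, 16, config.hidden_size
x = torch.randn(B, S, H)

# freqs_cis carries the precomputed rotary embeddings; its exact layout is
# model-specific, so this placeholder shape is an assumption.
head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
rotary_dim = int(head_dim * getattr(config, "partial_rotary_factor", 0.5))
freqs_cis = torch.randn(S, rotary_dim)

out = attn(x, freqs_cis=freqs_cis)  # -> [B, S, H]
print(out.shape)
```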