nemo_automodel.components.models.gpt_oss.layers#
Module Contents#
Classes#
API#
- class nemo_automodel.components.models.gpt_oss.layers.GptOssAttention(
- config: transformers.models.gpt_oss.configuration_gpt_oss.GptOssConfig,
- backend: nemo_automodel.components.moe.utils.BackendConfig,
- use_sliding_attention: bool = False,
- )#

Bases: torch.nn.Module

Initialization
- forward(
- x: torch.Tensor,
- freqs_cis: torch.Tensor,
- attention_mask: torch.Tensor | None = None,
- **attn_kwargs: Any,
- )#
- init_weights(buffer_device: torch.device, init_std: float = 0.02)#
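The `freqs_cis` argument of `forward` follows the common rotary position embedding (RoPE) convention of precomputed complex frequencies, one per position and per rotated dimension pair. As a minimal pure-Python sketch of how such a tensor is typically built (the helper name, `theta` default, and shape are assumptions for illustration, not this module's API):

```python
import cmath


def precompute_freqs_cis(head_dim: int, max_seq_len: int, theta: float = 10000.0):
    """Hypothetical helper: build complex rotary frequencies of shape
    [max_seq_len, head_dim // 2], the usual layout behind a `freqs_cis`
    argument. Entry [pos][i] is e^{i * pos * freq_i}, a unit-modulus
    complex number that rotates the i-th (even, odd) channel pair.
    """
    # Inverse frequencies: theta^(-2i / head_dim) for each channel pair i.
    inv_freqs = [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]
    return [
        [cmath.exp(1j * pos * f) for f in inv_freqs]  # rotation for this position
        for pos in range(max_seq_len)
    ]
```

In practice the attention module multiplies the query/key channel pairs (viewed as complex numbers) by these unit rotations before computing attention scores; at position 0 every entry is 1, leaving the embeddings unrotated.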