nemo_automodel.components.models.deepseek_v3.layers#

Module Contents#

Classes#

| Class | Summary |
|---|---|
| MLA | |

Functions#

| Function | Summary |
|---|---|
| preprocess_args_and_kwargs_for_attn | Preprocess attention inputs based on backend requirements. |
| postprocess_output_for_attn | Postprocess attention output based on backend requirements. |

API#
- nemo_automodel.components.models.deepseek_v3.layers.preprocess_args_and_kwargs_for_attn(
- q: torch.Tensor,
- k: torch.Tensor,
- v: torch.Tensor,
- attention_mask: torch.Tensor | None,
- backend: nemo_automodel.components.moe.utils.BackendConfig,
- )#

Preprocess attention inputs based on backend requirements.
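The dispatch pattern behind a function like this can be sketched as follows. Everything here is a hedged stand-in: plain lists substitute for `torch.Tensor`, and the `attn` field and backend names (`"flash"`, `"sdpa"`) are hypothetical, not the actual schema of `nemo_automodel.components.moe.utils.BackendConfig`:

```python
from dataclasses import dataclass

@dataclass
class BackendConfig:
    # Hypothetical stand-in: the real BackendConfig may expose different fields.
    attn: str  # e.g. "sdpa" or "flash"

def preprocess_args_and_kwargs_for_attn(q, k, v, attention_mask, backend):
    """Return (args, kwargs) shaped for the chosen attention backend."""
    if backend.attn == "flash":
        # Flash-style kernels typically apply causal masking internally,
        # so a dense mask is dropped rather than forwarded.
        return (q, k, v), {}
    # SDPA-style backends accept the mask as a keyword argument.
    return (q, k, v), {"attn_mask": attention_mask}

args, kwargs = preprocess_args_and_kwargs_for_attn(
    [1.0], [2.0], [3.0], attention_mask=None, backend=BackendConfig(attn="flash")
)
```

The point of the indirection is that call sites stay backend-agnostic: they unpack `args`/`kwargs` and call whichever kernel the config selected.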
- nemo_automodel.components.models.deepseek_v3.layers.postprocess_output_for_attn(
- x: torch.Tensor,
- backend: nemo_automodel.components.moe.utils.BackendConfig,
- )#

Postprocess attention output based on backend requirements.
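A matching sketch of the post-processing side. The assumption (not stated in the source) is that some backends return output laid out per head, e.g. `(seq, heads, head_dim)`, and the heads must be merged back into one flat hidden vector per position; nested lists again stand in for tensors:

```python
def postprocess_output_for_attn(x, backend_attn: str):
    """Merge per-head outputs back into one hidden vector per position."""
    if backend_attn == "flash":
        # Assumed layout: x is [seq][heads][head_dim]; concatenate the heads.
        return [[v for head in pos for v in head] for pos in x]
    # Assumed: other backends already return [seq][hidden] unchanged.
    return x

# Two positions, two heads of width two -> two positions of width four.
out = postprocess_output_for_attn(
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]]], backend_attn="flash"
)
# out == [[1, 2, 3, 4], [5, 6, 7, 8]]
```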
- class nemo_automodel.components.models.deepseek_v3.layers.MLA(
- config: transformers.models.deepseek_v3.configuration_deepseek_v3.DeepseekV3Config,
- backend: nemo_automodel.components.moe.utils.BackendConfig,
- )#

Bases: torch.nn.Module

Initialization
- forward(
- x: torch.Tensor,
- freqs_cis: torch.Tensor,
- attention_mask: torch.Tensor | None = None,
- **attn_kwargs: Any,
- )#
- init_weights(buffer_device: torch.device, init_std: float = 0.02)#
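A common recipe behind an `init_weights` method with a `std` default of 0.02 is truncated-normal initialization (as in `torch.nn.init.trunc_normal_`): draw from N(0, std²) and resample anything beyond two standard deviations. Whether MLA uses exactly this recipe is an assumption; the stdlib sketch below only illustrates the technique:

```python
import random

def trunc_normal(n: int, std: float = 0.02, bound: float = 2.0) -> list:
    """Draw n samples from N(0, std^2), resampling values beyond bound*std."""
    out = []
    while len(out) < n:
        v = random.gauss(0.0, std)
        if abs(v) <= bound * std:  # keep only in-range draws
            out.append(v)
    return out

# With init_std=0.02 every weight lands in [-0.04, 0.04].
weights = trunc_normal(1000, std=0.02)
```

Truncation keeps rare large draws out of the initial weights, which helps training stability in deep transformer stacks.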