nemo_automodel.components.models.qwen3_5_moe.model#
Qwen3.5-MoE (VL) NeMo Automodel support.
Module Contents#
Classes#
| Class | Description |
|---|---|
| Qwen3_5MoeBlock | Block that uses the Qwen3.5-MoE native GatedDeltaNet (separate in_proj_qkv, in_proj_z, in_proj_b, in_proj_a). |
| Fp32SafeQwen3_5MoeTextRotaryEmbedding | Ensure inv_freq stays in float32 across .to(dtype) calls. |
| Fp32SafeQwen3_5MoeVisionRotaryEmbedding | Ensure the vision rotary inv_freq buffer remains float32. |
| Qwen3_5MoeModel | Thin wrapper that exposes language_model internals as properties expected by the NeMo training loop (e.g. model.layers). |
| Qwen3_5MoeTextModelBackend | Qwen3.5-MoE text decoder rebuilt on top of the Qwen3-Next Block. |
| Qwen3_5MoeForConditionalGeneration | Qwen3.5-MoE VL conditional generation model using NeMo backend components. |
Functions#
API#
- nemo_automodel.components.models.qwen3_5_moe.model._make_missing(name: str)#
- class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeBlock(layer_idx, config, moe_config, backend)#
Bases: nemo_automodel.components.models.qwen3_next.model.Block

Block that uses the Qwen3.5-MoE native GatedDeltaNet (separate in_proj_qkv, in_proj_z, in_proj_b, in_proj_a).
Initialization
- init_weights(buffer_device: torch.device)#
- class nemo_automodel.components.models.qwen3_5_moe.model.Fp32SafeQwen3_5MoeTextRotaryEmbedding#
Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeTextRotaryEmbedding

Ensure inv_freq stays in float32 across .to(dtype) calls.

- _apply(fn: Any, recurse: bool = True)#
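The float32-safe rotary classes above override `_apply` so that dtype casts such as `model.to(torch.bfloat16)` do not downcast the `inv_freq` buffer. A minimal sketch of that pattern, using simplified stand-ins for the tensor and module types (not the real torch classes):

```python
# Minimal stand-in (no torch) illustrating the fp32-safe `_apply` pattern:
# the parent's `_apply` walks buffers and applies a (possibly dtype-casting)
# fn; the override then re-casts `inv_freq` back to float32.
class _Tensor:
    def __init__(self, dtype):
        self.dtype = dtype

    def float(self):
        return _Tensor("float32")


class _BaseRotary:
    def _apply(self, fn, recurse=True):
        # torch.nn.Module._apply would walk all parameters/buffers;
        # here we only have the single inv_freq buffer.
        self.inv_freq = fn(self.inv_freq)
        return self


class Fp32SafeRotary(_BaseRotary):
    def __init__(self):
        self.inv_freq = _Tensor("float32")

    def _apply(self, fn, recurse=True):
        ret = super()._apply(fn, recurse)
        self.inv_freq = self.inv_freq.float()  # undo any low-precision cast
        return ret


rot = Fp32SafeRotary()
rot._apply(lambda t: _Tensor("bfloat16"))  # simulates .to(torch.bfloat16)
print(rot.inv_freq.dtype)  # → float32
```

The same idea applies to the vision variant below; only the owning module differs.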
- class nemo_automodel.components.models.qwen3_5_moe.model.Fp32SafeQwen3_5MoeVisionRotaryEmbedding#
Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeVisionRotaryEmbedding

Ensure the vision rotary inv_freq buffer remains float32.
- _apply(fn: Any, recurse: bool = True)#
- class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeModel#
Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeModel

Thin wrapper that exposes language_model internals as properties expected by the NeMo training loop (e.g. model.layers).

- property layers#
- property embed_tokens#
- property norm#
- forward(
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- past_key_values=None,
- inputs_embeds=None,
- pixel_values=None,
- pixel_values_videos=None,
- image_grid_thw=None,
- video_grid_thw=None,
- cache_position=None,
- **kwargs,
)#
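The wrapper's properties simply delegate to the inner `language_model`, so training code written against `model.layers` works unchanged. A minimal sketch of that delegation pattern, with simplified stand-in classes (not the real torch/transformers types):

```python
# Minimal stand-in (no torch) of the property-delegation pattern used by
# Qwen3_5MoeModel: `layers`, `embed_tokens`, and `norm` are read-only
# views onto the inner language model's attributes.
class _LanguageModel:
    def __init__(self):
        self.layers = ["layer0", "layer1"]
        self.embed_tokens = "embedding"
        self.norm = "final_norm"


class WrappedModel:
    def __init__(self):
        self.language_model = _LanguageModel()

    @property
    def layers(self):
        return self.language_model.layers

    @property
    def embed_tokens(self):
        return self.language_model.embed_tokens

    @property
    def norm(self):
        return self.language_model.norm


m = WrappedModel()
print(len(m.layers))  # → 2
```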
- class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeTextModelBackend(
- config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeTextConfig,
- backend: nemo_automodel.components.models.common.BackendConfig,
- *,
- moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
)#
Bases: torch.nn.Module

Qwen3.5-MoE text decoder rebuilt on top of the Qwen3-Next Block.
Initialization
- forward(
- input_ids: torch.Tensor | None = None,
- *,
- inputs_embeds: torch.Tensor | None = None,
- attention_mask: torch.Tensor | None = None,
- position_ids: torch.Tensor | None = None,
- cache_position: torch.Tensor | None = None,
- padding_mask: torch.Tensor | None = None,
- past_key_values: Any | None = None,
- use_cache: bool | None = None,
- **attn_kwargs: Any,
)#
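A hypothetical sketch of the decoder forward contract implied by this signature: embed the input ids unless `inputs_embeds` is given, run the hidden states through each block, then apply the final norm. All callables here are stand-ins, not the real NeMo components:

```python
# Hypothetical sketch (stand-ins, no torch) of a text-decoder forward flow:
# input ids -> embeddings -> stacked blocks -> final norm.
def decoder_forward(input_ids=None, *, inputs_embeds=None, embed, layers, norm):
    hidden = inputs_embeds if inputs_embeds is not None else embed(input_ids)
    for layer in layers:
        hidden = layer(hidden)
    return norm(hidden)


out = decoder_forward(
    input_ids=[1, 2, 3],
    embed=lambda ids: [float(i) for i in ids],   # toy embedding
    layers=[lambda h: [x + 1 for x in h]] * 2,   # two toy blocks
    norm=lambda h: h,                            # identity "norm"
)
print(out)  # → [3.0, 4.0, 5.0]
```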
- get_input_embeddings() → torch.nn.Module#
- set_input_embeddings(value: torch.nn.Module) → None#
- init_weights(buffer_device: torch.device | None = None) → None#
- class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeForConditionalGeneration(
- config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeConfig,
- moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
- backend: nemo_automodel.components.models.common.BackendConfig | None = None,
- **kwargs,
)#
Bases: nemo_automodel.components.models.common.hf_checkpointing_mixin.HFCheckpointingMixin, transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeForConditionalGeneration, nemo_automodel.components.moe.fsdp_mixin.MoEFSDPSyncMixin

Qwen3.5-MoE VL conditional generation model using NeMo backend components.
Inherits the HF model to reuse:

- Vision encoder (Qwen3_5MoeVisionModel)
- VL forward logic (image/video scatter, M-RoPE position computation)
- prepare_inputs_for_generation / _expand_inputs_for_generation

Replaces:

- model.language_model with Qwen3_5MoeTextModelBackend
- lm_head with NeMo backend linear
Initialization
- classmethod from_config(
- config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeConfig,
- moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
- backend: nemo_automodel.components.models.common.BackendConfig | None = None,
- **kwargs,
)#
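`from_config` follows the common classmethod-factory shape: build the model from a config plus optional MoE/backend configs, filling defaults when arguments are None. A minimal sketch with simplified stand-in types (not the real transformers/NeMo classes):

```python
# Minimal sketch of a `from_config` classmethod factory: optional
# moe_config/backend arguments default to placeholder values when None.
class Model:
    def __init__(self, config, moe_config=None, backend=None):
        self.config = config
        self.moe_config = moe_config
        self.backend = backend

    @classmethod
    def from_config(cls, config, moe_config=None, backend=None, **kwargs):
        # Hypothetical default; the real code would construct a BackendConfig.
        backend = backend if backend is not None else "default_backend"
        return cls(config, moe_config=moe_config, backend=backend, **kwargs)


m = Model.from_config({"num_hidden_layers": 4})
print(m.backend)  # → default_backend
```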
- classmethod from_pretrained(
- pretrained_model_name_or_path: str,
- *model_args,
- **kwargs,
)#
- forward(
- input_ids: torch.Tensor | None = None,
- *,
- position_ids: torch.Tensor | None = None,
- attention_mask: torch.Tensor | None = None,
- padding_mask: torch.Tensor | None = None,
- inputs_embeds: torch.Tensor | None = None,
- cache_position: torch.Tensor | None = None,
- **kwargs: Any,
)#
- initialize_weights(
- buffer_device: torch.device | None = None,
- dtype: torch.dtype = torch.bfloat16,
)#