nemo_automodel.components.models.qwen3_5_moe.model#

Qwen3.5-MoE (VL) NeMo Automodel support.

Module Contents#

Classes#

Qwen3_5MoeBlock

Block that uses the Qwen3.5-MoE native GatedDeltaNet (separate in_proj_qkv, in_proj_z, in_proj_b, in_proj_a).

Fp32SafeQwen3_5MoeTextRotaryEmbedding

Ensure inv_freq stays in float32 across .to(dtype) calls.

Fp32SafeQwen3_5MoeVisionRotaryEmbedding

Ensure the vision rotary inv_freq buffer remains float32.

Qwen3_5MoeModel

Thin wrapper that exposes language_model internals as properties expected by the NeMo training loop (e.g. model.layers).

Qwen3_5MoeTextModelBackend

Qwen3.5-MoE text decoder rebuilt on top of the Qwen3-Next Block.

Qwen3_5MoeForConditionalGeneration

Qwen3.5-MoE VL conditional generation model using NeMo backend components.

Functions#

_make_missing

API#

nemo_automodel.components.models.qwen3_5_moe.model._make_missing(name: str)#
class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeBlock(layer_idx, config, moe_config, backend)#

Bases: nemo_automodel.components.models.qwen3_next.model.Block

Block that uses the Qwen3.5-MoE native GatedDeltaNet (separate in_proj_qkv, in_proj_z, in_proj_b, in_proj_a).

Initialization

init_weights(buffer_device: torch.device)#
class nemo_automodel.components.models.qwen3_5_moe.model.Fp32SafeQwen3_5MoeTextRotaryEmbedding#

Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeTextRotaryEmbedding

Ensure inv_freq stays in float32 across .to(dtype) calls.

_apply(fn: Any, recurse: bool = True)#
class nemo_automodel.components.models.qwen3_5_moe.model.Fp32SafeQwen3_5MoeVisionRotaryEmbedding#

Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeVisionRotaryEmbedding

Ensure the vision rotary inv_freq buffer remains float32.

_apply(fn: Any, recurse: bool = True)#
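Both `Fp32Safe*RotaryEmbedding` classes override `_apply` for the same reason: a blanket `model.to(torch.bfloat16)` would downcast the `inv_freq` buffer and degrade RoPE precision, so the override casts it back to float32 after the parent `_apply` runs. The sketch below illustrates the pattern with toy stand-ins for the torch types (`FakeTensor`, `Module` are not the real classes).

```python
class FakeTensor:
    """Toy stand-in for torch.Tensor carrying only a dtype."""

    def __init__(self, dtype="float32"):
        self.dtype = dtype

    def to(self, dtype):
        return FakeTensor(dtype)

    def float(self):
        return FakeTensor("float32")


class Module:
    """Toy stand-in for nn.Module: _apply maps fn over tensor attributes."""

    def _apply(self, fn, recurse=True):
        for name, value in list(vars(self).items()):
            if isinstance(value, FakeTensor):
                setattr(self, name, fn(value))
        return self


class RotaryEmbedding(Module):
    def __init__(self):
        self.inv_freq = FakeTensor("float32")


class Fp32SafeRotaryEmbedding(RotaryEmbedding):
    def _apply(self, fn, recurse=True):
        out = super()._apply(fn, recurse)
        # Undo any dtype change: RoPE frequencies lose precision in
        # half precision, so pin inv_freq to float32.
        self.inv_freq = self.inv_freq.float()
        return out


rope = Fp32SafeRotaryEmbedding()
rope._apply(lambda t: t.to("bfloat16"))  # e.g. what .to(bfloat16) triggers
print(rope.inv_freq.dtype)  # float32
```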
class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeModel#

Bases: transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeModel

Thin wrapper that exposes language_model internals as properties expected by the NeMo training loop (e.g. model.layers).

property layers#
property embed_tokens#
property norm#
forward(
input_ids=None,
attention_mask=None,
position_ids=None,
past_key_values=None,
inputs_embeds=None,
pixel_values=None,
pixel_values_videos=None,
image_grid_thw=None,
video_grid_thw=None,
cache_position=None,
**kwargs,
)#
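The property-forwarding pattern behind `Qwen3_5MoeModel` can be sketched as follows: the training loop expects `model.layers`, `model.embed_tokens`, and `model.norm` at the top level, so read-only properties forward to the nested `language_model`. The classes here are illustrative stand-ins, not the actual HF/NeMo types.

```python
class LanguageModel:
    """Stand-in for the nested text model holding the real submodules."""

    def __init__(self):
        self.layers = ["block0", "block1"]
        self.embed_tokens = "embedding_table"
        self.norm = "final_rmsnorm"


class WrappedModel:
    """Thin wrapper exposing language_model internals as properties."""

    def __init__(self):
        self.language_model = LanguageModel()

    @property
    def layers(self):
        return self.language_model.layers

    @property
    def embed_tokens(self):
        return self.language_model.embed_tokens

    @property
    def norm(self):
        return self.language_model.norm


model = WrappedModel()
# The properties alias the nested attributes; no state is duplicated.
print(model.layers is model.language_model.layers)  # True
```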
class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeTextModelBackend(
config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeTextConfig,
backend: nemo_automodel.components.models.common.BackendConfig,
*,
moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
)#

Bases: torch.nn.Module

Qwen3.5-MoE text decoder rebuilt on top of the Qwen3-Next Block.

Initialization

forward(
input_ids: torch.Tensor | None = None,
*,
inputs_embeds: torch.Tensor | None = None,
attention_mask: torch.Tensor | None = None,
position_ids: torch.Tensor | None = None,
cache_position: torch.Tensor | None = None,
padding_mask: torch.Tensor | None = None,
past_key_values: Any | None = None,
use_cache: bool | None = None,
**attn_kwargs: Any,
) → transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeModelOutputWithPast#
get_input_embeddings() → torch.nn.Module#
set_input_embeddings(value: torch.nn.Module) → None#
init_weights(buffer_device: torch.device | None = None) → None#
class nemo_automodel.components.models.qwen3_5_moe.model.Qwen3_5MoeForConditionalGeneration(
config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeConfig,
moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
backend: nemo_automodel.components.models.common.BackendConfig | None = None,
**kwargs,
)#

Bases: nemo_automodel.components.models.common.hf_checkpointing_mixin.HFCheckpointingMixin, transformers.models.qwen3_5_moe.modeling_qwen3_5_moe.Qwen3_5MoeForConditionalGeneration, nemo_automodel.components.moe.fsdp_mixin.MoEFSDPSyncMixin

Qwen3.5-MoE VL conditional generation model using NeMo backend components.

Inherits the HF model to reuse:

  • Vision encoder (Qwen3_5MoeVisionModel)

  • VL forward logic (image/video scatter, M-RoPE position computation)

  • prepare_inputs_for_generation / _expand_inputs_for_generation

Replaces:

  • model.language_model with Qwen3_5MoeTextModelBackend

  • lm_head with NeMo backend linear
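
The "inherit, then replace" construction described above can be sketched as follows: the HF `__init__` builds the full model (vision encoder, language model, `lm_head`), then the subclass swaps the text stack for backend components while keeping the inherited vision encoder and VL forward logic. All classes below are toy stand-ins for the real HF/NeMo types.

```python
class HFConditionalGeneration:
    """Stand-in for the inherited HF model: builds all submodules."""

    def __init__(self, config):
        self.config = config
        self.visual = "hf_vision_encoder"      # reused as-is
        self.language_model = "hf_text_model"  # replaced below
        self.lm_head = "hf_linear"             # replaced below


class BackendConditionalGeneration(HFConditionalGeneration):
    """Stand-in for the NeMo subclass: reuse, then swap submodules."""

    def __init__(self, config):
        super().__init__(config)
        # Vision encoder and VL forward logic stay inherited; only the
        # text decoder and output head are replaced.
        self.language_model = "backend_text_model"
        self.lm_head = "backend_linear"


model = BackendConditionalGeneration(config={})
print(model.visual, model.language_model, model.lm_head)
```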

Initialization

classmethod from_config(
config: transformers.models.qwen3_5_moe.configuration_qwen3_5_moe.Qwen3_5MoeConfig,
moe_config: nemo_automodel.components.moe.layers.MoEConfig | None = None,
backend: nemo_automodel.components.models.common.BackendConfig | None = None,
**kwargs,
)#
classmethod from_pretrained(
pretrained_model_name_or_path: str,
*model_args,
**kwargs,
)#
forward(
input_ids: torch.Tensor | None = None,
*,
position_ids: torch.Tensor | None = None,
attention_mask: torch.Tensor | None = None,
padding_mask: torch.Tensor | None = None,
inputs_embeds: torch.Tensor | None = None,
cache_position: torch.Tensor | None = None,
**kwargs: Any,
)#
initialize_weights(
buffer_device: torch.device | None = None,
dtype: torch.dtype = torch.bfloat16,
) → None#