bridge.models.qwen3_asr.modeling_qwen3_asr.model#

Module Contents#

Classes#

Qwen3ASRModel

Qwen3-ASR Model.

API#

class bridge.models.qwen3_asr.modeling_qwen3_asr.model.Qwen3ASRModel(
language_transformer_config: megatron.bridge.models.qwen3_asr.modeling_qwen3_asr.transformer_config.Qwen3ASRTransformerConfig,
language_transformer_layer_spec: megatron.core.transformer.spec_utils.ModuleSpec,
thinker_transformer_config,
parallel_output: bool = True,
pre_process: bool = True,
post_process: bool = True,
add_encoder: bool = True,
add_decoder: bool = True,
pg_collection: megatron.core.process_groups_config.ProcessGroupCollection | None = None,
)#

Bases: megatron.core.transformer.MegatronModule

Qwen3-ASR Model.

Top-level wrapper that delegates to Qwen3ASRThinkerModel. An audio-only model (no vision or video support) that follows the Qwen2.5-Omni pattern, simplified for ASR.

Initialization

shared_embedding_or_output_weight()#

Convenience method that surfaces the language model's word embedding weight, which finalize_model_grads._allreduce_word_embedding_grads requires.
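The weight-tying convention this method supports can be sketched with hypothetical stand-in classes (not the real Megatron API): under pipeline parallelism, the first stage holds the word-embedding weight and the last stage holds the output-layer weight, and surfacing them lets the gradient-finalization step all-reduce the tied copies to keep them in sync.

```python
# Hypothetical stand-ins (not the real Megatron classes) sketching the
# shared embedding/output-weight convention under pipeline parallelism.

class _Stage:
    """A pipeline stage holding at most one copy of the tied weight."""

    def __init__(self, pre_process: bool, post_process: bool, weight=None):
        self.pre_process = pre_process    # first pipeline stage?
        self.post_process = post_process  # last pipeline stage?
        self._weight = weight

    def shared_embedding_or_output_weight(self):
        # First stage surfaces the embedding weight, last stage the output
        # weight; middle stages hold no copy and return None.
        if self.pre_process or self.post_process:
            return self._weight
        return None


first = _Stage(pre_process=True, post_process=False, weight="embedding.weight")
middle = _Stage(pre_process=False, post_process=False)
last = _Stage(pre_process=False, post_process=True, weight="output_layer.weight")

# Only the boundary stages report a tied weight to all-reduce.
tied = [s.shared_embedding_or_output_weight() for s in (first, middle, last)]
```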

set_input_tensor(input_tensor) → None#

Set the input tensor to the model (used by pipeline parallelism to pass activations between stages).

freeze(
freeze_language_model: bool = False,
freeze_audio_model: bool = False,
)#

Freeze model modules.

Parameters:
  • freeze_language_model (bool) – Freeze the language model module.

  • freeze_audio_model (bool) – Freeze the audio model module.
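A minimal sketch of what such a freeze typically amounts to, using hypothetical stand-in classes rather than the real model: each flag disables gradient computation on the corresponding submodule's parameters.

```python
# Hypothetical stand-ins (not the real Megatron/torch classes) illustrating
# the freeze pattern: freezing a submodule disables requires_grad on all of
# its parameters so the optimizer leaves them untouched.

class _Param:
    def __init__(self):
        self.requires_grad = True

class _Module:
    def __init__(self, n_params=2):
        self._params = [_Param() for _ in range(n_params)]

    def parameters(self):
        return iter(self._params)

class _ASRModel:
    def __init__(self):
        self.language_model = _Module()
        self.audio_model = _Module()

    def freeze(self, freeze_language_model=False, freeze_audio_model=False):
        modules = []
        if freeze_language_model:
            modules.append(self.language_model)
        if freeze_audio_model:
            modules.append(self.audio_model)
        for module in modules:
            for param in module.parameters():
                param.requires_grad = False


model = _ASRModel()
# Freeze only the audio encoder; the language model stays trainable.
model.freeze(freeze_audio_model=True)
```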

forward(
input_ids: torch.Tensor,
input_features: torch.Tensor | None = None,
position_ids: torch.Tensor | None = None,
attention_mask: torch.Tensor | None = None,
labels: torch.Tensor | None = None,
loss_mask: torch.Tensor | None = None,
inference_params: megatron.core.InferenceParams | None = None,
packed_seq_params: megatron.core.packed_seq_params.PackedSeqParams | None = None,
extra_block_kwargs: dict | None = None,
feature_attention_mask: torch.Tensor | None = None,
audio_feature_lengths: torch.Tensor | None = None,
**kwargs,
) → torch.Tensor#