bridge.models.qwen3_asr.modeling_qwen3_asr.thinker_model#

Module Contents#

Classes#

Qwen3ASRThinkerModel

Qwen3-ASR Thinker Model.

API#

class bridge.models.qwen3_asr.modeling_qwen3_asr.thinker_model.Qwen3ASRThinkerModel(
language_transformer_config: megatron.bridge.models.qwen3_asr.modeling_qwen3_asr.transformer_config.Qwen3ASRTransformerConfig,
language_transformer_layer_spec: megatron.core.transformer.spec_utils.ModuleSpec,
thinker_transformer_config: megatron.bridge.models.qwen3_asr.hf_qwen3_asr.configuration_qwen3_asr.Qwen3ASRThinkerConfig,
parallel_output: bool = True,
pre_process: bool = True,
post_process: bool = True,
add_encoder: bool = True,
add_decoder: bool = True,
pg_collection: megatron.core.process_groups_config.ProcessGroupCollection | None = None,
)#

Bases: megatron.core.transformer.MegatronModule

Qwen3-ASR Thinker Model.

Audio-only model that combines an HF audio encoder with a Qwen3-based language model. Follows the Qwen2.5-Omni thinker pattern but simplified for ASR (no vision/video).

Initialization

shared_embedding_or_output_weight()#

A convenience method that surfaces the language model's word-embedding weight, which `finalize_model_grads._allreduce_word_embedding_grads` requires access to at the top-level module.
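A minimal, framework-free sketch of why such an accessor exists: when the output projection is tied to the input embedding, the gradient all-reduce must locate that one shared tensor from the top-level module. The class and variable names below are hypothetical, not Megatron's.

```python
# Hedged sketch of weight tying: the output projection reuses the
# embedding matrix, so grad reduction must act on one shared object.
class Embedding:
    def __init__(self, weight):
        self.weight = weight

class OutputLayer:
    def __init__(self, weight):
        self.weight = weight  # tied: same object as the embedding's

shared = [[0.1, 0.2], [0.3, 0.4]]  # toy weight matrix
emb, out = Embedding(shared), OutputLayer(shared)

def shared_embedding_or_output_weight(embedding):
    # top-level accessor, mirroring the method documented above
    return embedding.weight

# Both views resolve to the same underlying tensor.
assert shared_embedding_or_output_weight(emb) is out.weight
```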

set_input_tensor(input_tensor) → None#

Set the input tensor to the model. Used by Megatron's pipeline-parallel schedule to feed the previous stage's output into this stage in place of forward()'s own input.

freeze(
freeze_language_model: bool = False,
freeze_audio_model: bool = False,
)#

Freeze model modules.

Parameters:
  • freeze_language_model (bool) – Freeze the language model module.

  • freeze_audio_model (bool) – Freeze the audio model module.
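The freeze flags above follow the usual pattern of disabling gradients on every parameter of the selected submodule. A minimal pure-Python sketch (the `_Param`/`_Module` stand-ins are hypothetical; in practice this is `p.requires_grad = False` over `module.parameters()`):

```python
# Toy stand-ins for torch parameters/modules to show the freeze pattern.
class _Param:
    def __init__(self):
        self.requires_grad = True

class _Module:
    def __init__(self, n=2):
        self._params = [_Param() for _ in range(n)]
    def parameters(self):
        return self._params

def freeze(modules):
    # Disable gradient computation for every parameter of each module.
    for m in modules:
        for p in m.parameters():
            p.requires_grad = False

language_model, audio_model = _Module(), _Module()
freeze([language_model])  # i.e. freeze_language_model=True

assert all(not p.requires_grad for p in language_model.parameters())
assert all(p.requires_grad for p in audio_model.parameters())
```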

get_audio_features(
input_features: torch.FloatTensor,
feature_attention_mask: torch.LongTensor | None = None,
audio_feature_lengths: torch.LongTensor | None = None,
)#

Extract audio features using the HF audio encoder.

Follows the HF Qwen3ASRThinkerForConditionalGeneration.get_audio_features pattern: processes each audio individually for precision.
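A hedged sketch of the per-audio processing described above: per-sample lengths are recovered from a padded `feature_attention_mask` (1 = valid frame, 0 = padding), and each clip is run through the encoder individually rather than as one padded batch. Function names and the toy `encoder` are illustrative, not the HF API.

```python
def feature_lengths(mask_rows):
    # mask_rows: per-sample 0/1 rows of a padded feature_attention_mask
    return [sum(row) for row in mask_rows]

def encode_batch(features, mask_rows, encoder):
    # Encode each audio clip separately, trimmed to its true length,
    # mirroring the "process each audio individually" behavior.
    lengths = feature_lengths(mask_rows)
    return [encoder(feats[:n]) for feats, n in zip(features, lengths)]

mask = [[1, 1, 1, 0], [1, 1, 0, 0]]
feats = [list("abcd"), list("wxyz")]
print(feature_lengths(mask))                        # [3, 2]
print(encode_batch(feats, mask, "".join))           # ['abc', 'wx']
```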

forward(
input_ids: torch.Tensor,
input_features: torch.Tensor | None = None,
position_ids: torch.Tensor | None = None,
attention_mask: torch.Tensor | None = None,
labels: torch.Tensor | None = None,
loss_mask: torch.Tensor | None = None,
inference_params: megatron.core.InferenceParams | None = None,
packed_seq_params: megatron.core.packed_seq_params.PackedSeqParams | None = None,
extra_block_kwargs: dict | None = None,
feature_attention_mask: torch.Tensor | None = None,
audio_feature_lengths: torch.Tensor | None = None,
**kwargs,
) → torch.Tensor#
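In thinker-style models such as this one, `forward` typically splices the encoded audio features into the text embedding sequence at the positions of a special audio placeholder token before running the language model. A pure-Python sketch of that merge, under the assumption that this model follows the Qwen2.5-Omni pattern cited above (the placeholder id and helper name are hypothetical):

```python
AUDIO_TOKEN = -1  # hypothetical placeholder token id

def merge_audio(input_ids, text_embeds, audio_embeds):
    # Replace each placeholder position's text embedding with the
    # next audio embedding, in order; keep text embeddings elsewhere.
    merged, audio_iter = [], iter(audio_embeds)
    for tok, emb in zip(input_ids, text_embeds):
        merged.append(next(audio_iter) if tok == AUDIO_TOKEN else emb)
    return merged

ids = [5, AUDIO_TOKEN, AUDIO_TOKEN, 9]
text = ["t5", "pad", "pad", "t9"]
audio = ["a0", "a1"]
print(merge_audio(ids, text, audio))  # ['t5', 'a0', 'a1', 't9']
```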