bridge.models.ministral3.modeling_ministral3#

Ministral 3 Vision-Language Model for Megatron.

This module provides the Ministral3Model class that combines:

  • HuggingFace’s vision encoder (vision_tower) for image processing

  • HuggingFace’s multimodal projector for vision-to-language projection

  • Megatron’s language model for text generation

Reference: https://huggingface.co/mistralai/Ministral-3-3B-Base-2512

Module Contents#

Classes#

Ministral3Model

Ministral 3 Vision-Language (VL) model wrapper for Megatron.

API#

class bridge.models.ministral3.modeling_ministral3.Ministral3Model(
config: megatron.bridge.models.gpt_provider.GPTModelProvider,
pre_process: bool = True,
post_process: bool = True,
vp_stage: Optional[int] = None,
)#

Bases: megatron.core.transformer.module.MegatronModule

Ministral 3 Vision-Language (VL) model wrapper for Megatron.

This class combines HuggingFace’s vision components with Megatron’s language model:

  • Vision tower (HF): Processes images through the vision encoder

  • Multimodal projector (HF): Projects vision features to language model space

  • Language model (Megatron): Generates text conditioned on vision and text inputs

The vision encoder forward pass uses the HuggingFace implementation (attached via monkey-patching), while the language model forward pass uses Megatron’s optimized implementation.

Parameters:
  • config (GPTModelProvider) – Model provider containing configuration for language and vision modules.

  • pre_process (bool, optional) – Whether to construct the vision tower and projector. Default: True.

  • post_process (bool, optional) – Whether to apply post-processing. Default: True.

  • vp_stage (Optional[int], optional) – Pipeline stage for model parallelism. Default: None.
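A minimal construction sketch, assuming a configured GPTModelProvider; the provider fields shown are illustrative, not the module’s required set:

    from megatron.bridge.models.gpt_provider import GPTModelProvider
    from bridge.models.ministral3.modeling_ministral3 import Ministral3Model

    # Illustrative provider configuration; real fields depend on your checkpoint.
    provider = GPTModelProvider(
        num_layers=2,
        hidden_size=1024,
        num_attention_heads=8,
    )

    # First pipeline stage: builds the vision tower, projector, and language model.
    model = Ministral3Model(config=provider, pre_process=True, post_process=True)

    # A non-first pipeline stage skips the vision components entirely:
    # model = Ministral3Model(config=provider, pre_process=False)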

Attributes:
  • pre_process (bool) – If True, enables vision and multimodal components.

  • post_process (bool) – If True, enables post-processing.

  • vp_stage (Optional[int]) – Pipeline stage for model parallelism.

  • vision_tower (nn.Module) – Vision encoder from HuggingFace.

  • multi_modal_projector (nn.Module) – Projects vision features to language model space.

  • language_model (nn.Module) – Megatron language model.

  • get_image_features (callable) – Method to extract image features (monkey-patched from HF).

Forward Inputs:
  • input_ids (torch.LongTensor, optional) – Tokenized input ids for the language model.

  • attention_mask (torch.Tensor, optional) – Attention mask for the language model.

  • position_ids (torch.LongTensor, optional) – Position ids for the language model.

  • inputs_embeds (torch.FloatTensor, optional) – Precomputed input embeddings.

  • pixel_values (torch.Tensor, optional) – Image tensor(s) for the vision tower.

  • labels (torch.Tensor, optional) – Target labels for supervised training.

  • runtime_gather_output (bool, optional) – If True, gather outputs across pipeline stages.

  • image_sizes (torch.Tensor, optional) – Sizes of the input images, passed to the vision tower.

  • loss_mask (torch.Tensor, optional) – Mask for loss computation.

Returns:

Model output (e.g., logits or loss, depending on mode).

Return type:

Tensor

.. note::

  • If pre_process is False, only the language model is constructed.

  • The vision tower and projector are only active if pre_process is True.

  • This class is intended for use within the Megatron-LM framework.

  • Requires transformers >= 5.0.0 for Mistral3 model support.
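A defensive check reflecting the version note above; the use of the packaging library here is an assumption, not something this module requires:

    import transformers
    from packaging.version import Version

    # Guard against older transformers releases that predate Mistral3 support.
    if Version(transformers.__version__) < Version("5.0.0"):
        raise RuntimeError(
            f"Ministral3Model needs transformers >= 5.0.0, found {transformers.__version__}"
        )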

Initialization

set_input_tensor(input_tensor) → None#

Set the input tensor for this model chunk; pipeline-parallel schedules use this to pass activations from the previous stage.
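Megatron’s pipeline-parallel schedules normally call this hook themselves; a hedged sketch of what that amounts to, with illustrative shapes:

    import torch

    seq_len, micro_batch, hidden = 128, 2, 1024  # illustrative shapes
    # Activations handed over from the previous pipeline stage.
    prev_stage_output = torch.zeros(seq_len, micro_batch, hidden)
    model.set_input_tensor(prev_stage_output)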

forward(
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
pixel_values: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
runtime_gather_output: Optional[bool] = None,
image_sizes: Optional[torch.Tensor] = None,
*,
loss_mask: Optional[torch.Tensor] = None,
) → torch.Tensor#

Forward pass combining HuggingFace vision encoder with Megatron language model.

Parameters:
  • input_ids – Tokenized input ids for the language model.

  • attention_mask – Attention mask for the language model.

  • position_ids – Position ids for the language model.

  • inputs_embeds – Precomputed input embeddings.

  • pixel_values – Image tensor(s) for the vision tower.

  • labels – Target labels for supervised training.

  • runtime_gather_output – If True, gather outputs across pipeline stages.

  • image_sizes – Sizes of the input images, passed to the vision tower.

  • loss_mask – Mask for loss computation.

Returns:

Model output (logits or loss depending on mode).
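A hedged usage sketch; the shapes are illustrative, and in practice input_ids, pixel_values, and image_sizes would come from the matching HuggingFace processor rather than being built by hand:

    import torch

    input_ids = torch.randint(0, 32000, (1, 32))            # (batch, seq_len)
    position_ids = torch.arange(32).unsqueeze(0)            # (batch, seq_len)
    attention_mask = torch.ones(1, 32, dtype=torch.bool)
    pixel_values = torch.randn(1, 3, 512, 512)              # one RGB image
    image_sizes = torch.tensor([[512, 512]])                # per-image (height, width)

    logits = model.forward(
        input_ids=input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
        pixel_values=pixel_values,
        image_sizes=image_sizes,
    )  # returns logits here; pass labels (and loss_mask) to get a loss instead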

freeze(
freeze_language_model: bool,
freeze_vision_model: bool,
freeze_vision_projection: bool,
)#

Freeze model modules.

Make specific modules non-trainable by setting requires_grad to False.

Parameters:
  • freeze_language_model (bool) – Freeze the language model module.

  • freeze_vision_model (bool) – Freeze the vision model module (vision_tower).

  • freeze_vision_projection (bool) – Freeze the vision projection module (multi_modal_projector).
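For example, a common vision-language fine-tuning recipe keeps the HuggingFace vision tower frozen while the projector and Megatron language model continue training:

    model.freeze(
        freeze_language_model=False,     # keep training the Megatron language model
        freeze_vision_model=True,        # freeze vision_tower
        freeze_vision_projection=False,  # keep training multi_modal_projector
    )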