bridge.models.ministral3.ministral3_bridge#
Megatron Bridge for Ministral 3 Vision-Language Models.
This module provides the bridge implementation for converting Ministral 3 models between the HuggingFace and Megatron-Core formats.
Supported models:
- Ministral-3-3B-Base-2512
- Ministral-3-3B-Instruct-2512
- Ministral-3-3B-Reasoning-2512
- Ministral-3-8B-Base-2512
- Ministral-3-8B-Instruct-2512
- Ministral-3-8B-Reasoning-2512
- Ministral-3-14B-Base-2512
- Ministral-3-14B-Instruct-2512
- Ministral-3-14B-Reasoning-2512
Reference: https://huggingface.co/mistralai/Ministral-3-3B-Base-2512
Module Contents#
Classes#
Ministral3Bridge | Megatron Bridge for Ministral 3 Vision-Language Models.
API#
- class bridge.models.ministral3.ministral3_bridge.Ministral3Bridge#
Bases: megatron.bridge.models.conversion.model_bridge.MegatronModelBridge

Megatron Bridge for Ministral 3 Vision-Language Models.
This bridge handles conversion between the HuggingFace Mistral3ForConditionalGeneration format and the Megatron-Core Ministral3Model format for vision-language models.
The weight mappings handle:
- Vision model weights (vision encoder)
- Language model weights
- Multimodal projector weights
- Special token embeddings
Example:

    from megatron.bridge import AutoBridge

    bridge = AutoBridge.from_hf_pretrained("mistralai/Ministral-3-3B-Base-2512")
    provider = bridge.to_megatron_provider()
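The provider can then be used to instantiate the Megatron-Core model. A minimal continuation of the example above, assuming provide_distributed_model is the provider's instantiation entry point (check the AutoBridge documentation for the exact call):

    # Assumption: provide_distributed_model is the instantiation entry point
    # exposed by the provider; verify against the AutoBridge docs.
    megatron_model = provider.provide_distributed_model(wrap_with_ddp=False)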
- provider_bridge(hf_pretrained: megatron.bridge.models.hf_pretrained.vlm.PreTrainedVLM)#
Create a Ministral3ModelProvider from a HuggingFace pretrained VL model.
- Parameters:
hf_pretrained – HuggingFace pretrained VLM model
- Returns:
Ministral3ModelProvider configured with the HF model’s parameters
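A hedged sketch of calling this method directly, assuming PreTrainedVLM exposes a from_pretrained loader mirroring the HuggingFace API and that the import path follows the module path above:

    # Assumption: import paths and the from_pretrained loader are inferred
    # from the module layout shown in this page, not confirmed by the source.
    from megatron.bridge.models.hf_pretrained.vlm import PreTrainedVLM
    from megatron.bridge.models.ministral3.ministral3_bridge import Ministral3Bridge

    hf_model = PreTrainedVLM.from_pretrained("mistralai/Ministral-3-3B-Base-2512")

    bridge = Ministral3Bridge()
    provider = bridge.provider_bridge(hf_model)
    # provider is a Ministral3ModelProvider whose architecture fields
    # (hidden size, layer count, vocab size, ...) come from the HF config.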
- mapping_registry() -> megatron.bridge.models.conversion.mapping_registry.MegatronMappingRegistry#
Return MegatronMappingRegistry containing parameter mappings for VL models.
HuggingFace weight structure:
- language_model.model.embed_tokens.weight
- language_model.model.layers.{i}.input_layernorm.weight
- language_model.model.layers.{i}.self_attn.{q,k,v,o}_proj.weight
- language_model.model.layers.{i}.post_attention_layernorm.weight
- language_model.model.layers.{i}.mlp.{gate,up,down}_proj.weight
- language_model.model.norm.weight
- language_model.lm_head.weight
- vision_tower.** (patch_conv, ln_pre, transformer layers)
- multi_modal_projector.{norm,linear}.weight
- Returns:
MegatronMappingRegistry with all parameter mappings
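Megatron-Core stores some of the per-projection HF weights listed above in fused form: the q/k/v projections map onto a single linear_qkv weight, and the gate/up projections onto a single linear_fc1 weight. A conceptual sketch of that fusion, not the bridge's actual implementation (shapes and grouping are simplified and grouped-query head interleaving is ignored):

    import torch

    def fuse_qkv(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # Simplified: concatenate along the output dimension to form the fused
        # linear_qkv weight. Real mappings interleave heads per query group.
        return torch.cat([q, k, v], dim=0)

    def fuse_gate_up(gate: torch.Tensor, up: torch.Tensor) -> torch.Tensor:
        # gate_proj and up_proj become the fused linear_fc1 weight.
        return torch.cat([gate, up], dim=0)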