bridge.models.nemotron_vl.nemotron_vl_provider#

Module Contents#

Classes#

NemotronNano12Bv2VLModelProvider

Configuration provider for Nemotron-VL models.

API#

class bridge.models.nemotron_vl.nemotron_vl_provider.NemotronNano12Bv2VLModelProvider#

Bases: megatron.bridge.models.nemotronh.nemotron_h_provider.NemotronNano12Bv2Provider

Configuration provider for Nemotron-VL models.

scatter_embedding_sequence_parallel: bool = False#

attention_softmax_in_fp32: bool = True#

vision_model_type: str = 'radio'#

language_model_type: str = 'nemotron5-hybrid-12b'#

generation_config: Optional[Any] = None#

freeze_language_model: bool = False#

freeze_vision_model: bool = False#

freeze_vision_projection: bool = False#
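The freeze flags make it straightforward to set up partial fine-tuning. A minimal sketch, assuming the provider is constructible with defaults and importable under the megatron.bridge namespace shown in the Bases line (the freeze values chosen below are illustrative, not defaults):

```python
# Hedged sketch; the import path is assumed from this page's module
# name plus the megatron.bridge namespace used in the Bases line.
from megatron.bridge.models.nemotron_vl.nemotron_vl_provider import (
    NemotronNano12Bv2VLModelProvider,
)

provider = NemotronNano12Bv2VLModelProvider()

# Defaults documented above: RADIO vision tower, Nemotron-H hybrid 12B
# language model, attention softmax in fp32, nothing frozen.
assert provider.vision_model_type == "radio"
assert provider.language_model_type == "nemotron5-hybrid-12b"

# Illustrative projector-only fine-tune: freeze both towers and leave
# the vision projection trainable (these are NOT the defaults).
provider.freeze_language_model = True
provider.freeze_vision_model = True
provider.freeze_vision_projection = False
```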

provide(pre_process=None, post_process=None, vp_stage=None)#

Assemble a full megatron.core.models.multimodal.llava_model.LLaVAModel and wrap it.

This is a trimmed-down version of the assembly code used in pretrain_vlm.py: it relies only on parameters already stored in the provider, so it works in any script (no Megatron training CLI required).
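In practice this means the full multimodal model can be materialized directly from a configured provider. A hedged usage sketch, assuming a single pipeline stage where the first and last stages coincide:

```python
# Hedged sketch: on a single pipeline stage, both pre_process (input
# embedding / vision side) and post_process (output head and loss side)
# belong to this rank, so both flags are True.
model = provider.provide(pre_process=True, post_process=True)

# In a pipeline-parallel run these flags would instead be derived from
# the rank, e.g. via megatron.core.parallel_state's
# is_pipeline_first_stage() / is_pipeline_last_stage().
```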

provide_language_model(pre_process=None, post_process=None, vp_stage=None)#
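The language tower can also be built on its own, which is useful for pipeline stages or text-only evaluation paths that never touch the vision encoder. A hedged sketch (the return value is assumed to be the underlying Megatron language model):

```python
# Hedged sketch: build only the Nemotron-H hybrid language model.
# vp_stage selects the virtual pipeline stage when interleaved
# pipeline parallelism is enabled; None means it is not in use.
language_model = provider.provide_language_model(
    pre_process=True,
    post_process=True,
    vp_stage=None,
)
```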