core.resharding.refit#
Module Contents#
Classes#
_PlanCacheKey – Cache key for reshard plans.
Functions#
_get_config_tuple – Extract (TP, PP, EP, DP, expt_tp) sizes from a model core.
_build_plan_cache_key – Build cache key for reshard plan.
get_or_create_service – Get or create a cached CopyService instance for the given backend.
clear_service_cache – Clear the cached refit services.
clear_plan_cache – Clear the cached refit plans.
clear_all_caches – Clear both service and plan caches.
_unwrap_model_cores – Extract (src_core, tgt_core, num_experts) from model arguments.
_build_or_get_plan – Return the cached reshard plan, building it (collectively) if not yet cached.
_needs_mxfp8_conversion – Check if a model uses FlashInfer MXFP8 inference and needs weight conversion.
_setup_mxfp8_transform_on_plan – Detect MXFP8 needs and attach a transform to the plan if required.
prepare_swap_model_weights – Pre-build and cache the reshard plan and any format-conversion transforms.
swap_model_weights – Orchestrate weight swap/refit.
reshard_model_weights – Reshard and copy model weights from src_model to target_model using service.
Data#
API#
- core.resharding.refit.RefitBackendName#
None
- class core.resharding.refit._PlanCacheKey#
Cache key for reshard plans.
- rank: int#
None
- src_config: Optional[Tuple[int, int, int, int, int]]#
None
- dst_config: Optional[Tuple[int, int, int, int, int]]#
None
- num_experts: Optional[int]#
None
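The field listing above implies a hashable value type usable as a dict key. A minimal sketch, assuming a frozen dataclass (the actual class body is not shown on this page):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch of the cache key: a frozen dataclass is hashable,
# so instances can serve as keys in a module-level plan cache.
@dataclass(frozen=True)
class PlanCacheKey:
    rank: int
    src_config: Optional[Tuple[int, int, int, int, int]]  # (TP, PP, EP, DP, expt_tp)
    dst_config: Optional[Tuple[int, int, int, int, int]]
    num_experts: Optional[int]

key_a = PlanCacheKey(0, (2, 1, 1, 4, 1), (8, 1, 1, 1, 1), None)
key_b = PlanCacheKey(0, (2, 1, 1, 4, 1), (8, 1, 1, 1, 1), None)
assert key_a == key_b and hash(key_a) == hash(key_b)  # usable as a dict key
```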
- core.resharding.refit._get_config_tuple(
- core,
)#
Extract (TP, PP, EP, DP, expt_tp) sizes from a model core.
- Returns:
Tuple of (TP, PP, EP, DP, expt_tp) sizes, or None if core is None.
TP: Tensor parallelism
PP: Pipeline parallelism
EP: Expert parallelism
DP: Data parallelism
expt_tp: Expert tensor parallelism
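As a rough illustration of the contract above, here is a hypothetical extractor; the config attribute names are placeholders, not necessarily the ones the library actually reads:

```python
from types import SimpleNamespace
from typing import Optional, Tuple

def get_config_tuple(core) -> Optional[Tuple[int, int, int, int, int]]:
    """Sketch: read the five parallelism sizes off a core's config.
    Attribute names here are illustrative, not the library's actual ones."""
    if core is None:
        return None  # non-collocated / idle rank has no core on this side
    cfg = core.config
    return (cfg.tensor_model_parallel_size,     # TP
            cfg.pipeline_model_parallel_size,   # PP
            cfg.expert_model_parallel_size,     # EP
            cfg.data_parallel_size,             # DP
            cfg.expert_tensor_parallel_size)    # expt_tp

stub = SimpleNamespace(config=SimpleNamespace(
    tensor_model_parallel_size=2, pipeline_model_parallel_size=1,
    expert_model_parallel_size=4, data_parallel_size=2,
    expert_tensor_parallel_size=1))
print(get_config_tuple(stub))   # (2, 1, 4, 2, 1)
print(get_config_tuple(None))   # None
```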
- core.resharding.refit._build_plan_cache_key(
- src_core,
- tgt_core,
- num_experts: Optional[int],
- group=None,
)#
Build cache key for reshard plan.
- Parameters:
src_core – Source model core (or None for non-collocated destination/idle ranks)
tgt_core – Target model core (or None for non-collocated source/idle ranks)
num_experts – Number of MoE experts (or None for non-MoE models)
group – Optional process group for rank query
- Returns:
Cache key that uniquely identifies this reshard configuration for this rank
- core.resharding.refit._service_cache: dict[str, core.resharding.copy_services.base.CopyService]#
None
- core.resharding.refit._plan_cache: dict[core.resharding.refit._PlanCacheKey, Any]#
None
- core.resharding.refit.get_or_create_service(
- backend: core.resharding.refit.RefitBackendName,
- group=None,
)#
Get or create a cached CopyService instance for the given backend.
This avoids expensive repeated allocations (especially for NVSHMEM buffers) when swap_model_weights is called multiple times with the same backend.
- Parameters:
backend – Backend name (“nccl”, “gloo”, or “nvshmem”).
group – Optional process group for NCCL backend.
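The create-on-miss caching this describes can be sketched with a stand-in service class (real CopyService construction is backend-specific and expensive, e.g. NVSHMEM buffer allocation):

```python
# Stand-in for an expensive-to-construct backend service.
class FakeCopyService:
    instances = 0
    def __init__(self, backend: str):
        FakeCopyService.instances += 1
        self.backend = backend

_service_cache: dict = {}

def get_or_create_service(backend: str) -> FakeCopyService:
    # Construct the service only on a cache miss; later calls with the
    # same backend name reuse the instance instead of reallocating buffers.
    if backend not in _service_cache:
        _service_cache[backend] = FakeCopyService(backend)
    return _service_cache[backend]

a = get_or_create_service("nccl")
b = get_or_create_service("nccl")
assert a is b and FakeCopyService.instances == 1  # second call was a cache hit
```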
- core.resharding.refit.clear_service_cache()#
Clear the cached refit services.
Call this if you need to invalidate the cache, for example when reinitializing distributed state.
This properly finalizes services to free GPU buffers before clearing the cache.
- core.resharding.refit.clear_plan_cache()#
Clear the cached refit plans.
- core.resharding.refit.clear_all_caches()#
Clear both service and plan caches.
- core.resharding.refit._unwrap_model_cores(src_model, target_model)#
Extract (src_core, tgt_core, num_experts) from model arguments.
Handles list-wrapped modules and None (non-collocated) models. Fills in missing DP groups from Megatron’s parallel state on the source.
- Returns:
(src_core, tgt_core, num_experts)
- core.resharding.refit._build_or_get_plan(
- src_core,
- tgt_core,
- num_experts,
- group,
- src_rank_offset,
- dst_rank_offset,
)#
Return the cached reshard plan, building it (collectively) if not yet cached.
All participating ranks must call this simultaneously when the plan is not yet cached, because build_centralized_reshard_plan uses collective communication.
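The memoization this describes reduces to the pattern below; build_plan stands in for build_centralized_reshard_plan, and on a cache miss every participating rank must reach it at the same time:

```python
_plan_cache = {}
build_calls = 0

def build_plan(key):
    # Stand-in for the collective plan builder: in the real function this
    # performs collective communication, so all ranks must enter together.
    global build_calls
    build_calls += 1
    return {"key": key}

def build_or_get_plan(key):
    if key not in _plan_cache:
        _plan_cache[key] = build_plan(key)  # collective path, taken once
    return _plan_cache[key]                 # cached path, local only

p1 = build_or_get_plan(("rank0", (2, 1, 1, 4, 1)))
p2 = build_or_get_plan(("rank0", (2, 1, 1, 4, 1)))
assert p1 is p2 and build_calls == 1
```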
- core.resharding.refit._needs_mxfp8_conversion(model) bool#
Check if a model uses FlashInfer MXFP8 inference and needs weight conversion.
- core.resharding.refit._setup_mxfp8_transform_on_plan(plan, target_model) None#
Detect MXFP8 needs and attach a transform to the plan if required.
If the target_model uses an inference-optimized layer spec with MXFP8, this function:
Computes which params are eligible for MXFP8 conversion.
Quantizes the target model’s decoder weights to FlashInfer MXFP8Tensor (creating persistent buffers whose addresses are later captured by CUDA graphs).
Builds an MXFP8ReshardTransform and attaches it to the plan as plan.transform.
If the model doesn’t need MXFP8, plan.transform is set to None. Subsequent calls are no-ops if the plan already has a transform attribute.
- core.resharding.refit.prepare_swap_model_weights(
- src_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- target_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- group=None,
- src_rank_offset: int = 0,
- dst_rank_offset: int = 0,
)#
Pre-build and cache the reshard plan and any format-conversion transforms.
Call this during initialization while models are in their native (BF16) format, before any weight format conversion (e.g., MXFP8). The plan is stored in the same module-level cache as swap_model_weights, so subsequent calls reuse it without needing to inspect named_parameters() again.
If the target_model uses an inference-optimized layer spec with MXFP8 (config.transformer_impl == 'inference_optimized' and config.fp8_recipe == 'mxfp8'), this function also:
computes which parameters are eligible for MXFP8 conversion,
quantizes the target decoder weights to persistent FlashInfer MXFP8Tensor buffers (whose addresses are later baked into CUDA graphs),
creates an MXFP8ReshardTransform that subsequent swap_model_weights calls use automatically.
Callers do not need to know about MXFP8: the transform is created and cached transparently.
All participating ranks must call this simultaneously — the plan builder uses collective communication internally.
- Parameters:
src_model – Source model, or None if this rank only receives weights.
target_model – Target model, or None if this rank only sends weights.
group – Optional process group for collective communication.
src_rank_offset – Rank offset for source (training) workers.
dst_rank_offset – Rank offset for destination (inference) workers.
- core.resharding.refit.swap_model_weights(
- src_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- target_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- refit_method: Union[core.resharding.refit.RefitBackendName, core.resharding.copy_services.base.CopyService],
- group=None,
- src_rank_offset: int = 0,
- dst_rank_offset: int = 0,
- transform: Optional[core.resharding.transforms.ReshardTransform] = None,
)#
Orchestrate weight swap/refit.
If transform is not explicitly provided, the function automatically uses any MXFP8ReshardTransform that was created and cached by a prior prepare_swap_model_weights call for the same model pair. This makes MXFP8 handling transparent to callers.
- Parameters:
refit_method – a string backend name (one of the supported refit backends) or a CopyService instance.
group – Optional process group for communication.
src_rank_offset / dst_rank_offset – Offsets applied to local process group ranks so that metadata contains globally unique rank IDs across independent torch.distributed worlds.
transform – Optional ReshardTransform for custom format conversion. If None, the cached transform (from prepare_swap_model_weights) is used automatically when the receiver needs MXFP8 conversion.
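The fallback behavior of the transform parameter can be pictured as the following hypothetical helper (not the function's actual body): an explicit argument wins, otherwise the plan's cached transform is used:

```python
from types import SimpleNamespace

def resolve_transform(plan, transform=None):
    # Explicit argument wins; otherwise fall back to whatever
    # prepare_swap_model_weights cached on the plan (may be None).
    if transform is not None:
        return transform
    return getattr(plan, "transform", None)

plan = SimpleNamespace(transform="cached-mxfp8-transform")
assert resolve_transform(plan) == "cached-mxfp8-transform"
assert resolve_transform(plan, transform="explicit") == "explicit"
```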
- core.resharding.refit.reshard_model_weights(
- src_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- target_model: megatron.core.models.common.language_module.language_module.LanguageModule,
- service: core.resharding.copy_services.base.CopyService,
- group=None,
- src_rank_offset: int = 0,
- dst_rank_offset: int = 0,
- transform: Optional[core.resharding.transforms.ReshardTransform] = None,
)#
Reshard and copy model weights from src_model to target_model using service.
Supports None for src_model and/or target_model to enable non-collocated mode:
(src_model, target_model): Both models present (collocated mode)
(src_model, None): Source rank - only sends data (non-collocated)
(None, target_model): Destination rank - only receives data (non-collocated)
(None, None): Idle rank - participates in collectives but has no transfers (non-collocated)
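The four combinations above amount to a per-rank role classification; a small sketch:

```python
def reshard_role(src_model, target_model) -> str:
    # Classify this rank's role from which models are present locally.
    if src_model is not None and target_model is not None:
        return "collocated"   # sends and receives
    if src_model is not None:
        return "source"       # only sends weights
    if target_model is not None:
        return "destination"  # only receives weights
    return "idle"             # joins collectives, performs no transfers

assert reshard_role("m", "m") == "collocated"
assert reshard_role("m", None) == "source"
assert reshard_role(None, "m") == "destination"
assert reshard_role(None, None) == "idle"
```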
- Parameters:
group – Optional process group for collective communication.
src_rank_offset / dst_rank_offset – Offsets for mapping local ranks to global ranks in independent torch.distributed worlds.
transform – Optional ReshardTransform for custom format conversion.