# TP / DP / PP Communication Overlap Skill
For stable background and the recommendation level, see `docs/training/communication-overlap.md`.
## Enablement

Minimal Bridge override:

```python
from megatron.bridge.training.comm_overlap import CommOverlapConfig

cfg.model.tensor_model_parallel_size = 4
cfg.model.sequence_parallel = True
cfg.model.pipeline_model_parallel_size = 4
cfg.model.virtual_pipeline_model_parallel_size = 2

cfg.comm_overlap = CommOverlapConfig(
    tp_comm_overlap=True,
)

cfg.ddp.use_distributed_optimizer = True
cfg.ddp.overlap_grad_reduce = True
cfg.ddp.overlap_param_gather = True
```
Optional TP preset:

```python
from megatron.bridge.training.comm_overlap import userbuffers_bf16_h100_h12288_tp4_mbs1_seqlen2048

cfg.comm_overlap.tp_comm_overlap_cfg = userbuffers_bf16_h100_h12288_tp4_mbs1_seqlen2048
```
Precision knobs belong to mixed precision:

```python
cfg.mixed_precision.grad_reduce_in_fp32 = False
cfg.mixed_precision.fp8_param_gather = False
```
## Code Anchors

Bridge overlap gating:

```python
if self.user_comm_overlap_cfg.tp_comm_overlap is True:
    if model_cfg.tensor_model_parallel_size < 2:
        ...
    elif not model_cfg.sequence_parallel:
        ...
    elif not HAVE_TE:
        ...
```
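The gating above can be sketched as a standalone predicate. This is a minimal illustration, not Bridge code: the function name `tp_overlap_effective` is hypothetical, and it only mirrors the three conditions shown in the anchor (TP size at least 2, sequence parallelism on, Transformer Engine importable).

```python
def tp_overlap_effective(tp_size: int, sequence_parallel: bool, have_te: bool) -> bool:
    """Hypothetical mirror of the Bridge gating: does TP overlap survive validation?"""
    if tp_size < 2:
        return False  # nothing to overlap without tensor parallelism
    if not sequence_parallel:
        return False  # TP overlap requires sequence parallelism
    if not have_te:
        return False  # userbuffers overlap is implemented in Transformer Engine
    return True
```

With the enablement config above (`tp=4`, `sequence_parallel=True`, TE installed), all three checks pass and the overlap request stands.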
PP overlap selection:

```python
if model_cfg.pipeline_model_parallel_size > 1:
    if vp_size > 1:
        comm_overlap_cfg.overlap_p2p_comm = True
        comm_overlap_cfg.batch_p2p_comm = False
    else:
        comm_overlap_cfg.overlap_p2p_comm = False
        comm_overlap_cfg.batch_p2p_comm = True
```
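The selection rule can be restated as a small pure function, which makes the pairing explicit: interleaved (virtual) pipeline schedules get overlapped p2p communication, plain schedules get batched p2p. This is an illustrative sketch with a hypothetical name, not a Bridge API:

```python
def select_p2p_flags(pp_size: int, vp_size: int) -> dict:
    """Hypothetical restatement of the Bridge PP overlap selection."""
    if pp_size <= 1:
        return {}  # no pipeline parallelism: nothing to select
    if vp_size > 1:
        # interleaved schedule: overlap sends/receives with compute
        return {"overlap_p2p_comm": True, "batch_p2p_comm": False}
    # plain schedule: batch sends/receives instead of overlapping
    return {"overlap_p2p_comm": False, "batch_p2p_comm": True}
```

Note the two flags are always set as an exclusive pair; `pp=4, vpp=2` from the enablement config lands in the overlapped branch.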
DP overlap defaults:

```python
if self.data_parallel_size > 1:
    comm_overlap_cfg.bucket_size = 128 * 1024 * 1024
    comm_overlap_cfg.overlap_grad_reduce = True
    comm_overlap_cfg.overlap_param_gather = True
```
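Because `bucket_size` counts parameters rather than bytes (see Pitfalls), the wire size of one bucket depends on the gradient dtype. A quick sanity check, with a hypothetical helper name:

```python
def bucket_bytes(bucket_size_params: int, bytes_per_elem: int) -> int:
    """Bytes moved per bucket: parameter count times element width."""
    return bucket_size_params * bytes_per_elem

# 128M parameters reduced in bf16 (2 bytes/elem) -> 256 MiB per bucket;
# the same bucket in fp32 (4 bytes/elem) doubles to 512 MiB.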
Launch-time env tuning:

```python
executor.env_vars["CUDA_DEVICE_MAX_CONNECTIONS"] = str(cuda_device_max_connections)
...
executor.env_vars["NVTE_FWD_LAYERNORM_SM_MARGIN"] = str(self.layernorm_sm_margin)
executor.env_vars["NVTE_BWD_LAYERNORM_SM_MARGIN"] = str(self.layernorm_sm_margin)
```
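As a minimal sketch of what such a plugin populates: the variable names below are the real CUDA/TE knobs from the anchor, but the function name and default values are illustrative assumptions, not Bridge defaults.

```python
def overlap_env(max_connections: int = 1, sm_margin: int = 8) -> dict:
    """Hypothetical env block a launch plugin would attach to the executor."""
    return {
        # serialize kernel launch streams so overlap ordering is predictable
        "CUDA_DEVICE_MAX_CONNECTIONS": str(max_connections),
        # reserve SMs so LayerNorm kernels don't starve overlapped comms
        "NVTE_FWD_LAYERNORM_SM_MARGIN": str(sm_margin),
        "NVTE_BWD_LAYERNORM_SM_MARGIN": str(sm_margin),
    }
```

All values are strings because they cross a process boundary as environment variables.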
## Pitfalls

- TP overlap silently disables itself if `sequence_parallel=False` or Transformer Engine is unavailable.
- PP overlap is not enabled for all PP cases: Bridge only auto-selects `overlap_p2p_comm=True` when `PP > 1` and `VPP > 1`.
- `bucket_size` is a parameter-count knob, not a byte-size knob.
- `grad_reduce_in_fp32` and `fp8_param_gather` should be set through mixed precision, not as standalone DDP tuning first.
- `CUDA_DEVICE_MAX_CONNECTIONS` and the LayerNorm SM margin are launch-time plugin settings, not `CommOverlapConfig` fields.
## Verification

Use the checked-in overlap unit coverage first:

```shell
uv run python -m pytest tests/unit_tests/training/test_comm_overlap.py -q
```

Optional second check if nemo_run is available:

```shell
uv run python -m pytest tests/unit_tests/recipes/test_run_plugins.py -q
```
Success criteria:

- the first command reports `26 passed`
- the second command validates plugin-owned env wiring when not skipped