bridge.training.flex_dispatcher_backend#

Module Contents#

Functions#

apply_flex_dispatcher_backend

Apply DeepEP or HybridEP optimizations to the model config.

validate_flex_dispatcher_backend

Validate that DeepEP or HybridEP is supported on the current GPU architecture.

Data#

API#

bridge.training.flex_dispatcher_backend.logger: logging.Logger#

'getLogger(...)'

bridge.training.flex_dispatcher_backend.apply_flex_dispatcher_backend(
model_config: megatron.core.transformer.TransformerConfig,
moe_flex_dispatcher_backend: str | None = None,
) -> None#

Apply DeepEP or HybridEP optimizations to the model config.

DeepEP is applicable only to MoE models on Ampere, Hopper, B200, and B300 GPUs. HybridEP is applicable only to MoE models on GB200 and GB300 systems with NVL72, and on Ampere, Hopper, B200, and B300 GPUs.
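A minimal sketch of how such a function might apply a backend choice to a config. Everything here is illustrative: `FakeModelConfig` stands in for `megatron.core.transformer.TransformerConfig`, and the accepted backend strings (`"deepep"`, `"hybridep"`) are assumptions inferred from the description above, not confirmed by this reference.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for TransformerConfig; the real config has many more fields.
@dataclass
class FakeModelConfig:
    moe_flex_dispatcher_backend: Optional[str] = None

def apply_flex_dispatcher_backend(
    model_config: FakeModelConfig,
    moe_flex_dispatcher_backend: Optional[str] = None,
) -> None:
    """Sketch: record the requested MoE flex dispatcher backend on the config."""
    if moe_flex_dispatcher_backend is None:
        return  # nothing requested; leave the config untouched
    # Accepted backend names are an assumption based on the docstring above.
    if moe_flex_dispatcher_backend not in ("deepep", "hybridep"):
        raise ValueError(
            f"unknown flex dispatcher backend: {moe_flex_dispatcher_backend!r}"
        )
    model_config.moe_flex_dispatcher_backend = moe_flex_dispatcher_backend
```

The real implementation presumably also toggles MoE token-dispatcher settings on the config; this sketch only shows the dispatch-by-name shape of the API.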

bridge.training.flex_dispatcher_backend.validate_flex_dispatcher_backend(
model_config: megatron.core.transformer.TransformerConfig,
) -> None#

Validate that DeepEP or HybridEP is supported on the current GPU architecture.
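A self-contained sketch of the kind of architecture gating such validation implies. The compute-capability majors (8 = Ampere, 9 = Hopper, 10 = Blackwell-class B200/B300) and the idea of checking them are assumptions drawn from the supported-GPU list above, not the library's actual code, which would query the device (e.g. via `torch.cuda.get_device_capability`).

```python
# Assumed supported compute-capability majors for DeepEP:
# 8 = Ampere, 9 = Hopper, 10 = Blackwell (B200/B300).
DEEPEP_SUPPORTED_MAJORS = {8, 9, 10}

def validate_deepep_arch(cc_major: int) -> None:
    """Sketch: raise if DeepEP is (assumed) unsupported on this architecture."""
    if cc_major not in DEEPEP_SUPPORTED_MAJORS:
        raise RuntimeError(
            f"DeepEP is not supported on compute capability {cc_major}.x"
        )
```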