bridge.training.flex_dispatcher_backend#
Module Contents#
Functions#
- `apply_flex_dispatcher_backend`: Apply DeepEP or HybridEP optimizations to the model config.
- `validate_flex_dispatcher_backend`: Validate that DeepEP or HybridEP is supported for the current GPU architecture.
Data#
API#
- bridge.training.flex_dispatcher_backend.logger: logging.Logger#
`getLogger(...)`
- bridge.training.flex_dispatcher_backend.apply_flex_dispatcher_backend(model_config: megatron.core.transformer.TransformerConfig, moe_flex_dispatcher_backend: str | None = None)#
Apply DeepEP or HybridEP optimizations to the model config.
DeepEP is applicable only to MoE models running on Ampere, Hopper, B200, and B300 GPUs. HybridEP is applicable only to MoE models running on GB200 and GB300 systems with NVL72, and on Ampere, Hopper, B200, and B300 GPUs.
- bridge.training.flex_dispatcher_backend.validate_flex_dispatcher_backend(model_config: megatron.core.transformer.TransformerConfig)#
Validate DeepEP or HybridEP is supported for the current GPU architecture.
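A GPU-architecture check like this is typically keyed off the device's CUDA compute capability. The mapping below (Ampere = major 8, Hopper = major 9, Blackwell-class B200/B300/GB200/GB300 = major 10) reflects NVIDIA's published compute-capability majors; tying the two backends to exactly this set, and the helper's name and signature, are assumptions for illustration, not the module's actual validation logic.

```python
def is_flex_backend_supported_sketch(
    compute_capability: tuple[int, int],
    backend: str,
) -> bool:
    """Illustrative sketch of a GPU-architecture support check.

    In real code the capability tuple would come from something like
    torch.cuda.get_device_capability(); here it is passed in directly
    so the sketch stays self-contained.
    """
    major, _minor = compute_capability
    # Ampere (8.x), Hopper (9.x), and Blackwell-class (10.x) devices.
    supported_majors = {8, 9, 10}
    return backend in ("deepep", "hybridep") and major in supported_majors
```

Under this sketch, a Hopper GPU (capability `(9, 0)`) passes for either backend, while an older Turing GPU (capability `(7, 5)`) fails, which is where a validation function would raise or log an error.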