bridge.utils.common_utils
Module Contents

Functions
| Function | Description |
|---|---|
| get_rank_safe | Get the distributed rank safely, even if torch.distributed is not initialized. |
| get_world_size_safe | Get the distributed world size safely, even if torch.distributed is not initialized. |
| get_last_rank | Get the last rank in the distributed group. |
| get_local_rank_preinit | Get the local rank from the environment variable, intended for use before full init. |
| print_rank_0 | Print a message only on global rank 0. |
| warn_rank_0 | Warn only on rank 0. |
| is_last_rank | Check if the current rank is the last rank in the default process group. |
| print_rank_last | Print a message only on the last rank of the default process group. |
| hook_hf_module_setattr_for_tp_grad_sync | Mark params for TP grad sync and hook setattr on a module and its children. |
| extract_expert_number_from_param | Extract the expert number from a parameter name. |
API
- bridge.utils.common_utils.get_rank_safe() -> int
Get the distributed rank safely, even if torch.distributed is not initialized.
- Returns:
The current process rank.
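A minimal sketch of how such a guard typically looks; the pre-init fallback (the launcher's RANK environment variable, defaulting to 0) is an assumption, not documented here:

```python
import os

import torch.distributed as dist


def get_rank_safe_sketch() -> int:
    # Use the real rank once the default process group is initialized.
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    # Pre-init fallback (assumed): honor the launcher's RANK env var, else 0.
    return int(os.environ.get("RANK", "0"))
```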
- bridge.utils.common_utils.get_world_size_safe() -> int
Get the distributed world size safely, even if torch.distributed is not initialized.
- Returns:
The total number of processes in the distributed job.
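The world-size helper presumably mirrors the same pattern; the single-process fallback is an assumption:

```python
import os

import torch.distributed as dist


def get_world_size_safe_sketch() -> int:
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size()
    # Assume a single process when no distributed job has been launched.
    return int(os.environ.get("WORLD_SIZE", "1"))
```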
- bridge.utils.common_utils.get_last_rank() -> int
Get the last rank in the distributed group.
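In the default process group the last rank is simply world_size - 1; a one-line sketch (the use of the default group, with no explicit group argument, is an assumption):

```python
import torch.distributed as dist


def get_last_rank_sketch() -> int:
    # Highest-numbered rank in the default process group.
    return dist.get_world_size() - 1
```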
- bridge.utils.common_utils.get_local_rank_preinit() -> int
Get the local rank from the environment variable, intended for use before full init.
- Returns:
The local rank of the current process.
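Launchers such as torchrun export LOCAL_RANK before torch.distributed.init_process_group runs, so a pre-init helper can read it directly; the default of 0 is an assumption:

```python
import os


def get_local_rank_preinit_sketch() -> int:
    # torchrun / torch.distributed.launch set LOCAL_RANK for every worker.
    return int(os.environ.get("LOCAL_RANK", "0"))
```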
- bridge.utils.common_utils.print_rank_0(message: str) -> None
Print a message only on global rank 0.
- Parameters:
message – The message string to print.
- bridge.utils.common_utils.warn_rank_0(message)
Warn only on rank 0.
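Typical usage of both rank-0 helpers, e.g. to keep logs from being duplicated across every process; the message strings are purely illustrative:

```python
from bridge.utils.common_utils import print_rank_0, warn_rank_0

# Emitted once, on global rank 0; all other ranks stay silent.
print_rank_0("Starting training loop")
warn_rank_0("Gradient clipping is disabled")
```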
- bridge.utils.common_utils.is_last_rank() -> bool
Check if the current rank is the last rank in the default process group.
- Returns:
True if the current rank is the last one, False otherwise.
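A sketch of the check, assuming it compares the current rank against the default process group's last rank:

```python
import torch.distributed as dist


def is_last_rank_sketch() -> bool:
    # True only on the highest-numbered rank (world_size - 1).
    return dist.get_rank() == dist.get_world_size() - 1
```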
- bridge.utils.common_utils.print_rank_last(message: str) -> None
Print a message only on the last rank of the default process group.
- Parameters:
message – The message string to print.
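Hypothetical usage; printing from the last rank is the usual convention when that rank holds the final pipeline stage and therefore the loss:

```python
from bridge.utils.common_utils import print_rank_last

# The last pipeline stage computes the loss, so report it from there.
print_rank_last("iteration 100 | lm loss: 2.37")
```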
- bridge.utils.common_utils.hook_hf_module_setattr_for_tp_grad_sync(module: torch.nn.Module) -> torch.nn.Module
Mark params for TP grad sync and hook setattr on a module and its children.
This ensures that all existing parameters under the provided module have the attribute average_gradients_across_tp_domain=True, and that any future submodules assigned onto this module (or any of its current children) will also have their parameters marked automatically.
- Parameters:
module – The root module (typically a Hugging Face module instance).
- Returns:
The same module instance for convenience.
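A usage sketch; the Hugging Face model, the later submodule assignment, and the assertions are illustrative assumptions about how the average_gradients_across_tp_domain attribute could be checked downstream:

```python
import torch
from transformers import AutoModelForCausalLM

from bridge.utils.common_utils import hook_hf_module_setattr_for_tp_grad_sync

model = AutoModelForCausalLM.from_pretrained("gpt2")
model = hook_hf_module_setattr_for_tp_grad_sync(model)

# Existing parameters are tagged for TP gradient averaging ...
assert all(
    getattr(p, "average_gradients_across_tp_domain", False)
    for p in model.parameters()
)

# ... and parameters of submodules assigned later are tagged as well.
model.lm_head = torch.nn.Linear(768, 50257, bias=False)
assert all(
    getattr(p, "average_gradients_across_tp_domain", False)
    for p in model.lm_head.parameters()
)
```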
- bridge.utils.common_utils.extract_expert_number_from_param(param_name: str) -> int
Extract the expert number from a parameter name.
- Parameters:
param_name – The parameter name to extract the expert number from.
- Returns:
The expert number.
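The exact MoE parameter-naming scheme this parser expects is not documented here, so the following is a hypothetical sketch that pulls the index out of an 'experts.<n>.' segment:

```python
import re


def extract_expert_number_sketch(param_name: str) -> int:
    # Hypothetical layout: 'model.layers.3.mlp.experts.17.down_proj.weight' -> 17
    match = re.search(r"\bexperts\.(\d+)\.", param_name)
    if match is None:
        raise ValueError(f"No expert index found in {param_name!r}")
    return int(match.group(1))
```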