nemo_automodel.components.moe.uccl_ep._buffer#
Module Contents#
Classes#
The core expert-parallel (EP) communication buffers for Mixture of Experts (MoE) models, which support: high-throughput intranode all-to-all (dispatch and combine, using NVLink), high-throughput internode all-to-all (dispatch and combine, using RDMA and NVLink), and low-latency all-to-all (dispatch and combine, using RDMA).
API#
- class nemo_automodel.components.moe.uccl_ep._buffer.Buffer(
- group: torch.distributed.ProcessGroup,
- num_nvl_bytes: int = 0,
- num_rdma_bytes: int = 0,
- low_latency_mode: bool = False,
- num_qps_per_rank: int = 24,
- allow_nvlink_for_low_latency_mode: bool = True,
- allow_mnnvl: bool = False,
- explicitly_destroy: bool = False,
- is_intranode: bool = False,
The core expert-parallel (EP) communication buffers for Mixture of Experts (MoE) models, which support:
- high-throughput intranode all-to-all (dispatch and combine, using NVLink)
- high-throughput internode all-to-all (dispatch and combine, using RDMA and NVLink)
- low-latency all-to-all (dispatch and combine, using RDMA)
- Attributes:
num_sms – the SMs used in high-throughput kernels.
rank – the local rank number.
group_size – the number of ranks in the group.
group – the communication group.
num_nvl_bytes – the buffer size for intranode NVLink communication.
num_rdma_bytes – the buffer size for internode (also for intranode with low-latency mode) RDMA communication.
runtime – the C++ runtime.
Initialization
Initialize the communication buffer.
- Parameters:
group – the communication group.
num_nvl_bytes – the buffer size for intranode NVLink communication.
num_rdma_bytes – the buffer size for internode (also for intranode with low-latency mode) RDMA communication.
low_latency_mode – whether to enable low-latency mode.
num_qps_per_rank – the number of QPs for RDMA; low-latency mode requires this number to equal the number of local experts.
allow_nvlink_for_low_latency_mode – whether to allow NVLink traffic for low-latency mode; note that this is somewhat incompatible with hook-based overlapping. Warning: PCIe connections may lead to errors due to memory ordering issues; please make sure all connections are via NVLink.
allow_mnnvl – whether to allow MNNVL.
explicitly_destroy – if this flag is set to True, you need to explicitly call destroy() to release resources; otherwise, the resources will be released by the destructor. Note: releasing resources in the destructor may cause Python's exception handling process to hang.
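A minimal construction sketch (not part of the API reference), assuming torch.distributed has already been initialized (e.g. via torchrun); the hidden size, expert count, and token budget are illustrative values only, and the RDMA buffer is sized with the static size hint documented further below:

```python
import torch.distributed as dist

from nemo_automodel.components.moe.uccl_ep._buffer import Buffer

# Assumes dist.init_process_group() has already been called (e.g. by torchrun).
group = dist.new_group(ranks=list(range(dist.get_world_size())))

# Illustrative model dimensions, not recommendations.
hidden, num_experts, num_max_tokens = 4096, 64, 128

# Size the RDMA buffer for low-latency mode with the static size hint.
num_rdma_bytes = Buffer.get_low_latency_rdma_size_hint(
    num_max_dispatch_tokens_per_rank=num_max_tokens,
    hidden=hidden,
    num_ranks=group.size(),
    num_experts=num_experts,
)

buffer = Buffer(
    group,
    num_nvl_bytes=0,
    num_rdma_bytes=num_rdma_bytes,
    low_latency_mode=True,
    # Low-latency mode requires this to equal the number of local experts.
    num_qps_per_rank=num_experts // group.size(),
)
```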
- num_sms: int#
20
- _ll_compute_stream_ptr(device: torch.device)#
Return the current CUDA stream pointer for low-latency runtime calls.
- reset_rdma_buffer()#
Reset the RDMA buffer; this is useful when you want to reuse the RDMA buffer for another run.
- connect_atomic_buffer(proxy: uccl.ep.UcclProxy)#
- destroy()#
Destroy the cpp runtime and release resources.
- static is_sm90_compiled()#
- static set_num_sms(new_num_sms: int) None#
Set the number of SMs to use in high-throughput kernels.
- Parameters:
new_num_sms – the new number to be set.
- static capture() nemo_automodel.components.moe.uccl_ep._utils.EventOverlap#
Capture a CUDA event on the current stream, i.e. torch.cuda.current_stream().
- Returns:
the captured event.
- Return type:
event
- low_latency_dispatch(
- x: torch.Tensor,
- topk_idx: torch.Tensor,
- num_max_dispatch_tokens_per_rank: int,
- num_experts: int,
- cumulative_local_expert_recv_stats: Optional[torch.Tensor] = None,
- dispatch_wait_recv_cost_stats: Optional[torch.Tensor] = None,
- use_fp8: bool = True,
- round_scale: bool = False,
- use_ue8m0: bool = False,
- async_finish: bool = False,
- return_recv_hook: bool = False,
A low-latency implementation for dispatching with IBGDA. This kernel requires that all ranks (whether intranode or internode) be visible via RDMA (specifically, IBGDA must be enabled). Warning: as there are only two buffers and the returned tensors reuse the buffer, you cannot hold more than 2 low-latency kernels' result tensors at a single moment. A usage sketch follows the parameter and return descriptions below.
- Parameters:
x – torch.Tensor with torch.bfloat16, shaped as [num_tokens, hidden]; only several hidden shapes are supported. The number of tokens to be dispatched must be less than num_max_dispatch_tokens_per_rank.
topk_idx – torch.Tensor with torch.int64, shaped as [num_tokens, num_topk]; only several top-k shapes are supported. -1 indices (not selecting any expert) are supported.
num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch; all the ranks must hold the same value.
num_experts – the number of all experts.
cumulative_local_expert_recv_stats – a cumulative expert count tensor for statistics, which should have shape [num_local_experts] and be typed as torch.int. This is useful for online-service EP load-balance monitoring.
dispatch_wait_recv_cost_stats – a cumulative tensor of time spent waiting to receive each token, for statistics, which should have shape [num_ranks, num_ranks] and be typed as torch.int64. This is useful for detecting and precisely localizing slow anomalies.
use_fp8 – whether to enable FP8 casting; with this, the received data will be a tuple of an FP8 tensor and scaling factors.
round_scale – whether to round the scaling factors to powers of 2.
use_ue8m0 – whether to use UE8M0 as the scaling-factor format (available only with round_scale=True).
async_finish – the current stream will not wait for the communication kernels to finish if set.
return_recv_hook – return a receiving hook if set. If set, the kernel will only issue the RDMA requests without actually receiving the data; you must call the returned hook to ensure the data's arrival. If you do not set this flag, the kernel will ensure the data's arrival.
- Returns:
a tensor or tuple with received tokens for each expert. With use_fp8=True: the first element is a torch.Tensor shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.float8_e4m3fn. The second tensor holds the corresponding scales for the first element, with shape [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden // 128] and type torch.float if use_ue8m0=False; with use_ue8m0=True, the second one is packed and shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden // 512] with type torch.int. Notice that the last two dimensions of the scaling tensors are in column-major order for TMA compatibility. With use_fp8=False, the result is a tensor shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.bfloat16. Moreover, not all tokens are valid; only some of the num_max_dispatch_tokens_per_rank * num_ranks are, as we do not synchronize the CPU received count with the GPU (which would also not be compatible with CUDA graph if synced).
recv_count: a tensor shaped [num_local_experts] with type torch.int, indicating how many tokens each expert receives. As mentioned before, not all tokens are valid in recv_x.
handle: the communication handle to be used in the low_latency_combine function.
event: the event after executing the kernel (valid only if async_finish is set).
hook: the receiving hook function (valid only if return_recv_hook is set).
- Return type:
recv_x
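The usage sketch referenced above; it assumes the buffer, num_experts, and num_max_tokens from the construction sketch, a routed activation x of shape [num_tokens, hidden] in torch.bfloat16, and router indices topk_idx of shape [num_tokens, num_topk] in torch.int64:

```python
# Low-latency dispatch; with use_fp8=True, recv_x is a (fp8 tensor, scales) tuple.
recv_x, recv_count, handle, event, hook = buffer.low_latency_dispatch(
    x,
    topk_idx,
    num_max_dispatch_tokens_per_rank=num_max_tokens,
    num_experts=num_experts,
    use_fp8=True,
    async_finish=False,
    return_recv_hook=False,  # the kernel guarantees data arrival before returning
)
fp8_tokens, scales = recv_x
# Only the first recv_count[i] tokens of local expert i are valid in fp8_tokens[i].
```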
- low_latency_combine(
- x: torch.Tensor,
- topk_idx: torch.Tensor,
- topk_weights: torch.Tensor,
- handle: tuple,
- use_logfmt: bool = False,
- zero_copy: bool = False,
- async_finish: bool = False,
- return_recv_hook: bool = False,
- out: Optional[torch.Tensor] = None,
- combine_wait_recv_cost_stats: Optional[torch.Tensor] = None,
A low-latency implementation for combining tokens (reduce with weights) with IBGDA. This kernel requires that all ranks (whether intranode or internode) be visible via RDMA (specifically, IBGDA must be enabled). Warning: as there are only two buffers and the returned tensors reuse the buffer, you cannot hold more than 2 low-latency kernels' result tensors at a single moment. A usage sketch follows the parameter and return descriptions below.
- Parameters:
x – [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.bfloat16, the locally computed tokens to be sent back to their original ranks and reduced.
topk_idx – [num_combined_tokens, num_topk] with torch.int64, the expert indices selected by the dispatched tokens. -1 indices (not selecting any expert) are supported. Note that num_combined_tokens equals the number of dispatched tokens.
topk_weights – [num_combined_tokens, num_topk] with torch.float, the expert weights selected by the dispatched tokens. The received tokens will be reduced with the weights in this tensor.
handle – the communication handle given by the dispatch function.
use_logfmt – whether to use an internal "LogFMT with dynamic per-64-channel cast" format (10 bits).
zero_copy – whether the tensor is already copied into the RDMA buffer; should be used in cooperation with get_next_low_latency_combine_buffer.
async_finish – the current stream will not wait for the communication kernels to finish if set.
return_recv_hook – return a receiving hook if set. If set, the kernel will only issue the RDMA requests without actually receiving the data; you must call the returned hook to ensure the data's arrival. If you do not set this flag, the kernel will ensure the data's arrival.
out – the in-place output tensor; if set, the kernel will write the result to this tensor and return it directly.
combine_wait_recv_cost_stats – a cumulative tensor of time spent waiting to receive each token, for statistics, which should have shape [num_ranks, num_ranks] and be typed as torch.int64. This is useful for detecting and precisely localizing slow anomalies.
- Returns:
the reduced token tensor, with shape [num_combined_tokens, hidden] and type torch.bfloat16.
event: the event after executing the kernel (valid only if async_finish is set).
hook: the receiving hook function (valid only if return_recv_hook is set).
- Return type:
combined_x
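The usage sketch referenced above, continuing from the low-latency dispatch: expert_out is assumed to be the per-expert output of shape [num_local_experts, num_max_tokens * num_ranks, hidden] in torch.bfloat16, and topk_idx, topk_weights, and handle come from the dispatch call:

```python
combined_x, event, hook = buffer.low_latency_combine(
    expert_out,
    topk_idx,
    topk_weights,
    handle,
    async_finish=False,
    return_recv_hook=False,
)
# combined_x is [num_combined_tokens, hidden] bf16, reduced with topk_weights.
```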
- get_next_low_latency_combine_buffer(handle: object)#
Get the raw registered RDMA buffer tensor for the next low-latency combine, so that the next combine kernel can skip the copy; a zero-copy sketch follows below.
- Parameters:
handle – the communication handle given by the dispatch function.
- Returns:
the raw RDMA low-latency buffer as a BF16 PyTorch tensor with shape [num_local_experts, num_ranks * num_max_dispatch_tokens_per_rank, hidden]; you should fill this buffer yourself.
- Return type:
buffer
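The zero-copy sketch referenced above; expert_out, topk_idx, topk_weights, and handle are assumed from the preceding low-latency dispatch sketch:

```python
# Fill the registered RDMA buffer directly, then tell combine to skip the copy.
rdma_buf = buffer.get_next_low_latency_combine_buffer(handle)
rdma_buf.copy_(expert_out)  # or compute the expert outputs directly into this view
combined_x, event, hook = buffer.low_latency_combine(
    expert_out, topk_idx, topk_weights, handle,
    zero_copy=True,  # the data is already in the RDMA buffer
)
```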
- static get_low_latency_rdma_size_hint(
- num_max_dispatch_tokens_per_rank: int,
- hidden: int,
- num_ranks: int,
- num_experts: int,
Get a minimum size requirement for the RDMA buffer. The size calculation will be done with BF16.
- Parameters:
num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch; all the ranks must hold the same value.
hidden – the hidden dimension of each token.
num_ranks – the number of EP group ranks.
num_experts – the number of all experts.
- Returns:
the recommended RDMA buffer size.
- Return type:
size
- get_comm_stream() torch.Stream#
Get the communication stream.
- Returns:
the communication stream.
- Return type:
stream
- get_local_buffer_tensor(
- dtype: torch.dtype,
- size: Optional[torch.Size] = None,
- offset: int = 0,
- use_rdma_buffer: bool = False,
Get the raw buffer (slice supported) as a PyTorch tensor.
- Parameters:
dtype – the data type (PyTorch dtype) for the tensor.
size – the slice size (in elements) to get from the buffer.
offset – the offset of the beginning element.
use_rdma_buffer – whether to return the RDMA buffer.
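A small illustrative sketch (the element count is arbitrary, and the behavior when size is omitted is assumed to be a view of the whole buffer):

```python
import torch

# View the first 1024 float32 elements of the NVLink buffer.
nvl_view = buffer.get_local_buffer_tensor(torch.float32, size=torch.Size([1024]), offset=0)

# The same call with use_rdma_buffer=True returns a view of the RDMA buffer instead.
rdma_view = buffer.get_local_buffer_tensor(torch.float32, use_rdma_buffer=True)
```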
- static _unpack_bias(
- bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
- static _dtype_code(dtype: torch.dtype) int#
- static get_dispatch_config(num_ranks: int) uccl.ep.Config#
Get a recommended dispatch config.
- Parameters:
num_ranks – the number of ranks.
- Returns:
the recommended config.
- Return type:
config
- static get_combine_config(num_ranks: int) uccl.ep.Config#
Get a recommended combine config.
- Parameters:
num_ranks – the number of ranks.
- Returns:
the recommended config.
- Return type:
config
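A short sketch (assuming a buffer constructed as in the earlier sketch) showing how the recommended configs are typically obtained and threaded into the high-throughput dispatch/combine calls below; both config arguments are optional, so omitting them is also valid:

```python
num_ranks = buffer.group_size
dispatch_config = Buffer.get_dispatch_config(num_ranks)
combine_config = Buffer.get_combine_config(num_ranks)
# Pass these as the `config` argument of dispatch() and combine().
```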
- get_dispatch_layout(
- topk_idx: torch.Tensor,
- num_experts: int,
- previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
- async_finish: bool = False,
- allocate_on_comm_stream: bool = False,
Calculate the layout required for later communication.
- Parameters:
topk_idx – [num_tokens, num_topk], dtype must be torch.int64, the expert indices selected by each token; -1 means no selection.
num_experts – the number of experts.
previous_event – the event to wait for before actually executing the kernel.
async_finish – the current stream will not wait for the communication kernels to finish if set.
allocate_on_comm_stream – controls whether ownership of all the allocated tensors is on the communication stream.
- Returns:
[num_ranks] with torch.int, the number of tokens to be sent to each rank.
num_tokens_per_rdma_rank: [num_rdma_ranks] with torch.int, the number of tokens to be sent to each RDMA rank (with the same GPU index); None is returned for intranode settings.
num_tokens_per_expert: [num_experts] with torch.int, the number of tokens to be sent to each expert.
is_token_in_rank: [num_tokens, num_ranks] with torch.bool, whether a token is sent to a rank.
event: the event after executing the kernel (valid only if async_finish is set).
- Return type:
num_tokens_per_rank
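A sketch of the layout computation, assuming topk_idx of shape [num_tokens, num_topk] in torch.int64 (with -1 for unselected slots) and the num_experts value from the earlier sketches:

```python
(num_tokens_per_rank, num_tokens_per_rdma_rank, num_tokens_per_expert,
 is_token_in_rank, event) = buffer.get_dispatch_layout(topk_idx, num_experts)
# These layout tensors can be passed straight into dispatch() below.
```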
- dispatch(
- x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
- handle: Optional[Tuple] = None,
- num_tokens_per_rank: Optional[torch.Tensor] = None,
- num_tokens_per_rdma_rank: Optional[torch.Tensor] = None,
- is_token_in_rank: Optional[torch.Tensor] = None,
- num_tokens_per_expert: Optional[torch.Tensor] = None,
- topk_idx: Optional[torch.Tensor] = None,
- topk_weights: Optional[torch.Tensor] = None,
- expert_alignment: int = 1,
- num_worst_tokens: int = 0,
- config: Optional[uccl.ep.Config] = None,
- previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
- async_finish: bool = False,
- allocate_on_comm_stream: bool = False,
Dispatch tokens to different ranks; both intranode and internode settings are supported. Intranode kernels require that all ranks be visible via NVLink. Internode kernels require that the ranks within a node be visible via NVLink and that ranks with the same GPU index be visible via RDMA. A usage sketch follows the parameter and return descriptions below.
- Parameters:
x – torch.Tensor or tuple of torch.Tensor. For the first type, the shape must be [num_tokens, hidden] and the type must be torch.bfloat16; for the second type, the first element of the tuple must be shaped as [num_tokens, hidden] with type torch.float8_e4m3fn, and the second must be [num_tokens, hidden // 128] (requiring divisibility) with type torch.float.
handle – an optional communication handle; if set, the CPU will reuse the layout information to save some time.
num_tokens_per_rank – [num_ranks] with torch.int, the number of tokens to be sent to each rank.
num_tokens_per_rdma_rank – [num_rdma_ranks] with torch.int, the number of tokens to be sent to each RDMA rank (with the same GPU index); None for intranode settings.
is_token_in_rank – [num_tokens, num_ranks] with torch.bool, whether a token is sent to a rank.
num_tokens_per_expert – [num_experts] with torch.int, the number of tokens to be sent to each expert.
topk_idx – [num_tokens, num_topk] with torch.int64, the expert indices selected by each token; -1 means no selection.
topk_weights – [num_tokens, num_topk] with torch.float, the expert weights of each token to dispatch.
expert_alignment – align the number of tokens received by each local expert to this value.
num_worst_tokens – the worst-case number of tokens to receive; if specified, there will be no CPU sync and the kernel will be CUDA-graph compatible. Please also note that this flag is for intranode only.
config – the performance tuning config.
previous_event – the event to wait for before actually executing the kernel.
async_finish – the current stream will not wait for the communication kernels to finish if set.
allocate_on_comm_stream – controls whether ownership of all the allocated tensors is on the communication stream.
- Returns:
received tokens, with the same type and tuple layout as the input x, but with the number of tokens equal to the received token count.
recv_topk_idx: received expert indices.
recv_topk_weights: received expert weights.
num_recv_tokens_per_expert_list: a Python list shaped [num_local_experts], the received token count for each local expert, aligned to the input expert_alignment. If num_worst_tokens is specified, the list will be empty.
handle: the returned communication handle.
event: the event after executing the kernel (valid only if async_finish is set).
- Return type:
recv_x
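The usage sketch referenced above, feeding the layout tensors from get_dispatch_layout and the config from get_dispatch_config; x is assumed to be [num_tokens, hidden] in torch.bfloat16:

```python
(recv_x, recv_topk_idx, recv_topk_weights,
 num_recv_tokens_per_expert_list, handle, event) = buffer.dispatch(
    x,
    num_tokens_per_rank=num_tokens_per_rank,
    num_tokens_per_rdma_rank=num_tokens_per_rdma_rank,
    is_token_in_rank=is_token_in_rank,
    num_tokens_per_expert=num_tokens_per_expert,
    topk_idx=topk_idx,
    topk_weights=topk_weights,
    config=dispatch_config,
)
# recv_x now holds this rank's share of tokens, grouped for its local experts.
```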
- combine(
- x: torch.Tensor,
- handle: Tuple,
- topk_weights: Optional[torch.Tensor] = None,
- bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = None,
- config: Optional[uccl.ep.Config] = None,
- previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
- async_finish: bool = False,
- allocate_on_comm_stream: bool = False,
Combine (reduce) tokens (addition without weights) from different ranks; both intranode and internode settings are supported. Intranode kernels require that all ranks be visible via NVLink. Internode kernels require that the ranks within a node be visible via NVLink and that ranks with the same GPU index be visible via RDMA. A usage sketch follows the parameter and return descriptions below.
- Parameters:
x – [num_tokens, hidden] with torch.bfloat16, the tokens to send for reduction to their original ranks.
handle – a required communication handle; you can obtain this from the dispatch function.
topk_weights – [num_tokens, num_topk] with torch.float, the tokens' top-k weights for reduction to their original ranks.
config – the performance tuning config.
previous_event – the event to wait for before actually executing the kernel.
async_finish – the current stream will not wait for the communication kernels to finish if set.
allocate_on_comm_stream – controls whether ownership of all the allocated tensors is on the communication stream.
- Returns:
the reduced tokens from their dispatched ranks.
recv_topk_weights: the reduced top-k weights from their dispatch ranks.
event: the event after executing the kernel (valid only if async_finish is set).
- Return type:
recv_x
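The usage sketch referenced above, continuing from dispatch: expert_out is assumed to be the expert outputs with the same token count as recv_x, and handle and recv_topk_weights come from the dispatch call:

```python
combined_x, recv_topk_weights_out, event = buffer.combine(
    expert_out,
    handle,
    topk_weights=recv_topk_weights,
    config=combine_config,
)
# combined_x holds the reduced tokens back on their original ranks.
```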
- internode_dispatch(
- x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
- handle: Optional[Tuple] = None,
- num_tokens_per_rank: Optional[torch.Tensor] = None,
- num_tokens_per_rdma_rank: Optional[torch.Tensor] = None,
- is_token_in_rank: Optional[torch.Tensor] = None,
- num_tokens_per_expert: Optional[torch.Tensor] = None,
- topk_idx: Optional[torch.Tensor] = None,
- topk_weights: Optional[torch.Tensor] = None,
- expert_alignment: int = 1,
- num_worst_tokens: int = 0,
- config: Optional[uccl.ep.Config] = None,
- previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
- async_finish: bool = False,
- allocate_on_comm_stream: bool = False,
Internode dispatch implementation. For more details, please refer to the dispatch docs. Normally, you should not call this function directly.
- internode_combine(
- x: torch.Tensor,
- handle: Union[tuple, list],
- topk_weights: Optional[torch.Tensor] = None,
- bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = None,
- config: Optional[uccl.ep.Config] = None,
- previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
- async_finish: bool = False,
- allocate_on_comm_stream: bool = False,
Internode combine implementation. For more details, please refer to the combine docs. Normally, you should not call this function directly.
- clean_low_latency_buffer(
- num_max_dispatch_tokens_per_rank: int,
- hidden: int,
- num_experts: int,
Low-latency kernels require part of the buffer to be zero-initialized, so it is vital to clean the buffer whenever it becomes dirty. For example, after running the normal dispatch/combine, you must run this function before executing any low-latency kernel; a short sketch follows the parameter list below.
- Parameters:
num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch; all the ranks must hold the same value.
hidden – the hidden dimension of each token.
num_experts – the number of all experts.
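The sketch referenced above, reusing the dimensions from the construction sketch; run it after a normal dispatch/combine pass and before any low-latency kernel on the same Buffer instance:

```python
buffer.clean_low_latency_buffer(
    num_max_dispatch_tokens_per_rank=num_max_tokens,
    hidden=hidden,
    num_experts=num_experts,
)
```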