nemo_automodel.components.moe.uccl_ep._buffer#

Module Contents#

Classes#

Buffer

The core expert-parallel (EP) communication buffers for Mixture of Experts (MoE) models, supporting: high-throughput intranode all-to-all (dispatch and combine, using NVLink); high-throughput internode all-to-all (dispatch and combine, using RDMA and NVLink); and low-latency all-to-all (dispatch and combine, using RDMA).

API#

class nemo_automodel.components.moe.uccl_ep._buffer.Buffer(
group: torch.distributed.ProcessGroup,
num_nvl_bytes: int = 0,
num_rdma_bytes: int = 0,
low_latency_mode: bool = False,
num_qps_per_rank: int = 24,
allow_nvlink_for_low_latency_mode: bool = True,
allow_mnnvl: bool = False,
explicitly_destroy: bool = False,
is_intranode: bool = False,
)#

The core expert-parallel (EP) communication buffers for Mixture of Experts (MoE) models, which support:

  • high-throughput intranode all-to-all (dispatch and combine, using NVLink)

  • high-throughput internode all-to-all (dispatch and combine, using RDMA and NVLink)

  • low-latency all-to-all (dispatch and combine, using RDMA)

Attributes:
  • num_sms – the number of SMs used in the high-throughput kernels.

  • rank – the local rank number.

  • group_size – the number of ranks in the group.

  • group – the communication group.

  • num_nvl_bytes – the buffer size for intranode NVLink communication.

  • num_rdma_bytes – the buffer size for internode (also for intranode with low-latency mode) RDMA communication.

  • runtime – the C++ runtime.

Initialization

Initialize the communication buffer.

Parameters:
  • group – the communication group.

  • num_nvl_bytes – the buffer size for intranode NVLink communication.

  • num_rdma_bytes – the buffer size for internode (also for intranode with low-latency mode) RDMA communication.

  • low_latency_mode – whether to enable low-latency mode.

  • num_qps_per_rank – the number of QPs for RDMA; low-latency mode requires this number to equal the number of local experts.

  • allow_nvlink_for_low_latency_mode – whether to allow NVLink traffic in low-latency mode; note that this is somewhat incompatible with hook-based overlapping. Warning: PCIe connections may lead to errors due to memory-ordering issues, so make sure all connections are via NVLink.

  • allow_mnnvl – whether to allow MNNVL.

  • explicitly_destroy – If this flag is set to True, you need to explicitly call destroy() to release resources; otherwise, the resources will be released by the destructor. Note: Releasing resources in the destructor may cause Python’s exception handling process to hang.
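
A minimal construction sketch follows; the hidden size, expert count, and per-rank token cap are illustrative assumptions, and the default (world) process group is used here for simplicity:

```python
import torch.distributed as dist

from nemo_automodel.components.moe.uccl_ep._buffer import Buffer

# Illustrative configuration (assumed values, not defaults of this module).
hidden = 4096
num_experts = 64
num_max_dispatch_tokens_per_rank = 128
num_ranks = dist.get_world_size()

# Size the RDMA region for low-latency mode using the static size hint.
num_rdma_bytes = Buffer.get_low_latency_rdma_size_hint(
    num_max_dispatch_tokens_per_rank, hidden, num_ranks, num_experts
)

buffer = Buffer(
    group=dist.group.WORLD,
    num_rdma_bytes=num_rdma_bytes,
    low_latency_mode=True,
    # Low-latency mode requires one QP per local expert.
    num_qps_per_rank=num_experts // num_ranks,
)
```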

num_sms: int#

20

_ll_compute_stream_ptr(device: torch.device)#

Return the current CUDA stream pointer for low-latency runtime calls.

reset_rdma_buffer()#

Reset the RDMA buffer. This is useful when you want to reuse the RDMA buffer for another run.

connect_atomic_buffer(proxy: uccl.ep.UcclProxy)#
destroy()#

Destroy the C++ runtime and release resources.

static is_sm90_compiled()#
static set_num_sms(new_num_sms: int) None#

Set the number of SMs to use in high-throughput kernels.

Parameters:

new_num_sms – the new number to be set.
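
For example, to raise the SM count above the default of 20 (the value 24 here is an arbitrary illustration):

```python
# Use 24 SMs for the high-throughput dispatch/combine kernels instead of the default 20.
Buffer.set_num_sms(24)
```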

static capture() nemo_automodel.components.moe.uccl_ep._utils.EventOverlap#

Capture a CUDA event on the current stream, i.e. torch.cuda.current_stream().

Returns:

the captured event.

Return type:

event
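
A minimal sketch of how the captured event is typically used; the returned EventOverlap can be passed as previous_event to later layout/dispatch/combine calls:

```python
# Record the kernels already queued on torch.cuda.current_stream(); a later
# communication call that receives this as previous_event will wait for them.
previous_event = Buffer.capture()
```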

low_latency_dispatch(
x: torch.Tensor,
topk_idx: torch.Tensor,
num_max_dispatch_tokens_per_rank: int,
num_experts: int,
cumulative_local_expert_recv_stats: Optional[torch.Tensor] = None,
dispatch_wait_recv_cost_stats: Optional[torch.Tensor] = None,
use_fp8: bool = True,
round_scale: bool = False,
use_ue8m0: bool = False,
async_finish: bool = False,
return_recv_hook: bool = False,
) Tuple[Tuple[torch.Tensor, torch.Tensor], torch.Tensor, Tuple, nemo_automodel.components.moe.uccl_ep._utils.EventOverlap, Callable]#

A low-latency implementation for dispatching with IBGDA. This kernel requires that all ranks (whether intranode or internode) are visible via RDMA (specifically, IBGDA must be enabled). Warning: because there are only two buffers and the returned tensors reuse them, you cannot hold more than two low-latency kernels’ result tensors at any single moment.

Parameters:
  • x – torch.Tensor with torch.bfloat16, shaped as [num_tokens, hidden], only several hidden shapes are supported. The number of tokens to be dispatched must be less than num_max_dispatch_tokens_per_rank.

  • topk_idx – torch.Tensor with torch.int64, shaped as [num_tokens, num_topk], only several top-k shapes are supported. -1 indices (not selecting any expert) are supported.

  • num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch, all the ranks must hold the same value.

  • num_experts – the number of all experts.

  • cumulative_local_expert_recv_stats – a cumulative expert count tensor for statistics, which should have shape [num_local_experts] and be typed as torch.int. This is useful for online service EP load balance monitoring.

  • dispatch_wait_recv_cost_stats – a cumulative per-token wait-to-receive time tensor for statistics, which should have shape [num_ranks, num_ranks] and be typed as torch.int64. This is useful for detecting and precisely localizing slow anomalies.

  • use_fp8 – whether to enable FP8 casting; if enabled, the received data will be a tuple of an FP8 tensor and its scaling factors.

  • round_scale – whether to round the scaling factors to powers of 2.

  • use_ue8m0 – whether to use UE8M0 as the scaling-factor format (available only with round_scale=True).

  • async_finish – the current stream will not wait for the communication kernels to be finished if set.

  • return_recv_hook – return a receiving hook if set. If set, the kernel only issues the RDMA requests without actually receiving the data; you must call the returned hook to ensure the data has arrived. If you do not set this flag, the kernel itself ensures the data’s arrival.

Returns:
  • recv_x – a tensor or tuple with received tokens for each expert. With use_fp8=True: the first element is a torch.Tensor shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.float8_e4m3fn; the second element contains the corresponding scales, shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden // 128] with torch.float if use_ue8m0=False, or packed and shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden // 512] with type torch.int if use_ue8m0=True. Notice that the last two dimensions of the scaling tensors are in column-major order for TMA compatibility. With use_fp8=False, the result is a tensor shaped as [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.bfloat16. Moreover, not all tokens are valid; only some of the num_max_dispatch_tokens_per_rank * num_ranks slots are, as the CPU-side received count is not synchronized with the GPU (synchronizing would also break CUDA-graph compatibility).

  • recv_count – a tensor shaped [num_local_experts] with type torch.int, indicating how many tokens each expert receives. As mentioned above, not all tokens are valid in recv_x.

  • handle – the communication handle to be used in the low_latency_combine function.

  • event – the event after executing the kernel (valid only if async_finish is set).

  • hook – the receiving hook function (valid only if return_recv_hook is set).
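
A usage sketch under assumed shapes (x, topk_idx, and the counts below are illustrative; buffer was created with low_latency_mode=True):

```python
# x: [num_tokens, hidden] in bf16, topk_idx: [num_tokens, num_topk] in int64.
recv_x, recv_count, handle, event, hook = buffer.low_latency_dispatch(
    x,
    topk_idx,
    num_max_dispatch_tokens_per_rank=128,
    num_experts=num_experts,
    use_fp8=True,
)
# With use_fp8=True, recv_x is a (fp8_tensor, scales) tuple; recv_count[e] gives
# the number of valid token slots received by local expert e.
```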

low_latency_combine(
x: torch.Tensor,
topk_idx: torch.Tensor,
topk_weights: torch.Tensor,
handle: tuple,
use_logfmt: bool = False,
zero_copy: bool = False,
async_finish: bool = False,
return_recv_hook: bool = False,
out: Optional[torch.Tensor] = None,
combine_wait_recv_cost_stats: Optional[torch.Tensor] = None,
) Tuple[torch.Tensor, nemo_automodel.components.moe.uccl_ep._utils.EventOverlap, Callable]#

A low-latency implementation for combining tokens (reduce with weights) with IBGDA. This kernel requires that all ranks (whether intranode or internode) are visible via RDMA (specifically, IBGDA must be enabled). Warning: because there are only two buffers and the returned tensors reuse them, you cannot hold more than two low-latency kernels’ result tensors at any single moment.

Parameters:
  • x – [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] with torch.bfloat16, the locally computed tokens to be sent back to their original ranks and reduced.

  • topk_idx – [num_combined_tokens, num_topk] with torch.int64, the expert indices selected by the dispatched tokens. -1 indices (not selecting any expert) are supported. Note that num_combined_tokens equals the number of dispatched tokens.

  • topk_weights – [num_combined_tokens, num_topk] with torch.float, the expert weights selected by the dispatched tokens. The received tokens will be reduced with the weights in this tensor.

  • handle – the communication handle given by the dispatch function.

  • use_logfmt – whether to use an internal “LogFMT with dynamic per-64-channel cast” format (10 bits).

  • zero_copy – whether the tensor is already copied into the RDMA buffer; should be used together with get_next_low_latency_combine_buffer.

  • async_finish – the current stream will not wait for the communication kernels to be finished if set.

  • return_recv_hook – return a receiving hook if set. If set, the kernel only issues the RDMA requests without actually receiving the data; you must call the returned hook to ensure the data has arrived. If you do not set this flag, the kernel itself ensures the data’s arrival.

  • out – the in-place output tensor; if set, the kernel will write the result to this tensor and return it directly.

  • combine_wait_recv_cost_stats – a cumulative per-token wait-to-receive time tensor for statistics, which should have shape [num_ranks, num_ranks] and be typed as torch.int64. This is useful for detecting and precisely localizing slow anomalies.

Returns:
  • combined_x – the reduced token tensor, with shape [num_combined_tokens, hidden] and type torch.bfloat16.

  • event – the event after executing the kernel (valid only if async_finish is set).

  • hook – the receiving hook function (valid only if return_recv_hook is set).
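
A sketch of the matching combine step; expert_out names the per-expert outputs computed from the dispatched tokens and is an assumption of this example:

```python
# expert_out: [num_local_experts, num_max_dispatch_tokens_per_rank * num_ranks, hidden] in bf16.
combined_x, event, hook = buffer.low_latency_combine(
    expert_out,
    topk_idx,
    topk_weights,
    handle,  # the handle returned by low_latency_dispatch
)
# combined_x: [num_combined_tokens, hidden] in bf16, reduced with topk_weights.
```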

get_next_low_latency_combine_buffer(handle: object)#

Get the raw registered RDMA buffer tensor for the next low-latency combine, so that the next combine kernel can skip the copy.

Parameters:

handle – the communication handle given by the dispatch function.

Returns:

the raw RDMA low-latency buffer as a BF16 PyTorch tensor with shape [num_local_experts, num_ranks * num_max_dispatch_tokens_per_rank, hidden]; you should fill this buffer yourself.

Return type:

buffer
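
A hedged zero-copy sketch (expert_out is illustrative): the expert outputs are written directly into the registered RDMA buffer so the combine kernel can skip its own copy.

```python
# Obtain the raw BF16 RDMA buffer for the next combine and fill it in place.
rdma_buffer = buffer.get_next_low_latency_combine_buffer(handle)
rdma_buffer.copy_(expert_out)

# With zero_copy=True the kernel reads from the pre-filled RDMA buffer.
combined_x, event, hook = buffer.low_latency_combine(
    expert_out, topk_idx, topk_weights, handle, zero_copy=True,
)
```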

static get_low_latency_rdma_size_hint(
num_max_dispatch_tokens_per_rank: int,
hidden: int,
num_ranks: int,
num_experts: int,
) int#

Get a minimum size requirement for the RDMA buffer. The size calculation will be done with BF16.

Parameters:
  • num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch, all the ranks must hold the same value.

  • hidden – the hidden dimension of each token.

  • num_ranks – the number of EP group ranks.

  • num_experts – the number of all experts.

Returns:

the recommended RDMA buffer size.

Return type:

size

get_comm_stream() torch.Stream#

Get the communication stream.

Returns:

the communication stream.

Return type:

stream

get_local_buffer_tensor(
dtype: torch.dtype,
size: Optional[torch.Size] = None,
offset: int = 0,
use_rdma_buffer: bool = False,
) torch.Tensor#

Get the raw buffer (slice supported) as a PyTorch tensor.

Parameters:
  • dtype – the data type (PyTorch dtype) for the tensor.

  • size – the slice size (in elements) to get from the buffer.

  • offset – the offset of the beginning element.

  • use_rdma_buffer – whether to return the RDMA buffer.
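
A small sketch (the slice size is arbitrary) that views part of the local NVLink buffer as a bf16 tensor without copying:

```python
import torch

# View the first 1024 bf16 elements of the local NVLink buffer.
nvl_view = buffer.get_local_buffer_tensor(torch.bfloat16, size=torch.Size([1024]), offset=0)
```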

static _unpack_bias(
bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
)#
static _dtype_code(dtype: torch.dtype) int#
static get_dispatch_config(num_ranks: int) uccl.ep.Config#

Get a recommended dispatch config.

Parameters:

num_ranks – the number of ranks.

Returns:

the recommended config.

Return type:

config

static get_combine_config(num_ranks: int) uccl.ep.Config#

Get a recommended combine config.

Parameters:

num_ranks – the number of ranks.

Returns:

the recommended config.

Return type:

config

get_dispatch_layout(
topk_idx: torch.Tensor,
num_experts: int,
previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
async_finish: bool = False,
allocate_on_comm_stream: bool = False,
) Tuple[torch.Tensor, Optional[torch.Tensor], torch.Tensor, torch.Tensor, nemo_automodel.components.moe.uccl_ep._utils.EventOverlap]#

Calculate the layout required for later communication.

Parameters:
  • topk_idx – [num_tokens, num_topk], dtype must be torch.int64, the expert indices selected by each token, -1 means no selections.

  • num_experts – the number of experts.

  • previous_event – the event to wait before actually executing the kernel.

  • async_finish – the current stream will not wait for the communication kernels to be finished if set.

  • allocate_on_comm_stream – control whether the allocated tensors’ ownership is placed on the communication stream.

Returns:
  • num_tokens_per_rank – [num_ranks] with torch.int, the number of tokens to be sent to each rank.

  • num_tokens_per_rdma_rank – [num_rdma_ranks] with torch.int, the number of tokens to be sent to each RDMA rank (with the same GPU index); None for intranode settings.

  • num_tokens_per_expert – [num_experts] with torch.int, the number of tokens to be sent to each expert.

  • is_token_in_rank – [num_tokens, num_ranks] with torch.bool, whether a token is sent to a rank.

  • event – the event after executing the kernel (valid only if async_finish is set).
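
A sketch that computes the routing layout for a later dispatch call; topk_idx is assumed to come from the router and previous_event from Buffer.capture():

```python
(num_tokens_per_rank, num_tokens_per_rdma_rank, num_tokens_per_expert,
 is_token_in_rank, event) = buffer.get_dispatch_layout(
    topk_idx,
    num_experts,
    previous_event=previous_event,
    async_finish=True,
)
# These outputs are passed directly to dispatch(), as shown below.
```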

dispatch(
x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
handle: Optional[Tuple] = None,
num_tokens_per_rank: Optional[torch.Tensor] = None,
num_tokens_per_rdma_rank: Optional[torch.Tensor] = None,
is_token_in_rank: Optional[torch.Tensor] = None,
num_tokens_per_expert: Optional[torch.Tensor] = None,
topk_idx: Optional[torch.Tensor] = None,
topk_weights: Optional[torch.Tensor] = None,
expert_alignment: int = 1,
num_worst_tokens: int = 0,
config: Optional[uccl.ep.Config] = None,
previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
async_finish: bool = False,
allocate_on_comm_stream: bool = False,
) Tuple[Union[Tuple[torch.Tensor, torch.Tensor], torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor], List[int], Tuple, nemo_automodel.components.moe.uccl_ep._utils.EventOverlap]#

Dispatch tokens to different ranks; both intranode and internode settings are supported. Intranode kernels require that all ranks be visible via NVLink. Internode kernels require that the ranks within a node be visible via NVLink, and that ranks with the same GPU index be visible via RDMA.

Parameters:
  • x – torch.Tensor or tuple of torch.Tensor, for the first type, the shape must be [num_tokens, hidden], and type must be torch.bfloat16; for the second type, the first element of the tuple must be shaped as [num_tokens, hidden] with type torch.float8_e4m3fn, the second must be [num_tokens, hidden // 128] (requiring divisible) with type torch.float.

  • handle – an optional communication handle, if set, the CPU will reuse the layout information to save some time.

  • num_tokens_per_rank – [num_ranks] with torch.int, the number of tokens to be sent to each rank.

  • num_tokens_per_rdma_rank – [num_rdma_ranks] with torch.int, the number of tokens to be sent to each RDMA rank (with the same GPU index); None for intranode settings.

  • is_token_in_rank – [num_tokens, num_ranks] with torch.bool, whether a token is sent to a rank.

  • num_tokens_per_expert – [num_experts] with torch.int, the number of tokens to be sent to each expert.

  • topk_idx – [num_tokens, num_topk] with torch.int64, the expert indices selected by each token, -1 means no selections.

  • topk_weights – [num_tokens, num_topk] with torch.float, the expert weights of each token to dispatch.

  • expert_alignment – align the number of tokens received by each local expert to this variable.

  • num_worst_tokens – the worst-case number of tokens to receive; if specified, there will be no CPU sync and the kernel will be CUDA-graph compatible. Note that this flag is for intranode use only.

  • config – the performance tuning config.

  • previous_event – the event to wait before actually executing the kernel.

  • async_finish – the current stream will not wait for the communication kernels to be finished if set.

  • allocate_on_comm_stream – control whether the allocated tensors’ ownership is placed on the communication stream.

Returns:
  • recv_x – received tokens, with the same type and tuple structure as the input x, but with the number of tokens equal to the received token count.

  • recv_topk_idx – received expert indices.

  • recv_topk_weights – received expert weights.

  • num_recv_tokens_per_expert_list – a Python list shaped [num_local_experts], the received token count for each local expert, aligned to the input expert_alignment. If num_worst_tokens is specified, the list will be empty.

  • handle – the returned communication handle.

  • event – the event after executing the kernel (valid only if async_finish is set).
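
A sketch of a normal (high-throughput) dispatch consuming the layout computed above; x, topk_idx, and topk_weights are assumed to come from the router:

```python
(recv_x, recv_topk_idx, recv_topk_weights,
 num_recv_tokens_per_expert_list, handle, event) = buffer.dispatch(
    x,
    num_tokens_per_rank=num_tokens_per_rank,
    num_tokens_per_rdma_rank=num_tokens_per_rdma_rank,
    is_token_in_rank=is_token_in_rank,
    num_tokens_per_expert=num_tokens_per_expert,
    topk_idx=topk_idx,
    topk_weights=topk_weights,
)
# recv_x feeds the local experts; handle is reused later by combine().
```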

combine(
x: torch.Tensor,
handle: Tuple,
topk_weights: Optional[torch.Tensor] = None,
bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = None,
config: Optional[uccl.ep.Config] = None,
previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
async_finish: bool = False,
allocate_on_comm_stream: bool = False,
) Tuple[torch.Tensor, Optional[torch.Tensor], nemo_automodel.components.moe.uccl_ep._utils.EventOverlap]#

Combine (reduce) tokens (addition without weights) from different ranks; both intranode and internode settings are supported. Intranode kernels require that all ranks be visible via NVLink. Internode kernels require that the ranks within a node be visible via NVLink, and that ranks with the same GPU index be visible via RDMA.

Parameters:
  • x – [num_tokens, hidden] with torch.bfloat16, the tokens to send for reduction to their original ranks.

  • handle – a required communication handle; you can obtain it from the dispatch function.

  • topk_weights – [num_tokens, num_topk] with torch.float, the tokens’ top-k weights for reduction to their original ranks.

  • config – the performance tuning config.

  • previous_event – the event to wait before actually executing the kernel.

  • async_finish – the current stream will not wait for the communication kernels to be finished if set.

  • allocate_on_comm_stream – control whether the allocated tensors’ ownership is placed on the communication stream.

Returns:
  • recv_x – the reduced tokens from their dispatched ranks.

  • recv_topk_weights – the reduced top-k weights from their dispatched ranks.

  • event – the event after executing the kernel (valid only if async_finish is set).
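
A sketch of the matching combine; expert_out is the (assumed) per-token expert output computed from recv_x, and handle comes from the dispatch call above:

```python
combined_x, combined_topk_weights, event = buffer.combine(
    expert_out,
    handle,
    topk_weights=recv_topk_weights,
)
# combined_x holds the tokens reduced back on their original ranks.
```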

internode_dispatch(
x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
handle: Optional[Tuple] = None,
num_tokens_per_rank: Optional[torch.Tensor] = None,
num_tokens_per_rdma_rank: Optional[torch.Tensor] = None,
is_token_in_rank: Optional[torch.Tensor] = None,
num_tokens_per_expert: Optional[torch.Tensor] = None,
topk_idx: Optional[torch.Tensor] = None,
topk_weights: Optional[torch.Tensor] = None,
expert_alignment: int = 1,
num_worst_tokens: int = 0,
config: Optional[uccl.ep.Config] = None,
previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
async_finish: bool = False,
allocate_on_comm_stream: bool = False,
) Tuple[Union[Tuple[torch.Tensor, torch.Tensor], torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor], List[int], Tuple, nemo_automodel.components.moe.uccl_ep._utils.EventOverlap]#

Internode dispatch implementation. For more details, please refer to the dispatch docs. Normally, you should not call this function directly.

internode_combine(
x: torch.Tensor,
handle: Union[tuple, list],
topk_weights: Optional[torch.Tensor] = None,
bias: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = None,
config: Optional[uccl.ep.Config] = None,
previous_event: Optional[nemo_automodel.components.moe.uccl_ep._utils.EventOverlap] = None,
async_finish: bool = False,
allocate_on_comm_stream: bool = False,
) Tuple[torch.Tensor, Optional[torch.Tensor], nemo_automodel.components.moe.uccl_ep._utils.EventOverlap]#

Internode combine implementation. For more details, please refer to the combine docs. Normally, you should not call this function directly.

clean_low_latency_buffer(
num_max_dispatch_tokens_per_rank: int,
hidden: int,
num_experts: int,
) None#

Low-latency kernels require part of the buffer to be zero-initialized, so it is vital to clean the buffer whenever it is dirty. For example, after running the normal dispatch/combine, you must run this function before executing any low-latency kernel.

Parameters:
  • num_max_dispatch_tokens_per_rank – the maximum number of tokens to dispatch, all the ranks must hold the same value.

  • hidden – the hidden dimension of each token.

  • num_experts – the number of all experts.
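
For example, when switching from the normal dispatch/combine path back to the low-latency kernels (the values below are the same illustrative ones used earlier):

```python
# Zero the low-latency region after normal dispatch/combine have dirtied the buffer.
buffer.clean_low_latency_buffer(
    num_max_dispatch_tokens_per_rank=128,
    hidden=4096,
    num_experts=64,
)
```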