nemo_automodel.components.distributed.cp_utils#
Module Contents#
Functions#
| Function | Description |
|---|---|
| _build_position_ids | Add position_ids to the batch only if they are missing. |
| get_train_context | Create a train context. |
| create_context_parallel_ctx | Create a context parallel context. |
| attach_context_parallel_hooks | Attach forward pre-hooks to self_attn modules to fix attention masks for context parallelism. |
| attach_cp_sdpa_hooks | Inject CP-aware SDPA into self_attn modules for compile + CP>1 correctness. |
| attach_linear_attn_position_hooks | Forward pre-hook on decoder layers to pass position_ids to linear_attn. |
| make_cp_batch_and_ctx | Build a CP context manager and shard a batch; a no-op if the input device_mesh is None or the context_parallel submesh has size 1. |
| make_cp_batch_for_te | Build a CP batch for Transformer Engine using THD format. |
API#
- nemo_automodel.components.distributed.cp_utils._build_position_ids(batch, device)#
Add position_ids to the batch only if they are missing.
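The behavior amounts to the following sketch (a hypothetical reimplementation, not the library's exact code):

```python
import torch

def _build_position_ids_sketch(batch: dict, device: torch.device) -> dict:
    # Only add position_ids when the batch does not already carry them.
    if "position_ids" not in batch:
        seq_len = batch["input_ids"].shape[1]
        # [0, 1, ..., seq_len - 1], broadcast over the batch dimension.
        batch["position_ids"] = torch.arange(seq_len, device=device).unsqueeze(0)
    return batch
```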
- nemo_automodel.components.distributed.cp_utils.get_train_context(
- enable_loss_parallel: bool,
- enable_compiled_autograd: bool,
- cp_context=None,
)#
Create a train context.
- Parameters:
enable_loss_parallel (bool) – Whether to enable loss parallelism.
enable_compiled_autograd (bool) – Whether to enable compiled autograd.
cp_context (optional) – A context-parallel context manager to enter inside the train context, if provided.
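A minimal usage sketch, assuming the returned object is a context manager that can be entered directly and that `model`, `loss_fn`, and `batch` already exist:

```python
# Hypothetical training-step usage; pass the context returned by
# create_context_parallel_ctx as cp_context to enable context parallelism.
train_ctx = get_train_context(
    enable_loss_parallel=False,
    enable_compiled_autograd=False,
    cp_context=None,
)
with train_ctx:
    loss = loss_fn(model(**batch))
    loss.backward()
```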
- nemo_automodel.components.distributed.cp_utils.create_context_parallel_ctx(
- cp_mesh: torch.distributed.device_mesh.DeviceMesh,
- cp_buffers: List[torch.Tensor],
- cp_seq_dims: List[int],
- cp_no_restore_buffers: Set[torch.Tensor],
- cp_rotate_method: Optional[str] = None,
)#
Create a context parallel context.
- Parameters:
cp_mesh (DeviceMesh) – The device mesh for context parallel.
cp_buffers (List[torch.Tensor]) – The buffers for context parallel.
cp_seq_dims (List[int]) – The sequence dimensions for context parallel.
cp_no_restore_buffers (Set[torch.Tensor]) – The no restore buffers for context parallel.
cp_rotate_method (Optional[str]) – The rotation method for context parallel, such as “allgather” or “alltoall”.
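A hedged usage sketch (the mesh construction, buffer choice, and `model` call are illustrative assumptions):

```python
import torch
from torch.distributed.device_mesh import init_device_mesh

# Illustrative: a 1-D mesh whose single dim is used for context parallelism.
cp_mesh = init_device_mesh("cuda", (2,), mesh_dim_names=("context_parallel",))

input_ids = batch["input_ids"]        # [B, S]
position_ids = batch["position_ids"]  # [B, S]

cp_ctx = create_context_parallel_ctx(
    cp_mesh=cp_mesh,
    cp_buffers=[input_ids, position_ids],  # tensors to shard on their seq dim
    cp_seq_dims=[1, 1],                    # sequence dim of each buffer
    cp_no_restore_buffers={input_ids},     # left sharded after the context exits
    cp_rotate_method="allgather",
)
with cp_ctx:
    logits = model(input_ids=input_ids, position_ids=position_ids)
```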
- nemo_automodel.components.distributed.cp_utils.attach_context_parallel_hooks(model: torch.nn.Module)#
Attach forward pre-hooks to self_attn modules to fix attention masks for context parallelism.
Context parallelism shards Q/K/V on the sequence dimension as DTensors, so explicit 4D attention masks would have mismatched shapes. This function registers a hook on every self_attn sub-module that strips the attention_mask kwarg and sets is_causal=True instead, letting SDPA handle causal masking internally. Based on accelerate.big_modeling._attach_context_parallel_hooks.
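Conceptually, the registered hook behaves like this sketch (hypothetical code; the hook name and module matching are assumptions):

```python
import torch.nn as nn

def _strip_mask_pre_hook(module: nn.Module, args: tuple, kwargs: dict):
    # The explicit 4D mask no longer matches the sequence-sharded Q/K/V,
    # so drop it and let SDPA apply causal masking internally.
    if "attention_mask" in kwargs:
        kwargs["attention_mask"] = None
    kwargs["is_causal"] = True
    return args, kwargs

def attach_hooks_sketch(model: nn.Module) -> None:
    for name, module in model.named_modules():
        if name.endswith("self_attn"):
            # with_kwargs=True lets the hook rewrite keyword arguments.
            module.register_forward_pre_hook(_strip_mask_pre_hook, with_kwargs=True)
```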
- nemo_automodel.components.distributed.cp_utils.attach_cp_sdpa_hooks(model: torch.nn.Module, cp_mesh) → None#
Inject CP-aware SDPA into self_attn modules for compile + CP>1 correctness.
Problem: when per-layer torch.compile is active, Dynamo traces through the decoder layer including Q/K/V projections. At the F.scaled_dot_product_attention call site, Q/K/V are already local tensors (DTensor metadata was never propagated through the compiled graph). The DTensor SDPA dispatch — which triggers the CP allgather — never fires, so each rank silently attends only to its local sequence shard.
Fix: swap F.scaled_dot_product_attention with a @torch._dynamo.disable wrapper for the duration of each self_attn forward. Dynamo sees the disabled function and creates a graph break there, so:
- Everything before (Q/K/V proj + RoPE) is compiled and fused.
- The disabled wrapper runs eagerly: it re-wraps local Q/K/V as DTensors with Shard(2) on the CP mesh so the DTensor SDPA dispatch fires the allgather.
- Everything after (O proj + residual + MLP) is compiled and fused.
Seq dim at the SDPA call is 2: tensors are [B, nH, S/cp_size, D] after HF reshape.
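A simplified sketch of the disabled wrapper (illustrative; the real hook also swaps F.scaled_dot_product_attention in and out around each self_attn forward):

```python
import torch
import torch.nn.functional as F
from torch.distributed.tensor import DTensor, Shard

@torch._dynamo.disable
def _cp_sdpa_sketch(query, key, value, cp_mesh, **kwargs):
    # Re-wrap the local shards as DTensors sharded on the sequence dim.
    # After the HF reshape the layout is [B, nH, S/cp_size, D], so seq dim is 2.
    placements = [Shard(2)]
    q = DTensor.from_local(query, cp_mesh, placements)
    k = DTensor.from_local(key, cp_mesh, placements)
    v = DTensor.from_local(value, cp_mesh, placements)
    # The DTensor dispatch of SDPA is what triggers the CP allgather.
    out = F.scaled_dot_product_attention(q, k, v, **kwargs)
    return out.to_local()
```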
- nemo_automodel.components.distributed.cp_utils.attach_linear_attn_position_hooks(model: torch.nn.Module)#
Forward pre-hook on decoder layers to pass position_ids to linear_attn.
HF Qwen3.5 decoder layers don’t pass position_ids to linear_attn, but CPAwareGatedDeltaNet needs them under CP to undo load-balanced sharding. This hook captures position_ids from the decoder layer’s kwargs and stores it on the linear_attn module so its forward can read it.
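A sketch of the capture hook (the stashed attribute name is a hypothetical placeholder):

```python
import torch.nn as nn

def _capture_position_ids(layer: nn.Module, args: tuple, kwargs: dict):
    # Stash the decoder layer's position_ids where linear_attn's forward
    # can read them to undo load-balanced CP sharding.
    pos = kwargs.get("position_ids")
    if pos is not None:
        layer.linear_attn._cp_position_ids = pos  # hypothetical attribute
    return args, kwargs

def attach_linear_attn_hooks_sketch(model: nn.Module) -> None:
    for module in model.modules():
        if hasattr(module, "linear_attn"):
            module.register_forward_pre_hook(_capture_position_ids, with_kwargs=True)
```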
- nemo_automodel.components.distributed.cp_utils.make_cp_batch_and_ctx(
- device_mesh,
- batch,
- loss_mask=None,
- use_te: bool = False,
- padding_token_id: int = 0,
- num_chunks: int = 1,
- seq_lens_padding_value: int = -1000,
)#
Build a CP context manager and shard a batch. If the input device_mesh is None or the context_parallel submesh has size 1, this function is effectively a no-op.
- Parameters:
device_mesh (DeviceMesh) – The device mesh containing the context_parallel submesh.
batch (Dict[str, torch.Tensor]) – The input batch mapping string keys to tensors.
- Returns:
Returns a tuple with a context manager and a new batch. The context manager is either nullcontext (no CP) or the CP context manager returned by create_context_parallel_ctx. The batch has also been passed through create_context_parallel_ctx and is accordingly sharded.
- Return type:
tuple (contextmanager, dict[str, torch.Tensor])
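Typical usage in a training step might look like the following sketch (`device_mesh`, `model`, and the loss computation are assumed to exist):

```python
ctx, batch = make_cp_batch_and_ctx(device_mesh, batch)
with ctx:
    # Inside the context, batch tensors are sharded along the sequence dim
    # (a no-op when device_mesh is None or the CP submesh has size 1).
    logits = model(**batch)
```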
- nemo_automodel.components.distributed.cp_utils.make_cp_batch_for_te(
- cp_mesh,
- batch,
- qkv_format='thd',
- padding_token_id: int = 0,
- num_chunks: int = 1,
- seq_lens_padding_value: int = -1000,
)#
Build a CP batch for Transformer Engine using THD format.
This function converts BSHD format batches to THD format and shards them across context parallel ranks for use with Transformer Engine. It processes the batch in chunks if num_chunks > 1, allowing for better memory efficiency with large sequences.
The function performs three main steps (the BSHD → THD idea is illustrated in the sketch below):
1. Converts BSHD format to THD format using split_batch_into_thd_chunks.
2. Optionally splits the batch into multiple chunks for memory efficiency.
3. Shards each chunk across CP ranks using Transformer Engine’s partitioning.
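To make the BSHD → THD step concrete, here is a toy illustration of the idea (not the library's split_batch_into_thd_chunks): the padded [batch, seq] layout is flattened into one token stream plus cumulative-length offsets.

```python
import torch
import torch.nn.functional as F

# Two sequences of real lengths 3 and 2, padded to seq_len 4 (BSHD-style).
input_ids = torch.tensor([[1, 2, 3, 0],
                          [4, 5, 0, 0]])
seq_lens = torch.tensor([3, 2])

# THD: concatenate the valid tokens into a single [total_tokens] stream ...
thd_tokens = torch.cat([input_ids[i, :n] for i, n in enumerate(seq_lens.tolist())])
# ... and record the sequence boundaries as cumulative lengths.
cu_seqlens = F.pad(seq_lens.cumsum(0), (1, 0))

print(thd_tokens)  # tensor([1, 2, 3, 4, 5])
print(cu_seqlens)  # tensor([0, 3, 5])
```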
- Parameters:
cp_mesh (DeviceMesh or None) – The device mesh for context parallel. If None or size <= 1, returns the batch in THD format without sharding.
batch (Dict[str, torch.Tensor]) –
The input batch in BSHD format containing:
input_ids: Input token IDs [batch_size, seq_len] or [batch_size, seq_len, hidden_dim]
labels: Label token IDs [batch_size, seq_len]
position_ids (optional): Position IDs [batch_size, seq_len]
seq_lens: Actual sequence lengths [batch_size, num_packs]
seq_lens_padded: Padded sequence lengths [batch_size, num_packs]
qkv_format (str) – Format for QKV tensors. Currently only “thd” is supported.
padding_token_id (int) – Token ID used for padding in input_ids (default: 0)
num_chunks (int) – Number of chunks to split the batch into. If > 1, the batch dimension is split and each chunk is processed separately (default: 1)
seq_lens_padding_value (int) – Sentinel value used to indicate padding in seq_lens/seq_lens_padded tensors (default: -1000)
- Returns:
Processed batch in THD format with the following keys:
- input_ids: Sharded input token IDs [total_tokens] or [num_chunks, chunk_tokens]
- labels: Sharded labels [total_tokens] or [num_chunks, chunk_tokens]
- position_ids: Generated and sharded position IDs [total_tokens] or [num_chunks, chunk_tokens]
- cu_seqlens: Cumulative sequence lengths [num_seqs+1] or [num_chunks, max_seqs+1]
- cu_seqlens_padded: Cumulative padded sequence lengths [num_seqs+1] or [num_chunks, max_seqs+1]
- max_seqlen: Maximum sequence length (int32 tensor)
- qkv_format: Format string (“thd”)
- padding_mask: Boolean mask indicating padding tokens
- Return type:
dict
- Raises:
ValueError – If qkv_format is not “thd”
KeyError – If required fields (seq_lens, seq_lens_padded) are missing from batch
Example
Single chunk, no CP:
```python
>>> batch = {
...     'input_ids': torch.tensor([[1, 2, 3, 4]]),
...     'labels': torch.tensor([[2, 3, 4, 5]]),
...     'seq_lens': torch.tensor([[4]]),
...     'seq_lens_padded': torch.tensor([[4]])
... }
>>> result = make_cp_batch_for_te(None, batch)
>>> result['input_ids'].shape  # [4] in THD format
torch.Size([4])
```
Multiple chunks with CP:
```python
>>> batch = {
...     'input_ids': torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]]),
...     'labels': torch.tensor([[2, 3, 4, 5], [6, 7, 8, 9]]),
...     'seq_lens': torch.tensor([[4], [4]]),
...     'seq_lens_padded': torch.tensor([[4], [4]])
... }
>>> result = make_cp_batch_for_te(cp_mesh, batch, num_chunks=2)
>>> result['input_ids'].shape  # [2, chunk_tokens] - 2 chunks
torch.Size([2, 2])  # Example: 2 chunks, 2 tokens each after sharding
```
- nemo_automodel.components.distributed.cp_utils._shard_thd_chunk_for_te(
- batch,
- cp_mesh,
- qkv_format,
- seq_lens_padding_value,
- padding_token_id,