nemo_automodel.loss.masked_ce#
Module Contents#
Functions#
masked_cross_entropy | Compute the masked cross-entropy loss between logits and targets.
API#
nemo_automodel.loss.masked_ce.masked_cross_entropy(logits, targets, mask=None, fp32_upcast=True, ignore_index=-100, reduction='mean')
Compute the masked cross-entropy loss between logits and targets.
If a mask is provided, the loss is computed per element, multiplied by the mask, and then averaged. If no mask is provided, the standard cross-entropy loss is used.
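For concreteness, a minimal sketch of the masked path described above (an illustration, not the library's implementation; the helper name and averaging over the kept elements are assumptions):

```python
import torch
import torch.nn.functional as F

def masked_ce_sketch(logits, targets, mask):
    # Per-element cross entropy, computed without reduction.
    per_elem = F.cross_entropy(logits.float(), targets, reduction="none")
    # Keep only positions marked 1 in the mask, then average over them.
    mask = mask.to(per_elem.dtype)
    return (per_elem * mask).sum() / mask.sum().clamp(min=1)
```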
- Parameters:
logits (torch.Tensor) – The predicted logits with shape (N, C) where C is the number of classes.
targets (torch.Tensor) – The ground truth class indices with shape (N,).
mask (torch.Tensor, optional) – A tensor that masks the loss computation. Elements marked with 1 contribute to the loss; all other elements are ignored. Must be broadcastable to the shape of the loss. Defaults to None.
fp32_upcast (bool, optional) – If True, casts logits to float32 before computing cross entropy. Defaults to True.
ignore_index (int) – Label to ignore in the cross-entropy calculation. Defaults to -100.
reduction (str) – Type of reduction. Defaults to 'mean'.
- Returns:
The computed loss as a scalar tensor.
- Return type:
torch.Tensor
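A usage sketch based on the signature above (the shapes and random inputs are illustrative):

```python
import torch
from nemo_automodel.loss.masked_ce import masked_cross_entropy

num_tokens, num_classes = 8, 32
logits = torch.randn(num_tokens, num_classes)            # (N, C)
targets = torch.randint(0, num_classes, (num_tokens,))   # (N,)

# 1 = include in the loss, 0 = ignore; broadcastable to the per-element loss.
mask = torch.ones(num_tokens)
mask[-2:] = 0  # e.g. ignore padding positions at the end

loss = masked_cross_entropy(logits, targets, mask=mask)
print(loss)  # scalar tensor
```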