Regularization and Parameterization Functionals#
- physicsnemo.nn.functional.drop_path(
- x: Float[Tensor, 'batch ...'],
- drop_prob: float = 0.0,
- training: bool = False,
- scale_by_keep: bool = True,
- implementation: str | None = None,
Drop paths (stochastic depth) per sample, applied in the main path of residual blocks.
Adapted from the timm library. This is the same operation as the DropConnect implementation used for EfficientNet and related networks, but the original name is misleading: "DropConnect" is a different form of dropout. See https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 for discussion.
- Parameters:
x (torch.Tensor) – Input tensor.
drop_prob (float, optional) – Drop probability, by default 0.0.
training (bool, optional) – Whether stochastic depth is enabled, by default False.
scale_by_keep (bool, optional) – Scale by keep probability, by default True.
implementation ({"torch"} or None, optional) – Implementation to use. When None, dispatch selects the available implementation; by default None.
Notes
The layer and argument names use "drop path" throughout rather than a mix of "DropConnect" (as a layer name) and "survival rate" (as an argument name), to align with common usage.
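The operation above can be sketched as follows. This is a minimal, timm-style reference implementation, not necessarily the exact PhysicsNeMo code: it draws one Bernoulli sample per batch element and zeroes that sample's entire path, optionally rescaling the survivors by the keep probability so the expected activation matches inference.

```python
import torch

def drop_path(
    x: torch.Tensor,
    drop_prob: float = 0.0,
    training: bool = False,
    scale_by_keep: bool = True,
) -> torch.Tensor:
    """Stochastic depth: randomly zero whole residual paths per sample."""
    if drop_prob == 0.0 or not training:
        # Identity at inference time or when dropping is disabled.
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast over all remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    if scale_by_keep and keep_prob > 0.0:
        # Rescale kept paths so E[output] equals the inference-time output.
        mask.div_(keep_prob)
    return x * mask
```

With `drop_prob=0.5` and `scale_by_keep=True`, each sample's activations are either zeroed entirely or doubled, leaving the expectation unchanged.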
- physicsnemo.nn.functional.weight_fact(
- w: Float[Tensor, 'rows cols'],
- mean: float = 1.0,
- stddev: float = 0.1,
- implementation: str | None = None,
Randomly factorize the weight matrix into the product of a scale vector and a matrix.
- Parameters:
w (torch.Tensor) – Weight tensor to factorize.
mean (float, optional) – Mean of the normal distribution used to sample the scale factor, by default 1.0.
stddev (float, optional) – Standard deviation of the normal distribution used to sample the scale factor, by default 0.1.
implementation ({"torch"} or None, optional) – Implementation to use. When None, dispatch selects the available implementation; by default None.
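A minimal sketch of random weight factorization under the common parameterization w = exp(g) · v, with one log-scale g sampled per output row; the per-row choice and the exponential reparameterization are assumptions here, not necessarily the exact PhysicsNeMo implementation:

```python
import torch

def weight_fact(
    w: torch.Tensor, mean: float = 1.0, stddev: float = 0.1
) -> tuple[torch.Tensor, torch.Tensor]:
    """Factorize w into a positive per-row scale g and a matrix v, so g * v == w."""
    # Sample a log-scale for each output row, then exponentiate so g > 0.
    g = torch.exp(torch.normal(mean, stddev, size=(w.shape[0], 1)))
    # Divide the scale back out so the factorization reproduces w exactly.
    v = w / g
    return g, v
```

At initialization the product g * v equals the original weights; training g and v separately then reparameterizes the effective learning rate per row.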