Specialized Layers#
- class physicsnemo.nn.module.kan_layers.KolmogorovArnoldNetwork(
- input_dim,
- output_dim,
- num_harmonics=5,
- add_bias=True,
- )
Bases: Module
Kolmogorov–Arnold Network (KAN) layer using Fourier-based function approximation.
- Parameters:
input_dim (int) – Dimensionality of the input features.
output_dim (int) – Dimensionality of the output features.
num_harmonics (int, optional) – Number of Fourier harmonics to use (default: 5).
add_bias (bool, optional) – Whether to include an additive bias term (default: True).
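Example
A minimal usage sketch; the (batch, input_dim) input layout and the resulting output shape are assumptions inferred from the parameter descriptions above, not stated by the library docs.
>>> import torch
>>> from physicsnemo.nn.module.kan_layers import KolmogorovArnoldNetwork
>>> kan = KolmogorovArnoldNetwork(input_dim=8, output_dim=4, num_harmonics=5, add_bias=True)
>>> x = torch.rand(32, 8)  # assumed input layout: (batch, input_dim)
>>> y = kan(x)             # expected shape: torch.Size([32, 4])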
- class physicsnemo.nn.module.siren_layers.SirenLayer(
- in_features: int,
- out_features: int,
- layer_type: SirenLayerType = SirenLayerType.HIDDEN,
- omega_0: float = 30.0,
- )
Bases: Module
SiReN layer.
- Parameters:
in_features (int) – Number of input features.
out_features (int) – Number of output features.
layer_type (SirenLayerType) – Layer type.
omega_0 (float) – Omega_0 parameter in SiReN.
- forward(x: Tensor) → Tensor[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class physicsnemo.nn.module.siren_layers.SirenLayerType(*values)[source]#
Bases: Enum
SiReN layer types.
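Example
A minimal usage sketch for SirenLayer; the (batch, in_features) input layout is an assumption inferred from the parameter descriptions. SirenLayerType.HIDDEN is the documented default; no other enum members are assumed here.
>>> import torch
>>> from physicsnemo.nn.module.siren_layers import SirenLayer, SirenLayerType
>>> layer = SirenLayer(in_features=2, out_features=64, layer_type=SirenLayerType.HIDDEN, omega_0=30.0)
>>> x = torch.rand(16, 2)  # assumed input layout: (batch, in_features)
>>> y = layer(x)           # expected shape: torch.Size([16, 64])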
- class physicsnemo.nn.module.unet_layers.UNetBlock(
- in_channels: int,
- out_channels: int,
- emb_channels: int,
- up: bool = False,
- down: bool = False,
- attention: bool = False,
- num_heads: int | None = None,
- channels_per_head: int = 64,
- dropout: float = 0.0,
- skip_scale: float = 1.0,
- eps: float = 1e-05,
- resample_filter: List[int] = [1, 1],
- resample_proj: bool = False,
- adaptive_scale: bool = True,
- init: Dict[str, Any] = {},
- init_zero: Dict[str, Any] = {'init_weight': 0},
- init_attn: Any = None,
- use_apex_gn: bool = False,
- act: str = 'silu',
- fused_conv_bias: bool = False,
- profile_mode: bool = False,
- amp_mode: bool = False,
- )
Bases: Module
Unified U-Net block with optional up/downsampling and self-attention. Represents the union of all features employed by the DDPM++, NCSN++, and ADM architectures.
- Parameters:
in_channels (int) – Number of input channels \(C_{in}\).
out_channels (int) – Number of output channels \(C_{out}\).
emb_channels (int) – Number of embedding channels \(C_{emb}\).
up (bool, optional) – If True, applies upsampling in the forward pass (default: False).
down (bool, optional) – If True, applies downsampling in the forward pass (default: False).
attention (bool, optional) – If True, enables the self-attention mechanism in the block (default: False).
num_heads (int, optional) – Number of attention heads. If None, defaults to \(C_{out} / 64\) (default: None).
channels_per_head (int, optional) – Number of channels per attention head (default: 64).
dropout (float, optional) – Dropout probability (default: 0.0).
skip_scale (float, optional) – Scale factor applied to skip connections (default: 1.0).
eps (float, optional) – Epsilon value used in normalization layers (default: 1e-5).
resample_filter (List[int], optional) – Filter for resampling layers (default: [1, 1]).
resample_proj (bool, optional) – If True, enables the resampling projection (default: False).
adaptive_scale (bool, optional) – If True, uses adaptive scaling in the forward pass (default: True).
init (dict, optional) – Initialization parameters for convolutional and linear layers (default: {}).
init_zero (dict, optional) – Initialization parameters with zero weights for certain layers (default: {'init_weight': 0}).
init_attn (Any, optional) – Initialization parameters specific to the attention layers. Defaults to init if not provided (default: None).
use_apex_gn (bool, optional) – Whether to use Apex GroupNorm for the NHWC layout. Must be set to False on CPU (default: False).
act (str, optional) – Activation function to use when fusing the activation with GroupNorm (default: 'silu').
fused_conv_bias (bool, optional) – Whether the bias is passed as a parameter of conv2d (default: False).
profile_mode (bool, optional) – Whether to enable NVTX annotations during profiling (default: False).
amp_mode (bool, optional) – Whether mixed-precision (AMP) training is enabled (default: False).
- Forward:
x (torch.Tensor) – Input tensor of shape \((B, C_{in}, H, W)\), where \(B\) is the batch size, \(C_{in}\) is in_channels, and \(H, W\) are the spatial dimensions.
emb (torch.Tensor) – Embedding tensor of shape \((B, C_{emb})\), where \(B\) is the batch size and \(C_{emb}\) is emb_channels.
- Outputs:
torch.Tensor – Output tensor of shape \((B, C_{out}, H, W)\), where \(B\) is the batch size, \(C_{out}\) is out_channels, and \(H, W\) are the spatial dimensions.
- forward(x, emb)[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
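Example
An illustrative sketch using the tensor shapes documented in the Forward and Outputs sections above; the concrete channel counts and spatial size are arbitrary choices, not values from the library docs.
>>> import torch
>>> from physicsnemo.nn.module.unet_layers import UNetBlock
>>> block = UNetBlock(in_channels=32, out_channels=64, emb_channels=128, attention=True)
>>> x = torch.rand(4, 32, 16, 16)  # (B, C_in, H, W)
>>> emb = torch.rand(4, 128)       # (B, C_emb)
>>> y = block(x, emb)              # expected shape: torch.Size([4, 64, 16, 16])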
- class physicsnemo.nn.module.weight_norm.WeightNormLinear(
- in_features: int,
- out_features: int,
- bias: bool = True,
- )
Bases: Module
Weight Norm Layer for 1D tensors.
- Parameters:
in_features (int) – Size of the input features
out_features (int) – Size of the output features
bias (bool, optional) – Apply the bias to the output of linear layer, by default True
Example
>>> wnorm = physicsnemo.nn.WeightNormLinear(2, 4)
>>> input = torch.rand(2, 2)
>>> output = wnorm(input)
>>> output.size()
torch.Size([2, 4])
- forward(input: Tensor) → Tensor[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class physicsnemo.nn.module.weight_fact.WeightFactLinear(
- in_features: int,
- out_features: int,
- bias: bool = True,
- mean: float = 1.0,
- stddev=0.1,
- )
Bases: Module
Weight Factorization Layer for 2D tensors; more details in https://arxiv.org/abs/2210.01274.
- Parameters:
in_features (int) – Size of the input features
out_features (int) – Size of the output features
bias (bool, optional) – Apply the bias to the output of linear layer, by default True
mean (float, optional) – Mean used to reparametrize the weight matrix, by default 1.0
stddev (float, optional) – Standard deviation used to reparametrize the weight matrix, by default 0.1
Example
>>> wfact = physicsnemo.nn.WeightFactLinear(2, 4)
>>> input = torch.rand(2, 2)
>>> output = wfact(input)
>>> output.size()
torch.Size([2, 4])
- forward(input: Tensor) → Tensor[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.