Fourier Neural Operators#
- class physicsnemo.models.fno.fno.FNO(*args, **kwargs)[source]#
Bases: Module
Fourier neural operator (FNO) model.
Note
The FNO architecture supports 1D, 2D, 3D, and 4D fields, selected via the dimension parameter.
- Parameters:
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
decoder_layers (int, optional) – Number of decoder layers, by default 1
decoder_layer_size (int, optional) – Number of neurons in decoder layers, by default 32
decoder_activation_fn (str, optional) – Activation function for decoder, by default “silu”
dimension (int) – Model dimensionality (supports 1, 2, 3, 4).
latent_channels (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (int, optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (str, optional) – Activation function, by default “gelu”
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
Example
>>> # define the 2d FNO model
>>> model = physicsnemo.models.fno.FNO(
...     in_channels=4,
...     out_channels=3,
...     decoder_layers=2,
...     decoder_layer_size=32,
...     dimension=2,
...     latent_channels=32,
...     num_fno_layers=2,
...     padding=0,
... )
>>> input = torch.randn(32, 4, 32, 32)  # (N, C, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([32, 3, 32, 32])
Note
Reference: Li, Zongyi, et al. “Fourier neural operator for parametric partial differential equations.” arXiv preprint arXiv:2010.08895 (2020).
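Higher-dimensional configurations follow the same pattern by changing dimension. The snippet below is a hedged sketch of a 3D setup, extrapolated from the 2D example above; the (N, C, D, H, W) input layout and the printed output shape are assumptions by analogy with the 2D case, not verified against the source.
>>> import torch
>>> from physicsnemo.models.fno.fno import FNO
>>> # Hedged sketch: a 3D FNO, assuming the (N, C, D, H, W) layout by analogy with 2D
>>> model = FNO(
...     in_channels=4,
...     out_channels=3,
...     dimension=3,
...     latent_channels=32,
...     num_fno_layers=2,
...     num_fno_modes=8,
...     padding=0,
... )
>>> input = torch.randn(8, 4, 16, 16, 16)  # (N, C, D, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([8, 3, 16, 16, 16])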
- class physicsnemo.models.fno.fno.FNO1DEncoder(
- in_channels: int = 1,
- num_fno_layers: int = 4,
- fno_layer_size: int = 32,
- num_fno_modes: int | List[int] = 16,
- padding: int | List[int] = 8,
- padding_type: str = 'constant',
- activation_fn: Module = GELU(approximate='none'),
- coord_features: bool = True,
- )[source]#
Bases: Module
1D Spectral encoder for FNO
- Parameters:
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- build_fno(num_fno_modes: List[int]) → None[source]#
Construct the FNO block.
- Parameters:
num_fno_modes (List[int]) – Number of Fourier modes kept in spectral convolutions
- grid_to_points(
- value: Tensor,
- )[source]#
Convert from a grid-based (image) representation to a point-based representation.
- Parameters:
value (Tensor) – Meshgrid tensor
- Returns:
Point-based tensor and the shape of the original meshgrid
- Return type:
Tuple
- class physicsnemo.models.fno.fno.FNO2DEncoder(
- in_channels: int = 1,
- num_fno_layers: int = 4,
- fno_layer_size: int = 32,
- num_fno_modes: int | List[int] = 16,
- padding: int | List[int] = 8,
- padding_type: str = 'constant',
- activation_fn: Module = GELU(approximate='none'),
- coord_features: bool = True,
- )[source]#
Bases: Module
2D Spectral encoder for FNO
- Parameters:
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- build_fno(num_fno_modes: List[int]) → None[source]#
Construct the FNO block.
- Parameters:
num_fno_modes (List[int]) – Number of Fourier modes kept in spectral convolutions
- grid_to_points(
- value: Tensor,
- )[source]#
Convert from a grid-based (image) representation to a point-based representation.
- Parameters:
value (Tensor) – Meshgrid tensor
- Returns:
Point-based tensor and the shape of the original meshgrid
- Return type:
Tuple
- class physicsnemo.models.fno.fno.FNO3DEncoder(
- in_channels: int = 1,
- num_fno_layers: int = 4,
- fno_layer_size: int = 32,
- num_fno_modes: int | List[int] = 16,
- padding: int | List[int] = 8,
- padding_type: str = 'constant',
- activation_fn: Module = GELU(approximate='none'),
- coord_features: bool = True,
- )[source]#
Bases: Module
3D Spectral encoder for FNO
- Parameters:
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- build_fno(num_fno_modes: List[int]) → None[source]#
Construct the FNO block.
- Parameters:
num_fno_modes (List[int]) – Number of Fourier modes kept in spectral convolutions
- grid_to_points(
- value: Tensor,
- )[source]#
Convert from a grid-based (image) representation to a point-based representation.
- Parameters:
value (Tensor) – Meshgrid tensor
- Returns:
Point-based tensor and the shape of the original meshgrid
- Return type:
Tuple
- class physicsnemo.models.fno.fno.FNO4DEncoder(
- in_channels: int = 1,
- num_fno_layers: int = 4,
- fno_layer_size: int = 32,
- num_fno_modes: int | List[int] = 16,
- padding: int | List[int] = 8,
- padding_type: str = 'constant',
- activation_fn: Module = GELU(approximate='none'),
- coord_features: bool = True,
- )[source]#
Bases: Module
4D Spectral encoder for FNO
- Parameters:
in_channels (int, optional) – Number of input channels, by default 1
num_fno_layers (int, optional) – Number of spectral convolutional layers, by default 4
fno_layer_size (int, optional) – Latent features size in spectral convolutions, by default 32
num_fno_modes (Union[int, List[int]], optional) – Number of Fourier modes kept in spectral convolutions, by default 16
padding (Union[int, List[int]], optional) – Domain padding for spectral convolutions, by default 8
padding_type (str, optional) – Type of padding for spectral convolutions, by default “constant”
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
coord_features (bool, optional) – Use coordinate grid as additional feature map, by default True
- build_fno(num_fno_modes: List[int]) → None[source]#
Construct the FNO block.
- Parameters:
num_fno_modes (List[int]) – Number of Fourier modes kept in spectral convolutions
- grid_to_points(
- value: Tensor,
- )[source]#
Convert from a grid-based (image) representation to a point-based representation.
- Parameters:
value (Tensor) – Meshgrid tensor
- Returns:
Point-based tensor and the shape of the original meshgrid
- Return type:
Tuple
- class physicsnemo.models.afno.afno.AFNO(*args, **kwargs)[source]#
Bases: Module
Adaptive Fourier neural operator (AFNO) model.
Note
AFNO is designed for 2D inputs only.
- Parameters:
inp_shape (List[int]) – Input image dimensions [height, width]
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
patch_size (List[int], optional) – Size of image patches, by default [16, 16]
embed_dim (int, optional) – Embedded channel size, by default 256
depth (int, optional) – Number of AFNO layers, by default 4
mlp_ratio (float, optional) – Ratio of layer MLP latent variable size to input feature size, by default 4.0
drop_rate (float, optional) – Dropout rate in layer MLPs, by default 0.0
num_blocks (int, optional) – Number of blocks in the block-diag frequency weight matrices, by default 16
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Fraction of Fourier modes retained (hard thresholding), in [0, 1], by default 1
Example
>>> model = physicsnemo.models.afno.AFNO(
...     inp_shape=[32, 32],
...     in_channels=2,
...     out_channels=1,
...     patch_size=(8, 8),
...     embed_dim=16,
...     depth=2,
...     num_blocks=2,
... )
>>> input = torch.randn(32, 2, 32, 32)  # (N, C, H, W)
>>> output = model(input)
>>> output.size()
torch.Size([32, 1, 32, 32])
Note
Reference: Guibas, John, et al. “Adaptive fourier neural operators: Efficient token mixers for transformers.” arXiv preprint arXiv:2111.13587 (2021).
- forward(x: Tensor) → Tensor[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
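As with any Module, the model instance itself is called during training rather than forward(). The snippet below is a minimal, hedged training-step sketch built around the documented constructor; the optimizer, loss, and random data are generic PyTorch placeholders, not part of the physicsnemo API.
>>> import torch
>>> from physicsnemo.models.afno.afno import AFNO
>>> # Minimal training-step sketch; optimizer and loss are standard PyTorch choices
>>> model = AFNO(
...     inp_shape=[32, 32], in_channels=2, out_channels=1,
...     patch_size=(8, 8), embed_dim=16, depth=2, num_blocks=2,
... )
>>> optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
>>> loss_fn = torch.nn.MSELoss()
>>> invar = torch.randn(8, 2, 32, 32)     # (N, C, H, W), spatial dims matching inp_shape
>>> target = torch.randn(8, 1, 32, 32)
>>> optimizer.zero_grad()
>>> loss = loss_fn(model(invar), target)  # call the Module instance, not forward()
>>> loss.backward()
>>> optimizer.step()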
- class physicsnemo.models.afno.afno.AFNO2DLayer(
- hidden_size: int,
- num_blocks: int = 8,
- sparsity_threshold: float = 0.01,
- hard_thresholding_fraction: float = 1,
- hidden_size_factor: int = 1,
- )[source]#
Bases: Module
AFNO spectral convolution layer
- Parameters:
hidden_size (int) – Feature dimensionality
num_blocks (int, optional) – Number of blocks used in the block diagonal weight matrix, by default 8
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Fraction of Fourier modes retained (hard thresholding), in [0, 1], by default 1
hidden_size_factor (int, optional) – Factor to increase spectral features by after weight multiplication, by default 1
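A rough usage sketch follows. It assumes the layer operates on a channels-last patch grid of shape (B, H, W, hidden_size), as in the reference AFNO implementation, and that num_blocks divides hidden_size evenly; neither assumption is stated in the signature above.
>>> import torch
>>> from physicsnemo.models.afno.afno import AFNO2DLayer
>>> # Hedged sketch: the channels-last (B, H, W, hidden_size) layout is an assumption
>>> # based on the reference AFNO implementation; num_blocks is assumed to divide hidden_size
>>> layer = AFNO2DLayer(hidden_size=64, num_blocks=8)
>>> tokens = torch.randn(4, 8, 8, 64)  # (B, H_patches, W_patches, hidden_size)
>>> out = layer(tokens)                # expected to have the same shape as the input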
- class physicsnemo.models.afno.afno.AFNOMlp(
- in_features: int,
- latent_features: int,
- out_features: int,
- activation_fn: Module = GELU(approximate='none'),
- drop: float = 0.0,
- )[source]#
Bases: Module
Fully-connected multi-layer perceptron used inside AFNO
- Parameters:
in_features (int) – Input feature size
latent_features (int) – Latent feature size
out_features (int) – Output feature size
activation_fn (nn.Module, optional) – Activation function, by default nn.GELU
drop (float, optional) – Dropout rate, by default 0.0
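A minimal sketch of the MLP in isolation is shown below. It assumes the forward pass maps the trailing feature dimension from in_features to out_features; this follows from the parameter names above but is not verified against the source.
>>> import torch
>>> from physicsnemo.models.afno.afno import AFNOMlp
>>> # Hedged sketch: assumes the MLP maps the trailing feature dimension
>>> # in_features -> latent_features -> out_features
>>> mlp = AFNOMlp(in_features=64, latent_features=256, out_features=64, drop=0.1)
>>> tokens = torch.randn(4, 16, 64)  # (..., in_features)
>>> out = mlp(tokens)                # expected shape: (4, 16, 64)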
- class physicsnemo.models.afno.afno.Block(
- embed_dim: int,
- num_blocks: int = 8,
- mlp_ratio: float = 4.0,
- drop: float = 0.0,
- activation_fn: ~torch.nn.modules.module.Module = GELU(approximate='none'),
- norm_layer: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.normalization.LayerNorm'>,
- double_skip: bool = True,
- sparsity_threshold: float = 0.01,
- hard_thresholding_fraction: float = 1.0,
- )[source]#
Bases: Module
AFNO block, spectral convolution and MLP
- Parameters:
embed_dim (int) – Embedded feature dimensionality
num_blocks (int, optional) – Number of blocks used in the block diagonal weight matrix, by default 8
mlp_ratio (float, optional) – Ratio of MLP latent variable size to input feature size, by default 4.0
drop (float, optional) – Dropout rate in MLP, by default 0.0
activation_fn (nn.Module, optional) – Activation function used in MLP, by default nn.GELU
norm_layer (nn.Module, optional) – Normalization function, by default nn.LayerNorm
double_skip (bool, optional) – Whether to apply an additional residual (skip) connection after the spectral convolution, by default True
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Fraction of Fourier modes retained (hard thresholding), in [0, 1], by default 1
- class physicsnemo.models.afno.afno.PatchEmbed(
- inp_shape: List[int],
- in_channels: int,
- patch_size: List[int] = [16, 16],
- embed_dim: int = 256,
- )[source]#
Bases: Module
Patch embedding layer
Converts each 2D patch into a 1D embedding vector for input to AFNO
- Parameters:
inp_shape (List[int]) – Input image dimensions [height, width]
in_channels (int) – Number of input channels
patch_size (List[int], optional) – Size of image patches, by default [16, 16]
embed_dim (int, optional) – Embedded channel size, by default 256
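The sketch below exercises the patch embedding on its own. It assumes the output is a sequence of patch tokens of shape (N, num_patches, embed_dim), as is conventional for ViT-style patch embeddings; the exact output layout is not documented above.
>>> import torch
>>> from physicsnemo.models.afno.afno import PatchEmbed
>>> # Hedged sketch: the (N, num_patches, embed_dim) output layout is an assumption
>>> # based on conventional ViT-style patch embeddings
>>> embed = PatchEmbed(inp_shape=[32, 32], in_channels=2, patch_size=[8, 8], embed_dim=16)
>>> img = torch.randn(4, 2, 32, 32)  # (N, C, H, W) matching inp_shape
>>> tokens = embed(img)              # expected shape: (4, 16, 16), i.e. (N, 4*4 patches, embed_dim)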
- class physicsnemo.models.afno.modafno.ModAFNO(*args, **kwargs)[source]#
Bases: Module
Modulated Adaptive Fourier neural operator (ModAFNO) model.
- Parameters:
inp_shape (List[int]) – Input image dimensions [height, width]
in_channels (int, optional) – Number of input channels
out_channels (int, optional) – Number of output channels
embed_model (dict, optional) – Dictionary of arguments to pass to the ModEmbedNet embedding model
patch_size (List[int], optional) – Size of image patches, by default [16, 16]
embed_dim (int, optional) – Embedded channel size, by default 256
mod_dim (int) – Modulation input dimensionality
modulate_filter (bool, optional) – Whether to compute the modulation for the FFT filter, by default True
modulate_mlp (bool, optional) – Whether to compute the modulation for the MLP, by default True
scale_shift_mode (["complex", "real"]) – If ‘complex’ (default), compute the scale-shift operation using complex operations. If ‘real’, use real operations.
depth (int, optional) – Number of AFNO layers, by default 4
mlp_ratio (float, optional) – Ratio of layer MLP latent variable size to input feature size, by default 4.0
drop_rate (float, optional) – Dropout rate in layer MLPs, by default 0.0
num_blocks (int, optional) – Number of blocks in the block-diag frequency weight matrices, by default 16
sparsity_threshold (float, optional) – Sparsity threshold (softshrink) of spectral features, by default 0.01
hard_thresholding_fraction (float, optional) – Fraction of Fourier modes retained (hard thresholding), in [0, 1], by default 1
The default settings correspond to the implementation in the paper cited below.
Example
>>> import torch
>>> from physicsnemo.models.afno import ModAFNO
>>> model = ModAFNO(
...     inp_shape=[32, 32],
...     in_channels=2,
...     out_channels=1,
...     patch_size=(8, 8),
...     embed_dim=16,
...     depth=2,
...     num_blocks=2,
... )
>>> input = torch.randn(32, 2, 32, 32)  # (N, C, H, W)
>>> time = torch.full((32, 1), 0.5)
>>> output = model(input, time)
>>> output.size()
torch.Size([32, 1, 32, 32])
Note
Reference: Leinonen et al. “Modulated Adaptive Fourier Neural Operators for Temporal Interpolation of Weather Forecasts.” arXiv preprint arXiv:TODO (2024).
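Because the model is conditioned on a modulation input (the time tensor in the example above), a typical use is to evaluate the same input field at several interpolation times. The sketch below reuses the constructor arguments from the example; treating time values in [0, 1] as valid modulation inputs is an assumption, not documented behavior.
>>> import torch
>>> from physicsnemo.models.afno import ModAFNO
>>> # Hedged sketch: evaluating one input at several interpolation times;
>>> # the [0, 1] range for the time input is an assumption
>>> model = ModAFNO(
...     inp_shape=[32, 32], in_channels=2, out_channels=1,
...     patch_size=(8, 8), embed_dim=16, depth=2, num_blocks=2,
... )
>>> invar = torch.randn(1, 2, 32, 32)  # (N, C, H, W)
>>> outputs = [model(invar, torch.full((1, 1), t)) for t in (0.25, 0.5, 0.75)]
>>> [o.shape for o in outputs]
[torch.Size([1, 1, 32, 32]), torch.Size([1, 1, 32, 32]), torch.Size([1, 1, 32, 32])]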