PhysicsNeMo Sym Models#
models.afno#
- class physicsnemo.sym.models.afno.AFNOArch(
- input_keys: List[Key],
- output_keys: List[Key],
- img_shape: Tuple[int, int],
- detach_keys: List[Key] = [],
- patch_size: int = 16,
- embed_dim: int = 256,
- depth: int = 4,
- num_blocks: int = 4,
- )
Bases:
Arch
Adaptive Fourier neural operator (AFNO) model.
Note
AFNO is a model that is designed for 2D images only.
- Parameters:
input_keys (List[Key]) – Input key list. The key dimension size should equal the variables channel dim.
output_keys (List[Key]) – Output key list. The key dimension size should equal the variables channel dim.
img_shape (Tuple[int, int]) – Input image dimensions (height, width)
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
patch_size (int, optional) – Size of image patches, by default 16
embed_dim (int, optional) – Embedded channel size, by default 256
depth (int, optional) – Number of AFNO layers, by default 4
num_blocks (int, optional) – Number of blocks in the frequency weight matrices, by default 4
Notes
Input variable tensor shape: \([N, size, H, W]\)
Output variable tensor shape: \([N, size, H, W]\)
Example
>>> afno = physicsnemo.sym.models.afno.AFNOArch([Key("x", size=2)], [Key("y", size=2)], (64, 64))
>>> model = afno.make_node()
>>> input = {"x": torch.randn(20, 2, 64, 64)}
>>> output = model.evaluate(input)
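The patching arithmetic behind patch_size can be sketched in plain Python. The helper below is hypothetical (not part of the physicsnemo.sym API) and only illustrates why img_shape must be divisible by patch_size: each non-overlapping patch becomes one token embedded into embed_dim channels.

```python
def afno_patch_grid(img_shape, patch_size=16):
    """Sketch: number of tokens the AFNO model operates on.

    Hypothetical helper for illustration only -- not part of
    physicsnemo.sym. Each non-overlapping patch becomes one token,
    so both image dimensions must divide evenly by patch_size.
    """
    h, w = img_shape
    if h % patch_size or w % patch_size:
        raise ValueError("img_shape must be divisible by patch_size")
    return (h // patch_size) * (w // patch_size)

# A 64 x 64 image with the default patch_size of 16 yields a
# 4 x 4 grid of patches, i.e. 16 tokens per sample.
print(afno_patch_grid((64, 64)))  # -> 16
```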
- forward(
- in_vars: Dict[str, Tensor],
- )
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.deeponet#
- class physicsnemo.sym.models.deeponet.DeepONetArch(
- branch_net: Arch,
- trunk_net: Arch,
- output_keys: List[Key] = None,
- detach_keys: List[Key] = [],
- branch_dim: None | int = None,
- trunk_dim: None | int = None,
- )
Bases:
Arch
DeepONet
- Parameters:
branch_net (Arch) – Branch net model. Output key should be variable “branch”
trunk_net (Arch) – Trunk net model. Output key should be variable “trunk”
output_keys (List[Key], optional) – Output variable keys, by default None
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
branch_dim (Union[None, int], optional) – Dimension of the branch encoding vector. If none, the model will use the variable trunk dimension. Should be set for 2D/3D models. By default None
trunk_dim (Union[None, int], optional) – Dimension of the trunk encoding vector. If none, the model will use the variable trunk dimension. Should be set for 2D/3D models. By default None
Note
The branch and trunk nets should ideally output to the same dimensionality, but if this is not the case the DeepONet model will use a linear layer to match both branch/trunk dimensionalities to (branch_dim + trunk_dim)/2. This vector will then be used for the final output multiplication.
Note
Higher-dimension branch networks are supported. If the output is not a 1D vector, the DeepONet model will reshape it for the final output multiplication.
Note
For more info on DeepONet refer to: https://arxiv.org/abs/1910.03193
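The final output multiplication mentioned above is, in essence, a dot product between the branch encoding (of the input function) and the trunk encoding (of the query coordinate). A minimal pure-Python sketch, for illustration only (the actual model works on batched tensors and may insert linear layers to match dimensions, as noted above):

```python
def deeponet_combine(branch, trunk):
    """Sketch of the DeepONet output combination: a dot product of
    the branch encoding and the trunk encoding.

    Illustrative only -- not the physicsnemo.sym implementation.
    """
    if len(branch) != len(trunk):
        raise ValueError("branch and trunk encodings must match in length")
    return sum(b * t for b, t in zip(branch, trunk))

# Branch encodes the sampled input function, trunk encodes the query
# point; their dot product is the predicted output value.
print(deeponet_combine([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))  # -> 3.0
```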
- forward(
- in_vars: Dict[str, Tensor],
- )
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- property supports_func_arch: bool#
Returns whether the instantiated arch object supports the FuncArch API.
This is determined by checking whether the arch object’s subclass has overridden the _tensor_forward method.
models.dgm#
models.fno#
models.fourier_net#
models.fully_connected#
models.hash_encoding_net#
models.highway_fourier_net#
models.modified_fourier_net#
models.moving_time_window#
- class physicsnemo.sym.models.moving_time_window.MovingTimeWindowArch(
- arch: Arch,
- window_size: float,
- )
Bases:
Arch
Moving time window model that keeps track of the current time window and the previous window.
- Parameters:
arch (Arch) – PhysicsNeMo architecture to use for moving time window.
window_size (float) – Size of the time window. This will be used to slide the window forward every iteration.
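The window-sliding behaviour can be sketched as follows. The helper is hypothetical (the actual MovingTimeWindowArch tracks this state internally and shifts the window forward by window_size every iteration): a global time t maps to a window index and a local time within that window.

```python
def locate_in_window(t, window_size):
    """Sketch: map a global time t to (window index, local time).

    Hypothetical helper for illustration only -- not part of
    physicsnemo.sym.
    """
    n = int(t // window_size)  # how many windows have fully elapsed
    return n, t - n * window_size  # local time within the current window

# With a window size of 1.0, global time 2.5 falls in window 2
# at local time 0.5.
print(locate_in_window(2.5, 1.0))  # -> (2, 0.5)
```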
- forward(
- in_vars: Dict[str, Tensor],
- )
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.multiplicative_filter_net#
models.multiscale_fourier_net#
models.pix2pix#
- class physicsnemo.sym.models.pix2pix.Pix2PixArch(
- input_keys: List[Key],
- output_keys: List[Key],
- dimension: int,
- detach_keys: List[Key] = [],
- conv_layer_size: int = 64,
- n_downsampling: int = 3,
- n_blocks: int = 3,
- scaling_factor: int = 1,
- activation_fn: Activation = Activation.RELU,
- batch_norm: bool = False,
- padding_type='reflect',
- )
Bases:
Arch
Convolutional encoder-decoder based on pix2pix generator models.
Note
The pix2pix architecture supports 1D, 2D, and 3D fields, which can be controlled using the dimension parameter.
- Parameters:
input_keys (List[Key]) – Input key list. The key dimension size should equal the variables channel dim.
output_keys (List[Key]) – Output key list. The key dimension size should equal the variables channel dim.
dimension (int) – Model dimensionality (supports 1, 2, 3).
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
conv_layer_size (int, optional) – Latent channel size after first convolution, by default 64
n_downsampling (int, optional) – Number of downsampling/upsampling blocks, by default 3
n_blocks (int, optional) – Number of residual blocks in middle of model, by default 3
scaling_factor (int, optional) – Scaling factor to increase the output feature size compared to the input (1, 2, 4, or 8), by default 1
activation_fn (Activation, optional) – Activation function, by default Activation.RELU
batch_norm (bool, optional) – Batch normalization, by default False
padding_type (str, optional) – Padding type (‘constant’, ‘reflect’, ‘replicate’ or ‘circular’), by default “reflect”
Notes
Input variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Output variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Note
Reference: Isola, Phillip, et al. “Image-To-Image translation with conditional adversarial networks” Conference on Computer Vision and Pattern Recognition, 2017. https://arxiv.org/abs/1611.07004
Reference: Wang, Ting-Chun, et al. “High-Resolution image synthesis and semantic manipulation with conditional GANs” Conference on Computer Vision and Pattern Recognition, 2018. https://arxiv.org/abs/1711.11585
Note
Based on the implementation: NVIDIA/pix2pixHD
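The output-size arithmetic implied by scaling_factor can be sketched as below. This is a hypothetical illustration, assuming the standard symmetric encoder-decoder layout where each of the n_downsampling blocks halves the spatial size and a matching upsampling block restores it, so the input size should be divisible by 2**n_downsampling:

```python
def pix2pix_output_size(input_size, n_downsampling=3, scaling_factor=1):
    """Sketch: spatial output size of the pix2pix encoder-decoder.

    Hypothetical arithmetic for illustration only -- not part of
    physicsnemo.sym. The symmetric down/upsampling stages restore the
    input size; scaling_factor multiplies the final output size.
    """
    if scaling_factor not in (1, 2, 4, 8):
        raise ValueError("scaling_factor must be 1, 2, 4, or 8")
    if input_size % (2 ** n_downsampling):
        # each downsampling block halves the spatial size
        raise ValueError("input_size must be divisible by 2**n_downsampling")
    return input_size * scaling_factor

# With the default scaling_factor of 1 the output matches the input;
# scaling_factor=2 doubles each spatial dimension.
print(pix2pix_output_size(64, scaling_factor=2))  # -> 128
```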
- forward(
- in_vars: Dict[str, Tensor],
- )
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.radial_basis#
models.siren#
models.super_res_net#
- class physicsnemo.sym.models.super_res_net.SRResNetArch(
- input_keys: List[Key],
- output_keys: List[Key],
- detach_keys: List[Key] = [],
- large_kernel_size: int = 7,
- small_kernel_size: int = 3,
- conv_layer_size: int = 32,
- n_resid_blocks: int = 8,
- scaling_factor: int = 8,
- activation_fn: Activation = Activation.PRELU,
- )
Bases:
Arch
3D super resolution network
Based on the implementation: sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution
- Parameters:
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
large_kernel_size (int, optional) – Convolutional kernel size for the first and last convolutions, by default 7
small_kernel_size (int, optional) – Convolutional kernel size for internal convolutions, by default 3
conv_layer_size (int, optional) – Latent channel size, by default 32
n_resid_blocks (int, optional) – Number of residual blocks, by default 8
scaling_factor (int, optional) – Scaling factor to increase the output feature size compared to the input (2, 4, or 8), by default 8
activation_fn (Activation, optional) – Activation function, by default Activation.PRELU
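Since this is a 3D network, scaling_factor multiplies every spatial dimension of the input. A sketch of the output-shape arithmetic (hypothetical helper, not part of the physicsnemo.sym API):

```python
def srresnet_output_shape(in_shape, scaling_factor=8):
    """Sketch: output spatial shape of the 3D super-resolution net.

    Hypothetical helper for illustration only -- each spatial
    dimension (D, H, W) is multiplied by scaling_factor (2, 4, or 8).
    """
    if scaling_factor not in (2, 4, 8):
        raise ValueError("scaling_factor must be 2, 4, or 8")
    return tuple(s * scaling_factor for s in in_shape)

# An 8 x 8 x 8 input upscaled by a factor of 4 yields 32 x 32 x 32.
print(srresnet_output_shape((8, 8, 8), scaling_factor=4))  # -> (32, 32, 32)
```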
- forward(
- in_vars: Dict[str, Tensor],
- )
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.