Modulus Models¶
models.afno¶
- class modulus.models.afno.AFNOArch(input_keys: List[Key], output_keys: List[Key], img_shape: Tuple[int, int], detach_keys: List[Key] = [], patch_size: int = 16, embed_dim: int = 256, depth: int = 4, num_blocks: int = 4)[source]¶
Bases:
Arch
Adaptive Fourier neural operator (AFNO) model.
Note
AFNO is a model that is designed for 2D images only.
- Parameters
input_keys (List[Key]) – Input key list. The key dimension size should equal the variable’s channel dimension.
output_keys (List[Key]) – Output key list. The key dimension size should equal the variable’s channel dimension.
img_shape (Tuple[int, int]) – Input image dimensions (height, width)
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
patch_size (int, optional) – Size of image patches, by default 16
embed_dim (int, optional) – Embedded channel size, by default 256
depth (int, optional) – Number of AFNO layers, by default 4
num_blocks (int, optional) – Number of blocks in the frequency weight matrices, by default 4
Variable Shape
Input variable tensor shape: \([N, size, H, W]\)
Output variable tensor shape: \([N, size, H, W]\)
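The patch embedding behind these shapes can be sketched with plain NumPy (illustrative only; the shapes below mirror the defaults, not the Modulus internals):

```python
import numpy as np

# Sketch: a [N, C, H, W] field is split into non-overlapping
# patch_size x patch_size patches, giving (H/p)*(W/p) tokens,
# each of dimension C*p*p, before the AFNO mixer operates on them.
N, C, H, W, p = 20, 2, 64, 64, 16
x = np.random.randn(N, C, H, W)

# [N, C, H/p, p, W/p, p] -> [N, (H/p)*(W/p), C*p*p]
patches = x.reshape(N, C, H // p, p, W // p, p)
patches = patches.transpose(0, 2, 4, 1, 3, 5).reshape(
    N, (H // p) * (W // p), C * p * p)
print(patches.shape)  # (20, 16, 512)
```

Each of the 16 tokens is then projected to `embed_dim` channels by the real patch-embedding layer.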
Example
>>> afno = .afno.AFNOArch([Key("x", size=2)], [Key("y", size=2)], (64, 64))
>>> model = afno.make_node()
>>> input = {"x": torch.randn(20, 2, 64, 64)}
>>> output = model.evaluate(input)
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.dgm¶
- class modulus.models.dgm.DGMArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], layer_size: int = 512, nr_layers: int = 6, activation_fn=Activation.SIN, adaptive_activations: bool = False, weight_norm: bool = True)[source]¶
Bases:
Arch
A variation of the fully connected network. Reference: Sirignano, J. and Spiliopoulos, K., 2018. DGM: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 375, pp.1339-1364.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
layer_size (int = 512) – Layer size for every hidden layer of the model.
nr_layers (int = 6) – Number of hidden layers of the model.
activation_fn (layers.Activation = layers.Activation.SIN) – Activation function used by network.
adaptive_activations (bool = False) – If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
weight_norm (bool = True) – Use weight norm on fully connected layers.
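The gated DGM layer from the Sirignano & Spiliopoulos reference can be sketched in NumPy (a minimal single-layer illustration; the gate names and update order are assumptions based on the paper, not the Modulus code):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def dgm_layer(x, s, params):
    """One DGM layer: LSTM-like gates mix the input x with the
    running hidden state s (sketch of Sirignano & Spiliopoulos 2018)."""
    Uz, Wz, Ug, Wg, Ur, Wr, Uh, Wh = params
    z = sigmoid(x @ Uz + s @ Wz)        # update gate
    g = sigmoid(x @ Ug + s @ Wg)        # output gate
    r = sigmoid(x @ Ur + s @ Wr)        # reset gate
    h = np.tanh(x @ Uh + (s * r) @ Wh)  # candidate state
    return (1.0 - g) * h + z * s        # gated combination

rng = np.random.default_rng(0)
d_in, width = 2, 8
x = rng.normal(size=(5, d_in))
s = np.tanh(x @ rng.normal(size=(d_in, width)))  # first hidden state
params = [rng.normal(size=(d_in, width)) if i % 2 == 0
          else rng.normal(size=(width, width)) for i in range(8)]
s_next = dgm_layer(x, s, params)
print(s_next.shape)  # (5, 8)
```

`nr_layers` of these gated updates are stacked before the final linear output layer.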
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.fno¶
- class modulus.models.fno.FNOArch(input_keys: List[Key], dimension: int, decoder_net: Arch, detach_keys: List[Key] = [], nr_fno_layers: int = 4, fno_modes: Union[int, List[int]] = 16, padding: int = 8, padding_type: str = 'constant', activation_fn: Activation = Activation.GELU, coord_features: bool = True)[source]¶
Bases:
Arch
Fourier neural operator (FNO) model.
Note
The FNO architecture supports options for 1D, 2D and 3D fields which can be controlled using the dimension parameter.
- Parameters
input_keys (List[Key]) – Input key list. The key dimension size should equal the variable’s channel dimension.
dimension (int) – Model dimensionality (supports 1, 2, 3).
decoder_net (Arch) – Pointwise decoder network, input key should be the latent variable
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
nr_fno_layers (int, optional) – Number of spectral convolution layers, by default 4
fno_modes (Union[int, List[int]], optional) – Number of Fourier modes with learnable weights, by default 16
padding (int, optional) – Padding size for FFT calculations, by default 8
padding_type (str, optional) – Padding type for FFT calculations (‘constant’, ‘reflect’, ‘replicate’ or ‘circular’), by default “constant”
activation_fn (Activation, optional) – Activation function, by default Activation.GELU
coord_features (bool, optional) – Use coordinate meshgrid as additional input feature, by default True
Variable Shape
Input variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Output variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Example
1D FNO model
>>> decoder = FullyConnectedArch([Key("z", size=32)], [Key("y", size=2)])
>>> fno_1d = FNOArch([Key("x", size=2)], dimension=1, decoder_net=decoder)
>>> model = fno_1d.make_node()
>>> input = {"x": torch.randn(20, 2, 64)}
>>> output = model.evaluate(input)
2D FNO model
>>> decoder = ConvFullyConnectedArch([Key("z", size=32)], [Key("y", size=2)])
>>> fno_2d = FNOArch([Key("x", size=2)], dimension=2, decoder_net=decoder)
>>> model = fno_2d.make_node()
>>> input = {"x": torch.randn(20, 2, 64, 64)}
>>> output = model.evaluate(input)
3D FNO model
>>> decoder = Siren([Key("z", size=32)], [Key("y", size=2)])
>>> fno_3d = FNOArch([Key("x", size=2)], dimension=3, decoder_net=decoder)
>>> model = fno_3d.make_node()
>>> input = {"x": torch.randn(20, 2, 64, 64, 64)}
>>> output = model.evaluate(input)
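The spectral convolution that `fno_modes` controls can be sketched in NumPy for the 1D case (an illustration under assumed shapes, not the Modulus implementation):

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """Sketch of one FNO spectral layer: FFT, keep the first `modes`
    frequencies with (complex) learnable weights, inverse FFT."""
    # x: [batch, width]; weights: complex array of length `modes`
    x_ft = np.fft.rfft(x, axis=-1)
    out_ft = np.zeros_like(x_ft)
    out_ft[:, :modes] = x_ft[:, :modes] * weights  # truncate + weight
    return np.fft.irfft(out_ft, n=x.shape[-1], axis=-1)

rng = np.random.default_rng(0)
modes = 16
x = rng.normal(size=(20, 64))
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)
y = spectral_conv_1d(x, w, modes)
print(y.shape)  # (20, 64)
```

The real layers also carry a per-channel weight matrix and a pointwise linear bypass; this sketch shows only the mode truncation.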
- add_pino_gradients(derivatives: List[Key], domain_length: List[float] = [1.0, 1.0]) None [source]¶
Adds PINO “exact” gradient calculations to model outputs.
Note
This will constrain the FNO decoder to a two-layer fully-connected model with Tanh activation functions. This is done for computational efficiency, since the gradient calculations are explicit; auto-diff is far too slow for this method.
- Parameters
derivatives (List[Key]) – List of derivative keys
domain_length (List[float], optional) – Domain size of input grid. Needed for calculating the gradients of the latent variables. By default [1.0, 1.0]
- Raises
ValueError – If domain length list is not the same size as the FNO model dimension
Note
For details on the “exact” gradient calculation refer to section 3.3 in: https://arxiv.org/pdf/2111.03794.pdf
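The Fourier-space differentiation that makes these “exact” gradients cheap can be sketched on a periodic 1D grid (illustrative only; the actual method also differentiates the two-layer decoder analytically):

```python
import numpy as np

# Sketch of a spectral ("exact") first derivative on a periodic grid:
# multiply each Fourier coefficient by i*k and transform back.
n, L = 64, 1.0                     # grid points, domain_length
x = np.arange(n) / n * L
u = np.sin(2 * np.pi * x)

k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)  # wavenumbers i*k
du = np.fft.ifft(k * np.fft.fft(u)).real     # du/dx via FFT

# For a band-limited signal this matches the analytic derivative
err = np.max(np.abs(du - 2 * np.pi * np.cos(2 * np.pi * x)))
print(err < 1e-8)  # True
```

`domain_length` plays the role of `L` above: the wavenumbers, and hence the gradients of the latent variables, depend on the physical size of the input grid.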
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.fourier_net¶
- class modulus.models.fourier_net.FourierNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], frequencies: Tuple = ('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params: Tuple = ('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn: Activation = Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)[source]¶
Bases:
Arch
Fourier encoding fully-connected neural network.
This network is a fully-connected neural network that encodes the input features into Fourier space using sinusoidal activation functions. This helps reduce spectral bias during training.
- Parameters
input_keys (List[Key]) – Input key list.
output_keys (List[Key]) – Output key list.
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
frequencies (Tuple, optional) – A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’; by default (“axis”, [i for i in range(10)])
‘gaussian’ – samples the frequencies of the Fourier series from a Gaussian.
‘axis’ – samples along each axis of spectral space with the given list of frequencies.
‘diagonal’ – samples along the diagonal of spectral space with the given list of frequencies.
‘full’ – samples the entire spectral space for all combinations of frequencies in the given list.
frequencies_params (Tuple, optional) – Same as frequencies, but used for encodings of any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’]. By default (“axis”, [i for i in range(10)])
activation_fn (Activation, optional) – Activation function, by default Activation.SILU
layer_size (int, optional) – Layer size for every hidden layer of the model, by default 512
nr_layers (int, optional) – Number of hidden layers of the model, by default 6
skip_connections (bool, optional) – Apply skip connections every 2 hidden layers, by default False
weight_norm (bool, optional) – Use weight norm on fully connected layers, by default True
adaptive_activations (bool, optional) – Use an adaptive activation functions, by default False
Variable Shape
Input variable tensor shape: \([N, size]\)
Output variable tensor shape: \([N, size]\)
Example
Gaussian frequencies
>>> std = 1.0; num_freq = 10
>>> model = .fourier_net.FourierNetArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     frequencies=("gaussian", std, num_freq))
Diagonal frequencies
>>> frequencies = [1.0, 2.0, 3.0, 4.0]
>>> model = .fourier_net.FourierNetArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     frequencies=("diagonal", frequencies))
Full frequencies
>>> frequencies = [1.0, 2.0, 3.0, 4.0]
>>> model = .fourier_net.FourierNetArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     frequencies=("full", frequencies))
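The frequency sets that the ‘axis’, ‘diagonal’ and ‘full’ options generate can be sketched for a 2D input (illustrative only; not the Modulus internals):

```python
import itertools
import numpy as np

freqs = [1.0, 2.0, 3.0]
dims = 2  # e.g. inputs ('x', 'y')

# 'axis': one frequency on one axis at a time, zeros elsewhere
axis = [np.eye(dims)[d] * f for d in range(dims) for f in freqs]
# 'diagonal': the same frequency on every axis
diagonal = [np.full(dims, f) for f in freqs]
# 'full': every combination of the listed frequencies across axes
full = [np.array(c) for c in itertools.product(freqs, repeat=dims)]
print(len(axis), len(diagonal), len(full))  # 6 3 9

# Each frequency vector f contributes sin/cos features of x . f
x = np.array([0.3, 0.7])
features = np.concatenate(
    [[np.sin(2 * np.pi * x @ f), np.cos(2 * np.pi * x @ f)] for f in full])
print(features.shape)  # (18,)
```

Note the cost difference: ‘full’ grows exponentially in the number of inputs, while ‘axis’ and ‘diagonal’ grow linearly.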
Note
For information regarding adaptive activations please refer to https://arxiv.org/abs/1906.01170.
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.fully_connected¶
- class modulus.models.fully_connected.FullyConnectedArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], layer_size: int = 512, nr_layers: int = 6, activation_fn=Activation.SILU, periodicity: Optional[Dict[str, Tuple[float, float]]] = None, skip_connections: bool = False, adaptive_activations: bool = False, weight_norm: bool = True)[source]¶
Bases:
Arch
Fully Connected Neural Network.
- Parameters
input_keys (List[Key]) – Input key list.
output_keys (List[Key]) – Output key list.
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
layer_size (int, optional) – Layer size for every hidden layer of the model, by default 512
nr_layers (int, optional) – Number of hidden layers of the model, by default 6
activation_fn (Activation, optional) – Activation function used by network, by default Activation.SILU
periodicity (Union[Dict[str, Tuple[float, float]], None], optional) – Dictionary of tuples that makes the model give periodic predictions within the given bounds for each input key, by default None
skip_connections (bool, optional) – Apply skip connections every 2 hidden layers, by default False
weight_norm (bool, optional) – Use weight norm on fully connected layers, by default True
adaptive_activations (bool, optional) – Use an adaptive activation functions, by default False
Variable Shape
Input variable tensor shape: \([N, size]\)
Output variable tensor shape: \([N, size]\)
Example
Fully-connected model (2 -> 64 -> 64 -> 2)
>>> arch = .fully_connected.FullyConnectedArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     layer_size=64,
>>>     nr_layers=2)
>>> model = arch.make_node()
>>> input = {"x": torch.randn(64, 2)}
>>> output = model.evaluate(input)
Fully-connected model with periodic outputs between (0,1)
>>> arch = .fully_connected.FullyConnectedArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     periodicity={'x': (0, 1)})
Note
For information regarding adaptive activations please refer to https://arxiv.org/abs/1906.01170.
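The idea behind the `periodicity` option can be sketched in NumPy: replace a coordinate with sin/cos features over its bounds, so anything computed from them is exactly periodic on that interval (illustrative only; Modulus may use more harmonics):

```python
import numpy as np

def periodic_encode(x, bounds):
    """Map x to sin/cos features that repeat over [lo, hi]."""
    lo, hi = bounds
    theta = 2 * np.pi * (x - lo) / (hi - lo)
    return np.stack([np.sin(theta), np.cos(theta)], axis=-1)

# The encoding agrees at the two ends of the interval, so any network
# fed these features produces periodic predictions on (0, 1).
a = periodic_encode(np.array(0.0), (0.0, 1.0))
b = periodic_encode(np.array(1.0), (0.0, 1.0))
print(np.allclose(a, b))  # True
```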
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.hash_encoding_net¶
- class modulus.models.hash_encoding_net.MultiresolutionHashNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], activation_fn=Activation.SILU, layer_size: int = 64, nr_layers: int = 3, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False, bounds: List[Tuple[float, float]] = [(-1.0, 1.0), (-1.0, 1.0)], nr_levels: int = 16, nr_features_per_level: int = 2, log2_hashmap_size: int = 19, base_resolution: int = 2, finest_resolution: int = 32)[source]¶
Bases:
Arch
Hash encoding network as seen in:
Müller, Thomas, et al. “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.” arXiv preprint arXiv:2201.05989 (2022). A reference PyTorch implementation can be found at https://github.com/yashbhalgat/HashNeRF-pytorch
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
activation_fn (layers.Activation = layers.Activation.SILU) – Activation function used by network.
layer_size (int = 64) – Layer size for every hidden layer of the model.
nr_layers (int = 3) – Number of hidden layers of the model.
skip_connections (bool = False) – If true then apply skip connections every 2 hidden layers.
weight_norm (bool = True) – Use weight norm on fully connected layers.
adaptive_activations (bool = False) – If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
bounds (List[Tuple[float, float]] = [(-1.0, 1.0), (-1.0, 1.0)]) – List of bounds for hash grid. Each element is a tuple of the upper and lower bounds.
nr_levels (int = 16) – Number of levels in the hash grid.
nr_features_per_level (int = 2) – Number of features from each hash grid.
log2_hashmap_size (int = 19) – Hash map size will be 2**log2_hashmap_size.
base_resolution (int = 2) – Base resolution of hash grids.
finest_resolution (int = 32) – Highest resolution of hash grids.
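How `base_resolution`, `finest_resolution` and `nr_levels` interact can be sketched with the geometric level spacing used by instant-NGP-style hash grids (illustrative only; rounding details may differ from the Modulus internals):

```python
import numpy as np

# Grid resolutions grow geometrically from base_resolution to
# finest_resolution across nr_levels levels.
base_resolution, finest_resolution, nr_levels = 2, 32, 16
growth = np.exp((np.log(finest_resolution) - np.log(base_resolution))
                / (nr_levels - 1))
resolutions = [int(round(base_resolution * growth ** level))
               for level in range(nr_levels)]
print(resolutions[0], resolutions[-1])  # 2 32
```

Each level contributes `nr_features_per_level` features looked up from a table of size `2**log2_hashmap_size`, and the concatenated features feed the fully-connected head.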
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.highway_fourier_net¶
- class modulus.models.highway_fourier_net.HighwayFourierNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False, transform_fourier_features: bool = True, project_fourier_features: bool = False)[source]¶
Bases:
Arch
A modified highway network using Fourier features. References: (1) Srivastava, R.K., Greff, K. and Schmidhuber, J., 2015. Training very deep networks. In Advances in neural information processing systems (pp. 2377-2385). (2) Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T. and Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. arXiv preprint arXiv:2006.10739.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
frequencies (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along each axis of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
frequencies_params (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
activation_fn (layers.Activation = layers.Activation.SILU) – Activation function used by network.
layer_size (int = 512) – Layer size for every hidden layer of the model.
nr_layers (int = 6) – Number of hidden layers of the model.
skip_connections (bool = False) – If true then apply skip connections every 2 hidden layers.
weight_norm (bool = True) – Use weight norm on fully connected layers.
adaptive_activations (bool = False) – If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
transform_fourier_features (bool = True) – If True use the Fourier features in the transform layer.
project_fourier_features (bool = False) – If True use the Fourier features in the projector layer.
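The highway mechanism from reference (1) can be sketched in NumPy (a minimal single-layer illustration; weight shapes and activations are assumptions, not the Modulus code):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def highway_layer(x, Wh, Wt):
    """One highway layer (Srivastava et al., 2015): a transform gate T
    blends the layer output H(x) with the unmodified input x."""
    h = np.tanh(x @ Wh)     # candidate transform H(x)
    t = sigmoid(x @ Wt)     # transform gate T(x)
    return t * h + (1.0 - t) * x  # carry behavior when t -> 0

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
y = highway_layer(x, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(y.shape)  # (5, 8)
```

When the gate saturates at zero the layer passes its input through unchanged, which is what makes very deep stacks trainable.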
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.modified_fourier_net¶
- class modulus.models.modified_fourier_net.ModifiedFourierNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)[source]¶
Bases:
Arch
A modified Fourier network which enables multiplicative interactions between the Fourier features and hidden layers. References: (1) Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T. and Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. arXiv preprint arXiv:2006.10739. (2) Wang, S., Teng, Y. and Perdikaris, P., 2020. Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv preprint arXiv:2001.04536.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
frequencies (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along each axis of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
frequencies_params (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
activation_fn (layers.Activation = layers.Activation.SILU) – Activation function used by network.
layer_size (int = 512) – Layer size for every hidden layer of the model.
nr_layers (int = 6) – Number of hidden layers of the model.
skip_connections (bool = False) – If true then apply skip connections every 2 hidden layers.
weight_norm (bool = True) – Use weight norm on fully connected layers.
adaptive_activations (bool = False) – If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
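The multiplicative interaction from reference (2) can be sketched in NumPy: two encoder features U and V are mixed into every hidden layer by a learned gate (a sketch of the modified-MLP idea under assumed activations and shapes, not the Modulus implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, width, nr_layers = 2, 8, 3
x = rng.normal(size=(5, d_in))

U = np.tanh(x @ rng.normal(size=(d_in, width)))  # encoder feature 1
V = np.tanh(x @ rng.normal(size=(d_in, width)))  # encoder feature 2
h = np.tanh(x @ rng.normal(size=(d_in, width)))  # first hidden state
for _ in range(nr_layers):
    z = np.tanh(h @ rng.normal(size=(width, width)))
    h = (1.0 - z) * U + z * V  # multiplicative mixing at every layer
print(h.shape)  # (5, 8)
```

In the actual architecture the encoder features come from the Fourier encoding of the inputs, so every hidden layer retains direct multiplicative access to them.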
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.moving_time_window¶
- class modulus.models.moving_time_window.MovingTimeWindowArch(arch: Arch, window_size: float)[source]¶
Bases:
Arch
Moving time window model that keeps track of the current time window and the previous window.
- Parameters
arch (Arch) – Modulus architecture to use for moving time window.
window_size (float) – Size of the time window. This will be used to slide the window forward every iteration.
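The window bookkeeping can be sketched in a few lines of plain Python (illustrative only; the real class also carries the previous window’s model state forward as the next initial condition):

```python
# Sketch: the window advances by window_size each iteration.
window_size = 0.5
t_start = 0.0
windows = []
for _ in range(4):  # four consecutive training windows
    windows.append((t_start, t_start + window_size))
    t_start += window_size  # slide the window forward
print(windows)  # [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]
```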
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.multiplicative_filter_net¶
- class modulus.models.multiplicative_filter_net.MultiplicativeFilterNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, activation_fn=Activation.IDENTITY, filter_type: Union[FilterType, str] = FilterType.FOURIER, weight_norm: bool = True, input_scale: float = 10.0, gabor_alpha: float = 6.0, gabor_beta: float = 1.0, normalization: Optional[Dict[str, Tuple[float, float]]] = None)[source]¶
Bases:
Arch
Multiplicative filter network. Reference: Fathony, R., Sahu, A.K., Willmott, D. and Kolter, J.Z., Multiplicative Filter Networks, ICLR 2021.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
layer_size (int = 512) – Layer size for every hidden layer of the model.
nr_layers (int = 6) – Number of hidden layers of the model.
skip_connections (bool = False) – If true then apply skip connections every 2 hidden layers.
activation_fn (layers.Activation = layers.Activation.IDENTITY) – Activation function used by network.
filter_type (FilterType = FilterType.FOURIER) – Filter type for multiplicative filter network, (Fourier or Gabor).
weight_norm (bool = True) – Use weight norm on fully connected layers.
input_scale (float = 10.0) – Scale inputs for multiplicative filters.
gabor_alpha (float = 6.0) – Alpha value for Gabor filter.
gabor_beta (float = 1.0) – Beta value for Gabor filter.
normalization (Optional[Dict[str, Tuple[float, float]]] = None) – Normalization of input to network.
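The two filter types can be sketched in NumPy: a Fourier filter is a pure sinusoid, while a Gabor filter multiplies the sinusoid by a Gaussian envelope controlled by parameters like `gabor_alpha`/`gabor_beta`, making it spatially localized (illustrative only; not the Modulus implementation):

```python
import numpy as np

def fourier_filter(x, omega, phi):
    """Sinusoidal filter: sin(x . omega + phi)."""
    return np.sin(x @ omega + phi)

def gabor_filter(x, omega, phi, mu, gamma):
    """Sinusoid damped by a Gaussian envelope centered at mu."""
    envelope = np.exp(-0.5 * gamma *
                      np.sum((x - mu) ** 2, axis=-1, keepdims=True))
    return envelope * np.sin(x @ omega + phi)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
omega, phi = rng.normal(size=(2, 8)), rng.normal(size=8)
mu, gamma = rng.normal(size=2), 1.0
f = fourier_filter(x, omega, phi)
g = gabor_filter(x, omega, phi, mu, gamma)
print(f.shape, g.shape)  # (5, 8) (5, 8)
```

In the full network each layer output is multiplied element-wise by a fresh filter of the chosen type, which is what makes the representation “multiplicative.”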
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.multiscale_fourier_net¶
- class modulus.models.multiscale_fourier_net.MultiscaleFourierNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)[source]¶
Bases:
Arch
Multi-scale Fourier network.
References
1. Sifan Wang, Hanwen Wang, Paris Perdikaris, On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks, Computer Methods in Applied Mechanics and Engineering, Volume 384, 2021.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
frequencies (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along each axis of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
frequencies_params (Tuple[str, List[float]] = ("axis", [i for i in range(10)])) – Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
activation_fn (layers.Activation = layers.Activation.SILU) – Activation function used by network.
layer_size (int = 512) – Layer size for every hidden layer of the model.
nr_layers (int = 6) – Number of hidden layers of the model.
skip_connections (bool = False) – If true then apply skip connections every 2 hidden layers.
weight_norm (bool = True) – Use weight norm on fully connected layers.
adaptive_activations (bool = False) – If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.pix2pix¶
- class modulus.models.pix2pix.Pix2PixArch(input_keys: List[Key], output_keys: List[Key], dimension: int, detach_keys: List[Key] = [], conv_layer_size: int = 64, n_downsampling: int = 3, n_blocks: int = 3, scaling_factor: int = 1, activation_fn: Activation = Activation.RELU, batch_norm: bool = False, padding_type='reflect')[source]¶
Bases:
Arch
Convolutional encoder-decoder based on pix2pix generator models.
Note
The pix2pix architecture supports options for 1D, 2D and 3D fields which can be controlled using the dimension parameter.
- Parameters
input_keys (List[Key]) – Input key list. The key dimension size should equal the variable’s channel dimension.
output_keys (List[Key]) – Output key list. The key dimension size should equal the variable’s channel dimension.
dimension (int) – Model dimensionality (supports 1, 2, 3).
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
conv_layer_size (int, optional) – Latent channel size after first convolution, by default 64
n_downsampling (int, optional) – Number of downsampling/upsampling blocks, by default 3
n_blocks (int, optional) – Number of residual blocks in middle of model, by default 3
scaling_factor (int, optional) – Scaling factor to increase the output feature size compared to the input (1, 2, 4, or 8), by default 1
activation_fn (Activation, optional) – Activation function, by default Activation.RELU
batch_norm (bool, optional) – Batch normalization, by default False
padding_type (str, optional) – Padding type (‘constant’, ‘reflect’, ‘replicate’ or ‘circular’), by default “reflect”
Variable Shape
Input variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Output variable tensor shape:
1D: \([N, size, W]\)
2D: \([N, size, H, W]\)
3D: \([N, size, D, H, W]\)
Note
Reference: Isola, Phillip, et al. “Image-to-Image Translation with Conditional Adversarial Networks.” Conference on Computer Vision and Pattern Recognition, 2017. https://arxiv.org/abs/1611.07004
Reference: Wang, Ting-Chun, et al. “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs.” Conference on Computer Vision and Pattern Recognition, 2018. https://arxiv.org/abs/1711.11585
Note
Based on the implementation: https://github.com/NVIDIA/pix2pixHD
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.radial_basis¶
- class modulus.models.radial_basis.RadialBasisArch(input_keys: List[Key], output_keys: List[Key], bounds: Dict[str, List[float]], detach_keys: List[Key] = [], nr_centers: int = 128, sigma: float = 0.1)[source]¶
Bases:
Arch
Radial Basis Neural Network.
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
bounds (Dict[str, List[float]]) – Bounds in which to randomly generate the radial basis functions.
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
nr_centers (int = 128) – Number of radial basis functions to use.
sigma (float = 0.1) – Sigma in radial basis kernel.
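The role of `bounds`, `nr_centers` and `sigma` can be sketched in NumPy: Gaussian bumps of width `sigma` are placed randomly inside the bounds and combined linearly (illustrative only; not the Modulus implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
nr_centers, sigma = 128, 0.1
bounds = np.array([[0.0, 1.0], [0.0, 1.0]])  # per-input bounds

# Random centers inside the bounds, one Gaussian bump per center
centers = rng.uniform(bounds[:, 0], bounds[:, 1], size=(nr_centers, 2))
x = rng.uniform(0.0, 1.0, size=(5, 2))
d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
phi = np.exp(-d2 / (2.0 * sigma ** 2))  # radial basis features
w = rng.normal(size=(nr_centers, 1))    # linear output layer
y = phi @ w
print(phi.shape, y.shape)  # (5, 128) (5, 1)
```

Smaller `sigma` gives more localized basis functions, so more centers are typically needed to cover the same domain.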
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.siren¶
- class modulus.models.siren.SirenArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], layer_size: int = 512, nr_layers: int = 6, first_omega: float = 30.0, omega: float = 30.0, normalization: Optional[Dict[str, Tuple[float, float]]] = None)[source]¶
Bases:
Arch
Sinusoidal Representation Network (SIREN).
- Parameters
input_keys (List[Key]) – Input key list.
output_keys (List[Key]) – Output key list.
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
layer_size (int, optional) – Layer size for every hidden layer of the model, by default 512
nr_layers (int, optional) – Number of hidden layers of the model, by default 6
first_omega (float, optional) – Scales first weight matrix by this factor, by default 30
omega (float, optional) – Scales the weight matrix of all hidden layers by this factor, by default 30
normalization (Dict[str, Tuple[float, float]], optional) – Normalization of input to network, by default None
Variable Shape
Input variable tensor shape: \([N, size]\)
Output variable tensor shape: \([N, size]\)
Example
Siren model (2 -> 64 -> 64 -> 2)
>>> arch = .siren.SirenArch(
>>>     [Key("x", size=2)],
>>>     [Key("y", size=2)],
>>>     layer_size=64,
>>>     nr_layers=2)
>>> model = arch.make_node()
>>> input = {"x": torch.randn(64, 2)}
>>> output = model.evaluate(input)
Note
Reference: Sitzmann, Vincent, et al. Implicit Neural Representations with Periodic Activation Functions. https://arxiv.org/abs/2006.09661.
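How `first_omega` and `omega` enter a SIREN layer can be sketched in NumPy, following the initialization scheme from Sitzmann et al. (a sketch under assumed conventions, not the Modulus implementation):

```python
import numpy as np

def siren_layer(x, fan_in, fan_out, omega, rng):
    """Sketch of one SIREN layer: sin(omega * (W x + b)), with weights
    drawn from U(-sqrt(6/fan_in)/omega, sqrt(6/fan_in)/omega)."""
    bound = np.sqrt(6.0 / fan_in) / omega
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = rng.uniform(-bound, bound, size=fan_out)
    return np.sin(omega * (x @ W + b))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(64, 2))
h = siren_layer(x, 2, 64, omega=30.0, rng=rng)
print(h.shape)  # (64, 64)
```

The scaled init keeps the pre-activation distribution stable through depth, which is why `omega` appears both in the activation and in the weight bounds.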
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
models.super_res_net¶
- class modulus.models.super_res_net.SRResNetArch(input_keys: List[Key], output_keys: List[Key], detach_keys: List[Key] = [], large_kernel_size: int = 7, small_kernel_size: int = 3, conv_layer_size: int = 32, n_resid_blocks: int = 8, scaling_factor: int = 8, activation_fn: Activation = Activation.PRELU)[source]¶
Bases:
Arch
3D super-resolution network.
Based on the implementation: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution
- Parameters
input_keys (List[Key]) – Input key list
output_keys (List[Key]) – Output key list
detach_keys (List[Key], optional) – List of keys to detach gradients, by default []
large_kernel_size (int, optional) – Convolutional kernel size for the first and last convolutions, by default 7
small_kernel_size (int, optional) – Convolutional kernel size for the internal convolutions, by default 3
conv_layer_size (int, optional) – Latent channel size, by default 32
n_resid_blocks (int, optional) – Number of residual blocks, by default 8
scaling_factor (int, optional) – Scaling factor to increase the output feature size compared to the input (2, 4, or 8), by default 8
activation_fn (Activation, optional) – Activation function, by default Activation.PRELU
- forward(in_vars: Dict[str, Tensor]) Dict[str, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.