- class modulus.architecture.afno.AFNOArch(input_keys: typing.List[modulus.key.Key], output_keys: typing.List[modulus.key.Key], img_shape: Tuple[int, int], detach_keys: typing.List[modulus.key.Key] = [], patch_size: int = 16, embed_dim: int = 256, depth: int = 4, num_blocks: int = 4)
Bases:
modulus.arch.Arch
Adaptive Fourier neural operator (AFNO) model
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- img_shapeTuple[int, int]
Input image dimensions
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- patch_sizeint, optional
Size of image patches, by default 16
- embed_dimint, optional
Embedded channel size, by default 256
- depthint, optional
Number of AFNO layers, by default 4
- num_blocksint, optional
Number of blocks in the frequency weight matrices, by default 4
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
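A minimal usage sketch (the key names, channel sizes, and grid shape below are illustrative assumptions, not values from the documentation):

from modulus.key import Key
from modulus.architecture.afno import AFNOArch

# AFNO for a single-channel 240 x 240 input field mapped to a single-channel output
afno = AFNOArch(
    input_keys=[Key("coeff", size=1)],
    output_keys=[Key("sol", size=1)],
    img_shape=(240, 240),
    patch_size=16,
    embed_dim=256,
    depth=4,
    num_blocks=4,
)
afno_node = afno.make_node(name="afno_net")  # node for unrolling with a Modulus Graph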
- class modulus.architecture.deeponet.DeepONetArch_Data(branch_net: modulus.arch.Arch, trunk_net: modulus.arch.Arch, input_keys: Optional[List[modulus.key.Key]] = None, output_keys: Optional[List[modulus.key.Key]] = None, periodicity: Optional[Dict[str, Tuple[float, float]]] = None, detach_keys: List[modulus.key.Key] = [])
Bases:
modulus.arch.Arch
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- make_node(name: str, jit: bool = False, optimize: bool = True)
Makes neural network node for unrolling with Modulus Graph.
- namestr
This will be used as the name of created node.
- jitbool
If true then compile with jit, see https://pytorch.org/docs/stable/jit.html.
- optimizebool
If true then treat parameters as optimizable.
Here is a simple example of creating a node from the fully connected network:
>>> from modulus.architecture.fully_connected import FullyConnectedArch
>>> from modulus.key import Key
>>> fc_arch = FullyConnectedArch([Key('x'), Key('y')], [Key('u')])
>>> fc_node = fc_arch.make_node(name="fc_node")
>>> print(fc_node)
node: fc_node
inputs: [x, y]
derivatives: []
outputs: [u]
optimize: True
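A hedged sketch of assembling a data-driven DeepONet from two fully connected sub-networks (the key names and sizes, including the branch/trunk output keys, are assumptions for illustration):

from modulus.key import Key
from modulus.architecture.fully_connected import FullyConnectedArch
from modulus.architecture.deeponet import DeepONetArch_Data

# branch net encodes the sampled input function, trunk net encodes the query coordinate
branch_net = FullyConnectedArch(input_keys=[Key("a", size=100)], output_keys=[Key("branch", size=128)])
trunk_net = FullyConnectedArch(input_keys=[Key("x")], output_keys=[Key("trunk", size=128)])

deeponet = DeepONetArch_Data(
    branch_net=branch_net,
    trunk_net=trunk_net,
    output_keys=[Key("u")],
)
deeponet_node = deeponet.make_node(name="deeponet")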
- class modulus.architecture.deeponet.DeepONetArch_Physics(branch_net: modulus.arch.Arch, trunk_net: modulus.arch.Arch, batch_size: int, input_keys: Optional[List[modulus.key.Key]] = None, output_keys: Optional[List[modulus.key.Key]] = None, periodicity: Optional[Dict[str, Tuple[float, float]]] = None, detach_keys: List[modulus.key.Key] = [])
Bases:
modulus.arch.Arch
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- make_node(name: str, jit: bool = False, optimize: bool = True)
Makes neural network node for unrolling with Modulus Graph.
- namestr
This will be used as the name of created node.
- jitbool
If true then compile with jit, see https://pytorch.org/docs/stable/jit.html.
- optimizebool
If true then treat parameters as optimizable.
Here is a simple example of creating a node from the fully connected network:
>>> from modulus.architecture.fully_connected import FullyConnectedArch
>>> from modulus.key import Key
>>> fc_arch = FullyConnectedArch([Key('x'), Key('y')], [Key('u')])
>>> fc_node = fc_arch.make_node(name="fc_node")
>>> print(fc_node)
node: fc_node
inputs: [x, y]
derivatives: []
outputs: [u]
optimize: True
- class modulus.architecture.dgm.DGMArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], layer_size: int = 512, nr_layers: int = 6, activation_fn=Activation.SIN, adaptive_activations: bool = False, weight_norm: bool = True)
Bases:
modulus.arch.Arch
A variation of the fully connected network. Reference: Sirignano, J. and Spiliopoulos, K., 2018. DGM: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 375, pp.1339-1364.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- activation_fnlayers.Activation = layers.Activation.SIN
Activation function used by network.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- weight_normbool = True
Use weight norm on fully connected layers.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
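A minimal usage sketch (key names are placeholders):

from modulus.key import Key
from modulus.architecture.dgm import DGMArch
from modulus.architecture.layers import Activation

dgm = DGMArch(
    input_keys=[Key("x"), Key("t")],
    output_keys=[Key("u")],
    layer_size=512,
    nr_layers=6,
    activation_fn=Activation.SIN,
)
dgm_node = dgm.make_node(name="dgm_net")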
- class modulus.architecture.fno.FNOArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], dimension: int, detach_keys: List[modulus.key.Key] = [], nr_fno_layers: int = 4, fno_layer_size: int = 32, fno_modes: Union[int, List[int]] = 16, padding: int = 8, padding_type: str = 'constant', output_fc_layer_sizes: List[int] = [16], activation_fn: modulus.architecture.layers.Activation = Activation.GELU, coord_features: bool = True, domain_length: List[float] = [1.0, 1.0], squeeze_latent_size: Optional[int] = None)
Bases:
modulus.arch.Arch
Fourier neural operator (FNO) model. Supports 1D, 2D and 3D.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- dimensionint
Model dimensionality (supports 1, 2, 3).
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- nr_fno_layersint, optional
Number of spectral convolution layers, by default 4
- fno_layer_sizeint, optional
Size of latent variables inside spectral convolutions, by default 32
- fno_modesUnion[int, List[int]], optional
Number of Fourier modes with learnable weights, by default 16
- paddingint, optional
Padding size for FFT calculations, by default 8
- padding_typestr, optional
Padding type for FFT calculations (‘constant’, ‘reflect’, ‘replicate’ or ‘circular’), by default “constant”
- output_fc_layer_sizesList[int], optional
List of point-wise fully connected decoder layers, by default [16]
- activation_fnActivation, optional
Activation function, by default Activation.GELU
- coord_featuresbool, optional
Use coordinate meshgrid as additional input feature, by default True
- domain_lengthList[float], optional
List defining the rectangular domain size, by default [1.0, 1.0]
- class DecoderArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], derivative_keys: List[modulus.key.Key], domain_length: List[int], dcoder_model: torch.nn.modules.module.Module)
Bases:
modulus.arch.Arch
Decoder Arch for constructing a separate node
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class EncoderArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], encoder_model: torch.nn.modules.module.Module)
Bases:
modulus.arch.Arch
Encoder Arch for constructing a separate node
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- make_nodes(name: str, jit: bool = False, optimize: bool = True) → List[modulus.node.Node]
Returns the Fourier Neural Operator as two separate spectral encoder and decoder nodes. This should be used for PINO, when “exact” gradient methods are to be used.
- namestr
This will be used as the name of created node.
- jitbool
If true then compile with jit, see https://pytorch.org/docs/stable/jit.html.
- optimizebool
If true then treat parameters as optimizable.
- List[Node]
[Encoder nodes, Decoder nodes]
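A hedged sketch showing both ways of creating nodes from an FNO (key names are placeholders):

from modulus.key import Key
from modulus.architecture.fno import FNOArch

fno = FNOArch(
    input_keys=[Key("coeff")],
    output_keys=[Key("sol")],
    dimension=2,
    nr_fno_layers=4,
    fno_modes=16,
)

# single node, e.g. for purely data-driven training
fno_node = fno.make_node(name="fno_net")

# separate spectral encoder and decoder nodes, e.g. for PINO with "exact" gradients
fno_nodes = fno.make_nodes(name="fno_net")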
- class modulus.architecture.fourier_net.FourierNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)
Bases:
modulus.arch.Arch
Fourier Encoding Fully Connected Neural Network. This network uses a Fourier encoding of inputs.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- frequenciesTuple[str, List[float]] = (“axis”, [i for i in range(10)])
A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along the axes of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
- frequencies_paramsTuple[str, List[float]] = (“axis”, [i for i in range(10)])
Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- weight_normbool = True
Use weight norm on fully connected layers.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
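A minimal usage sketch using the default ‘axis’ frequency encoding (key names and the frequency list are illustrative):

from modulus.key import Key
from modulus.architecture.fourier_net import FourierNetArch

fourier_net = FourierNetArch(
    input_keys=[Key("x"), Key("y"), Key("t")],
    output_keys=[Key("u")],
    frequencies=("axis", [0, 1, 2, 3, 4, 5]),  # Fourier encoding applied to x, y, z, t inputs
    layer_size=512,
    nr_layers=6,
)
fourier_node = fourier_net.make_node(name="fourier_net")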
- class modulus.architecture.fully_connected.FullyConnectedArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], periodicity: Optional[Dict[str, Tuple[float, float]]] = None, detach_keys: List[modulus.key.Key] = [], layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, activation_fn=Activation.SILU, adaptive_activations: bool = False, weight_norm: bool = True)
Bases:
modulus.arch.Arch
Fully Connected Neural Network.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- periodicityUnion[Dict[str, Tuple[float, float]], None] = None
Dictionary of tuples that makes the model give periodic predictions on the given bounds. For example, periodicity={‘x’: (0, 1)} would make the network give periodic results for x on the interval (0, 1).
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- weight_normbool = True
Use weight norm on fully connected layers.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
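A minimal usage sketch that exercises the periodicity option described above (key names are placeholders):

from modulus.key import Key
from modulus.architecture.fully_connected import FullyConnectedArch

fc = FullyConnectedArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
    periodicity={"x": (0.0, 1.0)},  # predictions periodic in x on the interval (0, 1)
    layer_size=512,
    nr_layers=6,
    skip_connections=False,
)
fc_node = fc.make_node(name="fc_net")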
- class modulus.architecture.hash_encoding_net.MultiresolutionHashNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], activation_fn=Activation.SILU, layer_size: int = 64, nr_layers: int = 3, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False, bounds: List[Tuple[float, float]] = [(- 1.0, 1.0), (- 1.0, 1.0)], nr_levels: int = 16, nr_features_per_level: int = 2, log2_hashmap_size: int = 19, base_resolution: int = 2, finest_resolution: int = 32)
Bases:
modulus.arch.Arch
Hash encoding network as seen in:
Müller, Thomas, et al. “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.” arXiv preprint arXiv:2201.05989 (2022). A reference PyTorch implementation can be found at https://github.com/yashbhalgat/HashNeRF-pytorch
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- layer_sizeint = 64
Layer size for every hidden layer of the model.
- nr_layersint = 3
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- weight_normbool = True
Use weight norm on fully connected layers.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- boundsList[Tuple[float, float]] = [(-1.0, 1.0), (-1.0, 1.0)]
List of bounds for hash grid. Each element is a tuple of the upper and lower bounds.
- nr_levelsint = 16
Number of levels in the hash grid.
- nr_features_per_levelint = 2
Number of features from each hash grid.
- log2_hashmap_sizeint = 19
Hash map size will be 2**log2_hashmap_size.
- base_resolutionint = 2
Base resolution of hash grids.
- finest_resolutionint = 32
Highest resolution of hash grids.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
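A minimal usage sketch for a 2D problem (key names and bounds are illustrative):

from modulus.key import Key
from modulus.architecture.hash_encoding_net import MultiresolutionHashNetArch

hash_net = MultiresolutionHashNetArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
    bounds=[(-1.0, 1.0), (-1.0, 1.0)],  # one (lower, upper) pair per spatial input
    nr_levels=16,
    nr_features_per_level=2,
    log2_hashmap_size=19,
    base_resolution=2,
    finest_resolution=32,
)
hash_node = hash_net.make_node(name="hash_net")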
- class modulus.architecture.highway_fourier_net.HighwayFourierNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False, transform_fourier_features: bool = True, project_fourier_features: bool = False)
Bases:
modulus.arch.Arch
A modified highway network using Fourier features. References: (1) Srivastava, R.K., Greff, K. and Schmidhuber, J., 2015. Training very deep networks. In Advances in neural information processing systems (pp. 2377-2385). (2) Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T. and Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. arXiv preprint arXiv:2006.10739.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- frequenciesTuple[str, List[float]] = (“axis”, [i for i in range(10)])
A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along the axes of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
- frequencies_paramsTuple[str, List[float]] = (“axis”, [i for i in range(10)])
Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- weight_normbool = True
Use weight norm on fully connected layers.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- transform_fourier_featuresbool = True
If True use the Fourier features in the transform layer.
- project_fourier_featuresbool = False
If True use the Fourier features in the projector layer.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class modulus.architecture.modified_fourier_net.ModifiedFourierNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)
Bases:
modulus.arch.Arch
A modified Fourier Network which enables multiplicative interactions between the Fourier features and hidden layers. References: (1) Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T. and Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. arXiv preprint arXiv:2006.10739. (2) Wang, S., Teng, Y. and Perdikaris, P., 2020. Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv preprint arXiv:2001.04536.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- frequenciesTuple[str, List[float]] = (“axis”, [i for i in range(10)])
A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along the axes of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
- frequencies_paramsTuple[str, List[float]] = (“axis”, [i for i in range(10)])
Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- weight_normbool = True
Use weight norm on fully connected layers.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class modulus.architecture.moving_time_window.MovingTimeWindowArch(arch: modulus.arch.Arch, window_size: float)
Bases:
modulus.arch.Arch
Moving time window model that keeps track of the current time window and the previous window.
- archArch
Modulus architecture to use for moving time window.
- window_sizefloat
Size of the time window. This will be used to slide the window forward every iteration.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
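A hedged sketch of wrapping an existing architecture in a moving time window (the base network and key names are assumptions):

from modulus.key import Key
from modulus.architecture.fully_connected import FullyConnectedArch
from modulus.architecture.moving_time_window import MovingTimeWindowArch

base_net = FullyConnectedArch(input_keys=[Key("x"), Key("t")], output_keys=[Key("u")])
window_net = MovingTimeWindowArch(arch=base_net, window_size=1.0)  # window slides forward by 1.0 each iteration
window_node = window_net.make_node(name="time_window_net")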
- class modulus.architecture.multiplicative_filter_net.MultiplicativeFilterNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, activation_fn=Activation.IDENTITY, filter_type: modulus.architecture.multiplicative_filter_net.FilterType = FilterType.FOURIER, weight_norm: bool = True, input_scale: float = 10.0, gabor_alpha: float = 6.0, gabor_beta: float = 1.0, normalization: Optional[Dict[str, Tuple[float, float]]] = None)
Bases:
modulus.arch.Arch
Multiplicative Filter Net with Activations. Reference: Fathony, R., Sahu, A.K., Willmott, D. and Kolter, J.Z., Multiplicative Filter Networks, ICLR 2021.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- activation_fnlayers.Activation = layers.Activation.IDENTITY
Activation function used by network.
- filter_typeFilterType = FilterType.FOURIER
Filter type for multiplicative filter network, (Fourier or Gabor).
- weight_normbool = True
Use weight norm on fully connected layers.
- input_scalefloat = 10.0
Scale inputs for multiplicative filters.
- gabor_alphafloat = 6.0
Alpha value for Gabor filter.
- gabor_betafloat = 1.0
Beta value for Gabor filter.
- normalizationOptional[Dict[str, Tuple[float, float]]] = None
Normalization of input to network.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
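A minimal usage sketch with a Gabor filter (assuming the FilterType enum exposes a GABOR member alongside FOURIER; key names are placeholders):

from modulus.key import Key
from modulus.architecture.multiplicative_filter_net import FilterType, MultiplicativeFilterNetArch

mfn = MultiplicativeFilterNetArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
    filter_type=FilterType.GABOR,  # assumed enum member; FOURIER is the documented default
    input_scale=10.0,
    gabor_alpha=6.0,
    gabor_beta=1.0,
)
mfn_node = mfn.make_node(name="mfn_net")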
- class modulus.architecture.multiscale_fourier_net.MultiscaleFourierNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], frequencies=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), frequencies_params=('axis', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), activation_fn=Activation.SILU, layer_size: int = 512, nr_layers: int = 6, skip_connections: bool = False, weight_norm: bool = True, adaptive_activations: bool = False)
Bases:
modulus.arch.Arch
Multi-scale Fourier Net References:
1. Sifan Wang, Hanwen Wang, Paris Perdikaris, On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks, Computer Methods in Applied Mechanics and Engineering, Volume 384, 2021.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- frequenciesTuple[str, List[float]] = (“axis”, [i for i in range(10)])
A tuple that describes the Fourier encodings to use for any inputs in the list [‘x’, ‘y’, ‘z’, ‘t’]. The first element describes the type of frequency encoding, with options ‘gaussian’, ‘full’, ‘axis’, ‘diagonal’. ‘gaussian’ samples the frequencies of the Fourier series from a Gaussian. ‘axis’ samples along the axes of spectral space with the given list of frequencies. ‘diagonal’ samples along the diagonal of spectral space with the given list of frequencies. ‘full’ samples the entire spectral space for all combinations of frequencies in the given list.
- frequencies_paramsTuple[str, List[float]] = (“axis”, [i for i in range(10)])
Same as frequencies except these are used for encodings on any inputs not in the list [‘x’, ‘y’, ‘z’, ‘t’].
- activation_fnlayers.Activation = layers.Activation.SILU
Activation function used by network.
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- skip_connectionsbool = False
If true then apply skip connections every 2 hidden layers.
- weight_normbool = True
Use weight norm on fully connected layers.
- adaptive_activationsbool = False
If True then use an adaptive activation function as described here https://arxiv.org/abs/1906.01170.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class modulus.architecture.pix2pix.Pix2PixArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], dimension: int, detach_keys: List[modulus.key.Key] = [], conv_layer_size: int = 64, n_downsampling: int = 3, n_blocks: int = 3, scaling_factor: int = 1, batch_norm: bool = False, padding_type='reflect', activation_fn: modulus.architecture.layers.Activation = Activation.RELU)
Bases:
modulus.arch.Arch
Convolutional encoder-decoder based on pix2pix generator models. Supports 1D, 2D and 3D.
Isola, Phillip, et al. “Image-To-Image translation With conditional adversarial networks” Conference on Computer Vision and Pattern Recognition, 2017.
Wang, Ting-Chun, et al. “High-Resolution image synthesis and semantic manipulation with conditional GANs” Conference on Computer Vision and Pattern Recognition, 2018.
Based on the implementation: https://github.com/NVIDIA/pix2pixHD
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- dimensionint
Model dimensionality (supports 1, 2, 3).
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- conv_layer_sizeint, optional
Latent channel size after first convolution, by default 64
- n_downsamplingint, optional
Number of downsampling/upsampling blocks, by default 3
- n_blocksint, optional
Number of residual blocks in middle of model, by default 3
- scaling_factorint, optional
Scaling factor to increase the output feature size compared to the input (1, 2, 4, or 8), by default 1
- batch_normbool, optional
Batch normalization, by default False
- padding_typestr, optional
Padding type (‘constant’, ‘reflect’, ‘replicate’ or ‘circular’), by default “reflect”
- activation_fnActivation, optional
Activation function, by default Activation.RELU
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
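A minimal usage sketch for a 2D encoder-decoder (key names and channel sizes are illustrative):

from modulus.key import Key
from modulus.architecture.pix2pix import Pix2PixArch

pix2pix = Pix2PixArch(
    input_keys=[Key("coeff", size=1)],
    output_keys=[Key("sol", size=1)],
    dimension=2,
    conv_layer_size=64,
    n_downsampling=3,
    n_blocks=3,
    scaling_factor=2,  # output feature size is 2x the input
)
pix2pix_node = pix2pix.make_node(name="pix2pix_net")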
- class modulus.architecture.radial_basis.RadialBasisArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], bounds: Dict[str, List[float]], detach_keys: List[modulus.key.Key] = [], nr_centers: int = 128, sigma: float = 0.1)
Bases:
modulus.arch.Arch
Radial Basis Neural Network.
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- boundsDict[str, List[float]]
Bounds to randomly generate radial basis functions in.
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- nr_centersint = 128
Number of radial basis functions to use.
- sigmafloat = 0.1
Sigma in radial basis kernel.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
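A minimal usage sketch (key names and bounds are placeholders):

from modulus.key import Key
from modulus.architecture.radial_basis import RadialBasisArch

rbf_net = RadialBasisArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
    bounds={"x": [0.0, 1.0], "y": [0.0, 1.0]},  # region in which centers are randomly placed
    nr_centers=128,
    sigma=0.1,
)
rbf_node = rbf_net.make_node(name="rbf_net")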
- class modulus.architecture.siren.SirenArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], layer_size: int = 512, nr_layers: int = 6, first_omega: float = 30.0, omega: float = 30.0, normalization: Optional[Dict[str, Tuple[float, float]]] = None)
Bases:
modulus.arch.Arch
Siren fully connected network. Reference: Sitzmann, Vincent, et al. Implicit Neural Representations with Periodic Activation Functions. arXiv preprint arXiv:2006.09661 (2020).
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- layer_sizeint = 512
Layer size for every hidden layer of the model.
- nr_layersint = 6
Number of hidden layers of the model.
- first_omegafloat = 30.0
Scales first weight matrix by this factor. Refer to paper for more details.
- omegafloat = 30.0
Scales the weight matrix of all hidden layers by this factor. Refer to paper for more details.
- normalizationOptional[Dict[str, Tuple[float, float]]] = None
Normalization of input to network.
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
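A minimal usage sketch (key names are placeholders; the normalization dict is assumed to map each input name to its (lower, upper) range):

from modulus.key import Key
from modulus.architecture.siren import SirenArch

siren = SirenArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
    first_omega=30.0,
    omega=30.0,
    normalization={"x": (-1.0, 1.0), "y": (-1.0, 1.0)},
)
siren_node = siren.make_node(name="siren_net")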
- class modulus.architecture.super_res_net.SRResNetArch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], large_kernel_size: int = 7, small_kernel_size: int = 3, conv_layer_size: int = 32, n_resid_blocks: int = 8, scaling_factor: int = 8, activation_fn: modulus.architecture.layers.Activation = Activation.PRELU)
Bases:
modulus.arch.Arch
3D super resolution network
Based on the implementation: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Super-Resolution
- input_keysList[Key]
Input key list
- output_keysList[Key]
Output key list
- detach_keysList[Key], optional
List of keys to detach gradients, by default []
- large_kernel_sizeint, optional
Convolutional kernel size for the first and last convolutions, by default 7
- small_kernel_sizeint, optional
Convolutional kernel size for internal convolutions, by default 3
- conv_layer_sizeint, optional
Latent channel size, by default 32
- n_resid_blocksint, optional
Number of residual blocks, by default 8
- scaling_factorint, optional
Scaling factor to increase the output feature size compared to the input (2, 4, or 8), by default 8
- activation_fnActivation, optional
Activation function, by default Activation.PRELU
- forward(in_vars: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
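A minimal usage sketch for 8x super resolution of a three-channel volume (key names and sizes are illustrative):

from modulus.key import Key
from modulus.architecture.super_res_net import SRResNetArch

sr_net = SRResNetArch(
    input_keys=[Key("U_lr", size=3)],
    output_keys=[Key("U", size=3)],
    n_resid_blocks=8,
    scaling_factor=8,  # output resolution is 8x the input in each dimension
)
sr_node = sr_net.make_node(name="super_res_net")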