modulus

modulus.aggregator

class modulus.aggregator.Aggregator(params, num_losses, weights)

Bases: torch.nn.modules.module.Module

Base class for loss aggregators

class modulus.aggregator.GradNorm(params, num_losses, alpha=1.0, weights=None)

Bases: modulus.aggregator.Aggregator

GradNorm for loss aggregation. Reference: “Chen, Z., Badrinarayanan, V., Lee, C.Y. and Rabinovich, A., 2018, July. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning (pp. 794-803). PMLR.”

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Weights and aggregates the losses using the GradNorm algorithm

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.
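
As a hedged illustration of the interface above, the sketch below drives GradNorm directly; the toy network, the loss names, and passing model.parameters() as params are assumptions for this example, not part of the documented API.

import torch
from modulus.aggregator import GradNorm

# Toy network standing in for a Modulus architecture (assumption).
model = torch.nn.Linear(2, 1)

# Two hypothetical loss terms that depend on the model parameters.
x = torch.rand(8, 2)
losses = {
    "data": ((model(x) - 1.0) ** 2).mean(),
    "residual": (model(x) ** 2).mean(),
}

aggregator = GradNorm(model.parameters(), num_losses=len(losses), alpha=1.0)
total_loss = aggregator(losses, step=0)  # single weighted scalar for backpropagation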

class modulus.aggregator.HomoscedasticUncertainty(params, num_losses, weights=None)

Bases: modulus.aggregator.Aggregator

Homoscedastic task uncertainty for loss aggregation. Reference: “Kendall, A., Gal, Y. and Cipolla, R., 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7482-7491).”

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Weights and aggregates the losses using homoscedastic task uncertainty

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.

class modulus.aggregator.LRAnnealing(params, num_losses, update_freq=1, alpha=0.01, ref_key=None, eps=1e-08, weights=None)

Bases: modulus.aggregator.Aggregator

Learning rate annealing for loss aggregation. References: “Wang, S., Teng, Y. and Perdikaris, P., 2020. Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv preprint arXiv:2001.04536.”, and “Jin, X., Cai, S., Li, H. and Karniadakis, G.E., 2021. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. Journal of Computational Physics, 426, p.109951.”

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Weights and aggregates the losses using the learning rate annealing algorithm

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.
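
The sketch below is a minimal, hedged example of constructing an LRAnnealing aggregator; the network and loss names are made up, and ref_key="residual" assumes ref_key names the loss against which the others are balanced.

import torch
from modulus.aggregator import LRAnnealing

model = torch.nn.Linear(2, 1)  # stand-in for a Modulus architecture (assumption)
x = torch.rand(8, 2)
losses = {
    "residual": (model(x) ** 2).mean(),          # hypothetical PDE residual loss
    "boundary": ((model(x) - 1.0) ** 2).mean(),  # hypothetical boundary loss
}

# update_freq and alpha follow the signature above; ref_key is an assumed loss name.
aggregator = LRAnnealing(
    model.parameters(),
    num_losses=len(losses),
    update_freq=1,
    alpha=0.01,
    ref_key="residual",
)
loss = aggregator(losses, step=0)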

class modulus.aggregator.NTK(run_per_step: int = 1000, save_name: Optional[str] = None)

Bases: torch.nn.modules.module.Module

forward(constraints, ntk_weights, step)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class modulus.aggregator.Relobralo(params, num_losses, alpha=0.95, beta=0.99, tau=1.0, eps=1e-08, weights=None)

Bases: modulus.aggregator.Aggregator

Relative loss balancing with random lookback. Reference: “Bischof, R. and Kraus, M., 2021. Multi-Objective Loss Balancing for Physics-Informed Deep Learning. arXiv preprint arXiv:2110.09813.”

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Weights and aggregates the losses using the ReLoBRaLo algorithm

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.
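
Because ReLoBRaLo weights each loss relative to its history, forward is typically called with an increasing step counter. The loop below is a hedged sketch; the model, loss terms, and optimizer are illustrative assumptions.

import torch
from modulus.aggregator import Relobralo

model = torch.nn.Linear(2, 1)  # stand-in network (assumption)
aggregator = Relobralo(model.parameters(), num_losses=2, alpha=0.95, beta=0.99, tau=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(3):
    x = torch.rand(8, 2)
    losses = {
        "data": ((model(x) - 1.0) ** 2).mean(),
        "residual": (model(x) ** 2).mean(),
    }
    loss = aggregator(losses, step)  # relative balancing with random lookback
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()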

class modulus.aggregator.SoftAdapt(params, num_losses, eps=1e-08, weights=None)

Bases: modulus.aggregator.Aggregator

SoftAdapt for loss aggregation. Reference: “Heydari, A.A., Thompson, C.A. and Mehmood, A., 2019. Softadapt: Techniques for adaptive loss weighting of neural networks with multi-part loss functions. arXiv preprint arXiv:1912.12355.”

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Weights and aggregates the losses using the original variant of the SoftAdapt algorithm

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.

class modulus.aggregator.Sum(params, num_losses, weights=None)

Bases: modulus.aggregator.Aggregator

Loss aggregation by summation

forward(losses: Dict[str, torch.Tensor], step: int) → torch.Tensor

Aggregates the losses by summation

losses : Dict[str, torch.Tensor]

A dictionary of losses.

step : int

Optimizer step.

loss : torch.Tensor

Aggregated loss.
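
Sum is the simplest aggregator and a useful baseline. A hedged sketch follows; the network and loss tensors are made up for illustration.

import torch
from modulus.aggregator import Sum

model = torch.nn.Linear(2, 1)  # stand-in network (assumption)
x = torch.rand(4, 2)
losses = {
    "data": ((model(x) - 1.0) ** 2).mean(),
    "residual": (model(x) ** 2).mean(),
}

aggregator = Sum(model.parameters(), num_losses=len(losses))
total = aggregator(losses, step=0)  # the (optionally weighted) sum of both terms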

modulus.arch

class modulus.arch.Arch(input_keys: List[modulus.key.Key], output_keys: List[modulus.key.Key], detach_keys: List[modulus.key.Key] = [], periodicity: Optional[Dict[str, Tuple[float, float]]] = None)

Bases: torch.nn.modules.module.Module

Base class for all neural networks

make_node(name: str, jit: bool = True, optimize: bool = True)

Makes neural network node for unrolling with Modulus Graph.

name : str

This will be used as the name of the created node.

jit : bool

If true then compile with jit (https://pytorch.org/docs/stable/jit.html).

optimize : bool

If true then treat parameters as optimizable.

Here is a simple example of creating a node from the fully connected network:

>>> from modulus.architecture.fully_connected import FullyConnectedArch
>>> from modulus.key import Key
>>> fc_arch = FullyConnectedArch([Key('x'), Key('y')], [Key('u')])
>>> fc_node = fc_arch.make_node(name="fc_node")
>>> print(fc_node)
node: fc_node
inputs: [x, y]
derivatives: []
outputs: [u]
optimize: True

modulus.constants

Constant values used by Modulus.

modulus.constraint

Constraint classes

class modulus.constraint.Constraint

Bases: object

Base class for all constraints

modulus.derivatives

class modulus.derivatives.Derivative(bwd_derivative_dict: Dict[modulus.key.Key, List[modulus.key.Key]])

Bases: torch.nn.modules.module.Module

Module to compute derivatives using backward automatic differentiation

forward(input_var: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
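
A hedged sketch of computing du/dx with this module; mapping an output Key to the Keys it is differentiated against, and the u__x output name, follow the double-underscore derivative convention used elsewhere in these docs (see Node.from_sympy below) but are assumptions about this class's exact interface.

import torch
from modulus.key import Key
from modulus.derivatives import Derivative

# Assumed usage of bwd_derivative_dict: each output Key maps to the Keys
# it is differentiated with respect to.
deriv = Derivative({Key("u"): [Key("x")]})

x = torch.linspace(0.0, 1.0, 10, requires_grad=True).reshape(-1, 1)
u = x ** 2  # u is built from x, so backward AD can recover du/dx = 2x

out = deriv({"x": x, "u": u})
# expected to contain an entry such as out["u__x"] holding 2 * x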

modulus.graph

Helper functions for unrolling computational graph

class modulus.graph.Graph(nodes: List[modulus.node.Node], invar: List[modulus.key.Key], req_names: List[modulus.key.Key], diff_nodes: List[modulus.node.Node] = [])

Bases: torch.nn.modules.module.Module

Torch Module that is constructed by unrolling a computational graph given desired inputs, outputs, and evaluatable nodes.

Here is a simple example of using Graph to unroll a two node graph:

>>> import torch
>>> from sympy import Symbol
>>> from modulus.node import Node
>>> from modulus.key import Key
>>> from modulus.graph import Graph
>>> node_1 = Node.from_sympy(Symbol('x') + Symbol('y'), 'u')
>>> node_2 = Node.from_sympy(Symbol('u') + 1.0, 'v')
>>> graph = Graph([node_1, node_2], [Key('x'), Key('y')], [Key('v')])
>>> graph.forward({'x': torch.tensor([1.0]), 'y': torch.tensor([2.0])})
{'v': tensor([4.])}

nodes : List[Node]

List of Modulus Nodes to unroll graph with.

invar : List[Key]

List of inputs to graph.

req_names : List[Key]

List of required outputs of graph.

diff_nodes : List[Node]

List of specialty nodes to compute derivatives. By default this is not needed.

forward(invar: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

modulus.key

Key

class modulus.key.Key(name, size=1, derivatives=[], base_unit=None, scale=(0.0, 1.0))

Bases: object

Class describing keys used for graph unroll. The most basic key is just a simple string; however, you can also add dimension information and information on how to scale inputs to networks.

name : str

String used to refer to the variable (e.g. ‘x’, ‘y’…).

size : int = 1

Dimension of variable.

derivatives : List = []

If given, this key holds a derivative with respect to the listed keys.

scale: (float, float)

Characteristic location and scale of quantity: used for normalisation.
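
A few hedged examples of constructing keys; the argument values are illustrative, and the derivatives usage follows the description above.

from modulus.key import Key

x = Key("x")                          # simple scalar input named "x"
u = Key("u", size=3)                  # three-component variable, e.g. a velocity vector
p = Key("p", scale=(101325.0, 50.0))  # quantity with an assumed characteristic location/scale
u_x = Key("u", derivatives=[Key("x")])  # key representing du/dx (assumed usage)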

modulus.loss

class modulus.loss.IntegralLossNorm(ord: int = 2)

Bases: modulus.loss.Loss

L-p loss function for integral data. Computes the p-th order loss of each output tensor.

ord : int

Order of the loss. For example, ord=2 would be the L2 loss.

forward(list_invar: List[Dict[str, torch.Tensor]], list_pred_outvar: List[Dict[str, torch.Tensor]], list_true_outvar: List[Dict[str, torch.Tensor]], list_lambda_weighting: List[Dict[str, torch.Tensor]], step: int) → Dict[str, torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class modulus.loss.Loss

Bases: torch.nn.modules.module.Module

Base class for all loss functions

forward(invar: Dict[str, torch.Tensor], pred_outvar: Dict[str, torch.Tensor], true_outvar: Dict[str, torch.Tensor], lambda_weighting: Dict[str, torch.Tensor], step: int) → Dict[str, torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class modulus.loss.PointwiseLossNorm(ord: int = 2)

Bases: modulus.loss.Loss

L-p loss function for pointwise data. Computes the p-th order loss of each output tensor.

ord : int

Order of the loss. For example, ord=2 would be the L2 loss.

forward(invar: Dict[str, torch.Tensor], pred_outvar: Dict[str, torch.Tensor], true_outvar: Dict[str, torch.Tensor], lambda_weighting: Dict[str, torch.Tensor], step: int) → Dict[str, torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
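
A hedged sketch of evaluating PointwiseLossNorm directly on made-up tensors. The dictionary contents, the per-point interpretation of lambda_weighting, and the "area" entry in invar (included because pointwise constraints in Modulus typically carry one) are assumptions.

import torch
from modulus.loss import PointwiseLossNorm

loss_fn = PointwiseLossNorm(ord=2)  # L2 loss

invar = {
    "x": torch.rand(16, 1),               # input coordinates (illustrative)
    "area": torch.full((16, 1), 1 / 16),  # assumed quadrature/area weights
}
pred_outvar = {"u": torch.rand(16, 1)}       # network prediction
true_outvar = {"u": torch.zeros(16, 1)}      # target values
lambda_weighting = {"u": torch.ones(16, 1)}  # assumed per-point weights

losses = loss_fn(invar, pred_outvar, true_outvar, lambda_weighting, step=0)
# losses is a Dict[str, torch.Tensor] with one entry per output key, e.g. losses["u"]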

modulus.node

Modulus nodes

class modulus.node.Node(inputs, outputs, evaluate, name='Node', optimize=False)

Bases: object

Base class for all nodes used to unroll computational graph in Modulus.

inputs : List[Union[str, Key]]

Names of inputs to node. For example, inputs=[‘x’, ‘y’].

outputs : List[Union[str, Key]]

Names of outputs to node. For example, outputs=[‘u’, ‘v’, ‘p’].

evaluate : PyTorch Function

A PyTorch function that takes in a dictionary of tensors whose keys are the above inputs.

name : str

Name of node for print statements and debugging.

optimize : bool

If true then any trainable parameters contained in the node will be optimized by the Trainer.

property derivatives
derivatives : List[str]

Derivative inputs of node.

classmethod from_sympy(eq, out_name, detach_names=[])

Generates a Modulus Node from a SymPy equation

eq : SymPy Symbol/Exp

The equation to convert to a Modulus Node. The inputs to this node consist of all Symbols, Functions, and derivatives of Functions. For example, f(x,y) + f(x,y).diff(x) + k will be converted to a node whose inputs are [f, f__x, k].

out_name : str

This will be the name of the output for the node.

detach_names : List[str]

This will detach the inputs of the resulting node.

node : Node
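
The sketch below builds a node from a SymPy expression containing a function and its derivative; the expected input names follow the naming convention described above, and the output name "residual" is arbitrary.

from sympy import Function, Symbol
from modulus.node import Node

x, y = Symbol("x"), Symbol("y")
f = Function("f")(x, y)

# Node computing f + df/dx + 1; per the convention above its inputs
# should be [f, f__x] and its single output is "residual".
node = Node.from_sympy(f + f.diff(x) + 1.0, "residual")
print(node.inputs, node.outputs)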

property inputs
inputs : List[str]

Inputs of node.

property outputs
outputs : List[str]

Outputs of node.

modulus.pdes

Base class for PDEs

class modulus.pdes.PDES

Bases: object

Base class for all partial differential equations

make_nodes(detach_names=[])

Make a list of nodes from PDE.

detach_names : List[str]

This will detach the inputs of the resulting node.

nodes : List[Node]

Makes a separate node for every equation.

pprint(print_latex=False)

Print differential equation.

print_latex : bool

If True, print the equations in LaTeX; otherwise, print them as plain text.
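
As a hedged sketch, the subclass below defines a single steady 1D diffusion equation. It assumes, as is conventional in Modulus PDE examples, that subclasses populate a self.equations dictionary of SymPy expressions which make_nodes() turns into one node per entry.

from sympy import Function, Symbol
from modulus.pdes import PDES

class Diffusion1D(PDES):
    """Steady 1D diffusion, d^2(u)/dx^2 = 0 (illustrative only)."""

    def __init__(self):
        x = Symbol("x")
        u = Function("u")(x)

        # Assumed convention: equations are stored as {name: SymPy expression}.
        self.equations = {}
        self.equations["diffusion_u"] = u.diff(x, 2)

diff_eq = Diffusion1D()
diff_eq.pprint()              # prints the equation as text
nodes = diff_eq.make_nodes()  # one Node per equation entry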

modulus.trainer

Modulus Solver

class modulus.trainer.AdaHessianMixin

Bases: object

Special functions for training using the second-order optimizer AdaHessian

class modulus.trainer.AdamMixin

Bases: object

Special functions for training using standard optimizers. Should be used with Adam, SGD, RMSProp, etc.

class modulus.trainer.BFGSMixin

Bases: object

Special functions for training using BFGS optimizer

class modulus.trainer.Trainer(cfg: omegaconf.dictconfig.DictConfig)

Bases: modulus.trainer.AdamMixin, modulus.trainer.AdaHessianMixin, modulus.trainer.BFGSMixin

Base class for optimizing networks on losses/constraints