
Modulus Sym

Constant values used by Modulus.

Helper functions for unrolling the computational graph.

class modulus.sym.graph.Graph(nodes: List[Node], invar: List[Key], req_names: List[Key], diff_nodes: List[Node] = [], func_arch: Optional[bool] = None, func_arch_allow_partial_hessian: Optional[bool] = None)[source]

Bases: Module

Torch Module that is constructed by unrolling a computational graph given desired inputs, outputs, and evaluatable nodes.

Examples

Here is a simple example of using Graph to unroll a two node graph.

>>> import torch
>>> from sympy import Symbol
>>> from modulus.sym.node import Node
>>> from modulus.sym.key import Key
>>> from modulus.sym.graph import Graph
>>> node_1 = Node.from_sympy(Symbol('x') + Symbol('y'), 'u')
>>> node_2 = Node.from_sympy(Symbol('u') + 1.0, 'v')
>>> graph = Graph([node_1, node_2], [Key('x'), Key('y')], [Key('v')])
>>> graph.forward({'x': torch.tensor([1.0]), 'y': torch.tensor([2.0])})
{'v': tensor([4.])}

Parameters
  • nodes (List[Node]) – List of Modulus Nodes to unroll graph with.

  • invar (List[Key]) – List of inputs to graph.

  • req_names (List[Key]) – List of required outputs of graph. Derivative outputs can be requested with derivative keys (see the sketch after this parameter list).

  • diff_nodes (List[Node]) – List of specialty nodes to compute derivatives. By default this is not needed.

  • func_arch (bool, Optional) – If True, find the computable derivatives that are part of the Jacobian and Hessian of the neural network. They will be calculated during the forward pass using FuncArch. If None (default), will use the GraphManager to get the global flag (default is False), which can be configured in the hydra config with key graph.func_arch.

  • func_arch_allow_partial_hessian (bool, Optional) – If True, allow evaluating a partial Hessian to skip some unnecessary computations. For example, when the input is x, the outputs are [u, p], and the needed derivatives are [u__x, p__x, u__x__x], func_arch needs to evaluate the full Hessian rows to be able to extract the Jacobian p__x. When this flag is on, func_arch will only output [u__x, u__x__x], and p__x will be evaluated later by autograd. If None (default), will use the GraphManager to get the global flag (default is True), which can be configured in the hydra config with key graph.func_arch_allow_partial_hessian.
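As noted for req_names, derivative outputs can be requested directly as graph outputs. Below is a minimal sketch, assuming derivative outputs follow the u__x naming convention used on this page and that the input tensor must track gradients for autograd:

>>> import torch
>>> from sympy import Symbol
>>> from modulus.sym.node import Node
>>> from modulus.sym.key import Key
>>> from modulus.sym.graph import Graph
>>> node = Node.from_sympy(Symbol('x') * Symbol('x'), 'u')  # u = x**2
>>> graph = Graph([node], [Key('x')], [Key('u', derivatives=[Key('x')])])
>>> out = graph.forward({'x': torch.tensor([3.0], requires_grad=True)})
>>> # out['u__x'] should hold du/dx = 2*x = 6.0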

forward(invar: Dict[str, Tensor]) → Dict[str, Tensor][source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

setup_deriv_scaler(deriv_scalers: DerivScalers, name: str = '')[source]

Setup derivative scalers for Derivative node and FuncArch node in the graph.

Key

class modulus.sym.key.Key(name, size=1, derivatives=[], base_unit=None, scale=(0.0, 1.0))[source]

Bases: object

Class describing keys used for graph unroll. The most basic key is just a simple string; however, you can also add dimension information and even information on how to scale inputs to networks.

Parameters
  • name (str) – String used to refer to the variable (e.g. ‘x’, ‘y’…).

  • size (int=1) – Dimension of variable.

  • derivatives (List=[]) – List of keys with respect to which this key holds derivatives (see the sketch after this list).

  • scale ((float, float)) – Characteristic location and scale of quantity: used for normalization.
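A minimal sketch of constructing keys with the parameters above (the variable names are illustrative):

>>> from modulus.sym.key import Key
>>> k_in = Key('x')  # scalar variable 'x'
>>> k_vec = Key('u', size=3)  # vector-valued variable of dimension 3
>>> k_deriv = Key('u', derivatives=[Key('x')])  # key holding du/dx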

static convert_config(key_cfg: Union[List, str])[source]

Converts a config input/output key string/list into a Key. This provides a quick alternative method for defining keys in models. A Python usage sketch follows the YAML examples below.

Parameters

key_cfg (Union[List, str]) – Config list or string

Returns

List of keys generated

Return type

List[Key]

Example

The following are some config examples for constructing keys in the YAML file.

Defining input/output keys with size of 1


>>> arch:
>>>   fully_connected:
>>>     input_keys: input
>>>     output_keys: output

Defining input/output keys with different sizes


>>> arch:
>>>   fully_connected:
>>>     input_keys: [input, 2]  # Key('input', size=2)
>>>     output_keys: [output, 3]  # Key('output', size=3)

Multiple input/output keys with size of 1

>>> arch:
>>>   fully_connected:
>>>     input_keys: [a, b, c]
>>>     output_keys: [u, w, v]

Multiple input/output keys with different sizes

>>> arch:
>>>   fully_connected:
>>>     input_keys: [[a, 2], [b, 3]]  # Key('a', size=2), Key('b', size=3)
>>>     output_keys: [[u, 3], w]  # Key('u', size=3), Key('w', size=1)
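The same config forms can also be converted programmatically. A hedged sketch of calling convert_config directly, assuming it accepts the list forms shown above:

>>> from modulus.sym.key import Key
>>> keys = Key.convert_config([['a', 2], ['b', 3]])
>>> # keys should be equivalent to [Key('a', size=2), Key('b', size=3)]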

Modulus nodes

class modulus.sym.node.Node(inputs, outputs, evaluate, name='Node', optimize=False)[source]

Bases: object

Base class for all nodes used to unroll computational graph in Modulus.

Parameters
  • inputs (List[Union[str, Key]]) – Names of inputs to node. For example, inputs=['x', 'y'].

  • outputs (List[Union[str, Key]]) – Names of outputs to node. For example, outputs=['u', 'v', 'p'].

  • evaluate (PyTorch Function) – A PyTorch function that takes in a dictionary of tensors whose keys are the above inputs (see the sketch after this parameter list).

  • name (str) – Name of node for print statements and debugging.

  • optimize (bool) – If True, any trainable parameters contained in the node will be optimized by the Trainer.
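A minimal sketch of defining a custom node, assuming evaluate may be any PyTorch callable (here a torch.nn.Module) mapping an input dictionary to an output dictionary; the names are illustrative:

>>> import torch
>>> from modulus.sym.node import Node
>>> class Square(torch.nn.Module):
...     def forward(self, invar):
...         # invar is a dictionary of tensors keyed by the node's input names
...         return {'u': invar['x'] ** 2}
>>> node = Node(inputs=['x'], outputs=['u'], evaluate=Square(), name='square_node')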

property derivatives

Returns

derivatives – Derivative inputs of node.

Return type

List[str]

classmethod from_sympy(eq, out_name, freeze_terms=[], detach_names=[])[source]

Generates a Modulus Node from a SymPy equation.

Parameters
  • eq (Sympy Symbol/Exp) – The equation to convert to a Modulus Node. The inputs to this node consist of all Symbols, Functions, and derivatives of Functions. For example, f(x,y) + f(x,y).diff(x) + k will be converted to a node whose inputs are [f, f__x, k] (see the example after the return fields below).

  • out_name (str) – This will be the name of the output for the node.

  • freeze_terms (List[int]) – Indices of the terms in the equation that need to be frozen.

  • detach_names (List[str]) – Names of inputs that will be detached in the resulting node.

Returns

node

Return type

Node
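A short example of the eq behavior described above, using a SymPy Function and one of its derivatives (the output name 'residual' is illustrative):

>>> from sympy import Function, Symbol
>>> from modulus.sym.node import Node
>>> x, y = Symbol('x'), Symbol('y')
>>> f = Function('f')(x, y)
>>> node = Node.from_sympy(f + f.diff(x), 'residual')
>>> # node.inputs should include 'f' and 'f__x'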

property inputs

Returns

inputs – Inputs of node.

Return type

List[str]

property outputs

Returns

outputs – Outputs of node.

Return type

List[str]

Modulus Solver

class modulus.sym.trainer.AdaHessianMixin[source]

Bases: object

Special functions for training using the higher-order optimizer AdaHessian.

class modulus.sym.trainer.AdamMixin[source]

Bases: object

Special functions for training using standard optimizers. Should be used with Adam, SGD, RMSprop, etc.

class modulus.sym.trainer.BFGSMixin[source]

Bases: object

Special functions for training using the BFGS optimizer.

class modulus.sym.trainer.Trainer(cfg: DictConfig)[source]

Bases: AdamMixin, AdaHessianMixin, BFGSMixin

Base class for optimizing networks on losses/constraints.
