PhysicsNeMo Sym Constraints#
Continuous Constraints#
Continuous type constraints
- class physicsnemo.sym.domain.constraint.continuous.DeepONetConstraint(
- nodes: List[Node],
- dataset: Dataset | IterableDataset,
- loss: Loss,
- batch_size: int,
- shuffle: bool,
- drop_last: bool,
- num_workers: int,
Bases:
PointwiseConstraint
Base DeepONet Constraint class for all DeepONets
- classmethod from_numpy(
- nodes: List[Node],
- invar: Dict[str, ndarray],
- outvar: Dict[str, ndarray],
- batch_size: int,
- lambda_weighting: Dict[str, ndarray] = None,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
Create custom DeepONet constraint from numpy arrays.
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
invar (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays as input.
outvar (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays to enforce constraint on.
batch_size (int) – Batch size used in training.
lambda_weighting (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays to pointwise weight losses. Default is ones.
loss (Loss) – PhysicsNeMo Loss module that defines the loss type, (e.g. L2, L1, …).
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True
drop_last (bool, optional) – Drop last mini-batch if dataset is not fully divisible by batch_size, by default True
num_workers (int) – Number of workers used in fetching data.
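A minimal sketch of how from_numpy might be used, assuming a list of DeepONet nodes (deeponet_nodes) built elsewhere, e.g. from a DeepONet architecture's make_node(...); the key names 'x', 'a', 'u', the array shapes, and the batch size are illustrative rather than prescribed by the API:

```python
import numpy as np

from physicsnemo.sym.domain.constraint.continuous import DeepONetConstraint

# Illustrative (N, 1) arrays; keys must match the inputs/outputs of the nodes
# passed in. `deeponet_nodes` is a hypothetical List[Node] defined elsewhere.
n_points = 1000
invar = {
    "x": np.random.uniform(0.0, 1.0, size=(n_points, 1)).astype(np.float32),
    "a": np.random.uniform(-1.0, 1.0, size=(n_points, 1)).astype(np.float32),
}
outvar = {"u": np.zeros((n_points, 1), dtype=np.float32)}

constraint = DeepONetConstraint.from_numpy(
    nodes=deeponet_nodes,
    invar=invar,
    outvar=outvar,
    batch_size=128,
)
# Typical next step: domain.add_constraint(constraint, "deeponet_data")
```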
- class physicsnemo.sym.domain.constraint.continuous.IntegralBoundaryConstraint(
- nodes: List[Node],
- geometry: Geometry,
- outvar: Dict[str, int | float | Basic],
- batch_size: int,
- integral_batch_size: int,
- criteria: Basic | Callable | None = None,
- lambda_weighting: Dict[str, int | float | Basic] = None,
- parameterization: Parameterization | None = None,
- fixed_dataset: bool = True,
- batch_per_epoch: int = 100,
- quasirandom: bool = False,
- num_workers: int = 0,
- loss: Loss = IntegralLossNorm(),
- shuffle: bool = True,
Bases:
IntegralConstraint
Integral Constraint applied to boundary/perimeter/surface of geometry. For example, in 3D this will create a constraint on the surface of the given geometry.
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
geometry (Geometry) – PhysicsNeMo Geometry to apply the constraint with.
outvar (Dict[str, Union[int, float, sp.Basic]]) – A dictionary of SymPy Symbols/Expr, floats or ints. This is used to describe the constraint. For example, outvar={'u': 0} would specify the integral of 'u' to be zero.
batch_size (int) – Number of integrals to apply.
integral_batch_size (int) – Batch size used in the Monte Carlo integration to compute the integral.
criteria (Union[sp.Basic, Callable, None]) – SymPy criteria function that restricts integration to areas satisfying it. For example, if criteria=sympy.Symbol('x')>0 then only areas with positive 'x' values will be integrated.
lambda_weighting (Dict[str, Union[int, float, sp.Basic]] = None) – The weighting of the constraint. For example, lambda_weighting={'lambda_u': 2.0} would weight the integral constraint by 2.0.
parameterization (Union[Parameterization, None]) – This allows adding parameterization or additional inputs.
fixed_dataset (bool = True) – If True, the points sampled for this constraint are generated once at initialization and kept fixed. If False, the points are continually resampled.
batch_per_epoch (int = 100) – If fixed_dataset=True, the total number of integrals generated to apply the constraint on is total_nr_integrals=batch_per_epoch*batch_size.
quasirandom (bool = False) – If True, sample the points using the Halton sequence.
num_workers (int) – Number of workers used in fetching data.
loss (Loss) – PhysicsNeMo Loss module that defines the loss type (e.g. L2, L1, …).
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True
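A minimal usage sketch; the geometry, key, and model import paths below (physicsnemo.sym.geometry.primitives_2d, physicsnemo.sym.key, physicsnemo.sym.models.fully_connected) are assumptions based on the package layout, and the geometry, target integral value, batch sizes, and criteria are illustrative:

```python
from sympy import Symbol

from physicsnemo.sym.domain.constraint.continuous import IntegralBoundaryConstraint
# Assumed import paths, following the package layout:
from physicsnemo.sym.geometry.primitives_2d import Rectangle
from physicsnemo.sym.key import Key
from physicsnemo.sym.models.fully_connected import FullyConnectedArch

# Network node that predicts 'u' from (x, y) on a unit square.
rec = Rectangle((0.0, 0.0), (1.0, 1.0))
net = FullyConnectedArch(input_keys=[Key("x"), Key("y")], output_keys=[Key("u")])
nodes = [net.make_node(name="u_net")]

# Enforce that the boundary integral of 'u' equals 1.0, only where x > 0.
integral_bc = IntegralBoundaryConstraint(
    nodes=nodes,
    geometry=rec,
    outvar={"u": 1.0},
    batch_size=4,             # number of integrals per batch
    integral_batch_size=512,  # Monte Carlo samples per integral
    criteria=Symbol("x") > 0,
)
# Typical next step: domain.add_constraint(integral_bc, "integral_bc")
```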
- class physicsnemo.sym.domain.constraint.continuous.IntegralConstraint(
- nodes: List[Node],
- dataset: Dataset | IterableDataset,
- loss: Loss,
- batch_size: int,
- shuffle: bool,
- drop_last: bool,
- num_workers: int,
Bases:
Constraint
Base class for all Integral Constraints
- class physicsnemo.sym.domain.constraint.continuous.PointwiseBoundaryConstraint(
- nodes: List[Node],
- geometry: Geometry,
- outvar: Dict[str, int | float | Basic],
- batch_size: int,
- criteria: Basic | Callable | None = None,
- lambda_weighting: Dict[str, int | float | Basic] = None,
- parameterization: Parameterization | None = None,
- fixed_dataset: bool = True,
- importance_measure: Callable | None = None,
- batch_per_epoch: int = 1000,
- quasirandom: bool = False,
- num_workers: int = 0,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
Bases:
PointwiseConstraint
Pointwise Constraint applied to boundary/perimeter/surface of geometry. For example, in 3D this will create a constraint on the surface of the given geometry.
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
geometry (Geometry) – PhysicsNeMo Geometry to apply the constraint with.
outvar (Dict[str, Union[int, float, sp.Basic]]) – A dictionary of SymPy Symbols/Expr, floats or ints. This is used to describe the constraint. For example, outvar={'u': 0} would specify 'u' to be zero everywhere on the constraint.
batch_size (int) – Batch size used in training.
criteria (Union[sp.Basic, Callable, None]) – SymPy criteria function that restricts the constraint to areas satisfying it. For example, if criteria=sympy.Symbol('x')>0 then only areas with positive 'x' values will have the constraint applied to them.
lambda_weighting (Dict[str, Union[int, float, sp.Basic]] = None) – The spatial pointwise weighting of the constraint. For example, lambda_weighting={'lambda_u': 2.0*sympy.Symbol('x')} would apply a pointwise weighting to the loss of 2.0 * x.
parameterization (Union[Parameterization, None], optional) – This allows adding parameterization or additional inputs.
fixed_dataset (bool = True) – If True, the points sampled for this constraint are generated once at initialization and kept fixed. If False, the points are continually resampled.
compute_sdf_derivatives (bool, optional) – Compute SDF derivatives when sampling geometry.
importance_measure (Union[Callable, None] = None) – A callable function that computes a scalar importance measure. This importance measure is then used when sampling points for the constraint. Areas with higher importance are sampled more frequently according to Monte Carlo importance sampling, https://en.wikipedia.org/wiki/Monte_Carlo_integration.
batch_per_epoch (int = 1000) – If fixed_dataset=True, the total number of points generated to apply the constraint on is total_nr_points=batch_per_epoch*batch_size.
quasirandom (bool = False) – If True, sample the points using the Halton sequence.
num_workers (int) – Number of workers used in fetching data.
loss (Loss) – PhysicsNeMo Loss module that defines the loss type (e.g. L2, L1, …).
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True
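A minimal usage sketch for a Dirichlet-style boundary condition, with the same assumed import paths as in the sketch above; the criteria and batch size are illustrative:

```python
from sympy import Symbol

from physicsnemo.sym.domain.constraint.continuous import PointwiseBoundaryConstraint
# Assumed import paths, following the package layout:
from physicsnemo.sym.geometry.primitives_2d import Rectangle
from physicsnemo.sym.key import Key
from physicsnemo.sym.models.fully_connected import FullyConnectedArch

rec = Rectangle((0.0, 0.0), (1.0, 1.0))
net = FullyConnectedArch(input_keys=[Key("x"), Key("y")], output_keys=[Key("u")])
nodes = [net.make_node(name="u_net")]

# Enforce u = 0 on the part of the boundary where y > 0.5.
bc = PointwiseBoundaryConstraint(
    nodes=nodes,
    geometry=rec,
    outvar={"u": 0.0},
    batch_size=1000,
    criteria=Symbol("y") > 0.5,
)
# Typical next step: domain.add_constraint(bc, "dirichlet_bc")
```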
- class physicsnemo.sym.domain.constraint.continuous.PointwiseConstraint(
- nodes: List[Node],
- dataset: Dataset | IterableDataset,
- loss: Loss,
- batch_size: int,
- shuffle: bool,
- drop_last: bool,
- num_workers: int,
Bases:
Constraint
Base class for all Pointwise Constraints
- classmethod from_numpy(
- nodes: List[Node],
- invar: Dict[str, ndarray],
- outvar: Dict[str, ndarray],
- batch_size: int,
- lambda_weighting: Dict[str, ndarray] = None,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
Create custom pointwise constraint from numpy arrays.
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
invar (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays as input.
outvar (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays to enforce constraint on.
batch_size (int) – Batch size used in training.
lambda_weighting (Dict[str, np.ndarray (N, 1)]) – Dictionary of numpy arrays to pointwise weight losses. Default is ones.
loss (Loss) – PhysicsNeMo Loss module that defines the loss type, (e.g. L2, L1, …).
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True
drop_last (bool, optional) – Drop last mini-batch if dataset is not fully divisible by batch_size, by default True
num_workers (int) – Number of workers used in fetching data.
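A minimal sketch of building a data constraint from arrays, with the same assumed import paths for the keys and fully connected architecture as in the earlier sketches; the synthetic data and key names are purely illustrative:

```python
import numpy as np

from physicsnemo.sym.domain.constraint.continuous import PointwiseConstraint
# Assumed import paths, following the package layout:
from physicsnemo.sym.key import Key
from physicsnemo.sym.models.fully_connected import FullyConnectedArch

net = FullyConnectedArch(input_keys=[Key("x"), Key("y")], output_keys=[Key("u")])
nodes = [net.make_node(name="u_net")]

# Synthetic "measurement" data as (N, 1) arrays keyed by variable name.
n_points = 2000
invar = {
    "x": np.random.uniform(0.0, 1.0, size=(n_points, 1)).astype(np.float32),
    "y": np.random.uniform(0.0, 1.0, size=(n_points, 1)).astype(np.float32),
}
outvar = {"u": np.sin(invar["x"]) * np.cos(invar["y"])}

data_constraint = PointwiseConstraint.from_numpy(
    nodes=nodes,
    invar=invar,
    outvar=outvar,
    batch_size=256,
)
# Typical next step: domain.add_constraint(data_constraint, "measurement_data")
```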
- class physicsnemo.sym.domain.constraint.continuous.PointwiseInteriorConstraint(
- nodes: List[Node],
- geometry: Geometry,
- outvar: Dict[str, int | float | Basic],
- batch_size: int,
- bounds: Dict[Basic, Tuple[float, float]] = None,
- criteria: Basic | Callable | None = None,
- lambda_weighting: Dict[str, int | float | Basic] = None,
- parameterization: Parameterization | None = None,
- fixed_dataset: bool = True,
- compute_sdf_derivatives: bool = False,
- importance_measure: Callable | None = None,
- batch_per_epoch: int = 1000,
- quasirandom: bool = False,
- num_workers: int = 0,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
Bases:
PointwiseConstraint
Pointwise Constraint applied to interior of geometry. For example, in 3D this will create a constraint on the interior volume of the given geometry.
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
geometry (Geometry) – PhysicsNeMo Geometry to apply the constraint with.
outvar (Dict[str, Union[int, float, sp.Basic]]) – A dictionary of SymPy Symbols/Expr, floats or ints. This is used to describe the constraint. For example, outvar={'u': 0} would specify 'u' to be zero everywhere in the constraint.
batch_size (int) – Batch size used in training.
bounds (Dict[sp.Basic, Tuple[float, float]] = None) – Bounds of the given geometry (e.g. bounds={sympy.Symbol('x'): (0, 1), sympy.Symbol('y'): (0, 1)}).
criteria (Union[sp.Basic, Callable, None]) – SymPy criteria function that restricts the constraint to areas satisfying it. For example, if criteria=sympy.Symbol('x')>0 then only areas with positive 'x' values will have the constraint applied to them.
lambda_weighting (Dict[str, Union[int, float, sp.Basic]] = None) – The spatial pointwise weighting of the constraint. For example, lambda_weighting={'lambda_u': 2.0*sympy.Symbol('x')} would apply a pointwise weighting to the loss of 2.0 * x.
parameterization (Union[Parameterization, None] = {}) – This allows adding parameterization or additional inputs.
fixed_dataset (bool = True) – If True, the points sampled for this constraint are generated once at initialization and kept fixed. If False, the points are continually resampled.
compute_sdf_derivatives (bool, optional) – Compute SDF derivatives when sampling geometry.
importance_measure (Union[Callable, None] = None) – A callable function that computes a scalar importance measure. This importance measure is then used when sampling points for the constraint. Areas with higher importance are sampled more frequently according to Monte Carlo importance sampling, https://en.wikipedia.org/wiki/Monte_Carlo_integration.
batch_per_epoch (int = 1000) – If fixed_dataset=True, the total number of points generated to apply the constraint on is total_nr_points=batch_per_epoch*batch_size.
quasirandom (bool = False) – If True, sample the points using the Halton sequence.
num_workers (int) – Number of workers used in fetching data.
loss (Loss) – PhysicsNeMo Loss module that defines the loss type (e.g. L2, L1, …).
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True
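A minimal sketch that pairs the interior constraint with a PDE residual node; the import paths for the geometry, keys, architecture, and the Diffusion PDE (physicsnemo.sym.eq.pdes.diffusion) are assumptions based on the package layout, and the residual name 'diffusion_u' follows from choosing T='u':

```python
from sympy import Symbol

from physicsnemo.sym.domain.constraint.continuous import PointwiseInteriorConstraint
# Assumed import paths, following the package layout:
from physicsnemo.sym.eq.pdes.diffusion import Diffusion
from physicsnemo.sym.geometry.primitives_2d import Rectangle
from physicsnemo.sym.key import Key
from physicsnemo.sym.models.fully_connected import FullyConnectedArch

rec = Rectangle((0.0, 0.0), (1.0, 1.0))
net = FullyConnectedArch(input_keys=[Key("x"), Key("y")], output_keys=[Key("u")])
diffusion = Diffusion(T="u", D=1.0, Q=0, dim=2, time=False)
nodes = [net.make_node(name="u_net")] + diffusion.make_nodes()

# Enforce the steady diffusion residual 'diffusion_u' = 0 over the interior.
interior = PointwiseInteriorConstraint(
    nodes=nodes,
    geometry=rec,
    outvar={"diffusion_u": 0.0},
    batch_size=2000,
    bounds={Symbol("x"): (0.0, 1.0), Symbol("y"): (0.0, 1.0)},
)
# Typical next step: domain.add_constraint(interior, "interior")
```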
- class physicsnemo.sym.domain.constraint.continuous.VariationalConstraint(
- nodes: List[Node],
- datasets: Dict[str, DictVariationalDataset],
- batch_sizes: Dict[str, int],
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
Bases:
Constraint
Base class for all Variational Constraints.
B(u, v, g, dom) = int_{dom} (F(u, v) - g*v) dx = 0, where F is an operator, g is a given function/data, and v is the test function. The variational loss is B1(u1, v1, g1, dom1) + B2(u2, v2, g2, dom2) + …, as written out below.
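Written as a display equation (a restatement of the docstring notation above, with Ω standing for dom):

```latex
B(u, v, g, \Omega) = \int_{\Omega} \bigl( F(u, v) - g\,v \bigr)\, dx = 0,
\qquad
\mathcal{L}_{\text{variational}} = \sum_{i} B_i(u_i, v_i, g_i, \Omega_i)
```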
- class physicsnemo.sym.domain.constraint.continuous.VariationalDomainConstraint(
- nodes: List[Node],
- geometry: Geometry,
- outvar_names: List[str],
- boundary_batch_size: int,
- interior_batch_size: int,
- interior_bounds: Dict[Basic, Tuple[float, float]] = None,
- boundary_criteria: Basic | Callable | None = None,
- interior_criteria: Basic | Callable | None = None,
- parameterization: Parameterization | None = None,
- batch_per_epoch: int = 1000,
- quasirandom: bool = False,
- num_workers: int = 0,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
Bases:
VariationalConstraint
Simple Variational Domain Constraint with a single geometry that represents the domain.
TODO add comprehensive doc string after refactor
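Pending the comprehensive docstring, the sketch below shows how this constraint is typically wired up; `nodes`, `geo`, and the custom variational Loss subclass `MyWeakFormLoss` are hypothetical and assumed to be defined elsewhere, and the double-underscore names in outvar_names follow the PhysicsNeMo Sym derivative-naming convention:

```python
from physicsnemo.sym.domain.constraint.continuous import VariationalDomainConstraint

# `nodes`, `geo`, and `MyWeakFormLoss` are hypothetical placeholders:
# `MyWeakFormLoss` would be a Loss subclass that assembles the weak-form
# residual B(u, v, g, dom) from the sampled quantities named in outvar_names.
variational = VariationalDomainConstraint(
    nodes=nodes,
    geometry=geo,
    outvar_names=["u", "u__x", "u__y"],  # '__' denotes derivatives
    boundary_batch_size=1000,
    interior_batch_size=4000,
    loss=MyWeakFormLoss(),
)
# Typical next step: domain.add_constraint(variational, "variational")
```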
Discrete Constraints#
Discrete type constraints
- class physicsnemo.sym.domain.constraint.discrete.DeepONetConstraint_Data(
- nodes: List[Node],
- invar_branch: Dict[str, array],
- invar_trunk: Dict[str, array],
- outvar: Dict[str, array],
- batch_size: int,
- lambda_weighting: Dict[str, array] = None,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
Bases:
_DeepONetConstraint
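A hedged sketch of a purely data-driven DeepONet constraint; `deeponet_nodes` is a hypothetical List[Node] from a DeepONet architecture, and the key names ('a' for the branch input, 'x' for the trunk input, 'u' for the output) and array shapes are assumptions that must match that architecture:

```python
import numpy as np

from physicsnemo.sym.domain.constraint.discrete import DeepONetConstraint_Data

# Illustrative arrays; shapes and key names are assumptions.
invar_branch = {"a": np.random.randn(1000, 100).astype(np.float32)}  # sampled input functions
invar_trunk = {"x": np.random.rand(1000, 1).astype(np.float32)}      # query coordinates
outvar = {"u": np.random.randn(1000, 1).astype(np.float32)}          # target outputs

data = DeepONetConstraint_Data(
    nodes=deeponet_nodes,  # hypothetical, defined elsewhere
    invar_branch=invar_branch,
    invar_trunk=invar_trunk,
    outvar=outvar,
    batch_size=128,
)
# Typical next step: domain.add_constraint(data, "deeponet_data")
```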
- class physicsnemo.sym.domain.constraint.discrete.DeepONetConstraint_Physics(
- nodes: List[Node],
- invar_branch: Dict[str, array],
- invar_trunk: Dict[str, array],
- outvar: Dict[str, array],
- batch_size: int,
- lambda_weighting: Dict[str, array] = None,
- loss: Loss = PointwiseLossNorm(),
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
- tile_trunk_input: bool = True,
Bases:
_DeepONetConstraint
- class physicsnemo.sym.domain.constraint.discrete.SupervisedGridConstraint(
- nodes: List[Node],
- dataset: Dataset | IterableDataset,
- loss: Loss = PointwiseLossNorm(),
- batch_size: int = None,
- shuffle: bool = True,
- drop_last: bool = True,
- num_workers: int = 0,
Bases:
Constraint
Data-driven grid field constraint
- Parameters:
nodes (List[Node]) – List of PhysicsNeMo Nodes to unroll graph with.
dataset (Union[Dataset, IterableDataset]) – Dataset which supplies invar and outvar examples. Must be a subclass of Dataset or IterableDataset.
loss (Loss, optional) – PhysicsNeMo Loss function, by default PointwiseLossNorm()
batch_size (int, optional) – Batch size used when running the constraint. Must be specified if a Dataset is used; not used with an IterableDataset.
shuffle (bool, optional) – Randomly shuffle examples in dataset every epoch, by default True. Not used with an IterableDataset.
drop_last (bool, optional) – Drop last mini-batch if dataset is not fully divisible by batch_size, by default True. Not used with an IterableDataset.
num_workers (int, optional) – Number of dataloader workers, by default 0
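A minimal sketch for grid (image-like) data; the DictGridDataset import path (physicsnemo.sym.dataset) is an assumption based on the package layout, `grid_model_nodes` is a hypothetical List[Node] (e.g. from an FNO-style architecture), and the field names and grid shapes are illustrative:

```python
import numpy as np

from physicsnemo.sym.domain.constraint.discrete import SupervisedGridConstraint
# Assumed import path, following the package layout:
from physicsnemo.sym.dataset import DictGridDataset

# Illustrative grid data: 100 samples of single-channel 32x32 fields.
invar = {"permeability": np.random.randn(100, 1, 32, 32).astype(np.float32)}
outvar = {"pressure": np.random.randn(100, 1, 32, 32).astype(np.float32)}
dataset = DictGridDataset(invar=invar, outvar=outvar)

supervised = SupervisedGridConstraint(
    nodes=grid_model_nodes,  # hypothetical, defined elsewhere
    dataset=dataset,
    batch_size=16,
)
# Typical next step: domain.add_constraint(supervised, "supervised_grid")
```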