# DeepONets

## Introduction

This tutorial uses Modulus to solve an anti-derivative problem, using both data informed and physics informed DeepONets.

Note

This tutorial assumes that you have completed the tutorial Lid Driven Cavity Background and are familiar with Modulus APIs.

## Problem Description

Assume a continuous function $$u$$ has been defined on $$[0,1]$$. Define the anti-derivative operator over $$[0,1]$$ as

(166)$G:\quad u\mapsto G(u)(y):= \int_0^y u(s)ds.$

In other words,

(167)$\frac{d}{dy}G(u)(y) = u(y), \qquad G(u)(0) = 0.$

You will set up a DeepONet to learn the operator $$G$$. In this case, $$u$$ is the input of the branch net and $$y$$ is the input of the trunk net. As the branch net input, $$u$$ is evaluated at a set of points in $$[0,1]$$. These points need not coincide with $$y$$, the evaluation points of the output. For example, you may provide the data of $$u$$ as $$\{u(0),\ u(0.5),\ u(1)\}$$ but evaluate the output at $$\{G(u)(0.1),\ G(u)(0.8),\ G(u)(0.9)\}$$. This flexibility is one of the advantages of DeepONet over the Fourier neural operator.
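To make the branch/trunk structure concrete, here is a minimal sketch of a DeepONet forward pass in NumPy. All sizes, names, and weights are illustrative (untrained, random), not the Modulus implementation: the prediction is the dot product of the branch features, computed from $$u$$ sampled at fixed sensor points, and the trunk features, computed from the evaluation point $$y$$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 100 sensor points for u, 1000 shared features.
n_sensors, n_features = 100, 1000

# Branch net (one hidden layer): encodes the input function u
# sampled at n_sensors fixed points.
W1b = rng.normal(size=(n_sensors, 50))
W2b = rng.normal(size=(50, n_features))

# Trunk net: encodes the scalar evaluation point y.
W1t = rng.normal(size=(1, 50))
W2t = rng.normal(size=(50, n_features))

def branch(u):
    return np.tanh(u @ W1b) @ W2b

def trunk(y):
    return np.tanh(y @ W1t) @ W2t

def deeponet(u, y):
    # G(u)(y) is approximated by the dot product of branch and trunk features.
    return np.sum(branch(u) * trunk(y), axis=-1, keepdims=True)

u = rng.random((8, n_sensors))  # batch of input functions at the sensor points
y = rng.random((8, 1))          # evaluation points, independent of the sensors
out = deeponet(u, y)            # one prediction per (function, point) pair
```

Because the trunk takes $$y$$ as a free input, the same trained branch features can be evaluated at any output points, which is exactly the flexibility described above.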

With this structure, there are two options to train the network: data informed and physics informed.

## Problem 1: Data informed DeepONet

Note

The python script for this problem can be found at /examples/anti_derivative/data_informed.py.

### Data Preparation

As a preparation, generate $$10,000$$ 1D Gaussian random field (GRF) samples on $$[0,1]$$. Then use the quadrature tools in the scipy package to obtain their anti-derivatives, which pass through the origin. The data generation code can be found in examples/anti_derivative/utils.py. With this data, you can start the data informed DeepONet code.
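This step can be sketched as follows. This is a simplified stand-in for the utilities script, not its actual contents: the smooth random functions below are built from decaying random Fourier modes rather than a true GRF, and all names are illustrative. scipy's cumulative trapezoid rule produces anti-derivatives that pass through the origin.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(0)
n_samples, n_points = 10_000, 100
x = np.linspace(0, 1, n_points)

# Smooth random functions: random sine series with decaying coefficients
# (a crude surrogate for GRF samples).
k = np.arange(1, 11)
coeffs = rng.normal(size=(n_samples, k.size)) / k   # decay gives smoothness
u = coeffs @ np.sin(np.pi * np.outer(k, x))         # shape (n_samples, n_points)

# Cumulative trapezoid rule; initial=0 enforces G(u)(0) = 0.
anti_u = cumulative_trapezoid(u, x, axis=1, initial=0)

# The tutorial scripts then load these arrays from an .npz file, e.g.:
# np.savez("anti_derivative.npz", x=x, u=u, anti_u=anti_u)
```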

### Case Setup

Let us first import the necessary packages.

import torch

import modulus
from modulus.hydra import to_absolute_path, to_yaml, instantiate_arch
from modulus.hydra.config import ModulusConfig
from modulus.continuous.solvers.solver import Solver
from modulus.continuous.domain.domain import Domain
from modulus.architecture.deeponet import DeepONetArch_Data
from modulus.discrete.constraints.constraint import DeepONetConstraint_Data
from modulus.discrete.validator.validator import DeepONet_Data_Validator

from modulus.key import Key
from modulus.tensorboard_utils.plotter import DeepONetValidatorPlotter
import numpy as np


In the run function, set up the branch and trunk nets, respectively. In this case, they are both fully-connected networks. Then use these two networks to set up the DeepONet structure. The Key("u1", 1000) in the branch net and the Key("u2", 1000) in the trunk net indicate their outputs. The size $$1,000$$ is the feature size, which can be adjusted for different problems. The input key Key("x", 100) means that each input sample function has been evaluated at $$100$$ points.

    branch_net = instantiate_arch(
        input_keys=[Key("x", 100)],
        output_keys=[Key("u1", 1000)],
        layer_size=50,
        cfg=cfg.arch.fully_connected,
    )
    trunk_net = instantiate_arch(
        input_keys=[Key("s", 1)],
        output_keys=[Key("u2", 1000)],
        layer_size=50,
        cfg=cfg.arch.fully_connected,
    )
    deeponet = DeepONetArch_Data.from_branch_trunk(
        branch_net=branch_net, trunk_net=trunk_net, output_keys=[Key("u", 100)]
    )
    nodes = [deeponet.make_node(name="deeponet", jit=cfg.jit)]


The output key Key("u", 100) means the output function is evaluated at $$100$$ points, which matches the number of points fed to the trunk net.

Then import the data from the .npz file.

    data = np.load(to_absolute_path("data/anti_derivative.npz"), allow_pickle=True)
    x_data = data["x"].reshape((-1, 1))
    u_data = data["u"]
    anti_u_data = data["anti_u"]


To add the data constraint, use DeepONetConstraint_Data.

    supervised = DeepONetConstraint_Data(
        nodes=nodes,
        invar_branch={"x": u_data[:9900, :]},
        invar_trunk={"s": x_data},
        outvar={"u": anti_u_data[:9900, :]},
        batch_size=cfg.batch_size.supervised,
        cell_volumes=None,
        lambda_weighting=None,
    )


Here, since there are $$10,000$$ samples in total, use $$9,900$$ of them for training. In the hydra config file, set the batch size (cfg.batch_size.supervised) to $$100$$.

You can set up a validator to verify the results, using the remaining $$100$$ samples for validation. The batch size is still $$100$$. To visualize the results, plot the first $$10$$ samples by means of DeepONetValidatorPlotter.

    validator = DeepONet_Data_Validator(
        invar_branch={"x": u_data[9900:, :]},
        invar_trunk={"s": x_data},
        true_outvar={"u": anti_u_data[9900:, :]},
        nodes=nodes,
        batch_size=100,
        plotter=DeepONetValidatorPlotter(n_examples=10),
    )


The validation results (ground truth, DeepONet prediction, and difference, respectively) are shown as below (Fig. 72, Fig. 73, Fig. 74).

## Problem 2: Physics informed DeepONet

This section uses a physics informed DeepONet to solve the anti-derivative problem. In the physics informed approach, no labeled training data is needed, but you do need some data for validation.

Note

The python script for this problem can be found at /examples/anti_derivative/physics_informed.py.

### Case Setup

Most of the setup for the physics informed DeepONet is the same as in the data informed version. First, import the needed packages.

import torch

import modulus
from modulus.hydra import to_absolute_path, to_yaml, instantiate_arch
from modulus.hydra.config import ModulusConfig
from modulus.continuous.solvers.solver import Solver
from modulus.continuous.domain.domain import Domain
from modulus.architecture.deeponet import DeepONetArch_Physics
from modulus.discrete.constraints.constraint import DeepONetConstraint_Physics
from modulus.discrete.validator.validator import DeepONet_Physics_Validator
from modulus.constants import np_dt

from modulus.key import Key
from modulus.tensorboard_utils.plotter import DeepONetValidatorPlotter
import numpy as np


In the run function, set up the branch and trunk nets, respectively. This part is the same as in the data informed version.

    branch_net = instantiate_arch(
        input_keys=[Key("x", 100)],
        output_keys=[Key("u1", 1000)],
        layer_size=50,
        cfg=cfg.arch.fully_connected,
    )
    trunk_net = instantiate_arch(
        input_keys=[Key("s", 1)],
        output_keys=[Key("u2", 1000)],
        layer_size=50,
        cfg=cfg.arch.fully_connected,
    )
    deeponet = DeepONetArch_Physics.from_branch_trunk(
        branch_net=branch_net,
        trunk_net=trunk_net,
        batch_size=cfg.batch_size.supervised,
        output_keys=[Key("u", 100)],
    )
    nodes = [deeponet.make_node(name="deeponet", jit=cfg.jit)]


Then, import the data. This is the same as in the data informed version.

    data = np.load(to_absolute_path("data/anti_derivative.npz"), allow_pickle=True)
    x_data = data["x"].reshape((-1, 1))
    u_data = data["u"]
    anti_u_data = data["anti_u"]


Next, the main differences between the physics informed version and the data informed one are highlighted.

First, impose the derivative constraint that $$\frac{d}{dy}G(u)(y) = u(y)$$.

    supervised = DeepONetConstraint_Physics(
        nodes=nodes,
        invar_branch={"x": u_data[:9900, :]},
        invar_trunk={"s": x_data},
        outvar={"u__s": u_data[:9900, :]},
        batch_size=cfg.batch_size.supervised,
        cell_volumes=None,
        lambda_weighting=None,
    )


Note here that u__s is the derivative of u w.r.t s.
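This derivative output can be understood as reverse-mode automatic differentiation of the prediction with respect to the trunk input. The following is a self-contained sketch of the idea; net is a hypothetical stand-in for the assembled DeepONet, not Modulus code.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in network mapping the trunk input s to a prediction u.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# Trunk inputs must require gradients so du/ds can be computed.
s = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)
u_pred = net(s)

# u__s = du/ds via autograd; create_graph=True keeps the graph so the
# derivative itself can enter a differentiable training loss.
u__s = torch.autograd.grad(u_pred.sum(), s, create_graph=True)[0]
```

The constraint above then penalizes the mismatch between this computed u__s and the given data for $$u$$, enforcing $$\frac{d}{dy}G(u)(y) = u(y)$$ without anti-derivative labels.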

Also impose the initial value constraint $$G(u)(0)=0$$. The way to achieve this is to set the input of the trunk net to all-zero data, so that the output function is evaluated only at $$0$$.

    boundary = DeepONetConstraint_Physics(
        nodes=nodes,
        invar_branch={"x": u_data[:9900, :]},
        invar_trunk={"s": np.zeros_like(x_data, dtype=np_dt)},
        outvar={"u": np.zeros_like(u_data[:9900, :], dtype=np_dt)},
        batch_size=cfg.batch_size.supervised,
        cell_volumes=None,
        lambda_weighting=None,
    )


Finally, add the validator. This is the same as in the data informed version. For the sake of comparison, the last $$100$$ samples are again used for validation.

    validator = DeepONet_Physics_Validator(
        invar_branch={"x": u_data[9900:, :]},
        invar_trunk={"s": x_data},
        true_outvar={"u": anti_u_data[9900:, :]},
        nodes=nodes,
        batch_size=100,
        plotter=DeepONetValidatorPlotter(n_examples=10),
    )