NVIDIA Modulus Sym v1.0.0

Inverse Problem: Finding Unknown Coefficients of a PDE

In this tutorial, you will use Modulus Sym to solve an inverse problem by assimilating data from observations. You will use the flow field computed by OpenFOAM as an input to the PINNs, whose job is to predict the parameters that characterize the flow, e.g., the viscosity (\(\nu\)) and the thermal diffusivity (\(\alpha\)). In this tutorial you will learn:

  1. How to assimilate analytical/experimental/simulation data in Modulus Sym.

  2. How to use the PointwiseConstraint class in Modulus Sym to create constraints from data that can be loaded in the form of .csv files or NumPy arrays.

  3. How to use the assimilated data to make predictions of unknown quantities.

Note

This tutorial assumes that you have completed tutorial Introductory Example and have familiarized yourself with the basics of the Modulus Sym APIs.

In this tutorial, you will predict the fluid’s viscosity and thermal diffusivity by providing the flow field information obtained from the OpenFOAM simulation as an input to the PINNs. You will use the same 2D slice of the 3-fin flow field that was used as validation data in the tutorial Scalar Transport: 2D Advection Diffusion.

To summarize, the training data for this problem is (\(u_i\), \(v_i\), \(p_i\), \(T_i\)) from the OpenFOAM simulation, and the model is trained to predict (\(\nu\), \(\alpha\)) under the constraint of satisfying the governing equations of continuity, Navier-Stokes, and advection-diffusion.

The \(\nu\) and \(\alpha\) used for the OpenFOAM simulation are \(0.01\) and \(0.002\) respectively.

\(T\) is scaled to define a new transport variable \(c\) for the advection-diffusion equation as shown in equation (171).

(171)\[\begin{split} c &= \frac{T_{actual}}{T_{base}} - 1.0 \\ T_{base} &= 293.498 \, K \end{split}\]
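
Applied to a raw NumPy array of temperatures, this rescaling is a one-liner. The array below is hypothetical; the same operation is applied to the actual OpenFOAM data later in this tutorial.

import numpy as np

T_actual = np.array([295.0, 300.0, 310.0])  # hypothetical raw temperatures in Kelvin
T_base = 293.498                            # base temperature used in this tutorial
c = T_actual / T_base - 1.0                 # scaled transport variable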

Because the majority of the temperature diffusion occurs in the wake of the heat sink (Fig. 76), you can sample points only in the wake for training the PINNs. You should also discard the points close to the boundary, as you will train the network to minimize the loss from the conservation laws alone.

Fig. 76 Batch of training points sampled from OpenFOAM data (inverse_problem.png)

In this tutorial you will use the PointwiseConstraint class to create the training data from a .csv file. You will make four networks. The first network will memorize the flow field by developing a mapping between (\(x\), \(y\)) and (\(u\), \(v\), \(p\)). The second network will memorize the temperature field by developing a mapping between (\(x\), \(y\)) and (\(c\)). The remaining two networks will be trained to invert out the desired quantities, \(\nu\) and \(\alpha\), respectively. For this problem, you will use the NavierStokes and AdvectionDiffusion equations from the PDEs module.

Note

The python script for this problem can be found at examples/three_fin_2d/heat_sink_inverse.py.

Importing the required packages

The list of packages/modules to be imported is shown below.

import os
import sys
import warnings

import torch
import numpy as np
from sympy import Symbol, Eq

import modulus.sym
from modulus.sym.hydra import to_absolute_path, instantiate_arch, ModulusConfig
from modulus.sym.solver import Solver
from modulus.sym.domain import Domain
from modulus.sym.geometry.primitives_2d import Rectangle, Line, Channel2D
from modulus.sym.utils.sympy.functions import parabola
from modulus.sym.utils.io import csv_to_dict
from modulus.sym.eq.pdes.navier_stokes import NavierStokes, GradNormal
from modulus.sym.eq.pdes.basic import NormalDotVec
from modulus.sym.eq.pdes.turbulence_zero_eq import ZeroEquation
from modulus.sym.eq.pdes.advection_diffusion import AdvectionDiffusion
from modulus.sym.domain.constraint import (
    PointwiseBoundaryConstraint,
    PointwiseInteriorConstraint,
    IntegralBoundaryConstraint,
    PointwiseConstraint,
)
from modulus.sym.domain.monitor import PointwiseMonitor
from modulus.sym.domain.validator import PointwiseValidator
from modulus.sym.key import Key
from modulus.sym.node import Node

Defining the Equations, Networks and Nodes for an Inverse Problem

The process of creating a neural network for an inverse problem is similar to that of most problems you have seen in the previous tutorials. However, the information for the flow variables, and in turn their gradients, is already available (from the OpenFOAM data) for the network to memorize. Hence, this example detaches these variables in the computation graph of their respective equations. This means that only the networks predicting 'nu' and 'D' are optimized to minimize the equation residuals; the velocity, pressure and their gradients are treated as ground truth data.

Also, note that the viscosity and diffusivity are passed to the equations symbolically (as Symbol("nu") and Symbol("D") respectively, rather than as numeric values) because they are the unknowns in this problem.

Similar to the tutorial Scalar Transport: 2D Advection Diffusion, you will train two separate networks to memorize the flow variables (\(u\), \(v\), \(p\)) and the scalar transport variable (\(c\)). Also, because 'nu' and 'D' are unknowns, you will have two additional networks, invert_net_nu and invert_net_D, that produce \(\nu\) and \(\alpha\) at their output nodes.
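
For context, the only difference between a forward and an inverse setup is how the coefficient is supplied to the PDE. A minimal sketch, using a hypothetical constant value of 0.01 for the forward case:

# forward problem: the viscosity is a known constant baked into the PDE node
ns_forward = NavierStokes(nu=0.01, rho=1.0, dim=2, time=False)

# inverse problem: the viscosity is left symbolic, so the unrolled graph expects
# a "nu" input, which is supplied by the invert_net_nu network defined below
ns_inverse = NavierStokes(nu=Symbol("nu"), rho=1.0, dim=2, time=False)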

The code to generate the nodes for the problem is shown here:

nu, D = Symbol("nu"), Symbol("D")

# make list of nodes to unroll graph on
ns = NavierStokes(nu=nu, rho=1.0, dim=2, time=False)
ade = AdvectionDiffusion(T="c", rho=1.0, D=D, dim=2, time=False)
flow_net = instantiate_arch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u"), Key("v"), Key("p")],
    cfg=cfg.arch.fully_connected,
)
heat_net = instantiate_arch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("c")],
    cfg=cfg.arch.fully_connected,
)
invert_net_nu = instantiate_arch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("nu")],
    cfg=cfg.arch.fully_connected,
)
invert_net_D = instantiate_arch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("D")],
    cfg=cfg.arch.fully_connected,
)
nodes = (
    ns.make_nodes(
        detach_names=[
            "u",
            "u__x",
            "u__x__x",
            "u__y",
            "u__y__y",
            "v",
            "v__x",
            "v__x__x",
            "v__y",
            "v__y__y",
            "p",
            "p__x",
            "p__y",
        ]
    )
    + ade.make_nodes(
        detach_names=["u", "v", "c", "c__x", "c__y", "c__x__x", "c__y__y"]
    )
    + [flow_net.make_node(name="flow_network")]
    + [heat_net.make_node(name="heat_network")]
    + [invert_net_nu.make_node(name="invert_nu_network")]
    + [invert_net_D.make_node(name="invert_D_network")]
)

base_temp = 293.498
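
To see what detach_names accomplishes, here is a minimal, standalone PyTorch sketch (not Modulus Sym code): the memorized field values enter the residual as constants, so the residual loss can only update the parameters that produce the unknown coefficient.

import torch

u__x = torch.tensor([0.3], requires_grad=True)  # stand-in for a memorized velocity gradient
nu = torch.tensor([0.05], requires_grad=True)   # stand-in for the predicted viscosity

residual = (nu * u__x.detach()) ** 2  # u__x is detached, i.e. treated as fixed data
residual.backward()

print(u__x.grad)  # None: no gradient flows back towards the flow network
print(nu.grad)    # tensor([0.0090]): only the inversion network receives a gradient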

Assimilating data from CSV files/point clouds to create training data

PointwiseConstraint: This tutorial does not solve the problem over the full domain, so it does not use the geometry module to create a geometry for sampling points. Instead, you use point cloud data in the form of a .csv file. The PointwiseConstraint class handles such input data. It takes in separate dictionaries for the input variables and the output variables; each dictionary has a key for every variable and a NumPy array of values associated with that key. The batch size used to sample this .csv point cloud is specified through the batch_size argument.

Since part of the problem involves memorizing the given flow field, you will have ['x', 'y'] as the input keys and ['u', 'v', 'p', 'c', 'continuity', 'momentum_x', 'momentum_y', 'advection_diffusion_c'] as the output keys. By setting ['u', 'v', 'p', 'c'] to the values from the OpenFOAM data, you make the network assimilate the OpenFOAM distribution of these variables in the selected domain. By setting ['continuity', 'momentum_x', 'momentum_y', 'advection_diffusion_c'] equal to \(0\), you also instruct the network to satisfy the PDE residuals at those sampled points. Now, except for \(\nu\) and \(\alpha\), all the variables in these PDEs are known, so the network can use this information to invert out the unknowns.
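
Schematically, the two dictionaries are plain Python dicts of NumPy arrays, one column array per variable with a row for each sampled point. The toy arrays below are hypothetical and only illustrate the expected structure:

import numpy as np

n_points = 4  # toy size; the real point cloud has many more samples
invar = {"x": np.random.rand(n_points, 1), "y": np.random.rand(n_points, 1)}
outvar = {
    "u": np.random.rand(n_points, 1),       # target values to memorize (from OpenFOAM)
    "continuity": np.zeros((n_points, 1)),  # PDE residual forced to zero at these points
}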

The code to generate this data constraint is shown here:

# OpenFOAM data
file_path = "openfoam/heat_sink_Pr5_clipped2.csv"
if not os.path.exists(to_absolute_path(file_path)):
    warnings.warn(
        f"Directory {file_path} does not exist. Cannot continue. Please download the additional files from NGC <TODO: Add link here>"
    )
    sys.exit()

mapping = {
    "Points:0": "x",
    "Points:1": "y",
    "U:0": "u",
    "U:1": "v",
    "p": "p",
    "T": "c",
}
openfoam_var = csv_to_dict(
    to_absolute_path("openfoam/heat_sink_Pr5_clipped2.csv"), mapping
)
openfoam_var["c"] = openfoam_var["c"] / base_temp - 1.0
openfoam_invar_numpy = {
    key: value for key, value in openfoam_var.items() if key in ["x", "y"]
}
openfoam_outvar_numpy = {
    key: value for key, value in openfoam_var.items() if key in ["u", "v", "p", "c"]
}
openfoam_outvar_numpy["continuity"] = np.zeros_like(openfoam_outvar_numpy["u"])
openfoam_outvar_numpy["momentum_x"] = np.zeros_like(openfoam_outvar_numpy["u"])
openfoam_outvar_numpy["momentum_y"] = np.zeros_like(openfoam_outvar_numpy["u"])
openfoam_outvar_numpy["advection_diffusion_c"] = np.zeros_like(
    openfoam_outvar_numpy["u"]
)

# make domain
domain = Domain()

# interior
data = PointwiseConstraint.from_numpy(
    nodes=nodes,
    invar=openfoam_invar_numpy,
    outvar=openfoam_outvar_numpy,
    batch_size=cfg.batch_size.data,
)
domain.add_constraint(data, "interior_data")

Adding Monitors

In this tutorial, you will create monitors for the convergence of the domain-averaged 'nu' and 'D' as the solution progresses. Once the average values of these quantities have reached a steady state, you can end the simulation. The code to generate the PointwiseMonitor is shown here:

# add monitors
monitor = PointwiseMonitor(
    openfoam_invar_numpy,
    output_names=["nu"],
    metrics={"mean_nu": lambda var: torch.mean(var["nu"])},
    nodes=nodes,
)
domain.add_monitor(monitor)

monitor = PointwiseMonitor(
    openfoam_invar_numpy,
    output_names=["D"],
    metrics={"mean_D": lambda var: torch.mean(var["D"])},
    nodes=nodes,
)
domain.add_monitor(monitor)
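
Any callable that maps the dictionary of requested variables to a scalar tensor can be logged as a metric. As an illustration only (this monitor is not part of the original example), a hypothetical monitor tracking the spread of the predicted viscosity could be added in the same way:

# hypothetical extra monitor: track the spread of the predicted viscosity
monitor = PointwiseMonitor(
    openfoam_invar_numpy,
    output_names=["nu"],
    metrics={"std_nu": lambda var: torch.std(var["nu"])},
    nodes=nodes,
)
domain.add_monitor(monitor)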

Once the Python file is set up, the training can be started simply by executing the script.

python heat_sink_inverse.py

You can monitor the Tensorboard plots to see the convergence of the simulation. The Tensorboard graphs should look similar to the ones shown in Fig. 77.
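
The Tensorboard event files are written to the run's output directory created by Hydra; pointing Tensorboard at that directory (for example, tensorboard --logdir outputs, assuming the default output location) brings up the convergence plots for the monitored quantities.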

Table 3 Comparison of the inverted coefficients with the actual values

Property                         | OpenFOAM (True)          | Modulus Sym (Predicted)
Kinematic Viscosity \((m^2/s)\)  | \(1.00 \times 10^{-2}\)  | \(9.87 \times 10^{-3}\)
Thermal Diffusivity \((m^2/s)\)  | \(2.00 \times 10^{-3}\)  | \(2.53 \times 10^{-3}\)

Fig. 77 Tensorboard plots for \(\alpha\) and \(\nu\) (inverse_problem_results_try2.png)

© Copyright 2023, NVIDIA Modulus Team. Last updated on Aug 8, 2023.