Lid Driven Cavity Background

This tutorial steps through the process of generating a 2D flow simulation for the Lid Driven Cavity (LDC) flow using NVIDIA Modulus. In this tutorial, you learn:

  1. How to generate a 2D geometry using the geometry module in Modulus.

  2. How to set up the boundary conditions.

  3. How to select the flow equations to be solved.

  4. How to interpret the different losses and tune the network.

  5. How to do some basic post-processing using Modulus.

Note

This tutorial assumes that you have successfully downloaded the Modulus repository.

Problem Description

The geometry for the problem is shown in Fig. 1. All boundaries of the geometry are stationary walls, except the top wall, which moves in the x-direction at a velocity of 1 \(m/s\). The Reynolds number based on the cavity height is chosen to be 10.


Fig. 1 Lid driven cavity geometry

Case Setup

It’s helpful to summarize the key concepts (discussed in depth in Theory) and how they relate to Modulus’ features. For solving any physics simulation that is defined by differential equations, you need information about the domain of the problem and its governing equations/boundary conditions. With this information, you can use a solver to solve the equations over the domain to obtain a solution. Modulus provides modules/functions to help in each stage of this process: problem definition, solution, monitoring, and post-processing. The domain can be defined using either Modulus’ Constructive Solid Geometry (CSG) module, the STL module, or general data from external sources like CSV/numpy/HDF5 files. For PINNs, once you have this geometry/point cloud, it can be sub-sampled into points on the boundaries, to satisfy the boundary conditions, and in interior regions, to minimize the PDE/ODE residuals.

Note

From v22.03, Modulus supports APIs and architectures for solving problems using only data, only physics, or a hybrid of the two. Modulus also now uses PyTorch as the backend framework (see the Release Notes). The major APIs for generating neural networks and for generating and sampling geometries have remained the same; however, there is a change in how these pieces come together to solve a problem.

The lid-driven cavity flow problem is solved using a fully physics-driven method.

Note

The Python script for this problem can be found at examples/ldc/ldc_2d.py.

Creating Nodes

Importing the required packages

Before adding any code, let’s import the packages required for creating the geometry and network, and for plotting the results.

from sympy import Symbol, Eq, Abs

import modulus
from modulus.hydra import to_absolute_path, to_yaml, instantiate_arch
from modulus.hydra.config import ModulusConfig
from modulus.csv_utils.csv_rw import csv_to_dict
from modulus.continuous.solvers.solver import Solver
from modulus.continuous.domain.domain import Domain
from modulus.geometry.csg.csg_2d import Rectangle

from modulus.continuous.constraints.constraint import (
    PointwiseBoundaryConstraint,
    PointwiseInteriorConstraint,
)
from modulus.continuous.validator.validator import PointwiseValidator
from modulus.continuous.inferencer.inferencer import PointwiseInferencer
from modulus.key import Key
from modulus.PDES.navier_stokes import NavierStokes
from modulus.tensorboard_utils.plotter import ValidatorPlotter, InferencerPlotter
from modulus.architecture import layers

Creating a PDE Node

For the LDC flow, solve the steady-state incompressible Navier-Stokes equations in 2D. The independent variables of the problem are \(x\) and \(y\), and the variables to solve for are \(u, v\) and \(p\). To represent the solution of this problem, create a neural network that can approximate the solution of these variables for the given equations and boundary conditions. This network will have \(x, y\) as inputs and \(u, v, p\) as outputs. The PDEs needed for this problem are the continuity equation and the momentum equations in the \(x\) and \(y\) directions, as shown below.

(1)\[\begin{split}\begin{aligned} \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} &= 0\\ u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} &= -\frac{\partial p}{\partial x} + \nu \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)\\ u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} &= -\frac{\partial p}{\partial y} + \nu \left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right)\end{aligned}\end{split}\]

These continuity and Navier-Stokes equations can be defined using the NavierStokes PDE within Modulus by specifying the appropriate arguments for time and dim. Since this is a 2D steady-state problem, set time=False and dim=2. Also define the kinematic viscosity and the density for the problem as \(0.01\) and \(1.0\) respectively; with the lid velocity of 1 \(m/s\) and the cavity height of \(0.1\), this gives \(Re = UL/\nu = 1 \times 0.1/0.01 = 10\), consistent with the problem description.

Creating a Neural Network Node

The default FullyConnectedArch (selected by passing cfg.arch.fully_connected as the cfg argument of the instantiate_arch() function) can be used to create the neural network for this problem. The default FullyConnectedArch is a 6-layer MLP (multi-layer perceptron) with 512 nodes per layer and the swish activation function. All of these parameters are user configurable, but the default values should work for this problem. Once all the PDEs and architectures are defined, create a list of nodes to pass to the different constraints you want to satisfy for this problem (i.e., equation residuals, boundary conditions, etc.).

This is how this is done in the code.

    # make list of nodes to unroll graph on
    ns = NavierStokes(nu=0.01, rho=1.0, dim=2, time=False)
    flow_net = instantiate_arch(
        input_keys=[Key("x"), Key("y")],
        output_keys=[Key("u"), Key("v"), Key("p")],
        cfg=cfg.arch.fully_connected,
    )
    nodes = ns.make_nodes() + [flow_net.make_node(name="flow_network", jit=cfg.jit)]
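
All of these architecture defaults can also be changed from the config file rather than in code, as the next section describes. A quick sketch of such an override (assuming the standard layer_size and nr_layers keys of the fully-connected schema):

arch:
  fully_connected:
    layer_size: 256  # nodes per layer (default 512)
    nr_layers: 4     # number of layers (default 6)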

Using Hydra to Configure Modulus

At the heart of using Modulus are Hydra config files; the one for this example is shown below. Using Hydra allows a highly customizable yet user-friendly method for configuring the majority of Modulus’ features. More information can be found in Modulus Configuration.

defaults :
  - modulus_default
  - arch:
      - fully_connected
  - scheduler: tf_exponential_lr
  - optimizer: adam
  - loss: sum
  - _self_
jit: false
scheduler:
  decay_rate: 0.95
  decay_steps: 4000

training:
  rec_validation_freq: 1000
  rec_inference_freq: 2000
  rec_monitor_freq: 1000
  rec_constraint_freq: 2000
  max_steps : 10000

batch_size:
  TopWall: 1000
  NoSlip: 1000
  Interior: 4000
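
This config is consumed by decorating the script’s entry point with modulus.main, which loads the YAML (assumed here to sit at conf/config.yaml relative to the script) and passes it in as the ModulusConfig object used throughout this tutorial. A minimal skeleton:

@modulus.main(config_path="conf", config_name="config")
def run(cfg: ModulusConfig) -> None:
    # the nodes, geometry, constraints, and validators from this tutorial
    # are created here and handed to the Solver at the end
    ...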

Creating Geometry

Now that you have the PDEs and architectures defined, the LDC geometry can be generated using the CSG module in Modulus. The CSG module supports a wide variety of primitives: rectangle, circle, triangle, infinite channel, and line in 2D; and sphere, cone, cuboid, infinite channel, plane, cylinder, torus, tetrahedron, and triangular prism in 3D. Other complex geometries can be constructed from these primitives using operations like add, subtract, intersect, etc., as sketched below. Please see the source code documentation for more details on how each shape is defined, as well as for updates on newly added geometries.
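
As a brief illustration of these operations, a plate with a hole could be built from two primitives. This is a sketch only; it assumes Circle is available from the same csg_2d module as Rectangle and that the -, +, and & operators map to subtract, add, and intersect:

from modulus.geometry.csg.csg_2d import Rectangle, Circle

# hypothetical plate with a circular hole cut from its center
plate = Rectangle((-0.5, -0.5), (0.5, 0.5))
hole = Circle((0, 0), 0.1)  # (center, radius)
plate_with_hole = plate - hole  # boolean subtract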

Begin by defining the required symbolic variables for the geometry, then generate the 2D square geometry using the Rectangle geometry object. The symbolic variables will later be used to sub-sample the geometry into different boundaries, interior regions, etc. while defining constraints.

In Modulus, a Rectangle is defined using the coordinates for two opposite corner points. The code below shows the process of generating a simple geometry in Modulus.

    # make geometry
    height = 0.1
    width = 0.1
    x, y = Symbol("x"), Symbol("y")
    rec = Rectangle((-width / 2, -height / 2), (width / 2, height / 2))

To visualize the geometry, you can sample either on the boundary or in the interior of the geometry. One such way is shown below, where the sample_boundary method samples points on the boundary of the geometry. sample_boundary can be replaced by sample_interior to sample points in the interior of the geometry.

The var_to_polyvtk function will generate a .vtp point cloud file for the geometry which can be viewed using tools like ParaView or any other point cloud plotting software.

# var_to_polyvtk is assumed to be importable from Modulus' plotting utilities
samples = rec.sample_boundary(1000)
var_to_polyvtk(samples, './geo')

The geometry module also features functionality like translate and rotate to generate shapes in arbitrary orientation. The use of these will be covered in upcoming tutorials.

Adding Constraints

Before we start defining the constraints of the problem, let’s set up the domain.

Setting up the Domain

The Domain and the configs are passed as inputs when using the Solver class. Apart from constraints, you can add various other utilities to the Domain, such as monitors, validation data, or points on which to do inference. Each of these is covered in detail in this and the upcoming examples.

    # make ldc domain
    ldc_domain = Domain()

Now let’s look at adding constraints to this domain. This can be thought of as adding specific constraints to the neural network optimization. For this physics-driven problem, the constraints are the boundary conditions and the equation residuals. The goal is to satisfy the boundary conditions exactly and, ideally, have the PDE residuals go to 0. These constraints can be specified within Modulus using classes like PointwiseBoundaryConstraint and PointwiseInteriorConstraint. An L2 loss (the default, which can be modified) is then constructed from these constraints and minimized by the optimizer. Specifying the constraints in this fashion is called soft constraints. Here’s how to specify them.

Boundary Constraints

For generating a boundary condition in Modulus, sample the points on the required boundary/surface of the geometry, specify the nodes you want to unroll (evaluate) on those points, and then assign them the true values you want them to take.

PointwiseBoundaryConstraint

A boundary can be sampled using PointwiseBoundaryConstraint. This will sample the entire boundary of the geometry you specify in the geometry argument; in this case, once you set geometry=rec, all the sides of the rectangle are sampled. A particular boundary of the geometry can be sub-sampled by using the criteria parameter. The criteria can be any symbolic function defined using the sympy library. For example, to sample the top wall, set criteria=Eq(y, height/2).

The desired values for the boundary condition are listed as a dictionary in the outvar argument.

The number of points to sample on each boundary is specified using the batch_size parameter.

Note

  • The criteria parameter is optional. With no criteria, all the boundaries in the geometry are sampled.

  • The network directory will only show the points sampled in a single batch. However, the total number of points used in the training can be computed by multiplying the batch size by the batch_per_epoch parameter, whose default value is 1000. In this example, the total number of points sampled on the top wall BC will be \(1000 \times 1000 = 1,000,000\). A sketch of overriding this parameter follows this note.
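
If you need a different number of unique training points, batch_per_epoch can be passed to the constraint directly. A minimal sketch (the other arguments mirror the constraints defined later in this tutorial):

    # sample 2000 unique batches of 1000 points instead of the default 1000
    wall = PointwiseBoundaryConstraint(
        nodes=nodes,
        geometry=rec,
        outvar={"u": 0, "v": 0},
        batch_size=1000,
        batch_per_epoch=2000,
    )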

For the LDC problem, define the top wall with a \(u\) velocity equal to 1 \(m/s\) in the \(+ve\) x-direction, while all other walls are stationary (\(u, v = 0\)). As can be observed in Fig. 2, this gives rise to sharp discontinuities at the corners, where the \(u\) velocity jumps sharply from \(0\) to \(1.0\). As outlined in the theory section Spatial Weighting of Losses (SDF weighting), this can be avoided by weighting the loss for this boundary such that it is 0 at the corners. You can use the function \(1.0 - 20.0|x|\), shown in Fig. 2, for this purpose; with a wall of width 0.1, it decays from 1 at the center (\(x = 0\)) to 0 at the edges (\(x = \pm 0.05\)). As with the weighting of losses for equations (Fig. 22), eliminating such discontinuities speeds up convergence and allows you to achieve better accuracy.

Weights for any variables can be specified as an input to the lambda_weighting parameter, as formulated below.
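
With this weighting, the loss for the \(u\) velocity on the top wall takes a form analogous to the interior losses shown later in equations (2)-(4). A sketch in the same notation, where \(A\) is the length of the sampled boundary:

\[L_{u, top}= \frac{A}{N} \sum_{i=0}^{N} \lambda(x_i) \left(1.0 - u(x_i,y_i)\right)^2, \quad \lambda(x) = 1.0 - 20.0|x|\]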


Fig. 2 Weighting the sharp discontinuities in the boundary condition

PDE Constraints

The PDEs of the problem defined earlier need to be enforced on all the points in the interior of the geometry to achieve the desired solution. To do this, sample the points inside the required geometry, specify the nodes you want to unroll (evaluate) on those points, and then assign them the true values you want them to take.

PointwiseInteriorConstraint

Similar to sampling boundaries, use PointwiseInteriorConstraint to sample points in the interior of a geometry. The equations to solve are specified as a dictionary input to the outvar argument.

For the 2D LDC case, the continuity equation and the momentum equations in the x and y directions are needed, so you will have keys for 'continuity', 'momentum_x' and 'momentum_y'. Assign the value 0 to these keys; this represents the desired residual for these keys at the chosen points (in this case, all of the interior of the LDC geometry). You can also assign a nonzero value to these keys, which then acts as a custom forcing/source term. More examples of this can be found in the later chapters of this User Guide, but in general, if you want to add a source term, replace the 0 with the source value. To see how the equation keys are defined, you can look at the Modulus source or see the source code documentation (modulus/PDES/navier_stokes.py).

As an example, the definition of 'continuity' is presented here.

...
    # set equations
    self.equations = {}
    self.equations['continuity'] = rho.diff(t) + (rho*u).diff(x) + (rho*v).diff(y) + (rho*w).diff(z)
    ...

The resulting losses are formulated as depicted in the equations below. (Note that since dim=2 and time=False were set, the \(z\)- and time-derivative terms in the definition above vanish.)

(2)\[L_{continuity}= \frac{V}{N} \sum_{i=0}^{N} ( 0 - continuity(x_i,y_i))^2\]
(3)\[L_{momentum_{x}}= \frac{V}{N} \sum_{i=0}^{N} ( 0 - momentum_{x}(x_i,y_i))^2\]
(4)\[L_{momentum_{y}}= \frac{V}{N} \sum_{i=0}^{N} ( 0 - momentum_{y}(x_i,y_i))^2\]

The bounds parameter determines the range for sampling the values of the variables x and y. The lambda_weighting parameter is used to determine the weights for the different losses. In this problem, each equation at each point is weighted by its distance from the boundary using the Signed Distance Field (SDF) of the geometry. This implies that the points away from the boundary are weighted higher compared to the ones closer to the boundary. This type of weighting of the loss functions leads to faster convergence since it avoids discontinuities at the boundaries (section Spatial Weighting of Losses (SDF weighting)).

Note

The lambda_weighting parameter is optional. If not specified, the loss for each equation/boundary variable at each point is weighted equally.

Let’s now see how this is done within the code:

    # top wall
    top_wall = PointwiseBoundaryConstraint(
        nodes=nodes,
        geometry=rec,
        outvar={"u": 1.0, "v": 0},
        batch_size=cfg.batch_size.TopWall,
        lambda_weighting={"u": 1.0 - 20 * Abs(x), "v": 1.0},  # weight edges to be zero
        criteria=Eq(y, height / 2),
    )
    ldc_domain.add_constraint(top_wall, "top_wall")

    # no slip
    no_slip = PointwiseBoundaryConstraint(
        nodes=nodes,
        geometry=rec,
        outvar={"u": 0, "v": 0},
        batch_size=cfg.batch_size.NoSlip,
        criteria=y < height / 2,
    )
    ldc_domain.add_constraint(no_slip, "no_slip")

    # interior
    interior = PointwiseInteriorConstraint(
        nodes=nodes,
        geometry=rec,
        outvar={"continuity": 0, "momentum_x": 0, "momentum_y": 0},
        batch_size=cfg.batch_size.Interior,
        bounds={x: (-width / 2, width / 2), y: (-height / 2, height / 2)},
        lambda_weighting={
            "continuity": rec.sdf,
            "momentum_x": rec.sdf,
            "momentum_y": rec.sdf,
        },
    )
    ldc_domain.add_constraint(interior, "interior")

Adding Validation Node

You can add CFD data, or data from any other PDE solver, and use it to make comparisons with Modulus’ results. This section shows how to set up such a validation domain in Modulus. Here, results from OpenFOAM, an open-source CFD solver, are used for the comparison. The results can be imported into Modulus as a .csv file (or any other data format like .npz, .vtk, etc.). The data needs to be converted into a dictionary of numpy arrays for the input and output variables before it can be used inside Modulus. For csv files, this can be done using the csv_to_dict function.

The validation data is then added to the domain using PointwiseValidator. The dictionary of generated numpy arrays for input and output variables is used as an input.

    # add validator
    mapping = {"Points:0": "x", "Points:1": "y", "U:0": "u", "U:1": "v", "p": "p"}
    openfoam_var = csv_to_dict(
        to_absolute_path("openfoam/cavity_uniformVel0.csv"), mapping
    )
    openfoam_var["x"] += -width / 2  # center OpenFoam data
    openfoam_var["y"] += -height / 2  # center OpenFoam data
    openfoam_invar_numpy = {
        key: value for key, value in openfoam_var.items() if key in ["x", "y"]
    }
    openfoam_outvar_numpy = {
        key: value for key, value in openfoam_var.items() if key in ["u", "v"]
    }
    openfoam_validator = PointwiseValidator(
        openfoam_invar_numpy,
        openfoam_outvar_numpy,
        nodes,
        batch_size=1024,
        plotter=ValidatorPlotter(),
    )
    ldc_domain.add_validator(openfoam_validator)

Training

Create a solver using the domain just created along with the other configurations that define the optimizer choices and settings (i.e., cfg) via Modulus’ Solver class. The solver can then be executed using the solve method.

    # make solver
    slv = Solver(cfg, ldc_domain)

    # start solver
    slv.solve()


if __name__ == "__main__":
    run()

The file setup for Modulus is now complete. You are now ready to solve the CFD simulation using Modulus’ neural network solver.

Training the model

The training can now be started simply by executing the Python script.

python ldc_2d.py

The console should print the losses at each step. However, it is difficult to monitor convergence through the command window, so you can use Tensorboard instead to graphically monitor the losses as the training progresses.

Results and Post Processing

Setting up Tensorboard

Tensorboard is a great tool for visualizing machine learning experiments. To visualize the various training and validation losses, Tensorboard can be set up as follows:

  1. In a separate terminal window, navigate to the working directory of the example, in this case examples/ldc/

  2. Type in the following command on the command line:

    tensorboard --logdir=./ --port=7007
    

    Specify the port you want to use; this example uses 7007. Once running, the command prompt shows the URL at which the results can be viewed.

  3. To view the results, open a web browser and go to the URL shown in the command prompt, for example: http://localhost:7007/#scalars. A window as shown in Fig. 3 should open in the browser.

The Tensorboard window displays the various losses at each step during the training. The AdamOptimizer loss is the total loss computed by the network. The “loss_L2continuity”, “loss_L2momentum_x” and “loss_L2momentum_y” curves show the L2 losses computed for the continuity equation and the momentum equations in the x and y directions, respectively. The “loss_L2u” and “loss_L2v” curves show how well the boundary conditions are satisfied (soft constraints).


Fig. 3 Tensorboard Interface.

Output Files

Checkpoint data is saved based on the results recording frequencies specified in the config ('rec_results_freq' and the other rec_*_freq parameters shown earlier). The network directory (in this case 'outputs/') contains the following important files/directories.

  1. optim_checkpoint.pth, flow_network.pth: Optimizer checkpoint and flow network saved during training.

  2. constraints: This directory contains the data computed on the points added to the domain using add_constraint(). The data is present in the form of .vtp files, which can be viewed using visualization tools like ParaView. You will see the true and predicted values of all the nodes that were passed to the nodes argument of the constraint. For example, ./constraints/Interior.vtp will have the variables pred_continuity and true_continuity representing the network-predicted and the specified values for continuity. Fig. 4 shows the comparison between the true and computed continuity. This directory is useful for checking how well the boundary conditions and equations are being satisfied at the sampled points.


    Fig. 4 Visualization using Paraview. Left: Continuity as specified in the domain definition. Right: Computed continuity after training.

  3. validators: This directory contains the data computed on the points added to the domain using add_validator(). It is useful for validating the results against a reference solution. The data is in the form of .vtp and .npz files (based on the save_filetypes in the configs). The .vtp files can be viewed using visualization tools like ParaView. The .vtp/.npz files in this directory report the true values (the validation data) and the predicted values (the model’s inference) on the chosen points. For example, ./validators/validator.vtp contains variables like true_u, true_v, true_p and pred_u, pred_v, pred_p, corresponding to the true and the network-predicted values of the variables “u”, “v” and “p”. Fig. 5 shows the comparison between the true and the Modulus-predicted values; a sketch of post-processing these files directly follows the figure.


Fig. 5 Comparison with OpenFOAM results
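
Beyond viewing these files in ParaView or Tensorboard, the saved .npz arrays can be post-processed directly with numpy. A minimal sketch, assuming the validator output was saved as validator.npz under the network directory and that the arrays are stored under the key names described above:

import numpy as np

# load the saved validator arrays (true values from OpenFOAM,
# pred values from the trained network)
data = np.load("outputs/validators/validator.npz", allow_pickle=True)
true_u, pred_u = data["true_u"], data["pred_u"]

# relative L2 error of the u velocity against the reference data
rel_l2 = np.linalg.norm(pred_u - true_u) / np.linalg.norm(true_u)
print(f"relative L2 error (u): {rel_l2:.4e}")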

Extra: Adding Monitor and Inferencer

Monitor Node

Modulus allows you to monitor desired quantities by plotting them in Tensorboard every fixed number of iterations as the simulation progresses, and to analyze convergence based on the relative changes in the monitored quantities. A PointwiseMonitor can be used to create such a feature. Examples of such quantities are point values of variables, surface averages, volume averages, or any derived quantities formed from the variables being solved.

The flow variables are available as PyTorch tensors. You can perform tensor operations to create any desired derived variable of your choice. In the code below, we create monitors for continuity and momentum imbalance in the interior.

The points to sample can be selected in a similar way as we did for specifying some of the interior constraints.

...
     # add monitors
     # (this snippet assumes `import torch` and a PointwiseMonitor import, by
     # analogy with the validator/inferencer imports used earlier, e.g.
     # from modulus.continuous.monitor.monitor import PointwiseMonitor)
     global_monitor = PointwiseMonitor(
         rec.sample_interior(4000, bounds={x: (-width/2, width/2), y: (-height/2, height/2)}),
         output_names=["continuity", "momentum_x", "momentum_y"],
         metrics={
             "mass_imbalance": lambda var: torch.sum(
                 var["area"] * torch.abs(var["continuity"])
             ),
             "momentum_imbalance": lambda var: torch.sum(
                 var["area"]
                 * (torch.abs(var["momentum_x"]) + torch.abs(var["momentum_y"]))
             ),
         },
         nodes=nodes,
     )
     ldc_domain.add_monitor(global_monitor)

Fig. 6 LDC Monitors in Tensorboard

Inferencer Node

Modulus also allows you to plot the results on arbitrary domains. You can then monitor these domains in ParaView or in Tensorboard itself. More details on how to add Modulus data to Tensorboard can be found in TensorBoard in Modulus. The code below shows the use of the PointwiseInferencer.

    # add inferencer data
    grid_inference = PointwiseInferencer(
        openfoam_invar_numpy,
        ["u", "v", "p"],
        nodes,
        batch_size=1024,
        plotter=InferencerPlotter(),
    )
    ldc_domain.add_inferencer(grid_inference, "inf_data")

Fig. 7 LDC Inference in Tensorboard