Release Notes

Modulus v22.03 has been ported from TensorFlow to PyTorch to leverage new framework capabilities and to make new research developments available faster for our ecosystem to build on and extend. Several API changes have also been incorporated in v22.03 to achieve these objectives.

New features/Highlights v22.03

New Network Architectures

  • Fourier Neural Operator: Physics-inspired Neural Network model that uses global convolutions in spectral space as an inductive bias for training Neural Network models of physical systems. It incorporates important spatial and temporal correlations, which strongly govern the dynamics of many physical systems that obey PDE laws.

  • Physics Informed Neural Operator: PINO is the explicitly physics-informed version of the FNO. PINO combines the operator-learning and function-optimization frameworks. In the operator-learning phase, PINO learns the solution operator over multiple instances of the parametric PDE family.

  • Adaptive Fourier Neural Operator: An adaptive FNO for scaling self-attention to high-resolution images in vision transformers by establishing a link between operator learning and token mixing. AFNO builds on the FNO, which allows framing token mixing as a continuous global convolution without any dependence on the input resolution. The resulting model is highly parallel, with quasi-linear complexity and memory that scales linearly with sequence size.

  • DeepONet: A DeepONet consists of two sub-networks, one encoding the input function and another encoding the locations, whose outputs are merged to compute the final output. Owing to this inductive bias, DeepONets have been shown to reduce the generalization error compared to fully connected networks.
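The core idea behind the FNO family above is a global convolution performed in spectral space: transform the input to Fourier space, apply learned weights to a truncated set of low-frequency modes, and transform back. The sketch below illustrates this with a single 1D spectral convolution in NumPy; the function name and the identity weights are illustrative only and are not part of the Modulus API.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One FNO-style spectral convolution (illustrative sketch):
    FFT to spectral space, mix the lowest `n_modes` frequencies with
    learned complex weights, zero out the rest, and inverse-FFT back.
    Truncating modes is what makes the layer resolution-independent."""
    u_hat = np.fft.rfft(u)                          # global transform
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # truncate + mix modes
    return np.fft.irfft(out_hat, n=u.shape[0])      # back to physical space

# With identity weights, a low-frequency signal passes through unchanged,
# since all of its spectral content sits inside the retained modes.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x)
v = spectral_conv_1d(u, np.ones(4, dtype=complex), n_modes=4)
```

In a real FNO layer the weights are trainable per-mode tensors and the spectral path is summed with a pointwise linear ("bypass") path before the nonlinearity; this sketch keeps only the spectral path to show the mechanism.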

Modeling Enhancements

  • 2-eqn. Turbulence: Solution of 2-equation turbulence models (k-epsilon and k-omega) for fully developed turbulent flow in a 2D channel using wall functions. Two types of wall functions (standard and Launder-Spalding) have been tested and demonstrated on this example problem.

  • Exact boundary condition imposition: A new algorithm based on the theory of R-functions and transfinite interpolation is implemented to exactly impose the Dirichlet boundary conditions on 2D geometries. In this algorithm, the neural network solution to a given PDE is constrained to a boundary condition-aware and geometry-aware ansatz, and a loss function based on the first-order formulation of the PDE is minimized to train a solution that exactly satisfies the boundary conditions.
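The constrained ansatz described above can be illustrated in one dimension: the trial solution is written as a boundary-interpolating term plus a distance-like function that vanishes on the boundary, multiplied by the network output. The function and variable names below are hypothetical; Modulus builds the distance function from R-functions and transfinite interpolation on general 2D geometries rather than the simple polynomial used here.

```python
import numpy as np

def exact_bc_ansatz(x, net, g0=0.0, g1=1.0):
    """Constrain a trial solution on [0, 1] so that u(0) = g0 and
    u(1) = g1 hold exactly, for any network output whatsoever.
    `g` interpolates the boundary data; `phi` vanishes exactly on the
    boundary, so the network can only modify the interior."""
    g = g0 * (1.0 - x) + g1 * x   # boundary-interpolating term
    phi = x * (1.0 - x)           # zero exactly at x = 0 and x = 1
    return g + phi * net(x)

# Any stand-in "network" still satisfies the boundary conditions exactly.
net = lambda x: np.sin(3.0 * x) + 2.0
x = np.array([0.0, 0.5, 1.0])
u = exact_bc_ansatz(x, net)   # u[0] == 0.0 and u[-1] == 1.0 by construction
```

Because the boundary conditions are satisfied by construction, the training loss needs only the PDE residual, which is why the first-order formulation of the PDE suffices.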

Training features

  • Support for new optimizers: Modulus now supports 30+ optimizers including the built-in PyTorch optimizers and the optimizers in the torch-optimizer library. Includes support for AdaHessian, a second-order stochastic optimizer that approximates an exponential moving average of the Hessian diagonal for adaptive preconditioning of the gradient vector.

  • New algorithms for loss balancing: Three new loss balancing algorithms, namely Grad Norm, ReLoBRaLo (Relative Loss Balancing with Random Lookback), and Soft Adapt, are implemented. These algorithms dynamically tune the loss weights based on the relative training rates of the different losses. Also, Neural Tangent Kernel (NTK) analysis is implemented. NTK is a neural network analysis tool that indicates the convergence speed of each loss component, providing an explainable choice of weights for the different loss terms. Grouping the MSE losses allows the NTK to be computed dynamically during training.

  • Sobolev (gradient-enhanced) training: Sobolev training of neural network solvers incorporates derivative information of the PDE residuals into the loss function.

  • Hydra Configs: A big part of model development is hyperparameter tuning that requires performing multiple training runs with different configurations. Usage of Hydra within Modulus allows for more extensibility and configurability. Certain components of the training pipeline can now be switched out for other variants with no code change. Hydra multi-run also allows for better training workflows and running a hyperparameter sweep with a single command.

  • Post-processing: Modulus now supports new TensorBoard and VTK features that allow better visualization of model outputs during and after training.
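The relative-training-rate idea behind the loss-balancing algorithms above can be sketched in a few lines: weight each loss term by a softmax over its rate of change, so terms that are decreasing slowly receive larger weights. This is a simplified SoftAdapt-style illustration under assumed names; the Modulus implementations of Grad Norm, ReLoBRaLo, and Soft Adapt differ in their exact formulas.

```python
import numpy as np

def softadapt_weights(prev_losses, curr_losses, temperature=0.1):
    """SoftAdapt-style loss balancing (illustrative sketch):
    compute each term's relative rate of change between iterations and
    softmax the rates, so slowly improving terms get the largest weights.
    `temperature` controls how sharply the weights concentrate."""
    rates = np.asarray(curr_losses) / (np.asarray(prev_losses) + 1e-12)
    scaled = rates / temperature
    scaled = scaled - scaled.max()   # shift for numerical stability
    w = np.exp(scaled)
    return w / w.sum()               # weights sum to 1

prev = [1.0, 1.0, 1.0]
curr = [0.9, 0.5, 0.1]   # third loss term is improving fastest
w = softadapt_weights(prev, curr)
# The stalled first term gets the largest weight, the fast third the smallest.
```

In training, the weighted total loss would be `(w * losses).sum()`, with the weights recomputed (or smoothed) every few iterations.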

Feature Summary

  • Improved stability in multi-GPU/multi-Node implementations using linear-exponential learning rate and utilization of TF32 precision for A100 GPUs

  • Physics types:

    • Linear Elasticity (plane stress, plane strain and 3d)

    • Fluid Mechanics

    • Heat Transfer

    • Coupled Fluid-Thermal

    • Electromagnetics

    • 2D wave propagation

    • 2-Eqn. Turbulence Model for channel flow

  • Solution of differential equations:

    • Ordinary Differential Equations

    • Partial Differential Equations

      • Differential (strong) form

      • Integral (weak) form

  • Several Neural Network architectures to choose from:

    • Fully Connected Network

    • Fourier Feature Network

    • Sinusoidal Representation Network

    • Modified Fourier Network

    • Deep Galerkin Method Network

    • Modified Highway Network

    • Multiplicative Filter Network

    • Multi-scale Fourier Networks

    • Spatial-temporal Fourier Feature Networks

    • Hash Encoding Network

    • Super Resolution Net

  • Neural Operators

    • Fourier Neural Operator (FNO)

    • Physics Informed Neural Operator (PINO)

    • Adaptive Fourier Neural Operator (AFNO)

    • DeepONet

  • Other Features include:

    • Global mass balance constraints

    • SDF (Signed Distance Function) weighting for PDEs in flow problems for rapid convergence

    • Exact mass balance constraints

    • Global and local learning rate annealing

    • Global adaptive activation functions

    • Halton sequences for low-discrepancy point cloud generation

    • Gradient accumulation

    • Time-stepping schemes for transient problems

    • Temporal loss weighting and time marching for continuous time approach

    • Importance Sampling

    • Homoscedastic task uncertainty quantification for loss weighting

    • Exact boundary condition imposition

    • Sobolev (gradient-enhanced) training

    • Loss balancing schemes:

      • Grad Norm

      • ReLoBRaLo

      • Soft Adapt

      • NTK

    • Parameterized system representation for solving several configurations concurrently

    • Transfer learning for efficient surrogate-based parameterizations

    • Polynomial chaos expansion method for assessing how the model input uncertainties manifest in its output

    • APIs to automatically generate point-clouds from Boolean compositions of geometry primitives or import point clouds for complex geometry (STL files)

    • STL point-cloud generation using a fast ray-tracing method with rays emanating uniformly from a Fibonacci sphere; points are categorized as inside/outside/on-the-surface, with SDF and SDF-derivative calculation

    • Logically separate APIs for physics, boundary conditions and geometry consistent with traditional solver datasets

    • Support for optimizers: Modulus supports 30+ optimizers including the built-in PyTorch optimizers and optimizers from the torch-optimizer library. Support for AdaHessian optimizer

    • Hydra configs to allow for easy customization, improved accessibility and hyperparameter tuning

    • Tensorboard plots to easily visualize the outputs, histograms, etc. during training

Known Issues

  • TorchScript is not supported for the FNO, PINO, and DeepONet architectures. This will be available in the next release with a PyTorch version update.

  • Multi-GPU training is not supported for the FNO and PINO architectures. This will be available in the next release.

  • Multi-GPU training is not supported for all use cases of SequentialSolver. This will be further addressed in the next release.

  • Performance for problems with higher-order derivatives is not fully optimized in this release due to the switch to the PyTorch framework. We will work on more extensive optimizations in upcoming releases.