Miscellaneous Concepts

In this section, we briefly introduce the generalized Polynomial Chaos (gPC) expansion, an efficient method for assessing how uncertainties in a model's inputs manifest in its output. Later, in Section gPC Based Surrogate Modeling Accelerated via Transfer Learning, we show how the gPC can be used as a surrogate for shape parameterization in both the tessellated and constructive solid geometry modules.

The \(p\)-th degree gPC expansion for a \(d\)-dimensional input \(\mathbf{\Xi}\) takes the following form

(97)\[u_p (\mathbf{\Xi}) = \sum_{\mathbf{i} \in \Lambda_{p,d}} c_{\mathbf{i}} \psi_{\mathbf{i}}(\mathbf{\Xi}),\]

where \(\mathbf{i}\) is a multi-index and \(\Lambda_{p,d}\) is the set of multi-indices defined as

(98)\[\Lambda_{p,d} = \{\mathbf{i} \in \mathbb{N}_0^d: ||\mathbf{i}||_1 \leq p\},\]

and the cardinality of \(\Lambda_{p,d}\) is

(99)\[C = |\Lambda_{p,d}| = \frac{(p+d)!}{p!d!}.\]

\(\{c_{\mathbf{i}}\}_{\mathbf{i} \in \Lambda_{p,d}}\) is the set of unknown expansion coefficients, which can be determined with the stochastic Galerkin, stochastic collocation, or least squares methods [2]. For the example presented in this user guide, we use the least squares method. Although \(C\) samples are in principle enough to solve this least squares problem, it is recommended to use at least \(2C\) samples for reasonable accuracy [2]. \(\{\psi_{\mathbf{i}}\}_{\mathbf{i} \in \mathbb{N}_0^d}\) is the set of orthonormal basis functions that satisfy the following condition

(100)\[ \int \psi_\mathbf{m}(\mathbf{\xi}) \psi_\mathbf{n}(\mathbf{\xi}) \rho(\mathbf{\xi}) d\mathbf{\xi} = \delta_{\mathbf{m} \mathbf{n}}, \,\,\, \mathbf{m}, \mathbf{n} \in \mathbb{N}_0^d.\]

Here \(\rho\) is the joint probability density of \(\mathbf{\Xi}\). For instance, for a uniformly or normally distributed input \(\mathbf{\Xi}\), the normalized Legendre and Hermite polynomials, respectively, satisfy the orthonormality condition in Equation (100).
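As an illustration of the least squares approach, the following minimal NumPy sketch (not the Modulus Sym API) fits a degree \(p=3\) gPC surrogate to a hypothetical one-dimensional model with a uniform input on \([-1,1]\), using normalized Legendre polynomials as the basis. For \(p=3\) and \(d=1\), the cardinality is \(C=4\), so \(2C=8\) samples are drawn.

    import numpy as np
    from numpy.polynomial.legendre import legval

    def legendre_orthonormal(xi, degree):
        # Normalized Legendre polynomials psi_0, ..., psi_degree evaluated at xi;
        # orthonormal with respect to the uniform density on [-1, 1].
        vals = np.stack([legval(xi, np.eye(degree + 1)[i]) for i in range(degree + 1)], axis=-1)
        return vals * np.sqrt(2 * np.arange(degree + 1) + 1)

    def model(xi):
        # Hypothetical model whose response we want to approximate.
        return np.exp(0.5 * xi) * np.sin(2.0 * xi)

    p = 3                                    # gPC degree
    C = p + 1                                # cardinality of Lambda_{p,d} for d = 1
    n_samples = 2 * C                        # at least 2C samples recommended
    rng = np.random.default_rng(0)
    xi = rng.uniform(-1.0, 1.0, n_samples)   # samples of the uniform input

    A = legendre_orthonormal(xi, p)          # (n_samples, C) design matrix
    coeffs, *_ = np.linalg.lstsq(A, model(xi), rcond=None)

    # Evaluate the surrogate u_p at new points and compare with the model.
    xi_test = np.linspace(-1.0, 1.0, 5)
    u_p = legendre_orthonormal(xi_test, p) @ coeffs
    print(np.c_[model(xi_test), u_p])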

In this section, we give some essential definitions of the relevant function spaces, Sobolev spaces, and some important identities. All integrals in this section should be understood in the Lebesgue sense.

\(L^p\) space

Let \(\Omega \subset \mathbb{R}^d\) be an open set. For any real number \(1\leq p<\infty\), we define

(101)\[L^p(\Omega)=\left\{u:\Omega\mapsto\mathbb{R}\bigg|u\mbox{ is measurable on }\Omega,\ \int_{\Omega}|u|^pdx<\infty \right\},\]

endowed with the norm

(102)\[\|u\|_{L^p(\Omega)}=\left(\int_{\Omega}|u|^pdx\right)^{\frac{1}{p}}.\]

For \(p=\infty\), we have

(103)\[L^\infty(\Omega)=\left\{u:\Omega\mapsto\mathbb{R}\bigg|u\mbox{ is bounded in $ \Omega $ outside a set of measure zero} \right\},\]

endowed with the norm

(104)\[\|u\|_{L^\infty(\Omega)}=\operatorname*{ess\,sup}_{\Omega}|u|.\]

Sometimes, for example for functions on unbounded domains, we only require local integrability. To this end, we define the following local \(L^p\) space

(105)\[L^p_{loc}(\Omega)=\left\{u:\Omega\mapsto\mathbb{R}\bigg|u\in L^p(V),\ \forall V\subset\subset\Omega \right\},\]

where \(V\subset\subset\Omega\) means \(V\) is compactly contained in \(\Omega\), i.e., \(\overline{V}\) is compact and \(\overline{V}\subset\Omega\).
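For example, the constant function \(u\equiv 1\) belongs to \(L^p_{loc}(\mathbb{R})\) for every \(p\) but to no \(L^p(\mathbb{R})\) with \(p<\infty\), while \(u(x)=x^{-1/2}\) belongs to \(L^1(0,1)\) but not to \(L^2(0,1)\), since \(\int_0^1 x^{-1}\,dx=\infty\).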

\(C^k\) space

Let \(k\geq 0\) be an integer and \(\Omega\subset \mathbb{R}^d\) be an open set. \(C^k(\Omega)\) is the space of \(k\)-times continuously differentiable functions, given by

(106)\[C^k(\Omega)=\left\{u:\Omega\mapsto\mathbb{R}\bigg|u\mbox{ is $ k $-times continuously differentiable}\right\}.\]

Let \(\mathbf{\alpha}=(\alpha_1,\alpha_2,\cdots,\alpha_d)\) be a \(d\)-fold multi-index of order \(|\mathbf{\alpha}|=\alpha_1+\alpha_2+\cdots+\alpha_d=k\). The \(k\)-th order (classical) derivative of \(u\) is denoted by

(107)\[D^{\mathbf{\alpha}}u=\frac{\partial^k}{\partial x_1^{\alpha_1}\partial x_2^{\alpha_2}\cdots\partial x_d^{\alpha_d}}u.\]

For the closure of \(\Omega\), denoted by \(\overline{\Omega}\), we have

(108)\[C^k(\overline{\Omega})=\left\{u:\Omega\mapsto\mathbb{R}\bigg|D^{\mathbf{\alpha}}u\mbox{ is uniformly continuous on bounded subsets of $ \Omega $, }\forall|\mathbf{\alpha}|\leq k\right\}.\]

When \(k=0\), we also write \(C(\Omega)=C^0(\Omega)\) and \(C(\overline{\Omega})=C^0(\overline{\Omega})\).

We also define the infinitely differentiable function space

(109)\[C^\infty(\Omega)=\left\{u:\Omega\mapsto\mathbb{R}\bigg|u\mbox{ is infinitely differentiable} \right\}=\bigcap_{k=0}^\infty C^k(\Omega)\]

and

(110)\[C^\infty(\overline{\Omega})=\bigcap_{k=0}^\infty C^k(\overline{\Omega}).\]

We use \(C_0(\Omega)\) and \(C_0^k(\Omega)\) to denote the functions in \(C(\Omega)\) and \(C^k(\Omega)\), respectively, that have compact support.

\(W^{k,p}\) space

The weak derivative is given by the following definition [1].

Definition

Suppose \(u,\ v\in L^1_{loc}(\Omega)\) and \(\mathbf{\alpha}\) is a multi-index. We say that \(v\) is the \(\mathbf{\alpha}^{th}\) weak derivative of \(u\), written

(111)\[D^{\mathbf{\alpha}}u=v,\]

provided

(112)\[\int_\Omega uD^{\mathbf{\alpha}}\phi dx=(-1)^{|\mathbf{\alpha}|}\int_{\Omega}v\phi dx\]

for all test functions \(\phi\in C_0^\infty(\Omega)\).

As a typical example, let \(u(x)=|x|\) and \(\Omega=(-1,1)\). From calculus we know that \(u\) is not (classically) differentiable at \(x=0\). However, it has the weak derivative

(113)\[\begin{split}(Du)(x)= \begin{cases} 1 & x>0,\\ -1 & x\leq 0. \end{cases}\end{split}\]
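This can be checked directly from (112): for any \(\phi\in C_0^\infty(-1,1)\), integrating by parts on \((-1,0)\) and \((0,1)\) separately and using \(\phi(\pm 1)=0\) gives

\[\int_{-1}^1 |x|\,\phi'(x)\,dx=\int_0^1 x\,\phi'(x)\,dx-\int_{-1}^0 x\,\phi'(x)\,dx=-\int_0^1 \phi\,dx+\int_{-1}^0 \phi\,dx=-\int_{-1}^1 (Du)(x)\,\phi(x)\,dx.\]

The value assigned to \(Du\) at \(x=0\) is immaterial, since \(\{0\}\) has measure zero.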

Definition

For an integer \(k\geq 0\) and real number \(p\geq 1\), the Sobolev space is defined by

(114)\[W^{k,p}(\Omega)=\left\{u\in L^p(\Omega)\bigg|D^{\mathbf{\alpha}}u\in L^p(\Omega),\ \forall|\mathbf{\alpha}|\leq k\right\},\]

endowed with the norm

(115)\[\|u\|_{k,p}=\left(\int_{\Omega}\sum_{|\mathbf{\alpha}|\leq k}|D^{\mathbf{\alpha}}u|^p dx\right)^{\frac{1}{p}}.\]

Obviously, when \(k=0\), we have \(W^{0,p}(\Omega)=L^p(\Omega)\).

When \(p=2\), \(W^{k,p}(\Omega)\) is a Hilbert space, also denoted by \(H^k(\Omega)=W^{k,2}(\Omega)\). The inner product in \(H^k(\Omega)\) is given by

(116)\[\langle u, v \rangle =\int_{\Omega}\sum_{|\mathbf{\alpha}|\leq k}D^{\mathbf{\alpha}}u\, D^{\mathbf{\alpha}}v\, dx.\]

A crucial subset of \(W^{k,p}(\Omega)\), denoted by \(W^{k,p}_0(\Omega)\), is

(117)\[W^{k,p}_0(\Omega)=\left\{u\in W^{k,p}(\Omega)\bigg| D^{\mathbf{\alpha}}u|_{\partial\Omega}=0,\ \forall|\mathbf{\alpha}|\leq k-1\right\}.\]

It is customary to write \(H^k_0(\Omega)=W_0^{k,2}(\Omega)\).
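As a simple example, \(u(x)=x(1-x)\) on \(\Omega=(0,1)\) satisfies \(u(0)=u(1)=0\) and belongs to \(H^1_0(0,1)\), with

\[\|u\|_{1,2}^2=\int_0^1 x^2(1-x)^2\,dx+\int_0^1 (1-2x)^2\,dx=\frac{1}{30}+\frac{1}{3}=\frac{11}{30}.\]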

In this subsection, we assume \(\Omega\subset \mathbb{R}^d\) is a bounded Lipschitz domain (see [3] for the definition of a Lipschitz domain).

Theorem (Green’s formulae)

Let \(u,\ v\in C^2(\overline{\Omega})\). Then

  1. (118)\[\int_\Omega \Delta u dx =\int_{\partial\Omega} \frac{\partial u}{\partial n} dS\]
  2. (119)\[\int_\Omega \nabla u\cdot\nabla v dx = -\int_\Omega u\Delta v dx+\int_{\partial\Omega} u \frac{\partial v}{\partial n} dS\]
  3. (120)\[\int_{\Omega} u\Delta v-v\Delta u dx = \int_{\partial\Omega} u\frac{\partial v}{\partial n}-v\frac{\partial u}{\partial n} dS\]
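As a quick sanity check, the second identity (119) can be verified symbolically on the unit square \(\Omega=(0,1)^2\). The SymPy sketch below is only illustrative; the particular \(u\) and \(v\) are arbitrary smooth choices.

    import sympy as sp

    x, y = sp.symbols("x y")
    u = x**2 * y            # arbitrary smooth functions on the unit square
    v = sp.sin(x) + y**3

    # Left-hand side of (119): volume integral of grad(u) . grad(v).
    grad_dot = sp.diff(u, x) * sp.diff(v, x) + sp.diff(u, y) * sp.diff(v, y)
    lhs = sp.integrate(grad_dot, (x, 0, 1), (y, 0, 1))

    # Right-hand side: -int u * Laplacian(v) dx + boundary integral of u * dv/dn.
    lap_v = sp.diff(v, x, 2) + sp.diff(v, y, 2)
    vol = -sp.integrate(u * lap_v, (x, 0, 1), (y, 0, 1))

    # Outward normals of the unit square: (1,0) on x=1, (-1,0) on x=0,
    # (0,1) on y=1, (0,-1) on y=0.
    bnd = (
        sp.integrate((u * sp.diff(v, x)).subs(x, 1), (y, 0, 1))
        - sp.integrate((u * sp.diff(v, x)).subs(x, 0), (y, 0, 1))
        + sp.integrate((u * sp.diff(v, y)).subs(y, 1), (x, 0, 1))
        - sp.integrate((u * sp.diff(v, y)).subs(y, 0), (x, 0, 1))
    )

    print(sp.simplify(lhs - (vol + bnd)))   # prints 0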

For the curl operator we have similar identities. To begin with, we define the two-dimensional curl operators for scalar and vector functions. For a scalar function \(u(x_1,x_2)\in C^1(\overline{\Omega})\), we have

(121)\[\nabla \times u = \left(\frac{\partial u}{\partial x_2},-\frac{\partial u}{\partial x_1}\right)\]

For a 2D vector function \(\mathbf{v}=(v_1(x_1,x_2),v_2(x_1,x_2))\in(C^1(\overline{\Omega}))^2\), we have

(122)\[\nabla \times \mathbf{v} = \frac{\partial v_2}{\partial x_1}-\frac{\partial v_1}{\partial x_2}\]
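For example, for the scalar function \(u=x_1x_2\) we have \(\nabla\times u=(x_1,-x_2)\), while for the vector function \(\mathbf{v}=(-x_2,x_1)\) we have \(\nabla\times\mathbf{v}=1-(-1)=2\).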

Then we have the following integral identities for curl operators.

Theorem

  1. Let \(\Omega\subset \mathbb{R}^3\) and \(\mathbf{u},\ \mathbf{v}\in (C^1(\overline{\Omega}))^3\). Then

    (123)\[\int_{\Omega}(\nabla \times \mathbf{u})\cdot\mathbf{v}\, dx = \int_{\Omega}\mathbf{u}\cdot(\nabla \times \mathbf{v})\, dx+\int_{\partial\Omega}(\mathbf{n} \times \mathbf{u}) \cdot \mathbf{v}\, dS,\]

    where \(\mathbf{n}\) is the unit outward normal.

  2. Let \(\Omega\subset \mathbb{R}^2\), \(\mathbf{u}\in (C^1(\overline{\Omega}))^2\), and \(v\in C^1(\overline{\Omega})\). Then

    (124)\[\int_{\Omega}(\nabla\times\mathbf{u})\, v\, dx = \int_{\Omega}\mathbf{u}\cdot(\nabla\times v)\, dx+\int_{\partial\Omega}(\mathbf{\tau}\cdot\mathbf{u})\, v\, dS,\]

    where \(\mathbf{\tau}\) is the unit tangent to \(\partial \Omega\).

Let \(\Omega_1 = (0,0.5)\times(0,1)\), \(\Omega_2 = (0.5,1)\times(0,1)\), and \(\Omega=(0,1)^2\). The interface is \(\Gamma=\overline{\Omega}_1\cap\overline{\Omega}_2\), and the Dirichlet boundary is \(\Gamma_D=\partial\Omega\). The domain for the problem is visualized in Fig. 31. The problem was originally defined in [4].

Fig. 31 Left: Domain of the interface problem. Right: True solution.

The PDEs for the problem are defined as

(125)\[\begin{split}\begin{aligned} -\Delta u &= f \quad \text{ in } \Omega_1 \cup \Omega_2,\\ u &= g_D \quad \text{ on } \Gamma_D,\\ \left[\frac{\partial u}{\partial \mathbf{n}}\right] &=g_I \quad \text{ on } \Gamma,\end{aligned}\end{split}\]

where \(f=-2\), \(g_I=2\) and

(126)\[\begin{split}g_D = \begin{cases} x^2 & 0\leq x\leq \frac{1}{2}\\ (x-1)^2 & \frac{1}{2}< x\leq 1 \end{cases} .\end{split}\]

\(g_D\) is also the exact solution of (125).

The jump \([\cdot]\) on the interface \(\Gamma\) is defined by

(127)\[ \left[\frac{\partial u}{\partial \mathbf{n}}\right]=\nabla u_1\cdot\mathbf{n}_1+\nabla u_2\cdot\mathbf{n}_2,\]

where \(u_i\) is the solution in \(\Omega_i\) and \(\mathbf{n}_i\) is the unit outward normal on \(\partial\Omega_i\cap\Gamma\).
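One can verify directly that \(g_D\) satisfies (125) in the piecewise sense: on \(\Omega_1\), \(u_1=x^2\) gives \(-\Delta u_1=-2=f\), and on \(\Omega_2\), \(u_2=(x-1)^2\) gives \(-\Delta u_2=-2=f\). On the interface \(\Gamma=\{x=1/2\}\), with \(\mathbf{n}_1=(1,0)\) and \(\mathbf{n}_2=(-1,0)\),

\[\left[\frac{\partial u}{\partial \mathbf{n}}\right]=2x\Big|_{x=1/2}-2(x-1)\Big|_{x=1/2}=1+1=2=g_I,\]

and \(u\) itself is continuous across \(\Gamma\), since \((1/2)^2=(1/2-1)^2\).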

As suggested in the original reference, this problem does not admit a strong (classical) solution, but it has a unique weak solution (\(g_D\)), which is shown in Fig. 31.

Note: In the original paper [4], the PDE is stated incorrectly; (125) defines the corrected PDEs for the problem.

We now construct the variational form of (125), which is the first step toward obtaining its weak solution. Since the solution's derivative is discontinuous across the interface \(\Gamma\), we derive the variational form on \(\Omega_1\) and \(\Omega_2\) separately. Specifically, let \(v_i\) be a suitable test function on \(\Omega_i\). Integration by parts gives, for \(i=1,2\),

(128)\[\int_{\Omega_i}(\nabla u\cdot\nabla v_i-fv_i) dx - \int_{\partial\Omega_i}\frac{\partial u }{\partial \mathbf{n}}v_i ds = 0.\]

If we use one neural network and a test function defined on the whole domain \(\Omega\), then adding these two equalities gives

(129)\[\int_{\Omega}(\nabla u\cdot\nabla v - fv) dx - \int_{\Gamma} g_Iv ds - \int_{\Gamma_D} \frac{\partial u}{\partial \mathbf{n}}v ds = 0.\]
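For intuition, the residual of (129) can be estimated by Monte Carlo sampling and driven to zero during training. The PyTorch sketch below is a conceptual illustration only, not the Modulus Sym implementation: the network, the fixed test function (chosen to vanish on \(\Gamma_D\) so that the last term of (129) drops out), and the sample counts are all assumptions made for this sketch.

    import math
    import torch

    # Placeholder network approximating u(x, y); not the Modulus Sym architecture.
    u_net = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
    )

    def v(xy):
        # A fixed smooth test function that vanishes on Gamma_D, the boundary of (0,1)^2.
        return torch.sin(math.pi * xy[:, 0:1]) * torch.sin(math.pi * xy[:, 1:2])

    def grad(out, inp):
        return torch.autograd.grad(out.sum(), inp, create_graph=True)[0]

    f, g_I, n = -2.0, 2.0, 4096

    # Interior samples of Omega = (0,1)^2 and interface samples on Gamma = {x = 0.5}.
    xy = torch.rand(n, 2, requires_grad=True)
    xy_gamma = torch.cat([torch.full((n, 1), 0.5), torch.rand(n, 1)], dim=1)

    grad_u = grad(u_net(xy), xy)
    grad_v = grad(v(xy), xy)

    # Monte Carlo estimate of (129); |Omega| = 1 and |Gamma| = 1, so plain means suffice.
    # The Gamma_D term is zero because v vanishes there.
    interior = ((grad_u * grad_v).sum(dim=1, keepdim=True) - f * v(xy)).mean()
    interface = (g_I * v(xy_gamma)).mean()
    loss = (interior - interface) ** 2    # to be minimized over the parameters of u_net
    print(loss.item())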

If we use two neural networks, with different test functions on \(\Omega_1\) and \(\Omega_2\), then we may use the discontinuous Galerkin formulation [5]. To this end, we first define the jump and average of scalar and vector functions. Consider the two adjacent elements shown in Fig. 32, where \(\mathbf{n}^+\) and \(\mathbf{n}^-\) are the unit outward normals of \(T^+\) and \(T^-\) on the common facet \(F=\partial T^+\cap\partial T^-\), respectively. As we can observe, \(\mathbf{n}^+=-\mathbf{n}^-\).

Let \(u^+\) and \(u^-\) be two scalar functions on \(T^+\) and \(T^-\), and let \(\mathbf{v}^+\) and \(\mathbf{v}^-\) be two vector fields on \(T^+\) and \(T^-\), respectively. The jump and the average on \(F\) are defined by

(130)\[\begin{split}\begin{aligned} \langle u \rangle = \frac{1}{2}(u^++u^-) && \langle \mathbf{v} \rangle = \frac{1}{2}(\mathbf{v}^++\mathbf{v}^-)\\ [\![ u ]\!] = u^+\mathbf{n}^++u^-\mathbf{n}^- && [\![ \mathbf{v} ]\!] = \mathbf{v} ^+\cdot\mathbf{n}^++\mathbf{v} ^-\cdot\mathbf{n}^-\end{aligned}\end{split}\]
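For instance, if \(u^+=2\) and \(u^-=1\) on \(F\) with \(\mathbf{n}^+=(1,0)\) (so \(\mathbf{n}^-=(-1,0)\)), then \(\langle u \rangle=3/2\) and \([\![ u ]\!]=2\,(1,0)+1\,(-1,0)=(1,0)\). Note that the jump of a scalar function is vector-valued, whereas the jump of a vector field is scalar-valued.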

Fig. 32 Adjacent elements.

Lemma

On \(F\) of Fig. 32, we have

(131)\[[\![ u\mathbf{v} ]\!] = [\![ u ]\!] \langle \mathbf{v} \rangle + [\![ \mathbf{v} ]\!] \langle u \rangle.\]

Using the above lemma, we obtain the following identity, which is an essential tool for the discontinuous Galerkin formulation.

Theorem

Suppose \(\Omega\) has been partitioned into a mesh. Let \(\mathcal{T}\) be the set of all elements of the mesh, \(\mathcal{F}_I\) be the set of all interior facets of the mesh, and \(\mathcal{F}_E\) be the set of all exterior (boundary) facets of the mesh. Then we have

(132)\[ \sum_{T\in\mathcal{T}}\int_{\partial T}\frac{\partial u}{\partial \mathbf{n}} v ds = \sum_{e\in\mathcal{F}_I}\int_e \left([\![ \nabla u ]\!] \langle v \rangle + \langle \nabla u \rangle\cdot [\![ v ]\!] \right)ds+\sum_{e\in\mathcal{F}_E}\int_e \frac{\partial u}{\partial \mathbf{n}} v ds.\]

Applying (132) to the two-element partition \(\mathcal{T}=\{\Omega_1,\Omega_2\}\), for which \(\mathcal{F}_I=\{\Gamma\}\) and the exterior facets lie on \(\Gamma_D\), and substituting \([\![ \nabla u ]\!]=g_I\) from (127), we obtain the following variational form

(133)\[ \sum_{i=1}^2\int_{\Omega_i}(\nabla u_i\cdot\nabla v_i - fv_i) dx - \sum_{i=1}^2\int_{\Gamma_D}\frac{\partial u_i}{\partial \mathbf{n}} v_i ds-\int_{\Gamma}\left(g_I\langle v \rangle+\langle \nabla u \rangle\cdot [\![ v ]\!] \right) ds =0.\]

Details on how to use these forms can be found in the tutorial Interface Problem by Variational Method.

References

[1]

Evans, Lawrence C. “Partial differential equations and Monge-Kantorovich mass transfer.” Current developments in mathematics 1997.1 (1997): 65-126.

[2]

Xiu, Dongbin. Numerical methods for stochastic computations. Princeton university press, 2010.

[3]

Monk, Peter. “A finite element method for approximating the time-harmonic Maxwell equations.” Numerische mathematik 63.1 (1992): 243-261.

[4]

Zang, Yaohua, et al. “Weak adversarial networks for high-dimensional partial differential equations.” Journal of Computational Physics 411 (2020): 109409.

[5]

Cockburn, Bernardo, George E. Karniadakis, and Chi-Wang Shu, eds. Discontinuous Galerkin methods: theory, computation and applications. Vol. 11. Springer Science & Business Media, 2012.
