In this section, we briefly introduce the generalized Polynomial Chaos (gPC) expansion, an efficient method for assessing how the uncertainties in a model input manifest in its output. Later in Section gPC Based Surrogate Modeling Accelerated via Transfer Learning, we show how the gPC can be used as a surrogate for shape parameterization in both the tessellated and constructive solid geometry modules.
The \(p\)-th degree gPC expansion for a \(d\)-dimensional input \(\mathbf{\Xi}\) takes the following form
(97)\[u_p (\mathbf{\Xi}) = \sum_{\mathbf{i} \in \Lambda_{p,d}} c_{\mathbf{i}} \psi_{\mathbf{i}}(\mathbf{\Xi}),\]
where \(\mathbf{i}\) is a multi-index and \(\Lambda_{p,d}\) is the set of multi-indices defined as
(98)\[\Lambda_{p,d} = \{\mathbf{i} \in \mathbb{N}_0^d: |\mathbf{i}|_1 \leq p\},\]
and the cardinality of \(\Lambda_{p,d}\) is
(99)\[C = |\Lambda_{p,d}| = \frac{(p+d)!}{p!\,d!}.\]
\(\{c_{\mathbf{i}}\}_{\mathbf{i} \in \Lambda_{p,d}}\) is the set of unknown coefficients of the expansion, which can be determined by the stochastic Galerkin, stochastic collocation, or least-squares method ^{2}. For the example presented in this user guide, we use the least-squares method. Although \(C\) samples suffice in principle to solve this least-squares problem, it is recommended to use at least \(2C\) samples for reasonable accuracy ^{2}. \(\{\psi_{\mathbf{i}}\}_{\mathbf{i} \in \mathbb{N}_0^d}\) is the set of orthonormal basis functions that satisfy the following condition
(100)\[ \int \psi_\mathbf{m}(\mathbf{\xi}) \psi_\mathbf{n}(\mathbf{\xi}) \rho(\mathbf{\xi}) d\mathbf{\xi} = \delta_{\mathbf{m} \mathbf{n}}, \,\,\, \mathbf{m}, \mathbf{n} \in \mathbb{N}_0^d.\]
For instance, for a uniformly or normally distributed \(\mathbf{\Xi}\), the normalized Legendre and Hermite polynomials, respectively, satisfy the orthonormality condition in Equation (100).
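To make this concrete, the following is a minimal sketch of a least-squares gPC fit using only NumPy. The model \(u(\xi)=\xi_1^2+0.5\,\xi_1\xi_2\) and all function names are hypothetical illustrations, not part of any toolkit API; it assumes \(d=2\) inputs uniformly distributed on \([-1,1]\), so normalized Legendre polynomials form the basis.

```python
import numpy as np
from math import comb
from itertools import product

def total_degree_multi_indices(p, d):
    """The index set Lambda_{p,d} = { i in N_0^d : |i|_1 <= p }."""
    return [i for i in product(range(p + 1), repeat=d) if sum(i) <= p]

def legendre_1d(n, x):
    """Degree-n Legendre polynomial, scaled by sqrt(2n + 1) to be
    orthonormal w.r.t. the uniform density rho = 1/2 on [-1, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * np.polynomial.legendre.legval(x, c)

def design_matrix(samples, indices):
    """Rows: samples; columns: tensor-product basis psi_i(xi) = prod_k psi_{i_k}(xi_k)."""
    return np.column_stack([
        np.prod([legendre_1d(ik, samples[:, k]) for k, ik in enumerate(idx)], axis=0)
        for idx in indices
    ])

# Hypothetical model u(xi) = xi_1^2 + 0.5 xi_1 xi_2 with d = 2 uniform inputs
p, d = 2, 2
C = comb(p + d, d)                             # |Lambda_{p,d}| = (p+d)!/(p! d!)
rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=(2 * C, d))   # 2C samples, per the recommendation
u = xi[:, 0] ** 2 + 0.5 * xi[:, 0] * xi[:, 1]

indices = total_degree_multi_indices(p, d)
A = design_matrix(xi, indices)
coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)  # least-squares gPC coefficients
```

Because the hypothetical model is itself a polynomial of total degree 2, the fitted expansion reproduces it exactly; for a general response, the least-squares residual reflects the truncation error of \(\Lambda_{p,d}\).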
In this section, we give the essential definitions of the relevant function spaces, including Sobolev spaces, together with some important identities. All integrals in this section should be understood in the Lebesgue sense.
\(L^p\) space
Let \(\Omega \subset \mathbb{R}^d\) be an open set. For any real number \(1\leq p<\infty\), we define
(101)\[L^p(\Omega)=\left\{u:\Omega\to\mathbb{R} \text{ measurable}: \int_\Omega |u|^p dx<\infty\right\},\]
endowed with the norm
(102)\[\|u\|_{L^p(\Omega)} = \left(\int_\Omega |u|^p dx\right)^{1/p}.\]
For \(p=\infty\), we have
(103)\[L^\infty(\Omega)=\left\{u:\Omega\to\mathbb{R} \text{ measurable}: \operatorname{ess\,sup}_{\Omega} |u|<\infty\right\},\]
endowed with the norm
(104)\[\|u\|_{L^\infty(\Omega)} = \operatorname{ess\,sup}_{\Omega} |u|.\]
Sometimes, for functions on unbounded domains, we consider their local integrability. To this end, we define the following local \(L^p\) space
(105)\[L^p_{loc}(\Omega)=\left\{u: u\in L^p(V) \text{ for each } V\subset\subset\Omega\right\},\]
where \(V\subset\subset\Omega\) means \(V\) is compactly contained in \(\Omega\).
\(C^k\) space
Let \(k\geq 0\) be an integer, and let \(\Omega\subset \mathbb{R}^d\) be an open set. \(C^k(\Omega)\) is the \(k\)-times continuously differentiable function space given by
(106)\[C^k(\Omega)=\left\{u:\Omega\to\mathbb{R}: u \text{ is } k\text{-times continuously differentiable}\right\}.\]
Let \(\mathbf{\alpha}=(\alpha_1,\alpha_2,\cdots,\alpha_d)\) be a \(d\)-fold multi-index of order \(|\mathbf{\alpha}|=\alpha_1+\alpha_2+\cdots+\alpha_d=k\). The \(k\)-th order (classical) derivative of \(u\) is denoted by
(107)\[D^{\mathbf{\alpha}} u = \frac{\partial^{|\mathbf{\alpha}|} u}{\partial x_1^{\alpha_1}\partial x_2^{\alpha_2}\cdots\partial x_d^{\alpha_d}}.\]
For the closure of \(\Omega\), denoted by \(\overline{\Omega}\), we have
(108)\[C^k(\overline{\Omega})=\left\{u\in C^k(\Omega): D^{\mathbf{\alpha}}u \text{ is uniformly continuous on bounded subsets of } \Omega \text{ for all } |\mathbf{\alpha}|\leq k\right\}.\]
When \(k=0\), we also write \(C(\Omega)=C^0(\Omega)\) and \(C(\overline{\Omega})=C^0(\overline{\Omega})\).
We also define the infinitely differentiable function space
(109)\[C^\infty(\Omega)=\bigcap_{k=0}^\infty C^k(\Omega),\]
and
(110)\[C^\infty(\overline{\Omega})=\bigcap_{k=0}^\infty C^k(\overline{\Omega}).\]
We use \(C_0(\Omega)\) and \(C_0^k(\Omega)\) to denote those functions in \(C(\Omega)\) and \(C^k(\Omega)\) with compact support.
\(W^{k,p}\) space
The weak derivative is given by the following definition ^{1}.
Definition
Suppose \(u,\ v\in L^1_{loc}(\Omega)\) and \(\mathbf{\alpha}\) is a multi-index. We say that \(v\) is the \(\mathbf{\alpha}\)-th weak derivative of \(u\), written
(111)\[D^{\mathbf{\alpha}} u = v,\]
provided
(112)\[\int_\Omega u D^{\mathbf{\alpha}}\phi\, dx = (-1)^{|\mathbf{\alpha}|}\int_\Omega v\phi\, dx\]
for all test functions \(\phi\in C_0^\infty(\Omega)\).
As a typical example, let \(u(x)=|x|\) and \(\Omega=(-1,1)\). From calculus we know that \(u\) is not (classically) differentiable at \(x=0\). However, it has the weak derivative
(113)\[\begin{split}v(x) =
\begin{cases}
-1 & -1<x<0\\
1 & 0\leq x<1
\end{cases}
.\end{split}\]
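The weak-derivative identity can be checked numerically for the example above (the absolute-value function \(u(x)=|x|\) on \((-1,1)\), with \(v(x)=\operatorname{sign}(x)\)). The sketch below assumes SciPy is available; the particular test function is an arbitrary hypothetical choice that vanishes at \(x=\pm 1\), which is all a single integration by parts requires.

```python
import numpy as np
from scipy.integrate import quad

# A C^1 test function vanishing at the boundary of Omega = (-1, 1)
def phi(x):
    return (1.0 - x * x) ** 2 * np.exp(x)

def dphi(x):
    # d/dx [(1 - x^2)^2 e^x] = e^x (1 - x^2) (1 - x^2 - 4x)
    return np.exp(x) * (1.0 - x * x) * (1.0 - x * x - 4.0 * x)

# Weak-derivative identity for u = |x|, v = sign(x), |alpha| = 1:
#   int_Omega u * phi' dx = (-1)^1 int_Omega v * phi dx
lhs, _ = quad(lambda x: np.abs(x) * dphi(x), -1.0, 1.0, points=[0.0])
rhs, _ = quad(lambda x: -np.sign(x) * phi(x), -1.0, 1.0, points=[0.0])
```

Splitting the quadrature at \(x=0\) (the `points` argument) keeps the kink of \(|x|\) from degrading accuracy.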
Definition
For an integer \(k\geq 0\) and real number \(p\geq 1\), the Sobolev space is defined by
(114)\[W^{k,p}(\Omega)=\left\{u\in L^p(\Omega): D^{\mathbf{\alpha}}u\in L^p(\Omega) \text{ for all } |\mathbf{\alpha}|\leq k\right\},\]
endowed with the norm
(115)\[\|u\|_{W^{k,p}(\Omega)} = \left(\sum_{|\mathbf{\alpha}|\leq k}\int_\Omega |D^{\mathbf{\alpha}}u|^p dx\right)^{1/p}.\]
Obviously, when \(k=0\), we have \(W^{0,p}(\Omega)=L^p(\Omega)\).
When \(p=2\), \(W^{k,p}(\Omega)\) is a Hilbert space, also denoted by \(H^k(\Omega)=W^{k,2}(\Omega)\). The inner product in \(H^k(\Omega)\) is given by
(116)\[(u,v)_{H^k(\Omega)} = \sum_{|\mathbf{\alpha}|\leq k}\int_\Omega D^{\mathbf{\alpha}}u\, D^{\mathbf{\alpha}}v\, dx.\]
A crucial subspace of \(W^{k,p}(\Omega)\), denoted by \(W^{k,p}_0(\Omega)\), is the closure of \(C_0^\infty(\Omega)\) with respect to the \(W^{k,p}(\Omega)\) norm,
(117)\[W^{k,p}_0(\Omega)=\overline{C_0^\infty(\Omega)}^{\,\|\cdot\|_{W^{k,p}(\Omega)}}.\]
It is customary to write \(H^k_0(\Omega)=W_0^{k,2}(\Omega)\).
In this subsection, we assume \(\Omega\subset \mathbb{R}^d\) is a Lipschitz bounded domain (see ^{3} for the definition of Lipschitz domain).
Theorem (Green’s formulae)
Let \(u,\ v\in C^2(\overline{\Omega})\). Then

(118)\[\int_\Omega \Delta u dx =\int_{\partial\Omega} \frac{\partial u}{\partial n} dS\]

(119)\[\int_\Omega \nabla u\cdot\nabla v dx = -\int_\Omega u\Delta v dx+\int_{\partial\Omega} u \frac{\partial v}{\partial n} dS\]

(120)\[\int_{\Omega} \left(u\Delta v-v\Delta u\right) dx = \int_{\partial\Omega} \left(u\frac{\partial v}{\partial n}-v\frac{\partial u}{\partial n}\right) dS\]
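Green's first formula, \(\int_\Omega \nabla u\cdot\nabla v\,dx = -\int_\Omega u\,\Delta v\,dx+\int_{\partial\Omega}u\,\frac{\partial v}{\partial n}\,dS\), can be verified symbolically for particular smooth functions. A sketch with SymPy on the unit square, where \(u=x^2+y^2\) and \(v=xy\) are arbitrary hypothetical choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 + y**2          # arbitrary smooth fields on Omega = (0, 1)^2
v = x * y

# Left-hand side: integral of grad u . grad v over Omega
lhs = sp.integrate(sp.diff(u, x) * sp.diff(v, x) + sp.diff(u, y) * sp.diff(v, y),
                   (x, 0, 1), (y, 0, 1))
# Volume term: -integral of u * Laplacian(v)
vol = -sp.integrate(u * (sp.diff(v, x, 2) + sp.diff(v, y, 2)), (x, 0, 1), (y, 0, 1))
# Boundary term: u * dv/dn summed over the four sides of the unit square
bnd = (sp.integrate((u * sp.diff(v, x)).subs(x, 1), (y, 0, 1))     # right,  n = ( 1, 0)
     + sp.integrate((-u * sp.diff(v, x)).subs(x, 0), (y, 0, 1))    # left,   n = (-1, 0)
     + sp.integrate((u * sp.diff(v, y)).subs(y, 1), (x, 0, 1))     # top,    n = ( 0, 1)
     + sp.integrate((-u * sp.diff(v, y)).subs(y, 0), (x, 0, 1)))   # bottom, n = ( 0,-1)
```

Both sides evaluate to the same exact rational value, confirming the sign of the volume term.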
For the curl operator we have some similar identities. To begin with, we define the 2D curl operators for scalar and vector functions. For a scalar function \(u(x_1,x_2)\in C^1(\overline{\Omega})\), we have
(121)\[\nabla \times u = \left(\frac{\partial u}{\partial x_2},-\frac{\partial u}{\partial x_1}\right)\]
For a 2D vector function \(\mathbf{v}=(v_1(x_1,x_2),v_2(x_1,x_2))\in(C^1(\overline{\Omega}))^2\), we have
(122)\[\nabla \times \mathbf{v} = \frac{\partial v_2}{\partial x_1}-\frac{\partial v_1}{\partial x_2}\]
Then we have the following integral identities for curl operators.
Theorem
Let \(\Omega\subset \mathbb{R}^3\) and \(\mathbf{u},\ \mathbf{v}\in (C^1(\overline{\Omega}))^3\). Then
(123)\[\int_{\Omega}(\nabla \times \mathbf{u})\cdot\mathbf{v}\, dx = \int_{\Omega}\mathbf{u}\cdot(\nabla \times \mathbf{v})\, dx+\int_{\partial\Omega}(\mathbf{n} \times \mathbf{u}) \cdot \mathbf{v}\, dS,\]
where \(\mathbf{n}\) is the unit outward normal.
Let \(\Omega\subset \mathbb{R}^2\), \(\mathbf{u}\in (C^1(\overline{\Omega}))^2\), and \(v\in C^1(\overline{\Omega})\). Then
(124)\[\int_{\Omega}(\nabla\times\mathbf{u})\, v\, dx = \int_{\Omega}\mathbf{u}\cdot(\nabla\times v)\, dx+\int_{\partial\Omega}(\mathbf{\tau}\cdot\mathbf{u})\, v\, dS,\]
where \(\mathbf{\tau}\) is the unit tangent to \(\partial \Omega\).
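The 2D identity can likewise be checked symbolically for specific fields. A sketch with SymPy on the unit square, using the counter-clockwise unit tangent; the fields \(\mathbf{u}=(x y,\, x)\) and \(v=x+y\) are arbitrary hypothetical choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
u1, u2 = x * y, x                                    # vector field u = (u1, u2)
v = x + y                                            # scalar field v
u = sp.Matrix([u1, u2])

curl_u = sp.diff(u2, x) - sp.diff(u1, y)             # vector-to-scalar curl, eq. (122)
curl_v = sp.Matrix([sp.diff(v, y), -sp.diff(v, x)])  # scalar-to-vector curl, eq. (121)

lhs = sp.integrate(curl_u * v, (x, 0, 1), (y, 0, 1))
vol = sp.integrate(u.dot(curl_v), (x, 0, 1), (y, 0, 1))
# Boundary term with counter-clockwise unit tangents on the four sides
bnd = (sp.integrate((u1 * v).subs(y, 0), (x, 0, 1))    # bottom, tau = ( 1, 0)
     + sp.integrate((u2 * v).subs(x, 1), (y, 0, 1))    # right,  tau = ( 0, 1)
     + sp.integrate((-u1 * v).subs(y, 1), (x, 0, 1))   # top,    tau = (-1, 0)
     + sp.integrate((-u2 * v).subs(x, 0), (y, 0, 1)))  # left,   tau = ( 0,-1)
```

The check also confirms the orientation convention: the identity holds with \(\mathbf{\tau}\) taken counter-clockwise along \(\partial\Omega\).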
Let \(\Omega_1 = (0,0.5)\times(0,1)\), \(\Omega_2 = (0.5,1)\times(0,1)\), \(\Omega=(0,1)^2\). The interface is \(\Gamma=\overline{\Omega}_1\cap\overline{\Omega}_2\), and the Dirichlet boundary is \(\Gamma_D=\partial\Omega\). The domain for the problem can be visualized in Fig. 31. The problem was originally defined in ^{4}.
The PDEs for the problem are defined as
(125)\[\begin{split}\begin{aligned}
-\Delta u &= f \quad \text{ in } \Omega_1 \cup \Omega_2\\
u &= g_D \quad \text{ on } \Gamma_D\\
\left[\frac{\partial u}{\partial \mathbf{n}}\right] &=g_I \quad \text{ on } \Gamma\end{aligned}\end{split}\]
where \(f=-2\), \(g_I=2\) and
(126)\[\begin{split}g_D =
\begin{cases}
x^2 & 0\leq x\leq \frac{1}{2}\\
(x-1)^2 & \frac{1}{2}< x\leq 1
\end{cases}
.\end{split}\]
The function \(g_D\) is also the exact (weak) solution of (125).
The jump \([\cdot]\) on the interface \(\Gamma\) is defined by
(127)\[ \left[\frac{\partial u}{\partial \mathbf{n}}\right]=\nabla u_1\cdot\mathbf{n}_1+\nabla u_2\cdot\mathbf{n}_2,\]
where \(u_i\) is the solution in \(\Omega_i\) and the \(\mathbf{n}_i\) is the unit normal on \(\partial\Omega_i\cap\Gamma\).
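With this definition, one can verify symbolically that the piecewise function \(g_D\) (extended to \(\Omega\) as \(u(x,y)=g_D(x)\)) has constant Laplacian in each subdomain and satisfies the interface condition with \(g_I=2\). A small sketch using SymPy:

```python
import sympy as sp

x = sp.symbols('x')
u1, u2 = x**2, (x - 1)**2        # g_D restricted to Omega_1 and Omega_2
# Each piece has the same constant Laplacian (the solution is y-independent,
# so Delta u_i reduces to the second x-derivative)
assert sp.diff(u1, x, 2) == 2
assert sp.diff(u2, x, 2) == 2
# Interface jump at x = 1/2 with n1 = (1, 0), n2 = (-1, 0):
#   [du/dn] = grad u1 . n1 + grad u2 . n2
jump = (sp.diff(u1, x) - sp.diff(u2, x)).subs(x, sp.Rational(1, 2))
```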
As suggested in the original reference, this problem does not admit a strong (classical) solution but only a unique weak solution (\(g_D\)), which is shown in Fig. 31.
Note: In the original paper ^{4}, the PDE is stated incorrectly; (125) defines the corrected PDEs for the problem.
We now construct the variational form of (125), which is the first step toward obtaining its weak solution. Since the solution’s derivative is discontinuous at the interface \(\Gamma\), we have to derive the variational form on \(\Omega_1\) and \(\Omega_2\) separately. Specifically, let \(v_i\) be a suitable test function on \(\Omega_i\); by integration by parts, we have for \(i=1,2\),
(128)\[\int_{\Omega_i}(\nabla u\cdot\nabla v_i-fv_i)\, dx - \int_{\partial\Omega_i}\frac{\partial u }{\partial \mathbf{n}}v_i\, ds = 0.\]
If we use one neural network and a test function defined on the whole \(\Omega\), then by adding these two equalities, we have
(129)\[\int_{\Omega}(\nabla u\cdot\nabla v - fv)\, dx - \int_{\Gamma} g_Iv\, ds - \int_{\Gamma_D} \frac{\partial u}{\partial \mathbf{n}}v\, ds = 0\]
If we use two neural networks, and the test functions are different on \(\Omega_1\) and \(\Omega_2\), then we may use the discontinuous Galerkin formulation ^{5}. To this end, we first define the jump and average of scalar and vector functions. Consider the two adjacent elements shown in Fig. 32. \(\mathbf{n}^+\) and \(\mathbf{n}^-\) are the unit normals for \(T^+\), \(T^-\) on \(F=\partial T^+\cap \partial T^-\), respectively. As we can observe, we have \(\mathbf{n}^+=-\mathbf{n}^-\).
Let \(u^+\) and \(u^-\) be two scalar functions on \(T^+\) and \(T^-\), and let \(\mathbf{v}^+\) and \(\mathbf{v}^-\) be two vector fields on \(T^+\) and \(T^-\), respectively. The jump and the average on \(F\) are defined by
(130)\[\begin{split}\begin{aligned}
\langle u \rangle = \frac{1}{2}(u^++u^-) && \langle \mathbf{v} \rangle = \frac{1}{2}(\mathbf{v}^++\mathbf{v}^-)\\
[\![ u ]\!] = u^+\mathbf{n}^++u^-\mathbf{n}^- && [\![ \mathbf{v} ]\!] = \mathbf{v}^+\cdot\mathbf{n}^++\mathbf{v}^-\cdot\mathbf{n}^-\end{aligned}\end{split}\]
Fig. 32 Adjacent Elements.
Lemma
On \(F\) of Fig. 32, we have
(131)\[[\![ u\mathbf{v} ]\!] = [\![ u ]\!] \langle \mathbf{v} \rangle + [\![ \mathbf{v} ]\!] \langle u \rangle.\]
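The lemma can be verified symbolically. A sketch with SymPy, treating the traces as free symbols and using the fact that \(\mathbf{n}^-=-\mathbf{n}^+\) on \(F\):

```python
import sympy as sp

up, um = sp.symbols('up um')                 # scalar traces u^+, u^-
n = sp.Matrix(sp.symbols('n1 n2'))           # n^+ ; the other normal is n^- = -n^+
vp = sp.Matrix(sp.symbols('vp1 vp2'))        # vector trace v^+
vm = sp.Matrix(sp.symbols('vm1 vm2'))        # vector trace v^-

avg = lambda a, b: (a + b) / 2                     # <.>
jump_s = lambda a, b: a * n + b * (-n)             # [[u]] = u+ n+ + u- n-  (vector)
jump_v = lambda a, b: a.dot(n) + b.dot(-n)         # [[v]] = v+.n+ + v-.n-  (scalar)

lhs = jump_v(up * vp, um * vm)                     # [[u v]]
rhs = jump_s(up, um).dot(avg(vp, vm)) + jump_v(vp, vm) * avg(up, um)
```

Expanding `lhs - rhs` gives identically zero, independent of the traces and the normal.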
By using the above lemma, we obtain the following equality, which is an essential tool for the discontinuous Galerkin formulation.
Theorem
Suppose \(\Omega\) has been partitioned into a mesh. Let \(\mathcal{T}\) be the set of all elements of the mesh, \(\mathcal{F}_I\) be the set of all interior facets of the mesh, and \(\mathcal{F}_E\) be the set of all exterior (boundary) facets of the mesh. Then we have
(132)\[ \sum_{T\in\mathcal{T}}\int_{\partial T}\frac{\partial u}{\partial \mathbf{n}} v\, ds = \sum_{e\in\mathcal{F}_I}\int_e \left([\![ \nabla u ]\!] \langle v \rangle + \langle \nabla u \rangle \cdot [\![ v ]\!] \right)ds+\sum_{e\in\mathcal{F}_E}\int_e \frac{\partial u}{\partial \mathbf{n}} v\, ds\]
Using (127) and (132), we have the following variational form
(133)\[ \sum_{i=1}^2\int_{\Omega_i}(\nabla u_i\cdot\nabla v_i - fv_i)\, dx - \sum_{i=1}^2\int_{\Gamma_D}\frac{\partial u_i}{\partial \mathbf{n}} v_i\, ds-\int_{\Gamma}\left(g_I\langle v \rangle+\langle \nabla u \rangle \cdot [\![ v ]\!]\right) ds =0\]
Details on how to use these forms can be found in tutorial Interface Problem by Variational Method.
References

1. Evans, Lawrence C. “Partial differential equations and Monge-Kantorovich mass transfer.” Current developments in mathematics 1997.1 (1997): 65-126.
2. Xiu, Dongbin. Numerical methods for stochastic computations. Princeton University Press, 2010.
3. Monk, Peter. “A finite element method for approximating the time-harmonic Maxwell equations.” Numerische Mathematik 63.1 (1992): 243-261.
4. Zang, Yaohua, et al. “Weak adversarial networks for high-dimensional partial differential equations.” Journal of Computational Physics 411 (2020): 109409.
5. Cockburn, Bernardo, George E. Karniadakis, and Chi-Wang Shu, eds. Discontinuous Galerkin methods: theory, computation and applications. Vol. 11. Springer Science & Business Media, 2012.