Derivative Functionals
- physicsnemo.nn.functional.uniform_grid_gradient(
- field: Tensor,
- spacing: float | Sequence[float] = 1.0,
- order: int = 2,
- derivative_orders: int | Sequence[int] = 1,
- include_mixed: bool = False,
- implementation: str | None = None,
- )
Compute periodic central-difference gradients on a uniform grid.

This functional computes first-order and/or second-order derivatives of a scalar field defined on a 1D/2D/3D uniform Cartesian grid with periodic indexing.

For each axis \(k\), the first derivative is:

\[\partial_k f(\mathbf{i}) \approx \frac{f(\mathbf{i}+\hat{e}_k) - f(\mathbf{i}-\hat{e}_k)}{2\,\Delta x_k}\]

and the pure second derivative is:

\[\partial_{kk} f(\mathbf{i}) \approx \frac{f(\mathbf{i}+\hat{e}_k) - 2f(\mathbf{i}) + f(\mathbf{i}-\hat{e}_k)}{\Delta x_k^2}\]

with periodic wrap-around at boundaries.
- Parameters:
  - field (torch.Tensor) – Scalar grid field with shape (n0,), (n0, n1), or (n0, n1, n2).
  - spacing (float | Sequence[float], optional) – Uniform spacing per axis. Use a scalar for isotropic spacing or a sequence matching field dimensionality.
  - order (int, optional) – Central-difference accuracy order. Supported values are 2 and 4.
  - derivative_orders (int | Sequence[int], optional) – Derivative orders to compute. Supported values are 1, 2, or (1, 2).
  - include_mixed (bool, optional) – Include mixed second derivatives when requesting second derivatives. Mixed terms are appended in axis-pair order (x, y), (x, z), (y, z).
  - implementation ({"warp", "torch"} or None, optional) – Explicit backend selection. When None, rank-based backend dispatch is used.
- Returns:
  Gradient tensor of shape (num_derivatives, *field.shape).
- Return type:
  torch.Tensor
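The periodic central-difference stencil above can be illustrated in plain NumPy (a sketch of the scheme, not the library's warp/torch backends):

```python
import numpy as np

# Periodic second-order central difference on a uniform 1D grid,
# mirroring the stencil documented above.
def central_diff_periodic(f, dx):
    # np.roll supplies the periodic wrap-around at both boundaries.
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

n = 64
length = 2.0 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
dx = length / n
# d/dx sin(x) = cos(x); the O(dx^2) truncation error is small at this resolution.
max_err = np.max(np.abs(central_diff_periodic(np.sin(x), dx) - np.cos(x)))
```

The same roll-and-subtract pattern extends axis-by-axis to 2D/3D fields.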
- physicsnemo.nn.functional.rectilinear_grid_gradient(
- field: Tensor,
- coordinates: Sequence[Tensor],
- periods: float | Sequence[float] | None = None,
- derivative_orders: int | Sequence[int] = 1,
- include_mixed: bool = False,
- implementation: str | None = None,
- )
Compute periodic gradients on rectilinear grids with nonuniform spacing.

This functional computes first-order and/or second-order derivatives of a scalar field on a 1D/2D/3D rectilinear grid where each axis has independent, potentially nonuniform coordinate spacing.

For each axis \(k\), first-order nonuniform central differencing is:

\[\partial_k f_i \approx a_i\,f_{i-1} + b_i\,f_i + c_i\,f_{i+1}\]

with

\[a_i = -\frac{h_i^+}{h_i^-(h_i^- + h_i^+)}, \quad b_i = \frac{h_i^+ - h_i^-}{h_i^- h_i^+}, \quad c_i = \frac{h_i^-}{h_i^+(h_i^- + h_i^+)}\]

and pure second derivatives are:

\[\partial_{kk} f_i \approx \tilde{a}_i\,f_{i-1} + \tilde{b}_i\,f_i + \tilde{c}_i\,f_{i+1}\]

with

\[\tilde{a}_i = \frac{2}{h_i^-(h_i^- + h_i^+)}, \quad \tilde{b}_i = -\frac{2}{h_i^- h_i^+}, \quad \tilde{c}_i = \frac{2}{h_i^+(h_i^- + h_i^+)}\]

where \(h_i^-\) and \(h_i^+\) are the left and right periodic distances along that axis.
- Parameters:
  - field (torch.Tensor) – Scalar grid field with shape (n0,), (n0, n1), or (n0, n1, n2).
  - coordinates (Sequence[torch.Tensor]) – Per-axis coordinate tensors (x0, x1, x2) matching field dimensions. Each axis tensor must be rank-1, strictly increasing, and length-compatible with field.shape[axis].
  - periods (float | Sequence[float] | None, optional) – Period length per axis. If None, each axis is inferred as coords[-1] - coords[0] + (coords[1] - coords[0]).
  - derivative_orders (int | Sequence[int], optional) – Derivative orders to compute. Supported values are 1, 2, or (1, 2).
  - include_mixed (bool, optional) – Include mixed second derivatives when requesting second derivatives. Mixed terms are appended in axis-pair order (x, y), (x, z), (y, z).
  - implementation ({"warp", "torch"} or None, optional) – Explicit backend selection. When None, dispatch selects by rank.
- Returns:
  Gradient tensor of shape (num_derivatives, *field.shape).
- Return type:
  torch.Tensor
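The \(a_i, b_i, c_i\) coefficients above can be applied directly in NumPy (interior points only, no periodic wrap; an illustrative sketch rather than the library implementation):

```python
import numpy as np

# First derivative on a nonuniform 1D grid using the documented
# three-point coefficients.
def nonuniform_first_derivative(x, f):
    hm = x[1:-1] - x[:-2]  # h_i^- : distance to the left neighbor
    hp = x[2:] - x[1:-1]   # h_i^+ : distance to the right neighbor
    a = -hp / (hm * (hm + hp))
    b = (hp - hm) / (hm * hp)
    c = hm / (hp * (hm + hp))
    return a * f[:-2] + b * f[1:-1] + c * f[2:]

# A stretched grid; the three-point scheme is exact for quadratics.
x = np.linspace(0.0, 1.0, 40) ** 1.5
df = nonuniform_first_derivative(x, x**2)  # exact: 2 * x at interior points
```

Note the scheme reduces to the uniform central difference when \(h_i^- = h_i^+\).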
- physicsnemo.nn.functional.mesh_lsq_gradient(
- points: Tensor,
- values: Tensor,
- neighbor_offsets: Tensor,
- neighbor_indices: Tensor,
- weight_power: float = 2.0,
- min_neighbors: int = 0,
- safe_epsilon: float | None = None,
- implementation: str | None = None,
- )
Weighted least-squares gradient reconstruction on unstructured entities.

This functional computes gradients from unstructured neighborhoods provided as CSR adjacency (neighbor_offsets, neighbor_indices).

For each entity \(i\), it solves the weighted least-squares problem:

\[\nabla \phi_i = \arg\min_g \sum_{j \in \mathcal{N}(i)} w_{ij} \left(g^T(x_j - x_i) - (\phi_j - \phi_i)\right)^2\]

with inverse-distance weighting:

\[w_{ij} = \|x_j - x_i\|^{-\alpha}\]

where \(\alpha\) is weight_power.

- Parameters:
  - points (torch.Tensor) – Entity coordinates with shape (n_entities, dims).
  - values (torch.Tensor) – Scalar or tensor values with shape (n_entities,) or (n_entities, ...).
  - neighbor_offsets (torch.Tensor) – CSR offsets with shape (n_entities + 1,).
  - neighbor_indices (torch.Tensor) – CSR flattened neighbor indices with shape (nnz,).
  - weight_power (float, optional) – Inverse-distance exponent used for weighting.
  - min_neighbors (int, optional) – Entities with fewer neighbors than this count receive zero gradients.
  - safe_epsilon (float | None, optional) – Positive floor applied to squared neighbor distances before inverse-distance weighting. When None, a dtype-derived default is used by each backend.
  - implementation ({"warp", "torch"} or None, optional) – Explicit backend selection. When None, dispatch selects by rank.
- Returns:
  Gradients with shape (n_entities, dims) for scalar values or (n_entities, dims, ...) for tensor values.
- Return type:
  torch.Tensor
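The minimization above reduces to weighted normal equations per entity. A dense single-entity NumPy sketch (the functional itself operates on CSR adjacency across all entities at once):

```python
import numpy as np

# Weighted least-squares gradient at one entity with the documented
# inverse-distance weighting.
def lsq_gradient(xi, phi_i, nbr_points, nbr_values, weight_power=2.0):
    d = nbr_points - xi                    # displacements x_j - x_i
    r = np.linalg.norm(d, axis=1)
    w = r ** (-weight_power)               # w_ij = ||x_j - x_i||^(-alpha)
    lhs = d.T @ (w[:, None] * d)           # (d^T W d) g = d^T W (phi_j - phi_i)
    rhs = d.T @ (w * (nbr_values - phi_i))
    return np.linalg.solve(lhs, rhs)

# For a linear field phi = 3x - 2y the reconstruction is exact.
rng = np.random.default_rng(0)
xi = np.array([0.5, 0.5])
nbrs = xi + rng.normal(scale=0.1, size=(8, 2))
phi = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]
g = lsq_gradient(xi, phi(xi), nbrs, phi(nbrs))
```

Exactness on linear fields holds for any positive weights, which is why weight_power mainly shapes robustness on irregular neighborhoods rather than accuracy on smooth fields.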
- physicsnemo.nn.functional.mesh_green_gauss_gradient(
- points: Tensor,
- cells: Tensor,
- neighbors: Tensor,
- values: Tensor,
- implementation: str | None = None,
- )
Compute cell-centered gradients using Green-Gauss face flux balances.

This functional reconstructs gradients from cell-centered values on simplicial meshes (2D triangles or 3D tetrahedra) using:

\[\nabla \phi_i \approx \frac{1}{V_i} \sum_{f \in \partial i} \phi_f \, \mathbf{A}_{i,f}\]

where \(V_i\) is the cell volume (area in 2D), \(\mathbf{A}_{i,f}\) is the outward face-area vector, and the face value \(\phi_f\) uses centered interpolation on interior faces:

\[\phi_f = \tfrac{1}{2}(\phi_i + \phi_j)\]

while boundary faces use \(\phi_f = \phi_i\).
- Parameters:
  - points (torch.Tensor) – Mesh point coordinates with shape (n_points, dims) for dims in {2, 3}.
  - cells (torch.Tensor) – Simplicial connectivity with shape (n_cells, dims + 1).
  - neighbors (torch.Tensor) – Precomputed cell-neighbor indices with shape (n_cells, n_faces), where boundary faces are marked with -1.
  - values (torch.Tensor) – Cell-centered values with shape (n_cells,) or (n_cells, ...).
  - implementation ({"warp", "torch"} or None, optional) – Explicit backend selection. When None, dispatch selects by rank.
- Returns:
  Reconstructed gradients with shape (n_cells, dims) for scalar values or (n_cells, dims, ...) for tensor values.
- Return type:
  torch.Tensor
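The Green-Gauss identity can be demonstrated on a single 2D cell. This sketch evaluates \(\phi\) at face midpoints, which equals the exact face average for a linear field and so recovers the gradient exactly; the functional above instead uses the centered average of adjacent cell values on interior faces, which is an approximation on general meshes:

```python
import numpy as np

# Green-Gauss gradient over one CCW-ordered 2D polygon.
def green_gauss_2d(verts, phi):
    area = 0.0
    grad = np.zeros(2)
    n = len(verts)
    for k in range(n):
        p0, p1 = verts[k], verts[(k + 1) % n]
        e = p1 - p0
        # Outward face-area vector for a CCW polygon: edge rotated by -90 degrees.
        a_f = np.array([e[1], -e[0]])
        grad += phi(0.5 * (p0 + p1)) * a_f
        area += 0.5 * (p0[0] * p1[1] - p0[1] * p1[0])  # shoelace contribution
    return grad / area

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.9]])  # CCW triangle
phi = lambda p: 4.0 * p[0] + 1.5 * p[1] - 2.0
g = green_gauss_2d(tri, phi)  # exact for a linear field
```

The outward face-area vectors of any closed cell sum to zero, which is why a constant field always yields a zero gradient regardless of the face interpolation.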
- physicsnemo.nn.functional.spectral_grid_gradient(
- field: Tensor,
- lengths: float | Sequence[float] = 1.0,
- derivative_orders: int | Sequence[int] = 1,
- include_mixed: bool = False,
- implementation: str | None = None,
- )
Compute periodic derivatives with Fourier spectral differentiation.
This functional computes first-order and/or second-order derivatives on 1D/2D/3D periodic scalar fields by transforming to Fourier space, applying exact derivative multipliers, and transforming back.
- Parameters:
  - field (torch.Tensor) – Scalar field on a periodic uniform grid with shape (n0,), (n0, n1), or (n0, n1, n2).
  - lengths (float | Sequence[float], optional) – Physical domain lengths per axis. A scalar applies the same length to every axis.
  - derivative_orders (int | Sequence[int], optional) – Derivative orders to compute. Supported values are 1, 2, or (1, 2).
  - include_mixed (bool, optional) – Include mixed second derivatives when requesting second derivatives.
  - implementation ({"torch"} or None, optional) – Implementation to use. When None, dispatch selects the available implementation.
- Returns:
  Stacked derivative tensor with shape (num_derivatives, *field.shape). Derivative ordering is deterministic: first derivatives, then pure second derivatives, then mixed second derivatives in axis-pair order (x, y), (x, z), (y, z).
- Return type:
  torch.Tensor
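The transform-multiply-invert pipeline can be sketched in 1D with NumPy's FFT (an illustration of the documented scheme, not the library code):

```python
import numpy as np

# Fourier spectral first derivative on a 1D periodic field:
# FFT, multiply by i*k, inverse FFT.
def spectral_derivative(f, length):
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

n = 32
length = 2.0 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
# d/dx sin(3x) = 3 cos(3x); spectral differentiation is exact (to rounding)
# for band-limited periodic fields.
df = spectral_derivative(np.sin(3.0 * x), length)
```

Second derivatives replace the \(ik\) multiplier with \(-k^2\), and mixed derivatives multiply the per-axis factors.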
- physicsnemo.nn.functional.meshless_fd_derivatives(
- stencil_values: Tensor,
- spacing: float | Sequence[float] = 1.0,
- derivative_orders: int | Sequence[int] = 1,
- include_mixed: bool = False,
- implementation: str | None = None,
- )
Compute meshless finite-difference derivatives from local stencil values.

This functional expects values already sampled on a canonical Cartesian {-1, 0, 1} stencil around each query point. It does not build stencil coordinates internally; it only maps stencil values to derivative estimates using central finite-difference formulas.

- Parameters:
  - stencil_values (torch.Tensor) – Values sampled on a canonical {-1, 0, 1} stencil with shape (num_points, stencil_size) or (num_points, stencil_size, channels). Stencil sizes must be 3, 9, or 27.
  - spacing (float | Sequence[float], optional) – Stencil spacing per axis.
  - derivative_orders (int | Sequence[int], optional) – Derivative orders to compute. Supported values are 1, 2, or (1, 2).
  - include_mixed (bool, optional) – Include mixed second derivatives when requesting second derivatives.
  - implementation ({"torch"} or None, optional) – Implementation to use. When None, dispatch selects the available implementation.
- Returns:
  Stacked derivatives with shape (num_derivatives, num_points) for scalar input or (num_derivatives, num_points, channels) for vector input.
- Return type:
  torch.Tensor
Notes

Derivative stack ordering is deterministic: first derivatives, then pure second derivatives, then mixed second derivatives in axis-combination order.

Dimensionality is inferred from the stencil size: 3 -> 1D, 9 -> 2D, 27 -> 3D.
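The 1D (stencil_size == 3) case of this mapping is just the central formulas applied per query point. A NumPy sketch, assuming the stencil columns are ordered as offsets (-1, 0, +1) (the library's canonical layout may differ):

```python
import numpy as np

# Map per-point values at stencil offsets (-1, 0, +1) to first and second
# derivative estimates via central differences.
def stencil_to_derivatives(stencil_values, dx):
    fm, f0, fp = (stencil_values[:, 0], stencil_values[:, 1], stencil_values[:, 2])
    d1 = (fp - fm) / (2.0 * dx)
    d2 = (fp - 2.0 * f0 + fm) / dx**2
    # Stack order follows the documented convention: first derivatives,
    # then pure second derivatives.
    return np.stack([d1, d2])

# One query point on f(x) = x**2 at x = 1 with stencil spacing 0.5:
dx, xq = 0.5, 1.0
samples = np.array([[(xq - dx) ** 2, xq**2, (xq + dx) ** 2]])
derivs = stencil_to_derivatives(samples, dx)  # both exact for a quadratic
```

The 9- and 27-point cases extend this with per-axis formulas plus cross-difference stencils for the mixed terms.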