Newton-Raphson Method in Two Dimensions#

Learning Outcomes#

This example teaches how to compute the solution of a system of equations in two variables using NumPy. There are two equations, \(f_{1}(x,y)\) and \(f_{2}(x, y)\), in two variables, \(x\) and \(y\). We seek a solution that satisfies both equations using Newton’s method. To understand Newton’s method in multiple dimensions, please see this note by Markus Grasmair.

The example also teaches how to interpret a warning from cuNumeric when the import statement is changed from importing numpy to importing cuNumeric.


Background#

We consider the following functions,

\[f_{1}(x,y) = x^{2} + y^{2} - 13 = 0\]
\[f_{2}(x,y) = x^{2} - 2y^{2} + 14 = 0\]

and their Jacobian, \(J\),

\[\begin{split}J = \begin{bmatrix} \frac{\partial f_{1}}{\partial x} & \frac{\partial f_{1}}{\partial y} \\ \frac{\partial f_{2}}{\partial x} & \frac{\partial f_{2}}{\partial y} \end{bmatrix}\end{split}\]

Substituting the functions, \(f_{1}(x, y)\) and \(f_{2}(x, y)\), we get,

\[\begin{split}J = \begin{bmatrix} 2x & 2y \\ 2x & -4y \end{bmatrix}\end{split}\]
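Each entry follows from differentiating the functions term by term:

\[\frac{\partial f_{1}}{\partial x} = 2x, \qquad \frac{\partial f_{1}}{\partial y} = 2y, \qquad \frac{\partial f_{2}}{\partial x} = 2x, \qquad \frac{\partial f_{2}}{\partial y} = -4y\]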

Implementation#

[1]:
import numpy as np
[2]:
def function(x: np.ndarray) -> np.ndarray:
    "Return a numpy array that has the computed values of $f_{1}(x, y)$ and $f_{2}(x, y)$"
    return np.array([np.sum(x**2) - 13.0, x[0]**2 - 2.0*x[1]**2 + 14.0])

def jacobian(x: np.ndarray) -> np.ndarray:
    "Return a 2x2 numpy array that has the computed values of the Jacobian, J"
    return np.array([[2*x[0], 2*x[1]], [2.0*x[0], -4.0*x[1]]])
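As a quick sanity check (an addition to this example, not part of the algorithm), we can evaluate both helpers at a sample point and compare against hand-computed values:

x = np.array([1.0, 2.0])
print(function(x))  # f1 = 1 + 4 - 13 = -8, f2 = 1 - 8 + 14 = 7  ->  [-8.  7.]
print(jacobian(x))  # [[2*1, 2*2], [2*1, -4*2]]  ->  [[ 2.  4.] [ 2. -8.]]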
Set up an iterative loop that updates an initial guess using \(x_{k} = x_{k-1} - {[\mathbf{J}(x_{k-1})]}^{-1} \cdot \mathbf{f}(x_{k-1})\).
To compute the inverse of the matrix \(\mathbf{J}\), we use the inv API from NumPy’s linalg package. To determine when to terminate the loop,
we compute the L2 norm of the difference between the solutions from two successive iterations and check whether it is less than a specified tolerance.

When you switch the import statement from importing numpy to importing cunumeric, you might see a warning like this:


RuntimeWarning: cuNumeric has not implemented inv and is falling back to canonical NumPy. You may notice significantly decreased performance for this function call.


This means that cuNumeric has not implemented the linalg.inv API and is falling back to NumPy’s implementation, which is executed eagerly on a single CPU thread. If the data resides on the GPU, it is transferred to the CPU before the API executes. This can have performance implications, as indicated by the warning.
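Making the switch should only require changing the import at the top of the notebook, since cuNumeric is designed as a drop-in replacement for NumPy; the rest of the code runs unchanged:

import cunumeric as np  # subsequent np.* calls now dispatch to cuNumeric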

[3]:
# number of iterations to try
niters = 20

# tolerance that sets the accuracy of solution
tol = 1e-6

# print additional information
verbose = False

# initial guess
xk = np.array([-20.0, 20.0])

# Newton's method
converged = False
for iter in range(niters):
    xk_old = xk

    if verbose:
        print(f"iter: {iter}, xk: {xk}")
    xk = xk - np.linalg.inv(jacobian(xk)).dot(function(xk))

    # stop when successive iterates differ by less than the tolerance
    l2_norm = np.linalg.norm(xk - xk_old)
    if l2_norm < tol:
        converged = True
        break

# let the user know if the solution converged or not
if converged:
    print(f"\nNewton's method converged in {iter} iterations to xk: {xk}")
else:
    print(f"\nNewton's method did not converge for this function, tolerance ({tol}) and number of iterations ({niters})")

Newton's method converged in 7 iterations to xk: [-2.  3.]

We see that the solution has converged in 7 iterations to \((x, y) = (-2, 3)\), which satisfies both equations.
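As an extra check (an addition to this example), substituting the converged iterate back into function should return residuals that are numerically zero:

print(function(np.array([-2.0, 3.0])))  # -> [0. 0.]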

The problem can be recast so that the explicit computation of the inverse is replaced by a linear solve, as shown below:

\[x_{k} = x_{k-1} - x_{k}^{*}\]

where \(x_{k}^{*}\) is the solution of the linear system

\[\mathbf{J}(x_{k-1})~ x_{k}^{*} = \mathbf{f}(x_{k-1})\]
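To see that the two formulations agree, here is a small check (an addition to this example, evaluated at the initial guess) comparing the explicit-inverse step with the linear-solve step:

x = np.array([-20.0, 20.0])
step_inv = np.linalg.inv(jacobian(x)).dot(function(x))
step_solve = np.linalg.solve(jacobian(x), function(x))
print(np.allclose(step_inv, step_solve))  # -> True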


We can then use NumPy’s linalg.solve API to perform the linear solve, as shown below. The algorithm converges to the same solution in exactly the same number of iterations.

[4]:
# number of iterations to try
niters = 20

# tolerance that sets the accuracy of solution
tol = 1e-6

# print additional information
verbose = False

# initial guess
xk = np.array([-20.0, 20.0])

# Newton's method
converged = False
for iter in range(niters):
    xk_old = xk

    if verbose:
        print(f"iter: {iter}, xk: {xk}")
    xk = xk - np.linalg.solve(jacobian(xk), function(xk))  # linear solve instead of explicit inverse

    # stop when successive iterates differ by less than the tolerance
    l2_norm = np.linalg.norm(xk - xk_old)
    if l2_norm < tol:
        converged = True
        break

# let the user know if the solution converged or not
if converged:
    print(f"\nNewton's method converged in {iter} iterations to xk: {xk}")
else:
    print(f"\nNewton's method did not converge for this function, tolerance ({tol}) and number of iterations ({niters})")

Newton's method converged in 7 iterations to xk: [-2.  3.]