Multivariate Root Finding

Imagine a vector function,

$$\mathbf{f}(\mathbf{x}) = \left ( \begin{array}{c} f_1(\mathbf{x}) \\ f_2(\mathbf{x}) \\ \vdots \\ f_N(\mathbf{x}) \end{array} \right )$$

where

$$\mathbf{x} = \left ( \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_N \end{array} \right )$$

We want to find the $\mathbf{x}$ that zeroes $\mathbf{f}(\mathbf{x})$, i.e.,

$$\mathbf{f}(\mathbf{x}) = 0$$
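
As a concrete example (an illustrative system, not one from the original notes), take $N = 2$ and look for the intersection of a circle of radius 2 with the hyperbola $x_1 x_2 = 1$:

$$\mathbf{f}(\mathbf{x}) = \left ( \begin{array}{c} x_1^2 + x_2^2 - 4 \\ x_1 x_2 - 1 \end{array} \right ) = 0$$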

We’ll use a generalization of Newton’s method to systems of equations.

Solution procedure

Note

This is the simplest case of a multivariate root finding algorithm. More sophisticated methods exist, but this method will give you a feel for what is involved.

Start with an initial guess, $\mathbf{x}^{(0)}$.

Taylor expand our function, seeking a correction $\delta \mathbf{x}$. Each component has the form:

$$f_i(\mathbf{x}^{(0)} + \delta\mathbf{x}) \approx 0 = f_i(\mathbf{x}^{(0)}) + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \delta x_j + \ldots$$

and we can write the vector form as:

$$\mathbf{f}(\mathbf{x}^{(0)} + \delta\mathbf{x}) = \mathbf{f}(\mathbf{x}^{(0)}) + \mathbf{J}\, \delta\mathbf{x} + \ldots \approx 0$$

where $\mathbf{J}$ is the Jacobian:

$$\mathbf{J} = \left ( \begin{array}{cccc} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_N} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_N} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_N}{\partial x_1} & \frac{\partial f_N}{\partial x_2} & \cdots & \frac{\partial f_N}{\partial x_N} \end{array} \right )$$
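
In practice the partial derivatives can be computed analytically or approximated numerically. A minimal finite-difference sketch in Python/NumPy (the function name `jacobian_fd` and the choice of perturbation scaling are mine, for illustration only):

```python
import numpy as np

def jacobian_fd(f, x, eps=1.e-8):
    """Approximate J[i, j] = df_i/dx_j at x with one-sided finite differences."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    J = np.zeros((n, n))
    f0 = f(x)
    for j in range(n):
        dx = np.zeros(n)
        # scale the perturbation to the magnitude of x[j] (with a floor near zero)
        dx[j] = eps * max(abs(x[j]), 1.0)
        J[:, j] = (f(x + dx) - f0) / dx[j]
    return J
```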

This can be expressed as the linear system:

$$\mathbf{J}\, \delta\mathbf{x} = -\mathbf{f}(\mathbf{x}^{(0)})$$
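
In code, computing the correction is a single dense linear solve. A minimal Python/NumPy sketch, using the illustrative two-equation system from above evaluated at the (arbitrary) guess $\mathbf{x}^{(0)} = (2, 1)$:

```python
import numpy as np

# Jacobian and residual of the illustrative system, evaluated at x0 = (2, 1):
# J[i, j] = df_i/dx_j and f0 = f(x0)
J = np.array([[4.0, 2.0],
              [1.0, 2.0]])
f0 = np.array([1.0, 1.0])

# solve J dx = -f(x0) for the Newton correction
dx = np.linalg.solve(J, -f0)
print(dx)
```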

Our solution procedure is iterative (a code sketch of the complete loop follows this list):

  • Iterate over k, seeking a new guess at the solution $\mathbf{x}^{(k+1)}$:

    • Solve the linear system:

      $$\mathbf{J}\, \delta\mathbf{x} = -\mathbf{f}(\mathbf{x}^{(k)})$$
    • Correct our initial guess:

      $$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \delta\mathbf{x}$$
    • Stop iteration if

      $$\| \delta\mathbf{x} \| < \epsilon \, \| \mathbf{x}^{(k)} \|$$

      where $\| \cdot \|$ is a suitable vector norm.
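
Putting the pieces together, here is a minimal sketch of this procedure in Python/NumPy, applied to the illustrative circle–hyperbola system from above. The names (`f`, `jac`, `newton_system`) and the tolerances are choices made here, not part of the original notes; one could equally swap the analytic Jacobian for the finite-difference version sketched earlier.

```python
import numpy as np

def f(x):
    """The illustrative system: a circle of radius 2 and the hyperbola x1*x2 = 1."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def jac(x):
    """Analytic Jacobian of the system: J[i, j] = df_i/dx_j."""
    return np.array([[2.0*x[0], 2.0*x[1]],
                     [x[1], x[0]]])

def newton_system(f, jac, x0, eps=1.e-10, max_iter=100):
    """Newton's method for systems: repeatedly solve J dx = -f(x) and correct x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        x = x + dx
        # stop when the correction is small relative to the current guess
        if np.linalg.norm(dx) < eps * np.linalg.norm(x):
            break
    return x

root = newton_system(f, jac, [2.0, 1.0])
print(root, f(root))
```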