Broyden's method

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.[1]

Newton's method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian can be a difficult and expensive operation; for large problems, such as those involving the solution of the Kohn–Sham equations in quantum mechanics, the number of variables can be in the hundreds of thousands. The idea behind Broyden's method is to compute the whole Jacobian at most once, at the first iteration, and to perform rank-one updates at the other iterations.

In 1979 Gay proved that when Broyden's method is applied to a linear system of size n × n, it terminates in 2n steps,[2] although like all quasi-Newton methods, it may not converge for nonlinear systems.

Description of the method

Solving single-variable nonlinear equation

In the secant method, we replace the first derivative f′ at x_n with the finite-difference approximation:

\[ f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}, \]

and proceed similarly to Newton's method:

\[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \]
where n is the iteration index.
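
As a concrete illustration, the single-variable iteration can be written in a few lines of code. The following Python sketch is not taken from Broyden's paper; the function name, starting points, tolerance, and the sample equation x³ − 2 = 0 are illustrative choices.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f using the finite-difference (secant) approximation of f'."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        dfdx = (f1 - f0) / (x1 - x0)   # finite-difference approximation of f'(x_n)
        x0, f0 = x1, f1
        x1 = x1 - f1 / dfdx            # Newton-like step with the approximate derivative
        f1 = f(x1)
        if abs(f1) < tol:
            break
    return x1

root = secant(lambda x: x**3 - 2.0, 1.0, 2.0)   # converges to 2**(1/3)
```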

Solving a system of nonlinear equations

Consider a system of k nonlinear equations in k unknowns,

\[ f(x) = 0, \]
where f is a vector-valued function of the vector x:

\[ x = (x_1, x_2, \ldots, x_k), \qquad f(x) = \bigl(f_1(x_1, \ldots, x_k), \ldots, f_k(x_1, \ldots, x_k)\bigr). \]

For such problems, Broyden gives a variation of the one-dimensional Newton's method, replacing the derivative with an approximate Jacobian J. The approximate Jacobian matrix is determined iteratively based on the secant equation, a finite-difference approximation:

\[ J_n (x_n - x_{n-1}) \approx f(x_n) - f(x_{n-1}), \]
where n is the iteration index. For clarity, define

\[ f_n = f(x_n), \qquad \Delta x_n = x_n - x_{n-1}, \qquad \Delta f_n = f_n - f_{n-1}, \]
so the above may be rewritten as

\[ J_n \, \Delta x_n \approx \Delta f_n . \]

The above equation is underdetermined when k is greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix, J_{n−1}, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification to J_{n−1}:

\[ J_n = J_{n-1} + \frac{\Delta f_n - J_{n-1} \Delta x_n}{\|\Delta x_n\|^2} \, \Delta x_n^{\mathsf{T}} . \]

This minimizes the Frobenius norm

\[ \| J_n - J_{n-1} \|_{\mathrm{F}} . \]

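In code, this rank-one correction is simply an outer product added to the previous estimate. The following NumPy sketch assumes the vectors Δx_n and Δf_n are stored as one-dimensional arrays; the function name is illustrative.

```python
import numpy as np

def broyden_update_jacobian(J_prev, dx, df):
    """Good Broyden rank-one update:
    J_n = J_{n-1} + ((df - J_{n-1} dx) / ||dx||^2) dx^T."""
    return J_prev + np.outer(df - J_prev @ dx, dx) / np.dot(dx, dx)
```
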
One then updates the variables using the approximate Jacobian, in what is called a quasi-Newton approach:

\[ x_{n+1} = x_n - \alpha \, J_n^{-1} f(x_n) . \]

If α = 1, this is the full Newton step; commonly a line search or trust region method is used to control α. The initial Jacobian can be taken as a diagonal, unit matrix, although more common is to scale it based upon the first step.[3] Broyden also suggested using the Sherman–Morrison formula[4] to directly update the inverse of the approximate Jacobian matrix:

\[ J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\Delta x_n^{\mathsf{T}} J_{n-1}^{-1} \Delta f_n} \, \Delta x_n^{\mathsf{T}} J_{n-1}^{-1} . \]

This first method is commonly known as the "good Broyden's method."
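
Putting the pieces together, a complete root-finding loop that maintains an approximation H of the inverse Jacobian via the Sherman–Morrison form might look as follows. This is only a sketch under simplifying assumptions: the initial inverse Jacobian is the identity, the step length α is fixed at 1 (no line search or trust region), and the stopping test, names, and example system are illustrative.

```python
import numpy as np

def broyden_good(f, x0, tol=1e-10, max_iter=100):
    """'Good' Broyden method, updating an approximation H of the inverse Jacobian."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    H = np.eye(len(x))                     # initial inverse-Jacobian estimate (identity)
    for _ in range(max_iter):
        dx = -H @ fx                       # quasi-Newton step with alpha = 1
        x_new = x + dx
        fx_new = f(x_new)
        df = fx_new - fx
        # Sherman–Morrison update of the inverse Jacobian:
        # H_n = H_{n-1} + ((dx - H_{n-1} df) / (dx^T H_{n-1} df)) dx^T H_{n-1}
        Hdf = H @ df
        H = H + np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
        x, fx = x_new, fx_new
        if np.linalg.norm(fx) < tol:
            break
    return x

# Illustrative system: x0 = cos(x1)/2, x1 = sin(x0)/2, written as f(x) = 0.
F = lambda x: np.array([x[0] - 0.5 * np.cos(x[1]),
                        x[1] - 0.5 * np.sin(x[0])])
solution = broyden_good(F, np.zeros(2))
```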

A similar technique can be derived by using a slightly different modification to J_{n−1}. This yields a second method, the so-called "bad Broyden's method":

\[ J_n^{-1} = J_{n-1}^{-1} + \frac{\Delta x_n - J_{n-1}^{-1} \Delta f_n}{\|\Delta f_n\|^2} \, \Delta f_n^{\mathsf{T}} . \]

This minimizes a different Frobenius norm

\[ \| J_n^{-1} - J_{n-1}^{-1} \|_{\mathrm{F}} . \]

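For comparison with the loop above, the corresponding inverse update can be swapped in with a one-line change; the sketch below uses the same conventions and hypothetical names as before.

```python
import numpy as np

def bad_broyden_update(H_prev, dx, df):
    """'Bad' Broyden update of the inverse Jacobian:
    H_n = H_{n-1} + ((dx - H_{n-1} df) / ||df||^2) df^T."""
    return H_prev + np.outer(dx - H_prev @ df, df) / np.dot(df, df)
```
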
In his original paper Broyden could not get the bad method to work, but there are cases where it does,[5] for which several explanations have been proposed.[6][7] Many other quasi-Newton schemes have been suggested in optimization, such as BFGS, where one seeks a maximum or minimum by finding zeros of the first derivatives (zeros of the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its approximation.

The Broyden class of methods

In addition to the two methods described above, Broyden defined a wider class of related methods.[1]: 578  In general, methods in the Broyden class are given in the form[8]: 150

\[ B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf{T}} B_k}{s_k^{\mathsf{T}} B_k s_k} + \frac{y_k y_k^{\mathsf{T}}}{y_k^{\mathsf{T}} s_k} + \phi_k \left( s_k^{\mathsf{T}} B_k s_k \right) v_k v_k^{\mathsf{T}}, \]

where s_k = x_{k+1} − x_k and y_k = f(x_{k+1}) − f(x_k) and

\[ v_k = \frac{y_k}{y_k^{\mathsf{T}} s_k} - \frac{B_k s_k}{s_k^{\mathsf{T}} B_k s_k}, \]

for each k = 1, 2, …. The choice of φ_k determines the method.
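
As an illustration of how the parameter enters, the update can be coded directly; with the sign conventions above, φ_k = 0 reproduces the BFGS update and φ_k = 1 the DFP update (the restricted Broyden class). This NumPy sketch assumes B is the current symmetric approximation and s, y are the difference vectors defined above; the function name is illustrative.

```python
import numpy as np

def broyden_class_update(B, s, y, phi):
    """Broyden-class update: phi = 0 gives BFGS, phi = 1 gives DFP."""
    Bs = B @ s
    sBs = s @ Bs                          # s^T B s
    ys = y @ s                            # y^T s
    v = y / ys - Bs / sBs
    return (B
            - np.outer(Bs, Bs) / sBs      # remove old curvature along s
            + np.outer(y, y) / ys         # add new secant information
            + phi * sBs * np.outer(v, v)) # phi-dependent interpolation term
```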

Other methods in the Broyden class have been introduced by other authors.

See also

References

  1. Broyden, C. G. (1965). "A Class of Methods for Solving Nonlinear Simultaneous Equations". Mathematics of Computation. 19 (92). American Mathematical Society: 577–593. doi:10.1090/S0025-5718-1965-0198670-6. JSTOR 2003941.
  2. Gay, D. M. (1979). "Some convergence properties of Broyden's method". SIAM Journal on Numerical Analysis. 16 (4). SIAM: 623–630. doi:10.1137/0716047.
  3. Shanno, D. F.; Phua, Kang-Hoh (1978). "Matrix conditioning and nonlinear optimization". Mathematical Programming. 14 (1): 149–160. doi:10.1007/BF01588962. ISSN 0025-5610.
  4. Sherman, Jack; Morrison, Winifred J. (1950). "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix". The Annals of Mathematical Statistics. 21 (1): 124–127. doi:10.1214/aoms/1177729893. ISSN 0003-4851.
  5. Kvaalen, Eric (1991). "A faster Broyden method". BIT Numerical Mathematics. 31 (2). SIAM: 369–372. doi:10.1007/BF01931297.
  6. Martínez, José Mario (2000). "Practical quasi-Newton methods for solving nonlinear systems". Journal of Computational and Applied Mathematics. 124 (1–2): 97–121. doi:10.1016/s0377-0427(00)00434-9. ISSN 0377-0427.
  7. Marks, L. D.; Luke, D. R. (2008). "Robust mixing for ab initio quantum mechanical calculations". Physical Review B. 78 (7). doi:10.1103/physrevb.78.075114. ISSN 1098-0121.
  8. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer New York. doi:10.1007/978-0-387-40065-5. ISBN 978-0-387-30303-1.
  9. Anderson, Donald G. (1965). "Iterative Procedures for Nonlinear Integral Equations". Journal of the ACM. 12 (4): 547–560. doi:10.1145/321296.321305. ISSN 0004-5411.
  10. Schubert, L. K. (1970). "Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian". Mathematics of Computation. 24 (109): 27–30. doi:10.1090/S0025-5718-1970-0258276-9. ISSN 0025-5718.
  11. Pulay, Péter (1980). "Convergence acceleration of iterative sequences. The case of SCF iteration". Chemical Physics Letters. 73 (2): 393–398. doi:10.1016/0009-2614(80)80396-4.
  12. Kresse, G.; Furthmüller, J. (1996). "Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set". Physical Review B. 54 (16): 11169–11186. doi:10.1103/PhysRevB.54.11169. ISSN 0163-1829.
  13. Srivastava, G. P. (1984). "Broyden's method for self-consistent field convergence acceleration". Journal of Physics A: Mathematical and General. 17 (6): L317–L321. doi:10.1088/0305-4470/17/6/002. ISSN 0305-4470.
  14. Klement, Jan (2014). "On Using Quasi-Newton Algorithms of the Broyden Class for Model-to-Test Correlation". Journal of Aerospace Technology and Management. 6 (4): 407–414. doi:10.5028/jatm.v6i4.373. ISSN 2175-9146.
  15. "Broyden class methods – File Exchange – MATLAB Central". www.mathworks.com. Retrieved 2016-02-04.
  16. Woods, N. D.; Payne, M. C.; Hasnip, P. J. (2019). "Computing the self-consistent field in Kohn–Sham density functional theory". Journal of Physics: Condensed Matter. 31 (45): 453001. doi:10.1088/1361-648X/ab31c0. ISSN 0953-8984.

Further reading