Conjugate gradient squared method

In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form Ax = b, particularly in cases where computing the transpose A^T is impractical. [1] The CGS method was developed as an improvement to the biconjugate gradient method. [2] [3] [4]

Background

A system of linear equations Ax = b consists of a known matrix A and a known vector b. To solve the system is to find the value of the unknown vector x. [3] [5] A direct method for solving such a system is to compute the inverse A^(-1) and then calculate x = A^(-1) b. However, computing the inverse is computationally expensive, so iterative methods are commonly used instead. Iterative methods begin with a guess x_0, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution. [6] [7]
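As a concrete (non-CGS) illustration of the two approaches, the sketch below solves a small system first by explicitly inverting the matrix and then with a basic iterative scheme (Jacobi iteration), stopping once successive guesses differ by less than a tolerance. The matrix, tolerance, and iteration limit are arbitrary choices made for the example.

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 5.0]])     # example system (arbitrary values)
    b = np.array([1.0, 2.0])

    # Direct method: form the inverse of A explicitly, x = A^(-1) b.
    x_direct = np.linalg.inv(A) @ b

    # Iterative method (Jacobi): start from a guess x_0 and improve it each step.
    x = np.zeros_like(b)                       # initial guess x_0
    D = np.diag(A)                             # diagonal entries of A
    for _ in range(100):
        x_new = (b - (A @ x - D * x)) / D      # one Jacobi update
        done = np.linalg.norm(x_new - x) < 1e-10
        x = x_new
        if done:                               # successive guesses close enough
            break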

As with the conjugate gradient method, biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition. [8]

Algorithm

The algorithm is as follows: [9]

  1. Choose an initial guess x_0
  2. Compute the initial residual r_0 = b − A x_0
  3. Choose r̃ = r_0
  4. For i = 1, 2, 3, … do:
    1. Compute ρ_{i−1} = r̃^T r_{i−1}. If ρ_{i−1} = 0, the method fails.
    2. If i = 1: set u_1 = r_0 and p_1 = u_1.
    3. Else: set β_{i−1} = ρ_{i−1} / ρ_{i−2}, u_i = r_{i−1} + β_{i−1} q_{i−1}, and p_i = u_i + β_{i−1} (q_{i−1} + β_{i−1} p_{i−1}).
    4. Solve M p̂ = p_i, where M is a pre-conditioner; then compute v̂ = A p̂, α_i = ρ_{i−1} / (r̃^T v̂), and q_i = u_i − α_i v̂.
    5. Solve M û = u_i + q_i; then update x_i = x_{i−1} + α_i û and r_i = r_{i−1} − α_i A û.
    6. Check for convergence: if there is convergence, end the loop and return the result x_i.
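The following is a minimal NumPy sketch of the preconditioned iteration outlined above; it is an illustrative reading of the steps, not a reference implementation. The function name cgs, the stopping test on the relative residual norm, and the default of no preconditioning (M = I, applied through the hypothetical M_solve callback) are assumptions made for the example.

    import numpy as np

    def cgs(A, b, x0=None, tol=1e-8, max_iter=1000, M_solve=None):
        """Conjugate gradient squared iteration for A x = b (illustrative sketch)."""
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float).copy()
        if M_solve is None:
            M_solve = lambda v: v                 # no preconditioning: M = I
        r = b - A @ x                             # step 2: initial residual r_0
        r_tilde = r.copy()                        # step 3: shadow residual r~ = r_0
        rho_prev = 1.0
        q = np.zeros_like(b)
        p = np.zeros_like(b)
        for i in range(1, max_iter + 1):
            rho = r_tilde @ r                     # step 4.1
            if rho == 0.0:
                raise RuntimeError("CGS breakdown: rho = 0")
            if i == 1:                            # step 4.2
                u = r.copy()
                p = u.copy()
            else:                                 # step 4.3
                beta = rho / rho_prev
                u = r + beta * q
                p = u + beta * (q + beta * p)
            p_hat = M_solve(p)                    # step 4.4: solve M p_hat = p
            v_hat = A @ p_hat
            alpha = rho / (r_tilde @ v_hat)
            q = u - alpha * v_hat
            u_hat = M_solve(u + q)                # step 4.5: solve M u_hat = u + q
            x = x + alpha * u_hat
            r = r - alpha * (A @ u_hat)
            rho_prev = rho
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):   # step 4.6
                break
        return x, i

For example, with a small nonsymmetric system (values chosen arbitrarily):

    A = np.array([[4.0, 1.0], [3.0, 5.0]])
    b = np.array([1.0, 2.0])
    x, iterations = cgs(A, b)    # x should satisfy A @ x ≈ b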

See also

  * Biconjugate gradient method
  * Conjugate gradient method
  * Conjugate residual method
  * Generalized minimal residual method (GMRES)
  * Nonlinear conjugate gradient method
  * Preconditioner

References

  1. Noel Black; Shirley Moore. "Conjugate Gradient Squared Method". Wolfram MathWorld.
  2. MathWorks. "cgs". MATLAB documentation.
  3. Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.
  4. Peter Sonneveld (1989). "CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems". SIAM Journal on Scientific and Statistical Computing. 10 (1): 36–52. doi:10.1137/0910004. ProQuest 921988114.
  5. "Linear equations" (PDF), Matrix Analysis and Applied Linear Algebra, Philadelphia, PA: SIAM, pp. 1–40, doi:10.1137/1.9780898719512.ch1, archived from the original (PDF) on 2004-06-10, retrieved 2023-12-18.
  6. "Iterative Methods for Linear Systems". Mathworks.
  7. Jean Gallier. "Iterative Methods for Solving Linear Systems" (PDF). UPenn.
  8. Alexandra Roberts; Anye Shi; Yue Sun. "Conjugate gradient methods". Cornell University. Retrieved 2023-12-26.
  9. R. Barrett; M. Berry; T. F. Chan; J. Demmel; J. Donato; J. Dongarra; V. Eijkhout; R. Pozo; C. Romine; H. Van der Vorst (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition. SIAM.