In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.

The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, [1] [2] who programmed it on the Z4, [3] and it has since been extensively researched. [4] [5]

The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear equations and black-box objective functions.

Suppose we want to solve the system of linear equations

${\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }$

for the vector ${\displaystyle \mathbf {x} }$, where the known ${\displaystyle n\times n}$ matrix ${\displaystyle \mathbf {A} }$ is symmetric (i.e., ${\displaystyle \mathbf {A} ^{\mathsf {T}}=\mathbf {A} }$), positive-definite (i.e., ${\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {A} \mathbf {x} >0}$ for all non-zero vectors ${\displaystyle \mathbf {x} }$ in ${\displaystyle \mathbb {R} ^{n}}$), and real, and ${\displaystyle \mathbf {b} }$ is known as well. We denote the unique solution of this system by ${\displaystyle \mathbf {x} _{*}}$.

## Derivation as a direct method

The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. Despite differences in their approaches, these derivations share a common theme: proving the orthogonality of the residuals and the conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method.

We say that two non-zero vectors u and v are conjugate (with respect to ${\displaystyle \mathbf {A} }$) if

${\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {A} \mathbf {v} =0.}$

Since ${\displaystyle \mathbf {A} }$ is symmetric and positive-definite, the left-hand side defines an inner product

${\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {A} \mathbf {v} =\langle \mathbf {u} ,\mathbf {v} \rangle _{\mathbf {A} }:=\langle \mathbf {A} \mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {u} ,\mathbf {A} ^{\mathsf {T}}\mathbf {v} \rangle =\langle \mathbf {u} ,\mathbf {A} \mathbf {v} \rangle .}$
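
For example, a quick numerical check in MATLAB/GNU Octave (the matrix and vectors below are illustrative, chosen only so the products come out exactly):

```matlab
% u and v are A-conjugate (u'*A*v == 0) without being orthogonal in the
% standard inner product (u'*v ~= 0).
A = [4 1; 1 3];   % symmetric positive-definite
u = [1; 0];
v = [1; -4];
u' * A * v        % returns 0: u and v are conjugate with respect to A
u' * v            % returns 1: u and v are not orthogonal
```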

Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if ${\displaystyle \mathbf {u} }$ is conjugate to ${\displaystyle \mathbf {v} }$, then ${\displaystyle \mathbf {v} }$ is conjugate to ${\displaystyle \mathbf {u} }$. Suppose that

${\displaystyle P=\{\mathbf {p} _{1},\dots ,\mathbf {p} _{n}\}}$

is a set of ${\displaystyle n}$ mutually conjugate vectors with respect to ${\displaystyle \mathbf {A} }$, i.e. ${\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}$ for all ${\displaystyle i\neq j}$. Then ${\displaystyle P}$ forms a basis for ${\displaystyle \mathbb {R} ^{n}}$, and we may express the solution ${\displaystyle \mathbf {x} _{*}}$ of ${\displaystyle \mathbf {Ax} =\mathbf {b} }$ in this basis:

${\displaystyle \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {p} _{i}\Rightarrow \mathbf {A} \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {A} \mathbf {p} _{i}.}$

Left-multiplying by ${\displaystyle \mathbf {p} _{k}^{\mathsf {T}}}$ yields

${\displaystyle \mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {x} _{*}=\sum _{i=1}^{n}\alpha _{i}\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{i}\Rightarrow \mathbf {p} _{k}^{\mathsf {T}}\mathbf {b} =\sum _{i=1}^{n}\alpha _{i}\left\langle \mathbf {p} _{k},\mathbf {p} _{i}\right\rangle _{\mathbf {A} }=\alpha _{k}\left\langle \mathbf {p} _{k},\mathbf {p} _{k}\right\rangle _{\mathbf {A} }\Rightarrow }$
${\displaystyle \alpha _{k}={\frac {\langle \mathbf {p} _{k},\mathbf {b} \rangle }{\langle \mathbf {p} _{k},\mathbf {p} _{k}\rangle _{\mathbf {A} }}}.}$

This gives the following method [4] for solving the equation Ax = b: find a sequence of ${\displaystyle n}$ conjugate directions, and then compute the coefficients ${\displaystyle \alpha _{k}}$.
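
A direct-method sketch in MATLAB/GNU Octave (illustrative only: `conjdir_direct` is a hypothetical name, and the conjugate directions are built here by Gram–Schmidt A-orthogonalization of the standard basis, which is far more expensive than the iterative method described below):

```matlab
function x = conjdir_direct(A, b)
    % Solve A*x = b by expanding x in a basis of n mutually A-conjugate
    % directions, as in the derivation above (sketch, not efficient).
    n = length(b);
    P = eye(n);                      % start from the standard basis
    for k = 2:n                      % A-orthogonalize against earlier p_j
        for j = 1:k-1
            P(:,k) = P(:,k) - ((P(:,j)' * A * P(:,k)) / (P(:,j)' * A * P(:,j))) * P(:,j);
        end
    end
    x = zeros(n, 1);
    for k = 1:n                      % alpha_k = <p_k, b> / <p_k, p_k>_A
        x = x + ((P(:,k)' * b) / (P(:,k)' * A * P(:,k))) * P(:,k);
    end
end
```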

## As an iterative method

If we choose the conjugate vectors ${\displaystyle \mathbf {p} _{k}}$ carefully, then we may not need all of them to obtain a good approximation to the solution ${\displaystyle \mathbf {x} _{*}}$. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where n is so large that the direct method would take too much time.

We denote the initial guess for ${\displaystyle \mathbf {x} _{*}}$ by x0 (we can assume without loss of generality that x0 = 0; otherwise, consider the system Az = b − Ax0 instead). Starting with x0, we search for the solution, and in each iteration we need a metric to tell us whether we are closer to the solution ${\displaystyle \mathbf {x} _{*}}$ (which is unknown to us). This metric comes from the fact that the solution ${\displaystyle \mathbf {x} _{*}}$ is also the unique minimizer of the following quadratic function

${\displaystyle f(\mathbf {x} )={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}\mathbf {A} \mathbf {x} -\mathbf {x} ^{\mathsf {T}}\mathbf {b} ,\qquad \mathbf {x} \in \mathbf {R} ^{n}\,.}$

The existence of a unique minimizer is apparent as its second derivative is given by a symmetric positive-definite matrix

${\displaystyle \nabla ^{2}f(\mathbf {x} )=\mathbf {A} \,,}$

and the fact that the minimizer (set ${\displaystyle \nabla f(\mathbf {x} )=0}$) solves the initial problem follows from its first derivative

${\displaystyle \nabla f(\mathbf {x} )=\mathbf {A} \mathbf {x} -\mathbf {b} \,.}$

This suggests taking the first basis vector p0 to be the negative of the gradient of f at x = x0. The gradient of f equals Ax − b. Starting with an initial guess x0, this means we take p0 = b − Ax0. The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method. Note that p0 is also the residual provided by this initial step of the algorithm.

Let rk be the residual at the kth step:

${\displaystyle \mathbf {r} _{k}=\mathbf {b} -\mathbf {Ax} _{k}.}$

As observed above, ${\displaystyle \mathbf {r} _{k}}$ is the negative gradient of ${\displaystyle f}$ at ${\displaystyle \mathbf {x} _{k}}$, so the gradient descent method would move in the direction rk. Here, however, we insist that the directions ${\displaystyle \mathbf {p} _{k}}$ be conjugate to each other. A practical way to enforce this is to require that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormality-type constraint, and hence the algorithm can be viewed as an example of Gram–Schmidt orthonormalization. This gives the following expression:

${\displaystyle \mathbf {p} _{k}=\mathbf {r} _{k}-\sum _{i<k}{\frac {\mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {r} _{k}}{\mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{i}}}\mathbf {p} _{i}}$

Following this direction, the next optimal location is given by

${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}$

with

${\displaystyle \alpha _{k}={\frac {\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {b} -\mathbf {Ax} _{k})}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}={\frac {\mathbf {p} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}},}$

where the last equality follows from the definition of ${\displaystyle \mathbf {r} _{k}}$. The expression for ${\displaystyle \alpha _{k}}$ can be derived by substituting the expression for xk+1 into f and minimizing it with respect to ${\displaystyle \alpha _{k}}$:

{\displaystyle {\begin{aligned}f(\mathbf {x} _{k+1})&=f(\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k})=:g(\alpha _{k})\\g'(\alpha _{k})&{\overset {!}{=}}0\quad \Rightarrow \quad \alpha _{k}={\frac {\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {b} -\mathbf {Ax} _{k})}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}\,.\end{aligned}}}
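
Writing ${\displaystyle g}$ out explicitly for the quadratic ${\displaystyle f}$ makes this minimization transparent:

{\displaystyle {\begin{aligned}g(\alpha )=f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})&=f(\mathbf {x} _{k})+\alpha \,\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {A} \mathbf {x} _{k}-\mathbf {b} )+{\tfrac {1}{2}}\alpha ^{2}\,\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}\,,\\g'(\alpha )&=\mathbf {p} _{k}^{\mathsf {T}}(\mathbf {A} \mathbf {x} _{k}-\mathbf {b} )+\alpha \,\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}\,,\end{aligned}}}

and setting ${\displaystyle g'(\alpha )=0}$ recovers the stated expression for ${\displaystyle \alpha _{k}}$.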

### The resulting algorithm

The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous search directions and residual vectors, as well as many matrix-vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that ${\displaystyle \mathbf {r} _{i}}$ is orthogonal to ${\displaystyle \mathbf {r} _{j}}$, i.e. ${\displaystyle \mathbf {r} _{i}^{\mathsf {T}}\mathbf {r} _{j}=0}$, for i ≠ j, and ${\displaystyle \mathbf {p} _{i}}$ is ${\displaystyle \mathbf {A} }$-orthogonal to ${\displaystyle \mathbf {p} _{j}}$, i.e. ${\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}$, for ${\displaystyle i\neq j}$. This can be understood as follows: as the algorithm progresses, the ${\displaystyle \mathbf {p} _{i}}$ and ${\displaystyle \mathbf {r} _{i}}$ span the same Krylov subspace, where the ${\displaystyle \mathbf {r} _{i}}$ form an orthogonal basis with respect to the standard inner product and the ${\displaystyle \mathbf {p} _{i}}$ form an orthogonal basis with respect to the inner product induced by ${\displaystyle \mathbf {A} }$. Therefore, ${\displaystyle \mathbf {x} _{k}}$ can be regarded as the projection of ${\displaystyle \mathbf {x} }$ onto the Krylov subspace.

The algorithm is detailed below for solving Ax = b where ${\displaystyle \mathbf {A} }$ is a real, symmetric, positive-definite matrix. The input vector ${\displaystyle \mathbf {x} _{0}}$ can be an approximate initial solution or 0. It is a different formulation of the exact procedure described above.

{\displaystyle {\begin{aligned}&\mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}\\&{\hbox{if }}\mathbf {r} _{0}{\text{ is sufficiently small, then return }}\mathbf {x} _{0}{\text{ as the result}}\\&\mathbf {p} _{0}:=\mathbf {r} _{0}\\&k:=0\\&{\text{repeat}}\\&\qquad \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}\\&\qquad \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}\\&\qquad \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}\\&\qquad {\hbox{if }}\mathbf {r} _{k+1}{\text{ is sufficiently small, then exit loop}}\\&\qquad \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {r} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}}\\&\qquad \mathbf {p} _{k+1}:=\mathbf {r} _{k+1}+\beta _{k}\mathbf {p} _{k}\\&\qquad k:=k+1\\&{\text{end repeat}}\\&{\text{return }}\mathbf {x} _{k+1}{\text{ as the result}}\end{aligned}}}

This is the most commonly used algorithm. The same formula for βk is also used in the Fletcher–Reeves nonlinear conjugate gradient method.

#### Restarts

We note that ${\displaystyle \mathbf {x} _{1}}$ is computed by the gradient descent method applied to ${\displaystyle \mathbf {x} _{0}}$. Setting ${\displaystyle \beta _{k}=0}$ would similarly make ${\displaystyle \mathbf {x} _{k+1}}$ be computed by the gradient descent method from ${\displaystyle \mathbf {x} _{k}}$, and can therefore be used as a simple implementation of a restart of the conjugate gradient iterations. [4] Restarts can slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due to round-off error.

#### Explicit residual calculation

The formulas ${\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}$ and ${\displaystyle \mathbf {r} _{k}:=\mathbf {b} -\mathbf {Ax} _{k}}$, which both hold in exact arithmetic, make the formulas ${\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}$ and ${\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}$ mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication by ${\displaystyle \mathbf {A} }$ since the vector ${\displaystyle \mathbf {Ap} _{k}}$ is already computed to evaluate ${\displaystyle \alpha _{k}}$. The latter may be more accurate, substituting the explicit calculation ${\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}$ for the implicit one by the recursion subject to round-off error accumulation, and is thus recommended for an occasional evaluation. [6]

A norm of the residual is typically used as a stopping criterion. The norm of the explicit residual ${\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}$ provides a guaranteed level of accuracy both in exact arithmetic and in the presence of rounding errors, where convergence naturally stagnates. In contrast, the implicit residual ${\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}$ is known to keep decreasing in amplitude well below the level of rounding errors and thus cannot be used to determine the stagnation of convergence.
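
As a sketch of this recommendation (the name `conjgrad_explicit` and the recomputation interval of 50 iterations are illustrative choices, not part of the standard method), a CG loop might occasionally replace the recursive update with the explicit one:

```matlab
function x = conjgrad_explicit(A, b, x)
    r = b - A * x;
    p = r;
    rsold = r' * r;
    for i = 1:length(b)
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        if mod(i, 50) == 0
            r = b - A * x;        % occasional explicit residual (more accurate)
        else
            r = r - alpha * Ap;   % recursive residual (cheaper, no extra A product)
        end
        rsnew = r' * r;
        if sqrt(rsnew) < 1e-10
            break
        end
        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
    end
end
```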

#### Computation of alpha and beta

In the algorithm, αk is chosen such that ${\displaystyle \mathbf {r} _{k+1}}$ is orthogonal to ${\displaystyle \mathbf {r} _{k}}$. The denominator is simplified from

${\displaystyle \alpha _{k}={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}}$

since ${\displaystyle \mathbf {r} _{k+1}=\mathbf {p} _{k+1}-\beta _{k}\mathbf {p} _{k}}$. The βk is chosen such that ${\displaystyle \mathbf {p} _{k+1}}$ is conjugate to ${\displaystyle \mathbf {p} _{k}}$. Initially, βk is

${\displaystyle \beta _{k}=-{\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}}}}$

using

${\displaystyle \mathbf {r} _{k+1}=\mathbf {r} _{k}-\alpha _{k}\mathbf {A} \mathbf {p} _{k}}$

and equivalently

${\displaystyle \mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}(\mathbf {r} _{k}-\mathbf {r} _{k+1}),}$

the numerator of βk is rewritten as

${\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}\mathbf {r} _{k+1}^{\mathsf {T}}(\mathbf {r} _{k}-\mathbf {r} _{k+1})=-{\frac {1}{\alpha _{k}}}\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {r} _{k+1}}$

because ${\displaystyle \mathbf {r} _{k+1}}$ and ${\displaystyle \mathbf {r} _{k}}$ are orthogonal by design. The denominator is rewritten as

${\displaystyle \mathbf {p} _{k}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}=(\mathbf {r} _{k}+\beta _{k-1}\mathbf {p} _{k-1})^{\mathsf {T}}\mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}\mathbf {r} _{k}^{\mathsf {T}}(\mathbf {r} _{k}-\mathbf {r} _{k+1})={\frac {1}{\alpha _{k}}}\mathbf {r} _{k}^{\mathsf {T}}\mathbf {r} _{k}}$

using the fact that the search directions pk are conjugate and again that the residuals are orthogonal. This gives the βk used in the algorithm after cancelling αk.

#### Example code in MATLAB / GNU Octave

```matlab
function x = conjgrad(A, b, x)
    % Solve A*x = b for symmetric positive-definite A by the conjugate
    % gradient method, starting from the initial guess x.
    r = b - A * x;
    p = r;
    rsold = r' * r;

    for i = 1:length(b)
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        rsnew = r' * r;
        if sqrt(rsnew) < 1e-10
            break
        end
        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
    end
end
```

### Numerical example

Consider the linear system Ax = b given by

${\displaystyle \mathbf {A} \mathbf {x} ={\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}1\\2\end{bmatrix}},}$

we will perform two steps of the conjugate gradient method beginning with the initial guess

${\displaystyle \mathbf {x} _{0}={\begin{bmatrix}2\\1\end{bmatrix}}}$

in order to find an approximate solution to the system.

#### Solution

For reference, the exact solution is

${\displaystyle \mathbf {x} ={\begin{bmatrix}{\frac {1}{11}}\\\\{\frac {7}{11}}\end{bmatrix}}\approx {\begin{bmatrix}0.0909\\\\0.6364\end{bmatrix}}}$

Our first step is to calculate the residual vector r0 associated with x0. This residual is computed from the formula r0 = b - Ax0, and in our case is equal to

${\displaystyle \mathbf {r} _{0}={\begin{bmatrix}1\\2\end{bmatrix}}-{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}2\\1\end{bmatrix}}={\begin{bmatrix}-8\\-3\end{bmatrix}}=\mathbf {p} _{0}.}$

Since this is the first iteration, we will use the residual vector r0 as our initial search direction p0; the method of selecting pk will change in further iterations.

We now compute the scalar α0 using the relationship

${\displaystyle \alpha _{0}={\frac {\mathbf {r} _{0}^{\mathsf {T}}\mathbf {r} _{0}}{\mathbf {p} _{0}^{\mathsf {T}}\mathbf {Ap} _{0}}}={\frac {{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}{{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}}={\frac {73}{331}}.}$

We can now compute x1 using the formula

${\displaystyle \mathbf {x} _{1}=\mathbf {x} _{0}+\alpha _{0}\mathbf {p} _{0}={\begin{bmatrix}2\\1\end{bmatrix}}+{\frac {73}{331}}{\begin{bmatrix}-8\\-3\end{bmatrix}}={\begin{bmatrix}0.2356\\0.3384\end{bmatrix}}.}$

This completes the first iteration, the result being an "improved" approximate solution to the system, x1. We may now move on and compute the next residual vector r1 using the formula

${\displaystyle \mathbf {r} _{1}=\mathbf {r} _{0}-\alpha _{0}\mathbf {A} \mathbf {p} _{0}={\begin{bmatrix}-8\\-3\end{bmatrix}}-{\frac {73}{331}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}={\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}.}$

Our next step in the process is to compute the scalar β0 that will eventually be used to determine the next search direction p1.

${\displaystyle \beta _{0}={\frac {\mathbf {r} _{1}^{\mathsf {T}}\mathbf {r} _{1}}{\mathbf {r} _{0}^{\mathsf {T}}\mathbf {r} _{0}}}={\frac {{\begin{bmatrix}-0.2810&0.7492\end{bmatrix}}{\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}}{{\begin{bmatrix}-8&-3\end{bmatrix}}{\begin{bmatrix}-8\\-3\end{bmatrix}}}}=0.0088.}$

Now, using this scalar β0, we can compute the next search direction p1 using the relationship

${\displaystyle \mathbf {p} _{1}=\mathbf {r} _{1}+\beta _{0}\mathbf {p} _{0}={\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}+0.0088{\begin{bmatrix}-8\\-3\end{bmatrix}}={\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}.}$

We now compute the scalar α1 using our newly acquired p1 using the same method as that used for α0.

${\displaystyle \alpha _{1}={\frac {\mathbf {r} _{1}^{\mathsf {T}}\mathbf {r} _{1}}{\mathbf {p} _{1}^{\mathsf {T}}\mathbf {Ap} _{1}}}={\frac {{\begin{bmatrix}-0.2810&0.7492\end{bmatrix}}{\begin{bmatrix}-0.2810\\0.7492\end{bmatrix}}}{{\begin{bmatrix}-0.3511&0.7229\end{bmatrix}}{\begin{bmatrix}4&1\\1&3\end{bmatrix}}{\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}}}=0.4122.}$

Finally, we find x2 using the same method as that used to find x1.

${\displaystyle \mathbf {x} _{2}=\mathbf {x} _{1}+\alpha _{1}\mathbf {p} _{1}={\begin{bmatrix}0.2356\\0.3384\end{bmatrix}}+0.4122{\begin{bmatrix}-0.3511\\0.7229\end{bmatrix}}={\begin{bmatrix}0.0909\\0.6364\end{bmatrix}}.}$

The result, x2, is a "better" approximation to the system's solution than x1 and x0. If exact arithmetic were used in this example instead of limited-precision arithmetic, then the exact solution would theoretically be reached after n = 2 iterations (n being the order of the system).
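
The two iterations above can be reproduced with the MATLAB/GNU Octave routine given earlier:

```matlab
A = [4 1; 1 3];
b = [1; 2];
x0 = [2; 1];
x = conjgrad(A, b, x0)   % returns approximately [0.0909; 0.6364]
```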

## Convergence properties

The conjugate gradient method can theoretically be viewed as a direct method, as in the absence of round-off error it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained, since the conjugate gradient method is unstable with respect to even small perturbations; e.g., most directions are not in practice conjugate, due to the degenerative nature of generating the Krylov subspaces.

As an iterative method, the conjugate gradient method monotonically (in the energy norm) improves approximations ${\displaystyle \mathbf {x} _{k}}$ to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by the condition number ${\displaystyle \kappa (A)}$ of the system matrix ${\displaystyle A}$: the larger ${\displaystyle \kappa (A)}$ is, the slower the improvement. [7]

If ${\displaystyle \kappa (A)}$ is large, preconditioning is commonly used to replace the original system ${\displaystyle \mathbf {Ax} -\mathbf {b} =0}$ with ${\displaystyle \mathbf {M} ^{-1}(\mathbf {Ax} -\mathbf {b} )=0}$ such that ${\displaystyle \kappa (\mathbf {M} ^{-1}\mathbf {A} )}$ is smaller than ${\displaystyle \kappa (\mathbf {A} )}$, see below.

### Convergence theorem

Define a subset of polynomials as

${\displaystyle \Pi _{k}^{*}:=\left\lbrace \ p\in \Pi _{k}\ :\ p(0)=1\ \right\rbrace \,,}$

where ${\displaystyle \Pi _{k}}$ is the set of polynomials of degree at most ${\displaystyle k}$.

Let ${\displaystyle \left(\mathbf {x} _{k}\right)_{k}}$ be the iterative approximations of the exact solution ${\displaystyle \mathbf {x} _{*}}$, and define the errors as ${\displaystyle \mathbf {e} _{k}:=\mathbf {x} _{k}-\mathbf {x} _{*}}$. Now, the rate of convergence can be approximated as [4] [8]

{\displaystyle {\begin{aligned}\left\|\mathbf {e} _{k}\right\|_{\mathbf {A} }&=\min _{p\in \Pi _{k}^{*}}\left\|p(\mathbf {A} )\mathbf {e} _{0}\right\|_{\mathbf {A} }\\&\leq \min _{p\in \Pi _{k}^{*}}\,\max _{\lambda \in \sigma (\mathbf {A} )}|p(\lambda )|\ \left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\\&\leq 2\left({\frac {{\sqrt {\kappa (\mathbf {A} )}}-1}{{\sqrt {\kappa (\mathbf {A} )}}+1}}\right)^{k}\ \left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\,,\end{aligned}}}

where ${\displaystyle \sigma (\mathbf {A} )}$ denotes the spectrum, and ${\displaystyle \kappa (\mathbf {A} )}$ denotes the condition number.

Note the important limit as ${\displaystyle \kappa (\mathbf {A} )}$ tends to ${\displaystyle \infty }$:

${\displaystyle {\frac {{\sqrt {\kappa (\mathbf {A} )}}-1}{{\sqrt {\kappa (\mathbf {A} )}}+1}}\approx 1-{\frac {2}{\sqrt {\kappa (\mathbf {A} )}}}\quad {\text{for}}\quad \kappa (\mathbf {A} )\gg 1\,.}$

This limit shows a faster convergence rate compared to the iterative methods of Jacobi or Gauss–Seidel which scale as ${\displaystyle \approx 1-{\frac {2}{\kappa (\mathbf {A} )}}}$.
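
For a concrete feel of the bound, it can be evaluated numerically; the snippet below uses the 2×2 matrix from the numerical example above (any SPD matrix would do, and the iteration count is an arbitrary choice):

```matlab
A = [4 1; 1 3];
kappa = cond(A);                             % condition number of SPD A
rho = (sqrt(kappa) - 1) / (sqrt(kappa) + 1); % per-iteration factor
k = 5;                                       % iteration count
bound = 2 * rho^k                            % bound on ||e_k||_A / ||e_0||_A
```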

No round-off error is assumed in the convergence theorem, but the convergence bound is commonly valid in practice as theoretically explained [5] by Anne Greenbaum.

### Practical convergence

If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound with ${\displaystyle {\sqrt {\kappa (\mathbf {A} )}}}$, but may be super-linear, depending on the distribution of the spectrum of the matrix ${\displaystyle A}$ and the spectral distribution of the error. [5] In the last stage, the smallest attainable accuracy is reached and the convergence stalls, or the method may even start diverging. In typical scientific computing applications in double-precision floating-point format for matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage.

## The preconditioned conjugate gradient method

In most cases, preconditioning is necessary to ensure fast convergence of the conjugate gradient method. The preconditioned conjugate gradient method takes the following form: [9]

${\displaystyle \mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}}$
${\displaystyle \mathbf {z} _{0}:=\mathbf {M} ^{-1}\mathbf {r} _{0}}$
${\displaystyle \mathbf {p} _{0}:=\mathbf {z} _{0}}$
${\displaystyle k:=0\,}$
repeat
${\displaystyle \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}{\mathbf {p} _{k}^{\mathsf {T}}\mathbf {Ap} _{k}}}}$
${\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}$
${\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}$
if rk+1 is sufficiently small then exit loop end if
${\displaystyle \mathbf {z} _{k+1}:=\mathbf {M} ^{-1}\mathbf {r} _{k+1}}$
${\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}}$
${\displaystyle \mathbf {p} _{k+1}:=\mathbf {z} _{k+1}+\beta _{k}\mathbf {p} _{k}}$
${\displaystyle k:=k+1\,}$
end repeat
The result is xk+1

The above formulation is equivalent to applying the conjugate gradient method without preconditioning to the system [10]

${\displaystyle \mathbf {E} ^{-1}\mathbf {A} (\mathbf {E} ^{-1})^{\mathsf {T}}\mathbf {\hat {x}} =\mathbf {E} ^{-1}\mathbf {b} }$

where

${\displaystyle \mathbf {EE} ^{\mathsf {T}}=\mathbf {M} ,\qquad \mathbf {\hat {x}} =\mathbf {E} ^{\mathsf {T}}\mathbf {x} .}$

The preconditioner matrix M has to be symmetric positive-definite and fixed, i.e., cannot change from iteration to iteration. If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable.

An example of a commonly used preconditioner is the incomplete Cholesky factorization. [11]
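
A minimal MATLAB/GNU Octave sketch of the loop above (illustrative: `pcg_sketch` is a hypothetical name, and the preconditioner is passed as an explicit SPD matrix M and applied via backslash):

```matlab
function x = pcg_sketch(A, b, x, M)
    r = b - A * x;
    z = M \ r;                 % z_0 = M^{-1} r_0
    p = z;
    rz_old = r' * z;
    for i = 1:length(b)
        Ap = A * p;
        alpha = rz_old / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        if norm(r) < 1e-10
            break
        end
        z = M \ r;             % apply the preconditioner
        rz_new = r' * z;
        p = z + (rz_new / rz_old) * p;
        rz_old = rz_new;
    end
end
```

For instance, M = diag(diag(A)) gives the simple Jacobi (diagonal) preconditioner.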

## The flexible preconditioned conjugate gradient method

In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning that changes between iterations. Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests it leads to a significant slowdown of the convergence of the algorithm presented above. Using the Polak–Ribière formula

${\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\left(\mathbf {z} _{k+1}-\mathbf {z} _{k}\right)}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}}$

instead of the Fletcher–Reeves formula

${\displaystyle \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k+1}}{\mathbf {r} _{k}^{\mathsf {T}}\mathbf {z} _{k}}}}$

may dramatically improve the convergence in this case. [12] This version of the preconditioned conjugate gradient method can be called [13] flexible, as it allows for variable preconditioning. The flexible version is also shown [14] to be robust even if the preconditioner is not symmetric positive definite (SPD).

The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner, ${\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k}=0,}$ so both formulas for βk are equivalent in exact arithmetic, i.e., without the round-off error.
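
A sketch of the flexible version (illustrative: `fpcg_sketch` is a hypothetical name, and the possibly varying preconditioner is passed as a function handle `apply_prec`); note the extra stored vector z compared to the fixed-preconditioner loop:

```matlab
function x = fpcg_sketch(A, b, x, apply_prec)
    r = b - A * x;
    z = apply_prec(r);
    p = z;
    for i = 1:length(b)
        Ap = A * p;
        rz = r' * z;
        alpha = rz / (p' * Ap);
        x = x + alpha * p;
        r_new = r - alpha * Ap;
        if norm(r_new) < 1e-10
            break
        end
        z_new = apply_prec(r_new);
        beta = (r_new' * (z_new - z)) / rz;   % Polak-Ribiere formula
        p = z_new + beta * p;
        r = r_new;
        z = z_new;                            % the extra stored vector
    end
end
```

For a fixed preconditioner M it could be called as fpcg_sketch(A, b, x0, @(r) M \ r), in which case it reduces, up to round-off, to the fixed-preconditioner iteration above.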

The mathematical explanation of the better convergence behavior of the method with the Polak–Ribière formula is that the method is locally optimal in this case, in particular, it does not converge slower than the locally optimal steepest descent method. [15]

## Vs. the locally optimal steepest descent method

In both the original and the preconditioned conjugate gradient methods one only needs to set ${\displaystyle \beta _{k}:=0}$ to turn them into the locally optimal steepest descent method, using line search. With this substitution, the vectors p are always the same as the vectors z, so there is no need to store the vectors p. Thus, every iteration of these steepest descent methods is a bit cheaper than an iteration of the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPD preconditioner is used, see above.

## Conjugate gradient method as optimal feedback controller for double integrator

The conjugate gradient method can also be derived using optimal control theory. [16] In this approach, the conjugate gradient method falls out as an optimal feedback controller,

${\displaystyle u=k(x,v):=-\gamma _{a}\nabla f(x)-\gamma _{b}v}$

for the double integrator system,

${\displaystyle {\dot {x}}=v,\quad {\dot {v}}=u}$

The quantities ${\displaystyle \gamma _{a}}$ and ${\displaystyle \gamma _{b}}$ are variable feedback gains. [16]

## Conjugate gradient on the normal equations

The conjugate gradient method can be applied to an arbitrary n-by-m matrix by applying it to the normal equations ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {A} }$ and right-hand side vector ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {b} }$, since ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {A} }$ is a symmetric positive-semidefinite matrix for any ${\displaystyle \mathbf {A} }$. The result is conjugate gradient on the normal equations (CGNR):

${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {A} \mathbf {x} =\mathbf {A} ^{\mathsf {T}}\mathbf {b} .}$

As an iterative method, it is not necessary to form ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {A} }$ explicitly in memory, but only to perform the matrix-vector and transpose matrix-vector multiplications. Therefore, CGNR is particularly useful when ${\displaystyle \mathbf {A} }$ is a sparse matrix, since these operations are usually extremely efficient. However, the downside of forming the normal equations is that the condition number ${\displaystyle \kappa (\mathbf {A} ^{\mathsf {T}}\mathbf {A} )}$ is equal to ${\displaystyle \kappa ^{2}(\mathbf {A} )}$, and so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to roundoff errors. Finding a good preconditioner is often an important part of using the CGNR method.
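
A minimal CGNR sketch (illustrative: `cgnr_sketch` is a hypothetical name) that applies ${\displaystyle \mathbf {A} }$ and ${\displaystyle \mathbf {A} ^{\mathsf {T}}}$ separately instead of forming ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {A} }$:

```matlab
function x = cgnr_sketch(A, b, x)
    % CG applied to the normal equations A'*A*x = A'*b without ever
    % forming A'*A explicitly.
    r = A' * (b - A * x);             % residual of the normal equations
    p = r;
    rsold = r' * r;
    for i = 1:size(A, 2)
        Ap = A * p;
        alpha = rsold / (Ap' * Ap);   % p'*(A'*A)*p = (A*p)'*(A*p)
        x = x + alpha * p;
        r = r - alpha * (A' * Ap);
        rsnew = r' * r;
        if sqrt(rsnew) < 1e-10
            break
        end
        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
    end
end
```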

Several algorithms have been proposed (e.g., CGLS, LSQR). The LSQR algorithm purportedly has the best numerical stability when A is ill-conditioned, i.e., A has a large condition number.

## Conjugate gradient method for complex Hermitian matrices

The conjugate gradient method with a trivial modification is extendable to solving, given a complex-valued matrix A and vector b, the system of linear equations ${\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }$ for the complex-valued vector x, where A is Hermitian (i.e., A' = A) and positive-definite, and the symbol ' denotes the conjugate transpose, using the MATLAB/GNU Octave style. The trivial modification is simply substituting the conjugate transpose for the real transpose everywhere. This substitution is backward compatible, since the conjugate transpose turns into the real transpose on real-valued vectors and matrices. The example code in MATLAB/GNU Octave provided above thus already works for complex Hermitian matrices, needing no modification.
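
As an illustration, a small Hermitian positive-definite system (the values are chosen arbitrarily):

```matlab
% The routine above works unchanged because ' in MATLAB/GNU Octave is
% already the conjugate transpose.
A = [2, 1i; -1i, 2];          % Hermitian (A' == A), eigenvalues 1 and 3
b = [1; 1i];
x = conjgrad(A, b, zeros(2, 1));
norm(A * x - b)               % near machine precision
```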


## References

1. Hestenes, Magnus R.; Stiefel, Eduard (December 1952). "Methods of Conjugate Gradients for Solving Linear Systems" (PDF). Journal of Research of the National Bureau of Standards. 49 (6): 409.
2. Straeter, T. A. (1971). "On the Extension of the Davidon–Broyden Class of Rank One, Quasi-Newton Minimization Methods to an Infinite Dimensional Hilbert Space with Applications to Optimal Control Problems". NASA Technical Reports Server. NASA. hdl:2060/19710026200.
3. Speiser, Ambros (2004). "Konrad Zuse und die ERMETH: Ein weltweiter Architektur-Vergleich" [Konrad Zuse and the ERMETH: A worldwide comparison of architectures]. In Hellige, Hans Dieter (ed.). Geschichten der Informatik. Visionen, Paradigmen, Leitmotive (in German). Berlin: Springer. p. 185. ISBN 3-540-00217-0.
4. Shewchuk, Jonathan R (1994). An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (PDF).
5. Saad, Yousef (2003). Iterative Methods for Sparse Linear Systems (2nd ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. p. 195. ISBN 978-0-89871-534-7.
6. Hackbusch, W. (2016-06-21). Iterative Solution of Large Sparse Systems of Equations (2nd ed.). Switzerland: Springer. ISBN 9783319284835. OCLC 952572240.
7. Barrett, Richard; Berry, Michael; Chan, Tony F.; Demmel, James; Donato, June; Dongarra, Jack; Eijkhout, Victor; Pozo, Roldan; Romine, Charles; van der Vorst, Henk. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (PDF) (2nd ed.). Philadelphia, PA: SIAM. p. 13. Retrieved 2020-03-31.
8. Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. sec. 11.5.2. ISBN 978-1-4214-0794-4.
9. Concus, P.; Golub, G. H.; Meurant, G. (1985). "Block Preconditioning for the Conjugate Gradient Method". SIAM Journal on Scientific and Statistical Computing. 6 (1): 220–252. doi:10.1137/0906018.
10. Golub, Gene H.; Ye, Qiang (1999). "Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration". SIAM Journal on Scientific Computing. 21 (4): 1305. doi:10.1137/S1064827597323415.
11. Notay, Yvan (2000). "Flexible Conjugate Gradients". SIAM Journal on Scientific Computing. 22 (4): 1444–1460. doi:10.1137/S1064827599362314.
12. Bouwmeester, Henricus; Dougherty, Andrew; Knyazev, Andrew V. (2015). "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods". Procedia Computer Science. 51: 276–285. doi:10.1016/j.procs.2015.05.241.
13. Knyazev, Andrew V.; Lashuk, Ilya (2008). "Steepest Descent and Conjugate Gradient Methods with Variable Preconditioning". SIAM Journal on Matrix Analysis and Applications. 29 (4): 1267. doi:10.1137/060675290. S2CID 17614913.
14. Ross, I. M. (2019). "An Optimal Control Theory for Accelerated Optimization". arXiv:1902.09004.
• Atkinson, Kendell A. (1988). "Section 8.9". An Introduction to Numerical Analysis (2nd ed.). John Wiley and Sons. ISBN 978-0-471-50023-0.
• Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 978-0-486-43227-4.
• Golub, Gene H.; Van Loan, Charles F. (2013). "Chapter 11". Matrix Computations (4th ed.). Johns Hopkins University Press. ISBN 978-1-4214-0794-4.
• Saad, Yousef (2003-04-01). Iterative Methods for Sparse Linear Systems (2nd ed.). SIAM. ISBN 978-0-89871-534-7.