The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving systems of linear equations. It was first discovered by the Polish mathematician Stefan Kaczmarz, [1] and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). [2] ART includes the positivity constraint, making it nonlinear. [3]
The Kaczmarz method is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method. [4]
It has many applications, ranging from computed tomography (CT) to signal processing. It can also be obtained by applying the method of successive projections onto convex sets (POCS) to the hyperplanes described by the linear system. [5] [6]
Let $Ax=b$ be a system of linear equations, let $m$ be the number of rows of $A$, let $a_{i}$ be the $i$-th row of the complex-valued matrix $A$, and let $x^{0}$ be an arbitrary complex-valued initial approximation to the solution of $Ax=b$. For $k=0,1,\ldots$ compute:
$$x^{k+1}=x^{k}+\frac{b_{i}-\langle a_{i},x^{k}\rangle}{\|a_{i}\|^{2}}\,\overline{a_{i}} \qquad (1)$$
where $i=k\bmod m+1$, $b_{i}$ is the $i$-th component of the vector $b$, and $\overline{a_{i}}$ denotes the complex conjugate of $a_{i}$.
If the system is consistent, $x^{k}$ converges to the minimum-norm solution, provided that the iterations start with the zero vector.
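A minimal sketch of iteration (1) in Python, written for the real-valued case so that the complex conjugation can be dropped; the function name and the sweep count are illustrative choices, not part of the original method description:

import numpy as np

def kaczmarz(A, b, x0=None, sweeps=100):
    """Cyclic Kaczmarz iteration for a real-valued system A x = b."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(m):                            # sweep the rows: i = k mod m
            a = A[i, :]
            x = x + (b[i] - a @ x) / (a @ a) * a      # project onto the i-th hyperplane
    return x

# Example: a small consistent system; starting from zero yields the minimum-norm solution.
A = np.array([[1.0, 2.0], [3.0, 1.0], [4.0, 3.0]])
b = A @ np.array([1.0, -1.0])
print(kaczmarz(A, b))                                 # approaches [1, -1]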
A more general algorithm can be defined using a relaxation parameter $\lambda^{k}$:
$$x^{k+1}=x^{k}+\lambda^{k}\,\frac{b_{i}-\langle a_{i},x^{k}\rangle}{\|a_{i}\|^{2}}\,\overline{a_{i}}.$$
There are versions of the method that converge to a regularized weighted least squares solution when applied to a system of inconsistent equations and, at least as far as initial behavior is concerned, at a lesser cost than other iterative methods, such as the conjugate gradient method. [7]
In 2009, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Thomas Strohmer and Roman Vershynin, [8] in which the $i$-th equation is selected randomly with probability proportional to $\|a_{i}\|^{2}$.
This method can be seen as a particular case of stochastic gradient descent. [9]
Under such circumstances $x_{k}$ converges exponentially fast to the solution $x$ of $Ax=b$, and the rate of convergence depends only on the scaled condition number $\kappa(A)$.
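A sketch of the randomized variant in Python (real-valued case; the iteration count and seed below are arbitrary illustration choices):

import numpy as np

def randomized_kaczmarz(A, b, iters=1000, rng=None):
    """Randomized Kaczmarz: row i is sampled with probability ||a_i||^2 / ||A||_F^2."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    row_norms_sq = np.sum(A ** 2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)                    # sample a row index
        a = A[i, :]
        x = x + (b[i] - a @ x) / row_norms_sq[i] * a  # project onto the sampled hyperplane
    return x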
We have
$$\sum_{j=1}^{m}\left|\langle z,a_{j}\rangle\right|^{2}\geq\frac{\|z\|^{2}}{\|A^{-1}\|^{2}}\qquad\text{for all }z\in\mathbb{C}^{n}. \qquad (2)$$
Using $\|A\|^{2}=\sum_{j=1}^{m}\|a_{j}\|^{2}$ we can write (2) as
$$\sum_{j=1}^{m}\frac{\|a_{j}\|^{2}}{\|A\|^{2}}\left|\left\langle z,\frac{a_{j}}{\|a_{j}\|}\right\rangle\right|^{2}\geq\kappa(A)^{-2}\,\|z\|^{2}\qquad\text{for all }z\in\mathbb{C}^{n}, \qquad (3)$$
where $\kappa(A)=\|A\|\,\|A^{-1}\|$ is the scaled condition number.
The main point of the proof is to view the left hand side in (3) as an expectation of some random variable. Namely, recall that the solution space of the $j$-th equation of $Ax=b$ is the hyperplane
$$\{y:\langle y,a_{j}\rangle=b_{j}\},$$
whose normal is $\tfrac{a_{j}}{\|a_{j}\|}$. Define a random vector $Z$ whose values are the normals to all the equations of $Ax=b$, with probabilities as in our algorithm:
$$Z=\frac{a_{j}}{\|a_{j}\|}\quad\text{with probability}\quad\frac{\|a_{j}\|^{2}}{\|A\|^{2}},\qquad j=1,\ldots,m.$$
Then (3) says that
$$\mathbb{E}\,|\langle z,Z\rangle|^{2}\geq\kappa(A)^{-2}\,\|z\|^{2}\qquad\text{for all }z\in\mathbb{C}^{n}. \qquad (4)$$
The orthogonal projection $P$ onto the solution space of a random equation of $Ax=b$ is given by
$$Pz=z-\langle z-x,Z\rangle\,Z.$$
Now we are ready to analyze our algorithm. We want to show that the error $\|x_{k}-x\|^{2}$ reduces at each step on average (conditioned on the previous steps) by at least the factor of $(1-\kappa(A)^{-2})$. The next approximation $x_{k}$ is computed from $x_{k-1}$ as $x_{k}=P_{k}x_{k-1}$, where $P_{1},P_{2},\ldots$ are independent realizations of the random projection $P$. The vector $x_{k-1}-x_{k}$ is in the kernel of $P_{k}$. It is orthogonal to the solution space of the equation onto which $P_{k}$ projects, which contains the vector $x_{k}-x$ (recall that $x$ is the solution to all equations). The orthogonality of these two vectors then yields
$$\|x_{k}-x\|^{2}=\|x_{k-1}-x\|^{2}-\|x_{k-1}-x_{k}\|^{2}.$$
To complete the proof, we have to bound $\|x_{k-1}-x_{k}\|^{2}$ from below. By the definition of $x_{k}$, we have
$$\|x_{k-1}-x_{k}\|=\left|\langle x_{k-1}-x,Z_{k}\rangle\right|,$$
where $Z_{1},Z_{2},\ldots$ are independent realizations of the random vector $Z$.
Thus
$$\|x_{k}-x\|^{2}\leq\left(1-\left|\left\langle\frac{x_{k-1}-x}{\|x_{k-1}-x\|},Z_{k}\right\rangle\right|^{2}\right)\|x_{k-1}-x\|^{2}.$$
Now we take the expectation of both sides conditional upon the choice of the random vectors $Z_{1},\ldots,Z_{k-1}$ (hence we fix the choice of the random projections $P_{1},\ldots,P_{k-1}$ and thus the random vectors $x_{1},\ldots,x_{k-1}$, and we average over the random vector $Z_{k}$). Then
$$\mathbb{E}\left[\|x_{k}-x\|^{2}\mid Z_{1},\ldots,Z_{k-1}\right]\leq\left(1-\mathbb{E}\left[\left|\left\langle\frac{x_{k-1}-x}{\|x_{k-1}-x\|},Z_{k}\right\rangle\right|^{2}\,\Big|\,Z_{1},\ldots,Z_{k-1}\right]\right)\|x_{k-1}-x\|^{2}.$$
By (4) and the independence,
$$\mathbb{E}\left[\|x_{k}-x\|^{2}\mid Z_{1},\ldots,Z_{k-1}\right]\leq\left(1-\kappa(A)^{-2}\right)\|x_{k-1}-x\|^{2}.$$
Taking the full expectation of both sides, we conclude that
$$\mathbb{E}\,\|x_{k}-x\|^{2}\leq\left(1-\kappa(A)^{-2}\right)^{k}\,\|x_{0}-x\|^{2}.$$
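The bound can be checked numerically. In the sketch below (an illustration with an arbitrary random test problem), the scaled condition number of a full-column-rank $A$ is computed as $\kappa(A)=\|A\|_{F}/\sigma_{\min}(A)$, so the per-step contraction factor is $1-\kappa(A)^{-2}=1-\sigma_{\min}(A)^{2}/\|A\|_{F}^{2}$, and the averaged squared error of randomized Kaczmarz is compared against the resulting curve:

import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 10
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = A @ x_star                                    # consistent system with solution x_star

row_norms_sq = np.sum(A ** 2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()
factor = 1.0 - np.linalg.svd(A, compute_uv=False)[-1] ** 2 / row_norms_sq.sum()

steps, trials = 100, 500
mean_sq_err = np.zeros(steps + 1)
for _ in range(trials):                           # average ||x_k - x_star||^2 over many runs
    x = np.zeros(n)
    for k in range(steps + 1):
        mean_sq_err[k] += np.sum((x - x_star) ** 2) / trials
        i = rng.choice(m, p=probs)
        a = A[i, :]
        x = x + (b[i] - a @ x) / row_norms_sq[i] * a

bound = np.sum(x_star ** 2) * factor ** np.arange(steps + 1)
print(np.all(mean_sq_err <= 1.1 * bound))         # holds up to Monte Carlo noise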
The superiority of this selection was illustrated with the reconstruction of a bandlimited function from its nonuniformly spaced sampling values. However, it has been pointed out [10] that the reported success by Strohmer and Vershynin depends on the specific choices that were made there in translating the underlying problem, whose geometrical nature is to find a common point of a set of hyperplanes, into a system of algebraic equations. There will always be legitimate algebraic representations of the underlying problem for which the selection method in [8] will perform in an inferior manner. [8] [10] [11]
The Kaczmarz iteration (1) has a purely geometric interpretation: the algorithm successively projects the current iterate onto the hyperplane defined by the next equation. Hence, any scaling of the equations is irrelevant; it can also be seen from (1) that any (nonzero) scaling of the equations cancels out. Thus, in RK, one can use the weights $\|a_{i}\|^{2}$ or any other weights that may be relevant. Specifically, in the above-mentioned reconstruction example, the equations were chosen with probability proportional to the average distance of each sample point from its two nearest neighbors, a concept introduced by Feichtinger and Gröchenig. For additional progress on this topic, see [12] [13] and the references therein.
In 2015, Robert M. Gower and Peter Richtarik [14] developed a versatile randomized iterative method for solving a consistent system of linear equations which includes the randomized Kaczmarz algorithm as a special case. Other special cases include randomized coordinate descent, randomized Gaussian descent and the randomized Newton method. Block versions and versions with importance sampling of all these methods also arise as special cases. The method is shown to enjoy an exponential rate of decay in expectation (also known as linear convergence) under very mild conditions on the way randomness enters the algorithm. The Gower-Richtarik method is the first algorithm uncovering a "sibling" relationship between these methods, some of which had been proposed independently before, while many of them are new.
Interesting new insights about the randomized Kaczmarz method can be gained from the analysis of the Gower-Richtarik method.
The Gower-Richtarik method enjoys six seemingly different but equivalent formulations, shedding additional light on how to interpret it (and, as a consequence, how to interpret its many variants, including randomized Kaczmarz): sketch and project, constrain and approximate, random intersect, random linear solve, random update, and random fixed point.
We now describe some of these viewpoints. The method depends on two parameters: a positive definite matrix $B$, which gives rise to the weighted inner product $\langle x,y\rangle_{B}=x^{\top}By$ and the induced norm $\|x\|_{B}=\sqrt{\langle x,x\rangle_{B}}$, and a random matrix $S$ with as many rows as $A$.
Given the previous iterate $x^{k}$, the new point $x^{k+1}$ is computed by drawing a random matrix $S$ (in an i.i.d. fashion from some fixed distribution), and setting
$$x^{k+1}=\underset{x\in\mathbb{R}^{n}}{\arg\min}\;\|x-x^{k}\|_{B}\quad\text{subject to}\quad S^{\top}Ax=S^{\top}b.$$
That is, $x^{k+1}$ is obtained as the projection of $x^{k}$ onto the randomly sketched system $S^{\top}Ax=S^{\top}b$. The idea behind this method is to pick $S$ in such a way that a projection onto the sketched system is substantially simpler than the solution of the original system $Ax=b$. The randomized Kaczmarz method is obtained by picking $B$ to be the identity matrix and $S$ to be the $i$-th unit coordinate vector with probability $p_{i}=\|a_{i}\|^{2}/\|A\|_{F}^{2}$, where $a_{i}$ is the $i$-th row of $A$. Different choices of $B$ and $S$ lead to different variants of the method.
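A schematic Python implementation of the sketch-and-project step for the special case $B=I$ (the general case replaces the Euclidean projection by a $B$-weighted one); the function names and the sketching distribution below are illustrative, not part of the original formulation:

import numpy as np

def sketch_and_project(A, b, sample_S, iters=500, rng=None):
    """Sketch-and-project iteration with B = I:
    x <- x - A^T S (S^T A A^T S)^+ S^T (A x - b)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        S = sample_S(rng)                  # random sketching matrix with m rows
        As = S.T @ A                       # sketched system  As x = S^T b
        y = np.linalg.pinv(As @ As.T) @ (As @ x - S.T @ b)
        x = x - As.T @ y                   # project x onto {x : As x = S^T b}
    return x

# Randomized Kaczmarz corresponds to S being a single unit coordinate vector,
# drawn with probability ||a_i||^2 / ||A||_F^2.
def kaczmarz_sketch(A):
    p = np.sum(A ** 2, axis=1) / np.sum(A ** 2)
    return lambda rng: np.eye(A.shape[0])[:, [rng.choice(A.shape[0], p=p)]]

Calling sketch_and_project(A, b, kaczmarz_sketch(A)) reproduces randomized Kaczmarz iterates, while a sketch with several columns gives a block variant.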
A seemingly different but entirely equivalent formulation of the method (obtained via Lagrangian duality) is
$$x^{k+1}=\underset{x\in\mathbb{R}^{n}}{\arg\min}\;\|x-x^{*}\|_{B}\quad\text{subject to}\quad x=x^{k}+B^{-1}A^{\top}Sy,$$
where $y$ is also allowed to vary, and where $x^{*}$ is any solution of the system $Ax=b$. Hence, $x^{k+1}$ is obtained by first constraining the update to the linear subspace spanned by the columns of the random matrix $B^{-1}A^{\top}S$, i.e., to
$$\{h:h=B^{-1}A^{\top}Sy,\ y\ \text{arbitrary}\},$$
and then choosing the point from this subspace which best approximates $x^{*}$. This formulation may look surprising, as it seems impossible to perform the approximation step because $x^{*}$ is not known (after all, this is what we are trying to compute!). However, it is still possible to do this simply because $x^{k+1}$ computed this way is the same as $x^{k+1}$ computed via the sketch-and-project formulation, and $x^{*}$ does not appear there.
The update can also be written explicitly as
$$x^{k+1}=x^{k}-B^{-1}A^{\top}S\left(S^{\top}AB^{-1}A^{\top}S\right)^{+}S^{\top}\left(Ax^{k}-b\right),$$
where by $M^{+}$ we denote the Moore-Penrose pseudo-inverse of the matrix $M$. Hence, the method can be written in the form $x^{k+1}=x^{k}+h^{k}$, where $h^{k}$ is a random update vector.
Letting $M=S^{\top}AB^{-1}A^{\top}S$, it can be shown that the system $My=S^{\top}(Ax^{k}-b)$ always has a solution $y^{k}$, and that for all such solutions the vector $x^{k+1}=x^{k}-B^{-1}A^{\top}Sy^{k}$ is the same. Hence, it does not matter which of these solutions is chosen, and the method can also be written as $x^{k+1}=x^{k}-B^{-1}A^{\top}Sy^{k}$. The pseudo-inverse leads to just one particular solution. The role of the pseudo-inverse is twofold: it allows the method to be written in the explicit "random update" form above, and it makes the analysis simple through the final "random fixed point" formulation below.
If we subtract $x^{*}$ from both sides of the random update formula, denote
$$Z:=A^{\top}S\left(S^{\top}AB^{-1}A^{\top}S\right)^{+}S^{\top}A,$$
and use the fact that $Ax^{*}=b$, we arrive at the last formulation:
$$x^{k+1}-x^{*}=\left(I-B^{-1}Z\right)\left(x^{k}-x^{*}\right),$$
where $I$ is the identity matrix. The iteration matrix, $I-B^{-1}Z$, is random, whence the name "random fixed point" of this formulation.
By taking conditional expectations in the 6th formulation (conditional on $x^{k}$), we obtain
$$\mathbb{E}\left[x^{k+1}-x^{*}\mid x^{k}\right]=\left(I-B^{-1}\mathbb{E}[Z]\right)\left(x^{k}-x^{*}\right).$$
By taking expectation again, and using the tower property of expectations, we obtain
$$\mathbb{E}\left[x^{k+1}-x^{*}\right]=\left(I-B^{-1}\mathbb{E}[Z]\right)\mathbb{E}\left[x^{k}-x^{*}\right].$$
Gower and Richtarik [14] show that
$$\rho:=\left\|I-B^{-1}\mathbb{E}[Z]\right\|_{B}=\lambda_{\max}\!\left(I-B^{-1/2}\,\mathbb{E}[Z]\,B^{-1/2}\right)=1-\lambda_{\min}\!\left(B^{-1/2}\,\mathbb{E}[Z]\,B^{-1/2}\right),$$
where the matrix norm is defined by
$$\|M\|_{B}:=\max_{x\neq 0}\frac{\|Mx\|_{B}}{\|x\|_{B}}.$$
Moreover, without any assumptions on $S$ one has $0\leq\rho\leq 1$. By taking norms and unrolling the recurrence, we obtain
$$\left\|\mathbb{E}\left[x^{k}-x^{*}\right]\right\|_{B}\leq\rho^{k}\,\left\|x^{0}-x^{*}\right\|_{B}.$$
Remark. A sufficient condition for the expected residuals to converge to 0 is $\rho<1$. This can be achieved if $A$ has full column rank and under very mild conditions on $S$. Convergence of the method can also be established without the full column rank assumption in a different way. [15]
It is also possible to show a stronger result: the expected squared norms (rather than norms of expectations) converge at the same rate:
$$\mathbb{E}\left\|x^{k}-x^{*}\right\|_{B}^{2}\leq\rho^{k}\,\left\|x^{0}-x^{*}\right\|_{B}^{2}.$$
Remark. This second type of convergence is stronger due to the following identity, [14] which holds for any random vector $x$ and any fixed vector $x^{*}$:
$$\left\|\mathbb{E}\left[x-x^{*}\right]\right\|^{2}=\mathbb{E}\left\|x-x^{*}\right\|^{2}-\mathbb{E}\left\|x-\mathbb{E}[x]\right\|^{2}.$$
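This identity is the familiar bias-variance decomposition; a short derivation (for a norm induced by an inner product):
$$\mathbb{E}\left\|x-x^{*}\right\|^{2}=\mathbb{E}\left\|\left(x-\mathbb{E}[x]\right)+\left(\mathbb{E}[x]-x^{*}\right)\right\|^{2}=\mathbb{E}\left\|x-\mathbb{E}[x]\right\|^{2}+\left\|\mathbb{E}[x]-x^{*}\right\|^{2},$$
since the cross term $2\,\mathbb{E}\left\langle x-\mathbb{E}[x],\,\mathbb{E}[x]-x^{*}\right\rangle$ vanishes; rearranging gives the identity above. Because $\mathbb{E}\left\|x-\mathbb{E}[x]\right\|^{2}\geq 0$, a bound on the expected squared error immediately bounds the squared norm of the expected error.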
We have seen that the randomized Kaczmarz method appears as a special case of the Gower-Richtarik method for $B=I$ and $S=e_{i}$, the $i$-th unit coordinate vector chosen with probability $p_{i}=\|a_{i}\|^{2}/\|A\|_{F}^{2}$, where $a_{i}$ is the $i$-th row of $A$. It can be checked by direct calculation that
$$\mathbb{E}[Z]=\sum_{i=1}^{m}p_{i}\,\frac{a_{i}^{\top}a_{i}}{\|a_{i}\|^{2}}=\frac{A^{\top}A}{\|A\|_{F}^{2}},$$
so that the rate becomes $\rho=1-\lambda_{\min}\!\left(A^{\top}A\right)/\|A\|_{F}^{2}$, which coincides with the rate $1-\kappa(A)^{-2}$ from the Strohmer-Vershynin analysis above.
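A quick numerical sanity check of this calculation, as a sketch (the random test matrix and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 8))
m = A.shape[0]
row_norms_sq = np.sum(A ** 2, axis=1)
p = row_norms_sq / row_norms_sq.sum()

# E[Z] = sum_i p_i * a_i a_i^T / ||a_i||^2 for the randomized Kaczmarz sketch
EZ = sum(p[i] * np.outer(A[i], A[i]) / row_norms_sq[i] for i in range(m))
assert np.allclose(EZ, A.T @ A / row_norms_sq.sum())

rho = 1.0 - np.linalg.eigvalsh(EZ).min()          # rate 1 - lambda_min(A^T A) / ||A||_F^2
print(rho)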
Since the convergence of the (randomized) Kaczmarz method depends on a rate of convergence, the method may make slow progress on some practical problems. [10] To ensure finite termination of the method, Johannes Brust and Michael Saunders [16] have developed a process that generalizes the (randomized) Kaczmarz iteration and terminates in at most $m$ iterations to a solution for the consistent system $Ax=b$. The process is based on dimensionality reduction, or projections onto lower dimensional spaces, which is how it derives its name PLSS (Projected Linear Systems Solver). An iteration of PLSS-Kaczmarz can be regarded as the generalization
$$x^{k+1}=x^{k}+A_{1:k,:}^{\top}\left(A_{1:k,:}A_{1:k,:}^{\top}\right)^{+}\left(b_{1:k}-A_{1:k,:}x^{k}\right),$$
where $A_{1:k,:}$ is the selection of rows 1 to $k$ and all columns of $A$. A randomized version of the method uses $k$ non-repeated row indices at each iteration: $\{i_{1},\ldots,i_{k-1},i_{k}\}$, where each $i_{j}$ is in $\{1,\ldots,m\}$. The iteration converges to a solution when $k=m$. In particular, since $A_{1:m,:}=A$, it holds that
$$Ax^{m+1}=Ax^{m}+AA^{\top}\left(AA^{\top}\right)^{+}\left(b-Ax^{m}\right)=b,$$
and therefore $x^{m+1}$ is a solution to the linear system. The computation of iterates in PLSS-Kaczmarz can be simplified and organized effectively. The resulting algorithm only requires matrix-vector products and has a direct form:
algorithm PLSS-Kaczmarz is
    input: matrix A, right-hand side b
    output: solution x such that Ax = b
    x := 0, P := [0]
    for k in 1, 2, ..., m do
        a := A(ik, :)'                        // Select an index ik in 1, ..., m without resampling
        d := P' * a
        c1 := norm(a)
        c2 := norm(d)
        c3 := (b(ik) - x'*a) / ((c1 - c2)*(c1 + c2))
        p := c3 * (a - P*(P'*a))
        P := [P, p/norm(p)]                   // Append a normalized update
        x := x + p
    return x
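A runnable Python translation of the pseudocode above, offered as a sketch; it assumes the rows of A are linearly independent (so the denominator c1^2 - c2^2 stays nonzero) and the system is consistent:

import numpy as np

def plss_kaczmarz(A, b, rng=None):
    """PLSS-Kaczmarz: visits each row once; for a consistent system it ends with A x = b."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    x = np.zeros(n)
    P = np.zeros((n, 0))                      # orthonormal basis of previous update directions
    for ik in rng.permutation(m):             # row indices without resampling
        a = A[ik, :]
        d = P.T @ a
        c1, c2 = np.linalg.norm(a), np.linalg.norm(d)
        c3 = (b[ik] - x @ a) / ((c1 - c2) * (c1 + c2))
        p = c3 * (a - P @ d)
        P = np.column_stack([P, p / np.linalg.norm(p)])
        x = x + p
    return x

For a consistent system with linearly independent rows, np.allclose(A @ plss_kaczmarz(A, b), b) should hold after the m-th row has been processed.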