Henk van der Vorst

Hendrik "Henk" Albertus van der Vorst (born 5 May 1944, Venlo) [1] is a Dutch mathematician and Emeritus Professor of Numerical Analysis at Utrecht University. According to the Institute for Scientific Information (ISI), his paper [2] on the BiCGSTAB method was the most cited paper in the field of mathematics in the 1990s. [3] He has been a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 2002 [4] and is a member of the Netherlands Academy of Technology and Innovation. [5] In 2006 he was awarded a knighthood of the Order of the Netherlands Lion. [6] Van der Vorst is a Fellow of the Society for Industrial and Applied Mathematics (SIAM). [7]

His major contributions include preconditioned iterative methods, in particular the ICCG (incomplete Cholesky conjugate gradient) method, a preconditioned variant of the conjugate gradient method developed together with Koos Meijerink; [8] [9] the BiCGSTAB [2] and (together with Kees Vuik) GMRESR [10] Krylov subspace methods; and (together with Gerard Sleijpen) the Jacobi-Davidson method [11] for solving ordinary, generalized, and nonlinear eigenproblems. He has analyzed the convergence behavior of the conjugate gradient [12] and Lanczos methods. He has also developed a number of preconditioners for parallel computers, [13] including the truncated Neumann series preconditioner, incomplete twisted factorizations, and the incomplete factorization based on the so-called "vdv" ordering.

He is the author of the book Iterative Krylov Methods for Large Linear Systems [14] and one of the authors of the Templates projects for linear problems [15] and eigenproblems. [16]

Related Research Articles

In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones.
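
As an illustration of the idea, here is a minimal Python/NumPy sketch of one classical iterative method, the Jacobi iteration, in which each new approximation is computed from the previous one; the small test system is an arbitrary example:

```python
import numpy as np

def jacobi(A, b, x0, num_iters=100):
    """Jacobi iteration for A x = b; assumes A has a nonzero diagonal."""
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(num_iters):
        x = (b - R @ x) / D   # x_{n+1} is derived from x_n
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b, np.zeros(2)))         # approaches the exact solution A^{-1} b
```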

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It is the study of numerical methods that attempt to find approximate solutions to problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics, numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.

In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
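
The iteration can be stated compactly in code. The following is a minimal Python/NumPy sketch (illustrative, not production code): it builds the orthonormal basis Q by modified Gram-Schmidt and the small upper Hessenberg matrix H, whose eigenvalues (Ritz values) approximate eigenvalues of A:

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: A @ Q[:, :k] == Q @ H, with H upper Hessenberg."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]                     # expand the Krylov subspace
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:             # breakdown: invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

A = np.diag(np.arange(1.0, 101.0))          # toy matrix with known spectrum
Q, H = arnoldi(A, np.ones(100), 20)
ritz = np.sort(np.linalg.eigvals(H[:20, :20]).real)
print(ritz[-3:])                            # largest Ritz values approach 98, 99, 100
```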

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
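
A minimal Python/NumPy sketch of the iteration (assuming A is symmetric positive definite, without preconditioning) shows its simplicity: only one matrix-vector product is needed per step:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Basic CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # keep search directions A-conjugate
        rs = rs_new
    return x
```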

In numerical analysis, a multigrid method is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners.
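
A minimal two-grid Python/NumPy sketch for the 1D Poisson equation illustrates the structure (smoothing, coarse-grid correction, smoothing); the grid size and number of smoothing steps are arbitrary choices:

```python
import numpy as np

def weighted_jacobi(A, x, b, steps=3, omega=2.0 / 3.0):
    """Smoother: damps the short-wavelength error components."""
    D = np.diag(A)
    for _ in range(steps):
        x = x + omega * (b - A @ x) / D
    return x

def interpolation(n_fine):
    """Linear interpolation from the coarse grid (n_fine = 2*m + 1 points)."""
    m = (n_fine - 1) // 2
    P = np.zeros((n_fine, m))
    for j in range(m):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    return P

def two_grid(A, b, x, P):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = weighted_jacobi(A, x, b)           # pre-smoothing
    r = b - A @ x                          # fine-grid residual
    R = 0.5 * P.T                          # full-weighting restriction
    A_c = R @ A @ P                        # Galerkin coarse-grid operator
    e = np.linalg.solve(A_c, R @ r)        # long-wavelength error, solved coarsely
    x = x + P @ e                          # coarse-grid correction
    return weighted_jacobi(A, x, b)        # post-smoothing

n, h = 31, 1.0 / 32.0                      # 31 interior points on (0, 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b, x, P = np.ones(n), np.zeros(n), interpolation(n)
for _ in range(10):
    x = two_grid(A, b, x, P)
print(np.linalg.norm(b - A @ x))           # residual shrinks with each cycle
```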

In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.
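
A small usage sketch with SciPy's implementation; the tridiagonal test matrix is an arbitrary nonsymmetric example:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 1000
A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = gmres(A, b)                     # info == 0 signals convergence
print(info, np.linalg.norm(b - A @ x))    # small residual norm
```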

In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.
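
As a minimal sketch, the following preconditions SciPy's conjugate gradient with a simple Jacobi (diagonal) preconditioner; the matrix, with its widely varying diagonal, is an arbitrary example where diagonal scaling helps:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 500
main = np.linspace(2.0, 100.0, n)          # widely varying diagonal
off = -np.ones(n - 1)
A = diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)
M = LinearOperator((n, n), matvec=lambda v: v / main)  # applies M^{-1}
x, info = cg(A, b, M=M)                    # preconditioned conjugate gradient
print(info)                                # 0 on convergence
```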

In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products. Such methods can be preferable when the matrix is so big that storing and manipulating it would cost a lot of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including the conjugate gradient method and GMRES, as sketched below.
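
A minimal matrix-free sketch: the 1D Laplacian below is never stored as a matrix; the solver only sees a function that applies it to a vector:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def laplacian_matvec(x):
    """Apply the 1D Laplacian stencil [-1, 2, -1] without storing a matrix."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

A = LinearOperator((n, n), matvec=laplacian_matvec)
b = np.ones(n)
x, info = cg(A, b)     # CG needs only matrix-vector products
print(info)            # 0 on convergence
```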

In numerical linear algebra, an incomplete LU factorization of a matrix is a sparse approximation of the LU factorization often used as a preconditioner.
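
A small sketch using SciPy's incomplete LU as a preconditioner for GMRES; the matrix and drop tolerance are arbitrary examples:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 500
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
ilu = spilu(A, drop_tol=1e-4)                  # sparse approximate LU factors
M = LinearOperator((n, n), matvec=ilu.solve)   # applies (L U)^{-1}
b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info)                                    # 0 on convergence
```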

IML++, or the Iterative Methods Library, is a C++ library for solving linear systems of equations. It is said to be "templated" in the sense that the same source code works for dense, sparse, and distributed matrices.

In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form M(λ)x = 0, where x is a nonzero vector and M is a matrix-valued function that depends nonlinearly on the scalar λ.
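
A standard concrete instance, stated here for illustration, is the quadratic eigenvalue problem, in which M depends polynomially (degree two) on the eigenvalue:

```latex
\[
  M(\lambda)\,x \;=\; \bigl(\lambda^{2} A_{2} + \lambda A_{1} + A_{0}\bigr)\,x \;=\; 0,
  \qquad x \neq 0 .
\]
```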

Lis is a scalable parallel software library for solving discretized linear equations and eigenvalue problems that mainly arise in the numerical solution of partial differential equations by iterative methods. Although it is designed for parallel computers, the library can also be used without any awareness of parallel processing.

In numerical analysis, BDDC (balancing domain decomposition by constraints) is a domain decomposition method for solving large symmetric, positive definite systems of linear equations that arise from the finite element method. BDDC is used as a preconditioner to the conjugate gradient method. A specific version of BDDC is characterized by the choice of coarse degrees of freedom, which can be values at the corners of the subdomains, or averages over the edges or the faces of the interface between the subdomains. One application of the BDDC preconditioner then combines the solution of local problems on each subdomain with the solution of a global coarse problem with the coarse degrees of freedom as the unknowns. The local problems on different subdomains are completely independent of each other, so the method is suitable for parallel computing. With a proper choice of the coarse degrees of freedom (corners in 2D, corners plus edges or corners plus faces in 3D) and with regular subdomain shapes, the condition number of the method is bounded when increasing the number of subdomains, and it grows only very slowly with the number of elements per subdomain. Thus the number of iterations is bounded in the same way, and the method scales well with the problem size and the number of subdomains.

In numerical linear algebra, the Chebyshev iteration is an iterative method for determining the solutions of a system of linear equations. The method is named after Russian mathematician Pafnuty Chebyshev.
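
A minimal Python/NumPy sketch, assuming bounds lmin and lmax on the spectrum of a symmetric positive-definite A are known (here obtained from Gershgorin's theorem); the recurrence follows the standard formulation of the method and, notably, needs no inner products:

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, num_iters=60):
    """Chebyshev iteration; needs bounds 0 < lmin <= lmax on the spectrum."""
    d = (lmax + lmin) / 2.0                  # center of the spectral interval
    c = (lmax - lmin) / 2.0                  # half-width of the interval
    x = np.zeros_like(b)
    r = b - A @ x
    p, alpha = None, 0.0
    for i in range(num_iters):
        if i == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = b - A @ x
    return x

n = 200
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = chebyshev_iteration(A, b=np.ones(n), lmin=2.0, lmax=6.0)  # Gershgorin bounds
print(np.linalg.norm(np.ones(n) - A @ x))    # residual after 60 sweeps
```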

In numerical linear algebra, the biconjugate gradient stabilized method, often abbreviated as BiCGSTAB, is an iterative method developed by H. A. van der Vorst for the numerical solution of nonsymmetric linear systems. It is a variant of the biconjugate gradient method (BiCG) and has faster and smoother convergence than the original BiCG as well as other variants such as the conjugate gradient squared method (CGS). It is a Krylov subspace method. Unlike the original BiCG method, it doesn't require multiplication by the transpose of the system matrix.
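
A minimal Python/NumPy sketch of the standard (unpreconditioned) BiCGSTAB recurrence; note that only products with A itself appear, never with its transpose:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned BiCGSTAB for a nonsymmetric system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                    # fixed "shadow" residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)  # BiCG-like direction update
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v               # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)       # local minimal-residual (stabilizing) step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho = rho_new
    return x

n = 400
rng = np.random.default_rng(1)
A = 4.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # nonsymmetric
x = bicgstab(A, np.ones(n))
print(np.linalg.norm(np.ones(n) - A @ x))
```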

Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem A x = λ B x, where A is symmetric and B is symmetric positive definite.
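
A small usage sketch with SciPy's implementation; the diagonal test matrix and block size are arbitrary examples (here B is the identity, i.e., an ordinary eigenproblem):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n = 1000
A = diags([np.arange(1.0, n + 1.0)], [0], format="csr")   # known spectrum 1..n
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))            # block of 3 starting vectors
vals, vecs = lobpcg(A, X, largest=True, tol=1e-8, maxiter=500)
print(np.sort(vals))                       # approaches 998, 999, 1000
```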

SLEPc is a software library for the parallel computation of eigenvalues and eigenvectors of large, sparse matrices. It can be seen as a module of PETSc that provides solvers for different types of eigenproblems, including linear and nonlinear, as well as the SVD. Recent versions also include support for matrix functions. It uses the MPI standard for parallelization. Both real and complex arithmetic are supported, with single, double and quadruple precision.

The following is a timeline of numerical analysis after 1945, and deals with developments after the invention of the modern electronic computer, which began during the Second World War. For a fuller history of the subject before this period, see the timeline and history of mathematics.

In numerical mathematics, the Uzawa iteration is an algorithm for solving saddle point problems. It is named after Hirofumi Uzawa and was originally introduced in the context of concave programming.
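
A minimal Python/NumPy sketch for the saddle-point system [[A, Bᵀ], [B, 0]] [x, y] = [b, c], assuming A is symmetric positive definite and the step size omega is small enough for convergence; the tiny test problem is an arbitrary example:

```python
import numpy as np

def uzawa(A, B, b, c, omega=0.5, num_iters=500):
    """Uzawa iteration: inner solves with A, gradient steps on the multiplier y."""
    y = np.zeros(B.shape[0])
    x = np.zeros(B.shape[1])
    for _ in range(num_iters):
        x = np.linalg.solve(A, b - B.T @ y)   # solve for x given the current y
        y = y + omega * (B @ x - c)           # ascent step on the constraint
    return x, y

A = np.eye(2)
B = np.array([[1.0, 1.0]])
x, y = uzawa(A, B, b=np.array([1.0, 1.0]), c=np.array([0.0]))
print(x, y)    # x approaches [0, 0], y approaches [1]
```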

Truncated Newton methods, which originated in a paper by Ron Dembo and Trond Steihaug and are also known as Hessian-free optimization, are a family of optimization algorithms designed for optimizing non-linear functions with large numbers of independent variables. A truncated Newton method consists of repeated application of an iterative optimization algorithm to approximately solve Newton's equations, to determine an update to the function's parameters. The inner solver is truncated, i.e., run for only a limited number of iterations. It follows that, for truncated Newton methods to work, the inner solver needs to produce a good approximation in a finite number of iterations; the conjugate gradient method has been suggested and evaluated as a candidate inner loop. Another prerequisite is good preconditioning for the inner algorithm.
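
A small usage sketch with SciPy's Newton-CG optimizer, a truncated Newton method in which the inner conjugate gradient solver only needs Hessian-vector products; the convex test function is an arbitrary example:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                 # convex test function with minimizer x = 0
    return 0.5 * x @ x + 0.25 * np.sum(x**4)

def grad(x):
    return x + x**3

def hessp(x, v):          # Hessian-vector product; the Hessian is never formed
    return v + 3.0 * x**2 * v

x0 = np.full(100, 2.0)
res = minimize(f, x0, method="Newton-CG", jac=grad, hessp=hessp)
print(res.success, np.linalg.norm(res.x))   # norm of the solution is near 0
```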

References

  1. Prof.dr. H.A. van der Vorst at the Catalogus Professorum Academiæ Rheno-Traiectinæ
  2. H.A. van der Vorst (1992), "Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 13 (2): 631–644, doi:10.1137/0913035, hdl:10338.dmlcz/104566
  3. in-cites, September 2001
  4. "Henk van der Vorst" (in Dutch). Royal Netherlands Academy of Arts and Sciences. Retrieved 14 July 2015.
  5. Members of the Netherlands Academy of Technology and Innovation, archived from the original on 2011-07-24
  6. Jan Brandts; Bernd Fischer; Andy Wathen (December 2006), "Reflections on Sir Henk van der Vorst", SIAM News, 39 (10)
  7. "SIAM Fellows: Class of 2009". SIAM. Retrieved 2009-12-18.
  8. J.A. Meijerink; H.A. van der Vorst (1977), "An Iterative Solution Method for Linear Systems of Which the Coefficient Matrix is a Symmetric M-Matrix", Math. Comp., 31 (137): 148–162, doi:10.2307/2005786, JSTOR 2005786
  9. H.A. van der Vorst (1981), "Iterative solution methods for certain sparse linear systems with a non-symmetric matrix arising from PDE-problems", J. Comput. Phys., 44 (1): 1–19, Bibcode:1981JCoPh..44....1V, doi:10.1016/0021-9991(81)90034-6
  10. H.A. van der Vorst; C. Vuik (1994), "GMRESR: A family of nested GMRES methods", Numer. Lin. Alg. Appl., 1 (4): 369–386, CiteSeerX 10.1.1.465.4477, doi:10.1002/nla.1680010404
  11. G.L.G. Sleijpen; H.A. van der Vorst (1996), "A Jacobi-Davidson iteration method for linear eigenvalue problems", SIAM J. Matrix Anal. Appl., 17 (2): 401–425, CiteSeerX 10.1.1.50.2569, doi:10.1137/S0895479894270427
  12. A. van der Sluis; H.A. van der Vorst (1986), "The rate of convergence of conjugate gradients", Numerische Mathematik, 48 (5): 543–560, doi:10.1007/BF01389450, S2CID 122190605
  13. H.A. van der Vorst (1989), "High performance preconditioning", SIAM J. Sci. Stat. Comput., 10 (6): 1174–1185, doi:10.1137/0910071
  14. H.A. van der Vorst (April 2003), Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, ISBN 978-0-521-81828-5
  15. Barrett, Richard; Berry, Michael W.; Chan, Tony F.; Demmel, James; Donato, June; Dongarra, Jack; Eijkhout, Victor; Pozo, Roldan; Romine, Charles; Vorst, Henk van der (January 1994), Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, ISBN 978-0-89871-328-2, retrieved 1 January 2008
  16. Bai, Zhaojun; Demmel, James; Dongarra, Jack; Ruhe, Axel; Vorst, Henk van der (January 2000), Templates for the Solution of Algebraic Eigenvalue Problems: a Practical Guide, ISBN 978-0-89871-471-5, retrieved 1 January 2008