Beresford Parlett | |
---|---|
Born | July 4, 1932, London, England
Alma mater | University of Oxford (B.A.); Stanford University (Ph.D.)
Scientific career | |
Fields | Numerical analysis
Institutions | University of California, Berkeley
Thesis | I. Bundles of Matrices and the Linear Independence of Their Minors; II. Applications of Laguerre's Method to the Matrix Eigenvalue Problem [1] (1962)
Doctoral advisor | George Forsythe
Doctoral students | Inderjit Dhillon, Anne Greenbaum
Beresford Neill Parlett (born 1932) is an English applied mathematician, specializing in numerical analysis and scientific computation. [2]
Parlett received his bachelor's degree in mathematics from the University of Oxford in 1955 and then worked in his father's timber business for three years. From 1958 to 1962 he was a graduate student in mathematics at Stanford University, where he received his Ph.D. in 1962. He spent two years as a postdoc at the Courant Institute in Manhattan and one year at the Stevens Institute of Technology. From 1965 until his retirement, he was a faculty member of the mathematics department at the University of California, Berkeley, where he served for some years as chair of the department of computer science, director of the Center for Pure and Applied Mathematics, and professor in the department of electrical engineering and computer science. He was a visiting professor at the University of Toronto, Pierre and Marie Curie University (Paris VI), and the University of Oxford. [3]
Parlett is the author of many influential papers on the numerical solution of eigenvalue problems, the QR algorithm, the Lanczos algorithm, symmetric indefinite systems, and sparse matrix computations. [3]
In linear algebra, a tridiagonal matrix is a band matrix whose nonzero elements lie only on the main diagonal, the subdiagonal (the first diagonal below it), and the superdiagonal (the first diagonal above it).
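As an illustration (a small numerical sketch, not taken from the article), a tridiagonal matrix can be assembled from its three diagonals with NumPy; the entry values here are arbitrary:

```python
import numpy as np

# Build a 4x4 tridiagonal matrix from its main diagonal, subdiagonal (k=-1),
# and superdiagonal (k=1). All entries off these three bands are zero.
T = np.diag([1, 2, 3, 4]) \
  + np.diag([5, 6, 7], k=-1) \
  + np.diag([8, 9, 10], k=1)

print(T)
# [[ 1  8  0  0]
#  [ 5  2  9  0]
#  [ 0  6  3 10]
#  [ 0  0  7  4]]
```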
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
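The factor-then-reverse-multiply idea can be sketched in a few lines. This is a minimal unshifted iteration for illustration only (assuming a real symmetric input so the eigenvalues are real); practical implementations add shifts, deflation, and a preliminary reduction to Hessenberg form:

```python
import numpy as np

def qr_iteration(A, steps=200):
    # Unshifted QR iteration: factor A = QR, then form RQ, and repeat.
    # Each A_{k+1} = R Q = Q^T A_k Q is similar to A_k, so eigenvalues
    # are preserved while the iterates approach (block) triangular form.
    A = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(A)   # QR decomposition
        A = R @ Q                # multiply the factors in reverse order
    return np.sort(np.diag(A))   # diagonal converges to the eigenvalues

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(qr_iteration(A))  # eigenvalues of A are 1 and 3
```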
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
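A minimal sketch of the Arnoldi process (without the breakdown handling a robust implementation needs): it only touches A through matrix-vector products, which is what makes it attractive for large sparse matrices:

```python
import numpy as np

def arnoldi(A, b, k):
    # Build an orthonormal basis Q of the Krylov subspace
    # span{b, Ab, ..., A^(k-1) b} and the (k+1) x k Hessenberg matrix H
    # satisfying A Q[:, :k] = Q H.
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]                 # only access to A: a mat-vec
        for i in range(j + 1):          # modified Gram-Schmidt step
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

The eigenvalues of the small matrix H[:k, :k] (the Ritz values) approximate eigenvalues of A.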
Cornelius (Cornel) Lanczos was a Hungarian-Jewish, Hungarian-American and later Hungarian-Irish mathematician and physicist. According to György Marx he was one of The Martians.
In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for calculating the eigenvalues and eigenvectors of a real symmetric matrix. It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846, but it became widely used only in the 1950s with the advent of computers.
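A bare-bones sketch of the cyclic Jacobi method (a simplification for illustration; real implementations update entries in place and track the accumulated rotations to obtain eigenvectors): each plane rotation annihilates one off-diagonal entry, and repeated sweeps drive the matrix toward diagonal form.

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    # Cyclic Jacobi method for a real symmetric matrix: for each pair
    # (p, q), apply the plane rotation that zeros out A[p, q].
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-12:
                    continue
                # rotation angle chosen so the rotated A[p, q] vanishes
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p], J[q, q] = c, c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J      # similarity transform preserves eigenvalues
    return np.sort(np.diag(A))
```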
Gene Howard Golub was an American numerical analyst who taught at Stanford University as Fletcher Jones Professor of Computer Science and held a courtesy appointment in electrical engineering.
The following tables list the computational complexity of various algorithms for common mathematical operations.
Lloyd Nicholas Trefethen is an American mathematician, professor of numerical analysis and head of the Numerical Analysis Group at the Mathematical Institute, University of Oxford.
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
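The representation error mentioned above is easy to demonstrate. A classic illustrative case (not from the article): two algebraically equal expressions differ in floating-point arithmetic, so careful numerical code compares with a tolerance rather than testing exact equality:

```python
import math

# 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum is not exactly the stored value of 0.3.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004
print(a == 0.3)     # False

# Numerically careful code compares with a tolerance instead:
print(math.isclose(a, 0.3))  # True
```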
In numerical analysis, the balancing domain decomposition method (BDD) is an iterative method for solving a symmetric positive definite system of linear algebraic equations arising from the finite element method. In each iteration, it combines the solution of local problems on non-overlapping subdomains with a coarse problem created from the subdomain nullspaces. BDD requires only the solution of subdomain problems rather than access to the matrices of those problems, so it is applicable to situations where only the solution operators are available, such as in oil reservoir simulation by mixed finite elements. In its original formulation, BDD performs well only for 2nd-order problems, such as elasticity in 2D and 3D. For 4th-order problems, such as plate bending, it needs to be modified by adding to the coarse problem special basis functions that enforce continuity of the solution at subdomain corners, which, however, makes it more expensive. The BDDC method uses the same corner basis functions, but in an additive rather than multiplicative fashion. The dual counterpart to BDD is FETI, which enforces the equality of the solution between subdomains by Lagrange multipliers. The base versions of BDD and FETI are not mathematically equivalent, though a special version of FETI designed to be robust for hard problems has the same eigenvalues and thus essentially the same performance as BDD.
David M. Young Jr. was an American mathematician and computer scientist who was one of the pioneers in the field of modern numerical analysis/scientific computing.
Peter Wynn (1931–2017) was an English mathematician. His main achievements concern approximation theory, in particular the theory of Padé approximants, and its application in numerical methods for improving the rate of convergence of sequences of real numbers.
The following is a timeline of scientific computing, also known as computational science.
The following is a timeline of numerical analysis after 1945, and deals with developments after the invention of the modern electronic computer, which began during the Second World War. For a fuller history of the subject before this period, see the timeline and history of mathematics.
Alan Stuart Edelman is an American mathematician and computer scientist. He is a professor of applied mathematics at the Massachusetts Institute of Technology (MIT) and a Principal Investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), where he leads a group in applied computing. In 2004, he founded a business called Interactive Supercomputing, which was later acquired by Microsoft. Edelman is a fellow of the American Mathematical Society (AMS), the Society for Industrial and Applied Mathematics (SIAM), the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM), for his contributions to numerical linear algebra, computational science, parallel computing, and random matrix theory. He is one of the cocreators of the technical programming language Julia.
In mathematics, in particular linear algebra, the Bunch–Nielsen–Sorensen formula, named after James R. Bunch, Christopher P. Nielsen and Danny C. Sorensen, expresses the eigenvectors of the sum of a symmetric matrix and the outer product vvᵀ of a vector v with itself.
William B. Gragg (1936–2016) ended his career as an Emeritus Professor in the Department of Applied Mathematics at the Naval Postgraduate School. He made fundamental contributions to numerical analysis, particularly in numerical linear algebra and numerical methods for ordinary differential equations.
Hans Jörg Stetter is a German mathematician, specializing in numerical analysis.
Charles William Clenshaw was an English mathematician, specializing in numerical analysis. He is known for the Clenshaw algorithm (1955) and Clenshaw–Curtis quadrature (1960). In a 1984 paper Beyond Floating Point, Clenshaw and Frank W. J. Olver introduced symmetric level-index arithmetic.
A Bohemian matrix family is a set of matrices whose free entries come from a single discrete, usually finite, population P. Every entry of any matrix from a particular Bohemian matrix family is required to be an element of P; matrices with this property are called Bohemian matrices. Bohemian matrices may possess additional structure, for example being Toeplitz or upper Hessenberg. Usually only one Bohemian matrix family with a fixed population P is studied at a time, so one can classify any given matrix as Bohemian or not without significant ambiguity.