Yousef Saad | |
---|---|
Born | 1950 (age 73–74), Algiers, Algeria |
Nationality | Algerian |
Alma mater | University of Grenoble; University of Algiers |
Scientific career | |
Fields | Computer science |
Institutions | University of Minnesota |
Yousef Saad (born 1950 in Algiers, Algeria; originally from Boghni, Tizi Ouzou, Kabylia) is an I.T. Distinguished Professor of Computer Science in the Department of Computer Science and Engineering at the University of Minnesota. [1] He has held the William Norris Chair for Large-Scale Computing since January 2006. He is known for his contributions to matrix computations, including iterative methods for solving large sparse linear systems, eigenvalue problems, and parallel computing. He is listed as an ISI highly cited researcher in mathematics, [2] is the most cited author in the journal Numerical Linear Algebra with Applications, [3] [4] and is the author of the highly cited book Iterative Methods for Sparse Linear Systems. He is a SIAM Fellow (class of 2010) and a Fellow of the AAAS (2011).
In 2023, he won the John von Neumann Prize.
Saad received his B.S. degree in mathematics from the University of Algiers, Algeria, in 1970. He then joined the University of Grenoble for doctoral studies, obtaining a junior doctorate ('Doctorat de troisième cycle') in 1974 and a higher doctorate ('Doctorat d'État') in 1983. Over the course of his academic career he has held various positions, including Research Scientist in the Computer Science Department at Yale University (1981–1983), Associate Professor at the University of Tizi-Ouzou in Algeria (1983–1984), Research Scientist in the Computer Science Department at Yale University (1984–1986), and Associate Professor in the Mathematics Department at the University of Illinois at Urbana-Champaign (1986–1988). He also worked as a Senior Scientist at the Research Institute for Advanced Computer Science (RIACS) during 1988–1990. [1]
Saad joined the University of Minnesota as a Professor in the Department of Computer Science in 1990. He served as Head of the Department of Computer Science and Engineering from January 1997 to June 2000, and is currently the I.T. Distinguished Professor of Computer Science there.
Saad is the author of several influential books on linear algebra and matrix computations, including Iterative Methods for Sparse Linear Systems, and has also co-edited a number of article collections.
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics, numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is sometimes referred to as the sparsity of the matrix.
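The storage saving can be sketched with a minimal compressed sparse row (CSR) layout in pure Python; the 4×4 example matrix below is made up for illustration, and a practical code would use an optimized sparse-matrix library rather than this toy:

```python
# A minimal sketch of the compressed sparse row (CSR) idea in pure Python.
# The 4x4 example matrix is hypothetical.
dense = [
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
]

values, col_idx, row_ptr = [], [], [0]
for row in dense:
    for j, v in enumerate(row):
        if v != 0:
            values.append(v)   # non-zero entries, row by row
            col_idx.append(j)  # column of each stored entry
    row_ptr.append(len(values))  # where each row's entries end

# Only the 4 non-zeros are stored instead of all 16 entries.
print(values)   # [5, 8, 3, 6]
print(col_idx)  # [0, 1, 2, 1]
print(row_ptr)  # [0, 1, 2, 3, 4]

# Sparsity: fraction of zero-valued elements.
total = sum(len(r) for r in dense)
sparsity = (total - len(values)) / total
print(sparsity)  # 0.75
```

Here the number of non-zeros (4) equals the number of rows, matching the common criterion for calling the matrix sparse.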
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.
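A bare-bones sketch of the Arnoldi process in pure Python follows; the 3×3 test matrix is hypothetical, and a production implementation would use an optimized linear algebra library:

```python
import math

def matvec(A, v):
    # Dense matrix-vector product (lists of lists; illustration only).
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi(A, b, r):
    """Build an orthonormal basis Q of the order-r Krylov subspace of (A, b)
    and the entries of the associated upper Hessenberg matrix H."""
    norm_b = math.sqrt(dot(b, b))
    Q = [[x / norm_b for x in b]]            # q_1 = b / ||b||
    H = [[0.0] * r for _ in range(r + 1)]
    for k in range(r):
        w = matvec(A, Q[k])                  # expand the subspace with A q_k
        for j in range(k + 1):               # orthogonalize against q_1..q_{k+1}
            H[j][k] = dot(Q[j], w)
            w = [wi - H[j][k] * qi for wi, qi in zip(w, Q[j])]
        H[k + 1][k] = math.sqrt(dot(w, w))
        if H[k + 1][k] < 1e-12:              # invariant subspace found; stop early
            break
        Q.append([wi / H[k + 1][k] for wi in w])
    return Q, H

# Hypothetical small symmetric test matrix.
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 0.0, 0.0]
Q, H = arnoldi(A, b, 2)
print(round(dot(Q[0], Q[1]), 10))  # 0.0 -- the basis vectors are orthonormal
```

The eigenvalues of the small matrix H then approximate eigenvalues of A, which is what makes the method useful for large sparse problems.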
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is, K_r(A, b) = span{b, Ab, A^2 b, ..., A^(r-1) b}.
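The spanning vectors b, Ab, A^2 b, ... can be generated by repeated matrix-vector products; the 3×3 matrix and starting vector below are hypothetical:

```python
def matvec(A, v):
    # Dense matrix-vector product (lists of lists; illustration only).
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Hypothetical cyclic-permutation matrix and starting vector.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
b = [1, 0, 0]

# The order-r Krylov subspace is spanned by b, Ab, ..., A^(r-1) b.
r = 3
basis = [b]
for _ in range(r - 1):
    basis.append(matvec(A, basis[-1]))  # next power: A applied to the last vector
print(basis)  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

Note that these raw power images become nearly linearly dependent for larger r, which is why methods such as Arnoldi orthonormalize them instead of using them directly.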
Gene Howard Golub was an American numerical analyst who taught at Stanford University as Fletcher Jones Professor of Computer Science and held a courtesy appointment in electrical engineering.
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
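One classic illustration of this kind of error control is compensated (Kahan) summation, which reduces the rounding error of a naive running sum without changing what is computed; this is a generic sketch, not tied to any particular library:

```python
# Floating-point arithmetic cannot represent most decimal fractions exactly,
# so a naive running sum accumulates rounding error. Kahan (compensated)
# summation tracks the lost low-order bits and adds them back.
xs = [0.1] * 10

naive = 0.0
for x in xs:
    naive += x

def kahan_sum(values):
    s, c = 0.0, 0.0            # running sum and error compensation
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y        # recover the low-order bits lost in t
        s = t
    return s

print(naive == 1.0)            # False: rounding error accumulated
print(abs(kahan_sum(xs) - 1.0) <= abs(naive - 1.0))  # True: error no worse
```

The compensated version does more arithmetic per element but carries an error bound independent of the number of terms, a typical accuracy/efficiency trade-off in the field.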
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products. Such methods can be preferable when the matrix is so big that storing and manipulating it would cost a lot of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods, such as the conjugate gradient method, allow for a matrix-free implementation.
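As an illustration, the conjugate gradient sketch below touches the matrix only through a user-supplied matrix-vector product function; the 1-D Laplacian operator used as the example is hypothetical:

```python
import math

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A, given only a
    function computing A @ v -- the matrix itself is never stored."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual b - A x, with x = 0 initially
    p = list(r)                      # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)               # the only access to the matrix
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Hypothetical example: the tridiagonal 1-D Laplacian applied "matrix-free" --
# the stencil is evaluated on the fly, no matrix entries are ever stored.
n = 5
def laplacian(v):
    return [2 * v[i] - (v[i - 1] if i > 0 else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

x = conjugate_gradient(laplacian, [1.0] * n)
residual = max(abs(li - 1.0) for li in laplacian(x))
print(residual < 1e-8)  # True: the system is solved without forming the matrix
```

For a stencil like this, evaluating the operator on the fly costs O(n) work and O(1) extra storage, whereas even a sparse stored matrix would need O(n) memory for its entries and indices.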
In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.
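A minimal example is the Jacobi method, one of the simplest relaxation schemes: each equation is repeatedly solved for its own unknown using the previous iterate's values for the others. The diagonally dominant test system below is made up for illustration:

```python
def jacobi(A, b, iterations=50):
    """Jacobi relaxation: solve equation i for x[i], holding the other
    unknowns at their values from the previous sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Hypothetical diagonally dominant system, for which Jacobi converges;
# the exact solution is x = [1, 1, 1].
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)
print([round(v, 6) for v in x])  # [1.0, 1.0, 1.0]
```

Building the new sweep as a fresh list makes this the Jacobi variant; updating `x` in place during the sweep would instead give Gauss-Seidel, which typically converges faster.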
Hendrik "Henk" Albertus van der Vorst is a Dutch mathematician and Emeritus Professor of Numerical Analysis at Utrecht University. According to the Institute for Scientific Information (ISI), his paper on the BiCGSTAB method was the most cited paper in the field of mathematics in the 1990s. He has been a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 2002 and is a member of the Netherlands Academy of Technology and Innovation. In 2006 he was awarded a knighthood of the Order of the Netherlands Lion. Henk van der Vorst is a Fellow of the Society for Industrial and Applied Mathematics (SIAM).
Lis is a scalable parallel software library for solving discretized linear equations and eigenvalue problems that mainly arise from the numerical solution of partial differential equations using iterative methods. Although it is designed for parallel computers, the library can be used without explicit attention to parallel processing.
David M. Young Jr. was an American mathematician and computer scientist who was one of the pioneers in the field of modern numerical analysis/scientific computing.
Andrew Knyazev is an American mathematician. He graduated from the Faculty of Computational Mathematics and Cybernetics of Moscow State University under the supervision of Evgenii Georgievich D'yakonov in 1981 and obtained his PhD in Numerical Mathematics at the Russian Academy of Sciences under the supervision of Vyacheslav Ivanovich Lebedev in 1985. He worked at the Kurchatov Institute from 1981 to 1983, and then until 1992 at the Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences, headed by Gury Marchuk.
The following is a timeline of numerical analysis after 1945, and deals with developments after the invention of the modern electronic computer, which began during the Second World War. For a fuller history of the subject before this period, see timeline and history of mathematics.
Michael Alan Saunders is a New Zealand American numerical analyst and computer scientist. He is a research professor of Management Science and Engineering at Stanford University. Saunders is known for his contributions to numerical linear algebra and numerical optimization and has developed many widely used software packages, such as MINOS, NPSOL, and SNOPT.
Alan Stuart Edelman is an American mathematician and computer scientist. He is a professor of applied mathematics at the Massachusetts Institute of Technology (MIT) and a Principal Investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), where he leads a group in applied computing. In 2004, he founded a business called Interactive Supercomputing, which was later acquired by Microsoft. Edelman is a fellow of the American Mathematical Society (AMS), the Society for Industrial and Applied Mathematics (SIAM), the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM), for his contributions to numerical linear algebra, computational science, parallel computing, and random matrix theory. He is one of the creators of the technical programming language Julia.
Edmond Chow is a full professor in the School of Computational Science and Engineering of Georgia Institute of Technology. His main areas of research are in designing numerical methods for high-performance computing and applying these methods to solve large-scale scientific computing problems.
Validated numerics (also called rigorous computation, verified computation, reliable computation, or numerical verification) is the field of numerical analysis concerned with computations that carry mathematically rigorous error bounds. Interval arithmetic is used for the computations, and all results are represented by intervals. Validated numerics was used by Warwick Tucker to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems.
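The interval idea can be sketched with a toy Python class; a real validated-numerics library would also control the floating-point rounding mode so the enclosures remain rigorous, which this sketch omits:

```python
# A toy interval type illustrating the core idea of validated numerics:
# every result is an interval guaranteed (up to rounding, ignored here)
# to contain the true value.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add the endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes occur at some pair of endpoints.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
y = Interval(-3.0, 0.5)
print(x + y)  # [-2.0, 2.5]
print(x * y)  # [-6.0, 1.0]
```

Because y straddles zero, the product interval is wider than either factor, illustrating how interval enclosures can grow as a computation proceeds.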
Beresford Neill Parlett is an English applied mathematician, specializing in numerical analysis and scientific computation.
Françoise Chatelin was a French mathematician whose research interests included spectral theory, numerical analysis, scientific computing, and the Cayley–Dickson construction.