Jack Joseph Dongarra FRS [8] (born July 18, 1950) is an American computer scientist and mathematician. He is a University Distinguished Professor Emeritus of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. [9] He is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. [10] He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). [11] Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. [12] He received the Turing Award in 2021.
Dongarra received a BSc degree in mathematics from Chicago State University in 1972 and an MSc degree in Computer Science from the Illinois Institute of Technology in 1973. In 1980, he received a PhD in Applied Mathematics from the University of New Mexico under the supervision of Cleve Moler. [7]
Dongarra worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing, and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the Basic Linear Algebra Subprograms (BLAS), Linear Algebra Package (LAPACK), ScaLAPACK, [1] [2] Parallel Virtual Machine (PVM), Message Passing Interface (MPI), [3] NetSolve, [4] TOP500, Automatically Tuned Linear Algebra Software (ATLAS), [5] High-Performance Conjugate Gradient (HPCG) [13] [14] and Performance Application Programming Interface (PAPI). [6] These libraries excel in the accuracy of the underlying numerical algorithms and the reliability and performance of the software. [15] They benefit a very wide range of users through their incorporation into software including MATLAB, Maple, Wolfram Mathematica, GNU Octave, the R programming language, SciPy, and others. [15]
With Eric Grosse, Dongarra pioneered the distribution, via email and later the web, of open-source numerical code collected in Netlib. He has published approximately 300 articles, papers, reports, and technical memoranda, and he is the co-author of several books. He holds appointments with Oak Ridge National Laboratory and the University of Manchester, where he has served as a Turing Fellow since 2007. [16] [17]
In 2004, Dongarra was awarded the IEEE Sid Fernbach Award for his contributions in the application of high-performance computers using innovative approaches. [18] In 2008, he was the recipient of the first IEEE Medal of Excellence in Scalable Computing. [19] In 2010, Dongarra was the first recipient of the SIAM Activity Group on Supercomputing Career Prize. [20] In 2011, he was the recipient of the IEEE Computer Society Charles Babbage Award. [21] In 2013, he was the recipient of the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high-performance computing. [22] In 2019, Dongarra received the SIAM/ACM Prize in Computational Science. [8] In 2020, he received the IEEE Computer Pioneer Award for leadership in the area of high-performance mathematical software. [19]
Dongarra was elected a Fellow of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), the Society for Industrial and Applied Mathematics (SIAM), and the Institute of Electrical and Electronics Engineers (IEEE), and a foreign member of both the Russian Academy of Sciences and the Royal Society (ForMemRS). [8] In 2001, he was elected a member of the US National Academy of Engineering for contributions to numerical software, parallel and distributed computation, and problem-solving environments. [23] In 2023, Dongarra was elected to the U.S. National Academy of Sciences in recognition of his distinguished and continuing achievements in original research in the field of high-performance computing. [24] In 2024, Dongarra received an Honorary Doctorate degree from the Department of Informatics, Ionian University. [25]
Dongarra received the 2021 Turing Award "for pioneering contributions to numerical algorithms and libraries that enabled high performance computational software to keep pace with exponential hardware improvements for over four decades." [26] His algorithms and software are regarded as having fueled the growth of high-performance computing and have had significant impacts in many areas of computational science, from artificial intelligence to computer graphics. [17]
James Hardy Wilkinson FRS was a prominent figure in the field of numerical analysis, a field at the boundary of applied mathematics and computer science particularly useful to physics and engineering.
LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on supercomputers in the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures.
Netlib is a repository of software for scientific computing maintained by AT&T Bell Laboratories, the University of Tennessee, and Oak Ridge National Laboratory. Netlib comprises many separate programs and libraries. Most of the code is written in C and Fortran, with some programs in other languages.
LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
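As an illustration of the workflow LAPACK embodies, SciPy (one of the downstream users named above) exposes these routines directly; the sketch below, which assumes NumPy and SciPy are installed, solves a linear system by the factor-then-solve pattern of LAPACK's dgetrf/dgetrs:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve  # wrappers around LAPACK's dgetrf/dgetrs

# Solve A x = b via LU factorization with partial pivoting, the
# standard LAPACK approach for general dense systems.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)        # dgetrf: compute P A = L U
x = lu_solve((lu, piv), b)    # dgetrs: two triangular solves

print(x)                      # [1. 2.]
print(np.allclose(A @ x, b))  # True
```

Factoring once and reusing the factors for many right-hand sides is precisely why LAPACK separates the factorization routine from the solve routine.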
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations typically take advantage of special floating-point hardware such as vector registers or SIMD instructions.
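The three BLAS levels (vector-vector, matrix-vector, matrix-matrix) can be illustrated through SciPy's low-level bindings to an optimized BLAS; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import blas  # low-level bindings to the underlying BLAS

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Level 1 (vector-vector): dot product and scaled addition y := a*x + y
print(blas.ddot(x, y))       # 32.0
z = blas.daxpy(x, y, a=2.0)  # [6., 9., 12.]

# Level 3 (matrix-matrix): C := alpha * A @ B via dgemm
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = blas.dgemm(alpha=1.0, a=A, b=B)
print(C)                     # [[19. 22.] [43. 50.]]
```

The `d` prefix in each routine name denotes double precision, following the BLAS naming convention (`s`, `d`, `c`, `z` for the four precision/complex variants).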
Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems.
The ScaLAPACK library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
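The two-dimensional block-cyclic decomposition mentioned above can be illustrated without any message-passing machinery; the sketch below (block sizes and grid shape are arbitrary illustrative values, and the function name is hypothetical) maps a global matrix entry to the process that owns it:

```python
# Map a global matrix index to its owner in a 2D block-cyclic layout,
# the data distribution ScaLAPACK assumes (illustrative values only).
def owner(i, j, mb, nb, p_rows, p_cols):
    """Return (row, col) of the process owning entry (i, j) when the
    matrix is split into mb-by-nb blocks dealt out cyclically over a
    p_rows-by-p_cols process grid."""
    return ((i // mb) % p_rows, (j // nb) % p_cols)

# An 8x8 matrix in 2x2 blocks over a 2x2 process grid:
print(owner(0, 0, 2, 2, 2, 2))  # (0, 0)
print(owner(0, 2, 2, 2, 2, 2))  # (0, 1) next block column, next process column
print(owner(4, 0, 2, 2, 2, 2))  # (0, 0) block rows wrap around cyclically
```

The cyclic wrap-around is what gives ScaLAPACK its load balance: as a factorization eliminates leading rows and columns, the remaining work stays spread across the whole process grid.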
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
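The rounding error described above is easy to observe; a minimal sketch, using NumPy and a deliberately ill-conditioned matrix chosen here for illustration:

```python
import numpy as np

# Floating-point arithmetic cannot represent most reals exactly,
# so even a simple sum carries rounding error:
print(0.1 + 0.2 == 0.3)   # False: the result differs in the last bits

# For matrix problems, conditioning amplifies such errors: solving
# A x = b with a nearly singular A magnifies small perturbations.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])
b = A @ np.array([1.0, 1.0])   # exact solution is [1, 1]
x = np.linalg.solve(A, b)

print(np.linalg.cond(A))       # very large condition number (order 1e12)
print(x)                       # near [1, 1], but accurate to far fewer digits
```

Error bounds of the form "relative error ≈ condition number × machine precision" are a central concern of the field, and a stable algorithm is one that does not make the error worse than the conditioning of the problem demands.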
Nicholas John Higham FRS was a British numerical analyst. He was Royal Society Research Professor and Richardson Professor of Applied Mathematics in the Department of Mathematics at the University of Manchester.
William Douglas Gropp is the director of the National Center for Supercomputing Applications (NCSA) and the Thomas M. Siebel Chair in the Department of Computer Science at the University of Illinois at Urbana–Champaign. He is also the founding Director of the Parallel Computing Institute. Gropp helped to create the Message Passing Interface, also known as MPI, and the Portable, Extensible Toolkit for Scientific Computation, also known as PETSc.
The Sidney Fernbach Award, established in 1992 by the IEEE Computer Society, honors the memory of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for the solution of large computational problems as Division Chief of the Computation Division at Lawrence Livermore Laboratory from the late 1950s through the 1970s. A certificate and $2,000 are awarded for outstanding contributions in the application of high-performance computers using innovative approaches. The nomination deadline is 1 July each year.
The following tables provide a comparison of linear algebra software libraries, either specialized or general purpose libraries with significant linear algebra coverage.
The Ken Kennedy Award, established in 2009 by the Association for Computing Machinery and the IEEE Computer Society in memory of Ken Kennedy, is awarded annually in recognition of substantial contributions to programmability and productivity in computing, together with substantial community service or mentoring contributions. The award includes a $5,000 honorarium, and the recipient is announced at the ACM/IEEE Supercomputing Conference.
James Weldon Demmel Jr. is an American mathematician and computer scientist, the Dr. Richard Carl Dehmel Distinguished Professor of Mathematics and Computer Science at the University of California, Berkeley.
Parallel Basic Linear Algebra Subprograms (PBLAS) is an implementation of Level 2 and 3 BLAS intended for distributed memory architectures. It provides a computational backbone for ScaLAPACK, a parallel implementation of LAPACK. It depends on Level 1 sequential BLAS operations for local computation and BLACS for communication between nodes.
Alan Stuart Edelman is an American mathematician and computer scientist. He is a professor of applied mathematics at the Massachusetts Institute of Technology (MIT) and a Principal Investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) where he leads a group in applied computing. In 2004, he founded a business called Interactive Supercomputing which was later acquired by Microsoft. Edelman is a fellow of American Mathematical Society (AMS), Society for Industrial and Applied Mathematics (SIAM), Institute of Electrical and Electronics Engineers (IEEE), and Association for Computing Machinery (ACM), for his contributions in numerical linear algebra, computational science, parallel computing, and random matrix theory. He is one of the creators of the technical programming language Julia.
Edmond Chow is a full professor in the School of Computational Science and Engineering of Georgia Institute of Technology. His main areas of research are in designing numerical methods for high-performance computing and applying these methods to solve large-scale scientific computing problems.
Michela Taufer is an Italian-American computer scientist and holds the Jack Dongarra Professorship in High Performance Computing within the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. She is an ACM Distinguished Scientist and an IEEE Senior Member. In 2021, together with a team at Lawrence Livermore National Laboratory, she earned an R&D 100 Award for the Flux workload management software framework in the Software/Services category.
Computing in Science & Engineering (CiSE) is a bimonthly technical magazine published by the IEEE Computer Society. It was founded in 1999 from the merger of two publications: Computational Science & Engineering (CS&E) and Computers in Physics (CIP), the first published by IEEE and the second by the American Institute of Physics (AIP). The founding editor-in-chief was George Cybenko, known for proving one of the first versions of the universal approximation theorem of neural networks.
Media related to Jack Dongarra at Wikimedia Commons