Jack Dongarra

Jack Dongarra FRS
Dongarra in 2022
Born: July 18, 1950, Chicago, Illinois, U.S.
Known for: EISPACK, LINPACK, BLAS, LAPACK, ScaLAPACK, [1] [2] Netlib, PVM, MPI, [3] NetSolve, [4] TOP500, ATLAS, [5] and PAPI [6]
Fields: Computer science, computational science, parallel computing
Institutions: University of Tennessee; University of New Mexico; Rice University; Argonne National Laboratory; Oak Ridge National Laboratory; University of Manchester
Thesis: Improving the Accuracy of Computed Matrix Eigenvalues (1980)
Doctoral advisor: Cleve Moler [7]
Website: netlib.org/utk/people/JackDongarra/

Jack Joseph Dongarra FRS [8] (born July 18, 1950) is an American computer scientist and mathematician. He is University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. [9] He is also a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. [10] He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). [11] Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. [12] He received the Turing Award in 2021.

Education

Dongarra received a BSc degree in mathematics from Chicago State University in 1972 and an MSc degree in computer science from the Illinois Institute of Technology in 1973. In 1980, he received a PhD in applied mathematics from the University of New Mexico under the supervision of Cleve Moler. [7]

Research and career

Dongarra worked at Argonne National Laboratory until 1989, where he became a senior scientist. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing, and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the Basic Linear Algebra Subprograms (BLAS), Linear Algebra Package (LAPACK), ScaLAPACK, [1] [2] Parallel Virtual Machine (PVM), Message Passing Interface (MPI), [3] NetSolve, [4] TOP500, Automatically Tuned Linear Algebra Software (ATLAS), [5] High-Performance Conjugate Gradient (HPCG), [13] [14] and Performance Application Programming Interface (PAPI). [6] These libraries excel in the accuracy of the underlying numerical algorithms and the reliability and performance of the software. [15] They benefit a very wide range of users through their incorporation into software including MATLAB, Maple, Wolfram Mathematica, GNU Octave, the R programming language, SciPy, and others. [15]

With Eric Grosse, Dongarra pioneered the distribution via email and the web of numerical open-source code collected in Netlib. He has published approximately 300 articles, papers, reports, and technical memoranda, and he is the co-author of several books. He holds appointments with Oak Ridge National Laboratory and the University of Manchester, where he has served as a Turing Fellow since 2007. [16] [17]

Awards and honors

In 2004, Dongarra was awarded the IEEE Sid Fernbach Award for his contributions in the application of high-performance computers using innovative approaches. [18] In 2008, he received the first IEEE Medal of Excellence in Scalable Computing. [19] In 2010, he was the first recipient of the SIAM Activity Group on Supercomputing Career Prize. [20] In 2011, he received the IEEE Computer Society Charles Babbage Award. [21] In 2013, he received the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high-performance computing. [22] In 2019, Dongarra received the SIAM/ACM Prize in Computational Science. [8] In 2020, he received the IEEE Computer Pioneer Award for leadership in the area of high-performance mathematical software. [19]

Dongarra was elected a Fellow of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), the Society for Industrial and Applied Mathematics (SIAM), and the Institute of Electrical and Electronics Engineers (IEEE), and a foreign member of both the Russian Academy of Sciences and the Royal Society (ForMemRS). [8] In 2001, he was elected a member of the US National Academy of Engineering for contributions to numerical software, parallel and distributed computation, and problem-solving environments. [23]

Dongarra received the 2021 Turing Award "for pioneering contributions to numerical algorithms and libraries that enabled high performance computational software to keep pace with exponential hardware improvements for over four decades". [24] His algorithms and software are widely regarded as having fueled the growth of high-performance computing and have had significant impact in many areas of computational science, from artificial intelligence to computer graphics. [17]

Related Research Articles

James H. Wilkinson

James Hardy Wilkinson FRS was a prominent figure in the field of numerical analysis, a field at the boundary of applied mathematics and computer science particularly useful to physics and engineering.

LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on supercomputers in the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures.

Netlib is a repository of software for scientific computing maintained by AT&T Bell Laboratories, the University of Tennessee, and Oak Ridge National Laboratory. Netlib comprises many separate programs and libraries. Most of the code is written in C and Fortran, with some programs in other languages.

LAPACK

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
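As an illustration (a sketch, not taken from this article's sources), these LAPACK routines are reachable from high-level environments such as SciPy, which the article notes incorporates the library. The example below factors a small matrix once and then solves a linear system, dispatching under the hood to LAPACK's dgetrf and dgetrs:

```python
# Sketch: solving Ax = b via SciPy's LAPACK-backed LU factorization.
# The matrix and right-hand side here are illustrative values only.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)       # factor A = P L U once (LAPACK dgetrf)...
x = lu_solve((lu, piv), b)   # ...then solve for any right-hand side (dgetrs)

# The computed x satisfies A @ x ≈ b to machine precision.
```

Factoring once and reusing the factorization for multiple right-hand sides is the typical usage pattern these routines are designed for.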

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
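For instance (a sketch assuming SciPy's low-level BLAS wrappers), a Level 3 matrix multiply can be invoked directly through dgemm, which computes alpha * A * B (optionally plus beta * C):

```python
# Sketch: calling the Level 3 BLAS routine dgemm through SciPy's wrappers.
# The input matrices are illustrative values only.
import numpy as np
from scipy.linalg.blas import dgemm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

C = dgemm(alpha=1.0, a=A, b=B)  # same result as A @ B, computed in BLAS
```

In practice, high-level operators such as NumPy's `@` delegate to an optimized BLAS implementation in exactly this way, which is where the performance benefits described above come from.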

Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems.

The ScaLAPACK library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
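The rounding behavior described above is easy to observe in any IEEE-754 environment. The short sketch below shows a naive accumulation of 0.1 drifting away from the exact answer, while a compensated summation (Python's math.fsum) returns the correctly rounded result:

```python
# Sketch: floating-point rounding error in a naive accumulation.
# 0.1 has no exact binary representation, so repeated addition drifts.
import math

naive = sum([0.1] * 10)        # accumulates a small rounding error
exact = math.fsum([0.1] * 10)  # compensated summation rounds correctly

# naive differs from 1.0 by a tiny amount; exact equals 1.0
```

Error-minimizing algorithm design of this kind, scaled up to matrix computations, is precisely what the field studies.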

Nicholas Higham (1961–2024)

Nicholas John Higham FRS was a British numerical analyst. He was Royal Society Research Professor and Richardson Professor of Applied Mathematics in the Department of Mathematics at the University of Manchester.

Bill Gropp

William Douglas Gropp is the director of the National Center for Supercomputing Applications (NCSA) and the Thomas M. Siebel Chair in the Department of Computer Science at the University of Illinois at Urbana–Champaign. He is also the founding Director of the Parallel Computing Institute. Gropp helped to create the Message Passing Interface, also known as MPI, and the Portable, Extensible Toolkit for Scientific Computation, also known as PETSc.

The Sidney Fernbach Award, established in 1992 by the IEEE Computer Society, honors the memory of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for the solution of large computational problems, who served as Division Chief for the Computation Division at Lawrence Livermore Laboratory from the late 1950s through the 1970s. A certificate and $2,000 are awarded for outstanding contributions in the application of high-performance computers using innovative approaches. The nomination deadline is July 1 each year.

The following tables provide a comparison of linear algebra software libraries, either specialized or general purpose libraries with significant linear algebra coverage.

The Ken Kennedy Award, established in 2009 by the Association for Computing Machinery and the IEEE Computer Society in memory of Ken Kennedy, is awarded annually and recognizes substantial contributions to programmability and productivity in computing, together with substantial community service or mentoring contributions. The award includes a $5,000 honorarium, and the recipient is announced at the ACM/IEEE Supercomputing Conference.

James Demmel

James Weldon Demmel Jr. is an American mathematician and computer scientist, the Dr. Richard Carl Dehmel Distinguished Professor of Mathematics and Computer Science at the University of California, Berkeley.

Parallel Basic Linear Algebra Subprograms (PBLAS) is an implementation of Level 2 and 3 BLAS intended for distributed memory architectures. It provides a computational backbone for ScaLAPACK, a parallel implementation of LAPACK. It depends on Level 1 sequential BLAS operations for local computation and BLACS for communication between nodes.

Alan Edelman

Alan Stuart Edelman is an American mathematician and computer scientist. He is a professor of applied mathematics at the Massachusetts Institute of Technology (MIT) and a Principal Investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), where he leads a group in applied computing. In 2004, he founded a business called Interactive Supercomputing, which was later acquired by Microsoft. Edelman is a fellow of the American Mathematical Society (AMS), the Society for Industrial and Applied Mathematics (SIAM), the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM) for his contributions in numerical linear algebra, computational science, parallel computing, and random matrix theory. He is one of the co-creators of the technical programming language Julia.

Inderjit Dhillon

Inderjit S. Dhillon is the Gottesman Family Centennial Professor of Computer Science and Mathematics at the University of Texas at Austin, where he is also the Director of the ICES Center for Big Data Analytics. His main research interests are in machine learning, data analysis, parallel computing, network analysis, linear algebra and optimization.

Michela Taufer is an Italian-American computer scientist and holds the Jack Dongarra Professorship in High Performance Computing within the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. She is an ACM Distinguished Scientist and an IEEE Senior Member. In 2021, together with a team at Lawrence Livermore National Laboratory, she earned an R&D 100 Award for the Flux workload management software framework in the Software/Services category.

Computing in Science & Engineering

Computing in Science & Engineering (CiSE) is a bimonthly technical magazine published by the IEEE Computer Society. It was founded in 1999 from the merger of two publications: Computational Science & Engineering (CS&E) and Computers in Physics (CIP), the first published by IEEE and the second by the American Institute of Physics (AIP). The founding editor-in-chief was George Cybenko, known for proving one of the first versions of the universal approximation theorem of neural networks.

References

  1. Choi, J.; Dongarra, J. J.; Pozo, R.; Walker, D. W. (1992). "ScaLAPACK: a scalable linear algebra library for distributed memory concurrent computers". Proceedings of the Fourth Symposium on the Frontiers of Massively Parallel Computation. p. 120. doi:10.1109/FMPC.1992.234898. ISBN 978-0-8186-2772-9. S2CID 15496519.
  2. "ScaLAPACK — Scalable Linear Algebra PACKage". Netlib.org. Retrieved December 1, 2022.
  3. Gabriel, E.; Fagg, G. E.; Bosilca, G.; Angskun, T.; Dongarra, J. J.; Squyres, J. M.; Sahay, V.; Kambadur, P.; Barrett, B.; Lumsdaine, A.; Castain, R. H.; Daniel, D. J.; Graham, R. L.; Woodall, T. S. (2004). "Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. Vol. 3241. p. 97. CiteSeerX 10.1.1.102.1555. doi:10.1007/978-3-540-30218-6_19. ISBN 978-3-540-23163-9.
  4. "NetSolve". Icl.cs.utk.edu. Retrieved July 14, 2012.
  5. Clint Whaley, R.; Petitet, A.; Dongarra, J. J. (2001). "Automated empirical optimizations of software and the ATLAS project". Parallel Computing. 27 (1–2): 3–35. CiteSeerX 10.1.1.35.2297. doi:10.1016/S0167-8191(00)00087-9.
  6. "PAPI". Icl.cs.utk.edu. Retrieved July 14, 2012.
  7. Jack J. Dongarra at the Mathematics Genealogy Project.
  8. "Jack Dongarra – Royal Society". Royalsociety.org. Retrieved April 23, 2019.
  9. "Min H. Kao Department of Electrical Engineering and Computer Science". Eecs.utk.edu. Retrieved December 1, 2022.
  10. "The History of Numerical Analysis and Scientific Computing". October 9, 2006. Archived from the original on October 9, 2006. Retrieved December 1, 2022.
  11. "Dr. Jack Dongarra — Hagler Institute for Advanced Study at Texas A&M University". Hias.tamu.edu. Archived from the original on September 21, 2017. Retrieved September 20, 2017.
  12. "Innovative Computing Laboratory – Academic Research in Enabling Technology and High Performance Computing". Icl.cs.utk.edu. Retrieved July 14, 2012.
  13. Hemsoth, Nicole (June 26, 2014). "New HPC Benchmark Delivers Promising Results". Hpcwire.com. Retrieved December 1, 2022.
  14. Dongarra, Jack; Heroux, Michael (June 2013). "Toward a New Metric for Ranking High Performance Computing Systems" (PDF). Sandia National Laboratory. Retrieved July 4, 2016.
  15. "News and events – Jack Dongarra elected as Foreign Member of the Royal Society – The University of Manchester – School of Mathematics". Maths.manchester.ac.uk. Retrieved April 23, 2019.
  16. "Professor Jack Dongarra: Turing Fellow". manchester.ac.uk. University of Manchester.
  17. "University of Tennessee's Jack Dongarra receives 2021 ACM A.M. Turing Award". awards.acm.org. Retrieved March 30, 2022.
  18. "Sidney Fernbach Award". IEEE Computer Society. April 3, 2018. Retrieved March 31, 2022.
  19. "Jack Dongarra Selected to Receive the 2020 IEEE Computer Society Computer Pioneer Award". IEEE Computer Society. January 14, 2020. Retrieved March 31, 2022.
  20. "SIAM Activity Group on Supercomputing Career Prize". SIAM. Retrieved March 31, 2022.
  21. "IEEE CS Charles Babbage Award". IEEE Computer Society. April 3, 2018. Retrieved March 31, 2021.
  22. "Jack Dongarra to Receive Ken Kennedy Award for Software Technologies". IEEE Computer Society. October 8, 2013. Retrieved March 31, 2022.
  23. "Dongarra Is Elected to NAE". SIAM: Society for Industrial and Applied Mathematics. May 9, 2001. Retrieved March 31, 2022.
  24. "University of Tennessee's Jack Dongarra receives 2021 ACM A.M. Turing Award". awards.acm.org. Archived from the original on May 5, 2022. Retrieved March 30, 2022.

Media related to Jack Dongarra at Wikimedia Commons