Thomas Sterling (computing)

Thomas Sterling


Thomas Sterling at LSU
Residence: Bloomington, IN
Occupation: Professor, Researcher
Employer: Indiana University, Caltech, Oak Ridge National Laboratory
Known for: Cluster computing, Beowulf clusters
Website: http://www.soic.indiana.edu/all-people/profile.html?profile_id=303

Thomas Sterling is Professor of Computer Science at Indiana University, a Faculty Associate at the California Institute of Technology, and a Distinguished Visiting Scientist at Oak Ridge National Laboratory. He received his PhD as a Hertz Fellow from MIT in 1984. He is probably best known as the father of Beowulf clusters, developed in collaboration with Don Becker, and for his research on petaflops computing architecture. Sterling is the co-author of six books, holds six patents, and was awarded the Gordon Bell Prize with collaborators in 1997. He is currently working on ParalleX, an advanced message-driven, split-transaction computing model for scalable, low-power, fault-tolerant operation. He is also developing an ultra-lightweight supervisor runtime kernel in support of MIND and other fine-grain architectures (such as Cell), along with the Agincourt parallel programming language, which targets high efficiency through intrinsics that support latency hiding and low-overhead synchronization on both conventional and innovative parallel computer architectures.
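ParalleX is described here only at a high level; as a rough illustration of what "message-driven, split-transaction" execution means, the toy sketch below advances computation only when messages arrive, and a send returns immediately instead of blocking for a reply. All class and method names are hypothetical, not actual ParalleX interfaces.

```python
from queue import Queue

# Toy message-driven executor: computation advances only when messages
# arrive, rather than via a sequential call stack. Illustration only --
# every name here is hypothetical, not the actual ParalleX interface.
class Scheduler:
    def __init__(self):
        self.mailbox = Queue()

    def run(self):
        while not self.mailbox.empty():
            target, message = self.mailbox.get()
            target.receive(message)

class Actor:
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def send(self, target, message):
        # A "split transaction": the sender enqueues work and returns
        # immediately instead of waiting for the result.
        self.scheduler.mailbox.put((target, message))

class Adder(Actor):
    def __init__(self, scheduler, reply_to=None):
        super().__init__(scheduler)
        self.total = 0
        self.reply_to = reply_to

    def receive(self, message):
        if message == "done":
            if self.reply_to:
                self.send(self.reply_to, self.total)
        else:
            self.total += message

class Sink(Actor):
    def receive(self, message):
        self.result = message

sched = Scheduler()
sink = Sink(sched)
adder = Adder(sched, reply_to=sink)
for n in (1, 2, 3):
    adder.send(adder, n)
adder.send(adder, "done")
sched.run()
print(sink.result)  # 6
```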

Indiana University

Indiana University (IU) is a multi-campus public university system in the state of Indiana, United States. Indiana University has a combined student body of more than 110,000 students, which includes approximately 46,000 students enrolled at the Indiana University Bloomington campus.

Oak Ridge National Laboratory

Oak Ridge National Laboratory (ORNL) is an American multiprogram science and technology national laboratory sponsored by the U.S. Department of Energy (DOE) and administered, managed, and operated by UT–Battelle as a federally funded research and development center (FFRDC) under a contract with the DOE. ORNL is the largest science and energy national laboratory in the Department of Energy system by size and by annual budget. ORNL is located in Oak Ridge, Tennessee, near Knoxville. ORNL's scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security.

Hertz Foundation

The Fannie and John Hertz Foundation is an American non-profit organization that awards fellowships to Ph.D. students in the applied physical, biological and engineering sciences. The fellowship provides $250,000 of support over five years. The goal is for Fellows to be financially independent and free from traditional restrictions of their academic departments in order to promote innovation in collaboration with leading professors in the field. Through a rigorous application and interview process, the Hertz Foundation seeks to identify young scientists and engineers with the potential to change the world for the better and supports their research endeavors from an early stage. Fellowship recipients pledge to make their skills available to the United States in times of national emergency.

The Center for Computation and Technology (CCT) is an interdisciplinary research center located on the campus of Louisiana State University in Baton Rouge, Louisiana.


Related Research Articles

Parallel computing

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
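The decomposition described above can be sketched in a few lines: a large sum is split into independent sub-ranges that a pool of workers evaluates simultaneously. A thread pool is used here for simplicity; CPU-bound Python code would typically use a process pool to sidestep the interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

# Task parallelism in miniature: split a large problem (summing a big
# range) into independent sub-problems and solve them at the same time.
def partial_sum(lo, hi):
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    # The last worker absorbs the remainder so the ranges cover 0..n.
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```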

Bio-inspired computing, short for biologically inspired computing, is a field of study that loosely knits together subfields related to the topics of connectionism, social behaviour and emergence. It is often closely related to the field of artificial intelligence, as many of its pursuits can be linked to machine learning. It relies heavily on the fields of biology, computer science and mathematics. Briefly put, it is the use of computers to model living phenomena, and simultaneously the study of life to improve the use of computers. Biologically inspired computing is a major subset of natural computation.

Jack Dennis, American computer scientist

Jack Bonnell Dennis is a computer scientist and Emeritus Professor of Computer Science and Engineering at MIT.

General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. In addition, even a single GPU-CPU framework provides advantages that multiple CPUs on their own do not offer due to the specialization in each chip.

Charles E. Leiserson, American computer scientist

Charles Eric Leiserson is a computer scientist, specializing in the theory of parallel computing and distributed computing, and particularly practical applications thereof. As part of this effort, he developed the Cilk multithreaded language. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, but nevertheless use cache near-optimally. He developed the Cilk language for multithreaded programming, which uses a provably good work-stealing algorithm for scheduling. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest, and Clifford Stein.
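The cache-oblivious idea mentioned above can be illustrated with a recursive matrix transpose: halving the larger dimension until the blocks are tiny makes the access pattern friendly to every level of the memory hierarchy, with no cache-size or line-length parameter anywhere. This is a sketch of the general technique, not Leiserson's own code.

```python
# Cache-oblivious transpose of src (r0..r1, c0..c1) into dst.
# No tuning parameters: the recursion itself produces blocked access.
def transpose(src, dst, r0, r1, c0, c1):
    if r1 - r0 <= 1 and c1 - c0 <= 1:
        if r1 > r0 and c1 > c0:
            dst[c0][r0] = src[r0][c0]
    elif r1 - r0 >= c1 - c0:
        mid = (r0 + r1) // 2        # split the taller dimension
        transpose(src, dst, r0, mid, c0, c1)
        transpose(src, dst, mid, r1, c0, c1)
    else:
        mid = (c0 + c1) // 2        # split the wider dimension
        transpose(src, dst, r0, r1, c0, mid)
        transpose(src, dst, r0, r1, mid, c1)

a = [[1, 2, 3], [4, 5, 6]]
t = [[0] * 2 for _ in range(3)]
transpose(a, t, 0, 2, 0, 3)
print(t)  # [[1, 4], [2, 5], [3, 6]]
```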

SIMD within a register (SWAR) is a technique for performing parallel operations on data contained in a processor register. SIMD stands for single instruction, multiple data.
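As a concrete illustration, the sketch below adds the four bytes packed into one 32-bit register-sized integer lane by lane. The high bit of each byte is masked out and folded back in with XOR, so a lane that overflows wraps within its own byte instead of carrying into its neighbor.

```python
H = 0x80808080          # high bit of each 8-bit lane
MASK = 0xFFFFFFFF       # model a 32-bit register in Python's big ints

def swar_add_bytes(a, b):
    # Add the low 7 bits of every lane at once, then XOR the high bits
    # back in; XOR acts as a carry-less add in the top bit position.
    low = ((a & ~H) + (b & ~H)) & MASK
    return low ^ ((a ^ b) & H)

def pack(bs):
    out = 0
    for i, v in enumerate(bs):
        out |= (v & 0xFF) << (8 * i)
    return out

def unpack(x):
    return [(x >> (8 * i)) & 0xFF for i in range(4)]

a = pack([250, 3, 100, 0])
b = pack([10, 4, 200, 255])
print(unpack(swar_add_bytes(a, b)))  # [4, 7, 44, 255] -- per-byte sums mod 256
```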

Andrew James Herbert, OBE, FREng is a British computer scientist, formerly Chairman of Microsoft Research for the Europe, Middle East and Africa region.

Kanianthra Mani Chandy is the Simon Ramo Professor of Computer Science at the California Institute of Technology (Caltech). He has been the Executive Officer of the Computer Science Department twice, and he has been a professor at Caltech since 1989. He also served as Chair of the Division of Engineering and Applied Science at the California Institute of Technology.

Edmund M. Clarke, American computer scientist

Edmund Melson Clarke, Jr. is an American retired computer scientist and academic noted for developing model checking, a method for formally verifying hardware and software designs. He is the FORE Systems Professor of Computer Science at Carnegie Mellon University. Clarke, along with E. Allen Emerson and Joseph Sifakis, is a recipient of the 2007 Association for Computing Machinery A.M. Turing Award.

Alan L. Davis, American computer scientist

Alan "Al" Lynn Davis is an American computer scientist and researcher, a professor of computer science at the University of Utah, where he served as associate director of the School of Computing.

Uzi Vishkin is a computer scientist at the University of Maryland, College Park, where he is Professor of Electrical and Computer Engineering at the University of Maryland Institute for Advanced Computer Studies (UMIACS). Uzi Vishkin is known for his work in the field of parallel computing. In 1996, he was inducted as a Fellow of the Association for Computing Machinery, with the following citation: "One of the pioneers of parallel algorithms research, Dr. Vishkin's seminal contributions played a leading role in forming and shaping what thinking in parallel has come to mean in the fundamental theory of Computer Science."

Edward S. Davidson is a professor emeritus in Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor.

Manycore processors are specialist multi-core processors designed for a high degree of parallel processing, containing a large number of simpler, independent processor cores. Manycore processors are used extensively in embedded computers and high-performance computing. As of November 2018, the world's third fastest supercomputer, the Chinese Sunway TaihuLight, obtains its performance from 40,960 SW26010 manycore processors, each containing 256 cores.

Péter Kacsuk, Hungarian computer scientist

Péter Kacsuk is a Hungarian computer scientist at MTA-SZTAKI, Budapest, Hungary.

In computing, massively parallel refers to the use of a large number of processors to perform a set of coordinated computations in parallel (simultaneously).

High Performance ParalleX (HPX) is an environment for high performance computing. It is currently under active development by the Stellar group at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces with increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.
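The contrast with global barriers can be sketched with ordinary Python futures (a toy analogy, not the real HPX C++ API): each task proceeds as soon as the specific values it depends on are ready, while unrelated tasks keep running instead of all waiting at one synchronization point.

```python
import asyncio

# Futures instead of barriers: `consume` starts as soon as its own two
# inputs resolve; the slower, unrelated task `c` never holds it up.
async def produce(delay, value):
    await asyncio.sleep(delay)
    return value

async def consume(fut_a, fut_b):
    a, b = await asyncio.gather(fut_a, fut_b)   # waits only on a and b
    return a + b

async def main():
    a = asyncio.create_task(produce(0.01, 1))
    b = asyncio.create_task(produce(0.02, 2))
    c = asyncio.create_task(produce(0.03, 3))   # slowest task
    ab = asyncio.create_task(consume(a, b))     # does not wait for c
    return await asyncio.gather(ab, c)

print(asyncio.run(main()))  # [3, 3]
```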

Guang R. Gao is a computer scientist and a Professor of Electrical and Computer Engineering at the University of Delaware. Gao is a founder and Chief Scientist of ETI.

Geoffrey C. Fox, computer scientist and physicist

Geoffrey Charles Fox is a British-born American theoretical physicist and computer scientist. He received a Ph.D. in Theoretical Physics from Cambridge University in 1967 and is now a Distinguished Professor of Informatics and Computing, and Physics at Indiana University, where he is director of the Digital Science Center and Associate Dean for Research and Graduate Studies at the School of Informatics and Computing. He previously held positions at Caltech, Syracuse University and Florida State University. He has supervised the Ph.D. theses of 65 students and published over 1200 publications in physics and computer science according to Google Scholar, including his book Parallel Computing Works! He currently works in applying computer science to bioinformatics, defense, earthquake and ice-sheet science, particle physics and chemical informatics. He is principal investigator of FutureGrid, a new cyberinfrastructure test bed to enable development of new approaches to scientific computing. He is involved in several projects to enhance the capabilities of Minority Serving Institutions.

Alexander L. Wolf, computer scientist

Alexander L. Wolf is a Computer Scientist known for his research in software engineering, distributed systems, and computer networking. He is credited, along with his many collaborators, with introducing the modern study of software architecture, content-based publish/subscribe messaging, content-based networking, automated process discovery, and the software deployment lifecycle. Wolf's 1985 Ph.D. dissertation developed language features for expressing a module's import/export specifications and the notion of multiple interfaces for a type, both of which are now common in modern computer programming languages.