Richard Mattson

Richard Lewis Mattson (born May 29, 1935) [1] is an American computer scientist known for his pioneering work on using memory trace data to simulate the performance of the memory hierarchy. [2] He developed the stack distance profile and used it to model page misses in virtual memory systems as a function of the amount of real memory available. The same methods have more recently been applied to model the behavior of CPU caches at lower levels of the memory hierarchy, [3] [4] and of web caches for internet content. [5]
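
The idea behind the stack distance profile can be illustrated with a short single-pass simulation in the spirit of the stack-based technique Mattson introduced (a sketch for illustration only, not Mattson's original code): because an LRU cache of size C always holds a subset of the contents of any larger LRU cache, one pass over a reference trace yields a histogram of stack distances from which the miss count of every cache size can be read off.

```python
# A minimal sketch of a Mattson-style single-pass LRU simulation.
# The stack distance of a reference is the depth of the referenced
# item in an LRU stack (1-indexed); a cache of size C misses exactly
# on references whose stack distance exceeds C, plus cold misses.
from collections import Counter

def stack_distance_profile(trace):
    """Map each stack distance to its count; None marks cold (first) references."""
    stack = []                              # index 0 = most recently used
    profile = Counter()
    for addr in trace:
        if addr in stack:
            profile[stack.index(addr) + 1] += 1
            stack.remove(addr)
        else:
            profile[None] += 1              # never seen before: infinite distance
        stack.insert(0, addr)               # promote to top of the stack
    return profile

def misses(profile, cache_size):
    """Miss count for an LRU cache of the given size, from one profile."""
    return profile[None] + sum(count for dist, count in profile.items()
                               if dist is not None and dist > cache_size)

trace = ['a', 'b', 'c', 'a', 'b', 'd', 'a', 'c']
profile = stack_distance_profile(trace)
for size in (1, 2, 3, 4):
    print(f"LRU cache of size {size}: {misses(profile, size)} misses")
```

The list-based stack here costs O(n) per reference; computing stack distances efficiently at scale is the subject of reference [3].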

Mattson was born in Greeley, Colorado. [1] He graduated from the University of California, Berkeley in 1957, with honors in electrical engineering. [6] He became a student of Bernard Widrow at Stanford University, where he completed his doctorate in 1962 with a dissertation titled The Analysis and Synthesis of Adaptive Systems Which Use Networks of Threshold Elements. [7] He then joined the Stanford faculty himself before moving to IBM Research in 1965. [8] While at Stanford, he supervised two doctoral students, John Hopcroft and Yale Patt, both of whom became notable computer scientists in their own right, and he has many academic descendants through both of them. [7]

Related Research Articles

Supercomputer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

John L. Hennessy

John Leroy Hennessy is an American computer scientist, academician and businessman who serves as Chairman of Alphabet Inc. Hennessy is one of the founders of MIPS Computer Systems Inc. as well as Atheros, and served as the tenth President of Stanford University, stepping down in the summer of 2016. He was succeeded as president by Marc Tessier-Lavigne. Marc Andreessen called him "the godfather of Silicon Valley."

Cache only memory architecture (COMA) is a computer memory organization for use in multiprocessors in which the local memories at each node are used as cache. This is in contrast to using the local memories as actual main memory, as in NUMA organizations.

ASCI Red

ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help maintain the United States nuclear arsenal after the 1992 moratorium on nuclear testing.

AMD Am29000

The AMD Am29000, commonly shortened to 29k, is a family of 32-bit RISC microprocessors and microcontrollers developed and fabricated by Advanced Micro Devices (AMD). Based on the seminal Berkeley RISC, the 29k added a number of significant improvements. They were, for a time, the most popular RISC chips on the market, widely used in laser printers from a variety of manufacturers.

In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. Transactional memory systems provide high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems.
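
As a toy illustration only (real transactional memory systems, whether hardware or software, are considerably more sophisticated), a minimal software transactional memory can buffer writes, record a version-stamped read set, and validate that read set under a lock at commit time, retrying the transaction on conflict:

```python
# A toy software-transactional-memory sketch: writes are buffered and the
# read set is validated against per-variable version counters at commit.
# All names here are made up for illustration.
import threading

_lock = threading.Lock()
_versions = {}          # variable name -> version number
_values = {}            # variable name -> committed value

def atomic(txn):
    """Run txn(read, write) atomically, retrying on conflict."""
    while True:
        read_set, write_buf = {}, {}

        def read(name):
            if name in write_buf:           # read our own buffered write
                return write_buf[name]
            read_set[name] = _versions.get(name, 0)
            return _values.get(name)

        def write(name, value):
            write_buf[name] = value         # buffer; nothing visible yet

        result = txn(read, write)
        with _lock:
            # validate: nothing we read has been committed over since
            if all(_versions.get(n, 0) == v for n, v in read_set.items()):
                for name, value in write_buf.items():
                    _values[name] = value
                    _versions[name] = _versions.get(name, 0) + 1
                return result
        # conflict detected: retry the whole transaction

def transfer(read, write):
    write('a', (read('a') or 100) - 10)
    write('b', (read('b') or 0) + 10)

atomic(transfer)
print(_values)   # {'a': 90, 'b': 10}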

The International Symposium on Computer Architecture (ISCA) is an annual academic conference on computer architecture, generally viewed as the top-tier conference in the field. The Association for Computing Machinery's Special Interest Group on Computer Architecture and the Institute of Electrical and Electronics Engineers Computer Society are its technical sponsors.

Mark Horowitz

Mark A. Horowitz is an American electrical engineer, computer scientist, inventor, and entrepreneur who is the Yahoo! Founders Professor in the School of Engineering at Stanford University and holds a joint appointment in the Electrical Engineering and Computer Science departments. He is a co-founder of Rambus Inc., now a technology licensing company. Horowitz has authored over 700 published conference and research papers and is among the most highly cited computer architects of all time. He is a prolific inventor and holds 374 patents as of 2023.

Memory-level parallelism (MLP) is a term in computer architecture referring to the ability to have multiple memory operations pending at the same time, in particular cache misses or translation lookaside buffer (TLB) misses.
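
A back-of-envelope model (an illustration with assumed numbers, not a result from the literature) shows why MLP matters for performance: if misses overlap so that up to m are serviced concurrently, total memory stall time shrinks by roughly a factor of m.

```python
# Rough model: with perfect overlap, stall time divides by the MLP factor.
# The miss count and penalty below are assumed values for illustration.
def stall_cycles(misses, miss_penalty_cycles, mlp):
    """Approximate stall cycles when up to `mlp` misses overlap."""
    return misses * miss_penalty_cycles / mlp

for mlp in (1, 2, 4, 8):
    print(f"MLP = {mlp}: {stall_cycles(1_000, 200, mlp):10,.0f} stall cycles")
```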

Jacob K. White

Jacob K. White is the Cecil H. Green Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. He researches fast numerical algorithms for simulation, particularly the simulation of circuits. His work on the FASTCAP program for three-dimensional capacitance calculation and FASTHENRY, a program for three-dimensional inductance calculations, is highly cited. He has also done extensive work on steady-state simulation of analog and microwave circuits. White was a significant early contributor to the development of Spectre and SpectreRF.

The Berkeley IRAM project was a 1996–2004 research project in the Computer Science Division of the University of California, Berkeley which explored computer architecture enabled by the wide bandwidth between memory and processor made possible when both are designed on the same integrated circuit (chip). Since it was envisioned that such a chip would consist primarily of random-access memory (RAM), with a smaller part needed for the central processing unit (CPU), the research team used the term "Intelligent RAM" to describe a chip with this architecture. As with the J-Machine project at MIT, the primary objective of the research was to avoid the von Neumann bottleneck, which occurs when the connection between memory and CPU is a relatively narrow memory bus between separate integrated circuits.

Computer security compromised by hardware failure is a branch of computer security applied to hardware. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. Secret information can be retrieved in various ways; hardware can be misused or exploited, or can fail, in ways that allow data theft, and this branch of security catalogs the main types of attack that lead to it.

Tachyon (software)

Tachyon is a parallel/multiprocessor ray tracing software package. It is a parallel ray tracing library for use on distributed memory parallel computers, shared memory computers, and clusters of workstations. Tachyon implements rendering features such as ambient occlusion lighting, depth-of-field focal blur, shadows, reflections, and others. It was originally developed for the Intel iPSC/860 by John Stone for his M.S. thesis at the University of Missouri-Rolla. Tachyon subsequently became a more functional and complete ray tracing engine, and it is now incorporated into a number of other open source software packages such as VMD and SageMath. Tachyon is released under a permissive license.

Anna Karlin

Anna R. Karlin is an American computer scientist, the Microsoft Professor of Computer Science & Engineering at the University of Washington.

LIRS is a page replacement algorithm with improved performance over LRU and many other, newer replacement algorithms. This is achieved by using "reuse distance" as the locality metric for dynamically ranking accessed pages to make replacement decisions.
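
A small sketch (illustrative only, not the full LIRS algorithm) contrasts the two metrics: LRU ranks pages by recency alone, while LIRS ranks them by reuse distance, the number of distinct other pages touched between a page's last two accesses.

```python
# Illustrative computation of the two locality metrics LIRS distinguishes:
#   recency = references since a page's last access (LRU's ranking metric)
#   reuse   = distinct pages seen between the last two accesses (None = one access)
def locality_metrics(trace):
    last, prev = {}, {}
    for t, page in enumerate(trace):
        if page in last:
            prev[page] = last[page]        # remember the previous access time
        last[page] = t
    n = len(trace)
    recency = {p: n - 1 - t for p, t in last.items()}
    reuse = {p: (len(set(trace[prev[p] + 1:last[p]])) if p in prev else None)
             for p in last}
    return recency, reuse

trace = ['a', 'b', 'c', 'd', 'a', 'b', 'e', 'a', 'b']
recency, reuse = locality_metrics(trace)
print("recency:", recency)  # LRU ranks by this: 'e' looks warmer than 'c' or 'd'
print("reuse:  ", reuse)    # LIRS notes only 'a' and 'b' show short reuse distances
```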

Stanford DASH was a cache coherent multiprocessor developed in the late 1980s by a group led by Anoop Gupta, John L. Hennessy, Mark Horowitz, and Monica S. Lam at Stanford University. It was based on adding a pair of directory boards designed at Stanford to up to 16 SGI IRIS 4D Power Series machines and then cabling the systems in a mesh topology using a Stanford-modified version of the Torus Routing Chip. The boards designed at Stanford implemented a directory-based cache coherence protocol allowing Stanford DASH to support distributed shared memory for up to 64 processors. Stanford DASH was also notable for both supporting and helping to formalize weak memory consistency models, including release consistency. Because Stanford DASH was the first operational machine to include scalable cache coherence, it influenced subsequent computer science research as well as the commercially available SGI Origin 2000. Stanford DASH is included in the 25th anniversary retrospective of selected papers from the International Symposium on Computer Architecture and several computer science books, has been simulated by the University of Edinburgh, and is used as a case study in contemporary computer science classes.

Cache prefetching is a technique used by computer processors to boost execution performance by fetching instructions or data from their original storage in slower memory into a faster local memory before they are actually needed. Most modern computer processors have fast, local cache memory in which prefetched data is held until it is required. The source for the prefetch operation is usually main memory. Because of their design, accessing cache memories is typically much faster than accessing main memory, so prefetching data and then accessing it from the cache is often orders of magnitude faster than accessing it directly from main memory. Prefetching can be done with non-blocking cache control instructions.
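
A minimal trace-driven simulation (an illustration assuming a simple next-line policy, not the prefetcher of any particular processor) shows the effect on a sequential scan:

```python
# Next-line prefetching sketch: on every demand access to block b,
# block b+1 is also brought into the cache ahead of need.
def simulate(trace, cache_size, prefetch=True):
    cache, misses = [], 0                   # list ordered from MRU to LRU
    def touch(block, demand):
        nonlocal misses
        if block in cache:
            cache.remove(block)             # hit: re-promote below
        elif demand:
            misses += 1                     # only demand fetches count as misses
        cache.insert(0, block)
        if len(cache) > cache_size:
            cache.pop()                     # evict the LRU block
    for block in trace:
        touch(block, demand=True)
        if prefetch:
            touch(block + 1, demand=False)  # fetch the next block early
    return misses

scan = list(range(64))                      # sequential access pattern
print("no prefetch:", simulate(scan, 8, prefetch=False))  # 64 misses
print("next-line:  ", simulate(scan, 8, prefetch=True))   # 1 miss
```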

A power law is a mathematical relationship between two quantities in which one is directly proportional to some power of the other. The power law for cache misses was first established by C. K. Chow in his 1974 paper, supported by experimental data on hit ratios for stack processing published by Richard Mattson in 1971. The power law of cache misses can be used to narrow the range of candidate cache sizes to practical values, given a tolerable miss rate, as one of the early steps in designing the cache hierarchy for a uniprocessor system.
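
In symbols the law reads m(C) = m0 (C/C0)^(-alpha) for cache size C, where (C0, m0) is a measured baseline and the exponent alpha is workload-dependent. The sketch below inverts the law to estimate the cache size needed for a tolerable miss rate; the baseline values and alpha = 0.5 are assumed for illustration, not taken from the cited papers.

```python
# Cache-miss power law: m(C) = m0 * (C / C0) ** -alpha.
# Baseline (c0, m0) and alpha below are assumed, illustrative values.
def miss_rate(cache_size, m0=0.05, c0=32 * 1024, alpha=0.5):
    """Predicted miss rate for a cache of cache_size bytes."""
    return m0 * (cache_size / c0) ** -alpha

def size_for(target_miss, m0=0.05, c0=32 * 1024, alpha=0.5):
    """Smallest cache size predicted to meet target_miss (invert the law)."""
    return c0 * (m0 / target_miss) ** (1 / alpha)

print(f"miss rate at 256 KiB: {miss_rate(256 * 1024):.3f}")
print(f"size for 1% misses:   {size_for(0.01) / 1024:.0f} KiB")
```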

Babak Falsafi

Babak Falsafi is a computer scientist specializing in computer architecture and digital platform design. He is the founding director of EcoCloud at EPFL, an industrial/academic consortium investigating efficient and intelligent data-centric technologies. He is a professor in the School of Computer and Communication Sciences at EPFL. Prior to that he was a professor of electrical and computer engineering at Carnegie Mellon University, and an assistant professor of electrical and computer engineering at Purdue University. He holds bachelor's degrees in computer science and in electrical and computer engineering, earned with distinction from SUNY Buffalo, and a master's degree and PhD in computer science from the University of Wisconsin–Madison.

Trevor Mudge is a computer scientist, academic and researcher. He is the Bredt Family Chair of Computer Science and Engineering, and Professor of Electrical Engineering and Computer Science at the University of Michigan.

References

  1. American Men & Women of Science, Volume 5, Thomson/Gale, 2009, p. 285
  2. Eggers, S.J.; Lazowska, E.D.; Lin, Yi-Bing (1989), "Techniques for the Trace-Driven Simulation of Cache Performance", 1989 Winter Simulation Conference Proceedings, IEEE, pp. 1042–1046, doi:10.1109/wsc.1989.718790, ISBN 978-0-911801-58-3, S2CID 53234699
  3. Almási, George; Caşcaval, Cǎlin; Padua, David A. (June 2002), "Calculating Stack Distances Efficiently", Proceedings of the 2002 Workshop on Memory System Performance (MSP '02), SIGPLAN Notices, 38 (2S): 37–43, CiteSeerX 10.1.1.586.7474, doi:10.1145/773039.773043
  4. Conte, T.M.; Hirsch, M.A.; Hwu, W.W. (1998), "Combining trace sampling with single pass methods for efficient cache simulations", IEEE Transactions on Computers, vol. C-47, IEEE, pp. 714–719, doi:10.1109/12.689650
  5. Fonseca, R.; Almeida, V.; Crovella, M.; Abrahao, B. (2003), "On the intrinsic locality properties of Web reference streams", Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE INFOCOM 2003), vol. 1, IEEE, pp. 448–458, CiteSeerX 10.1.1.73.3153, doi:10.1109/infcom.2003.1208696, ISBN 978-0-7803-7752-3, S2CID 2688263
  6. The Ninety-Fourth Commencement, University of California, Berkeley, June 7, 1957, p. 141
  7. Richard Mattson at the Mathematics Genealogy Project
  8. "Authors", IBM Systems Journal, 9 (2): 159, 1970, doi:10.1147/sj.92.0159