Olaf Storaasli

[Photo: Olaf O. Storaasli, Manchester, UK]
Born: May 15, 1943 (age 80)
Occupation: Computational Science & Engineering

Olaf O. Storaasli is a scientist & engineer who worked at NASA, Oak Ridge National Laboratory, Centrus Energy, & Synective Labs. At NASA, he led the hardware, software & applications teams that successfully developed one of NASA's first parallel computers, the Finite Element Machine, & he developed rapid matrix equation algorithms tailored for high-performance computers, harnessing FPGA & GPU accelerators to solve science & engineering applications. He was a graduate advisor & instructor at the University of Tennessee, George Washington University & Christopher Newport University.


Education

Storaasli received a B.A. in physics, mathematics, & French at Concordia College (1964). He went on to earn an M.A. in mathematics at the University of South Dakota (1966) and a Ph.D. in engineering mechanics at NCSU (1970). He was a postdoctoral fellow at the Norwegian University of Science & Technology (1984–85) & the University of Edinburgh (2008).

Research

He develops, tests, and documents parallel analysis software to speed matrix equation solution for simulating physical & biological behavior on advanced computer architectures. Examples include NASA's GPS solver, which built on the earlier Finite Element Machine, and the rapid parallel analysis of the Space Shuttle SRB redesign that earned Cray's first GigaFLOP Performance Award at Supercomputing '89.
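The core computation behind this work is the repeated solution of large matrix equations. The sketch below is a minimal illustration, assuming a simple 1-D bar stiffness matrix and a textbook conjugate gradient iteration in Python; it is not NASA's GPS solver or any of Storaasli's codes, but the dominant cost it exposes (the matrix-vector product) is exactly what parallel machines and FPGA/GPU accelerators are used to speed up.

```python
# Minimal sketch (illustrative only): solve a finite-element-style system K u = f
# with the conjugate gradient method. Function names are made up for this example.
import numpy as np

def assemble_stiffness(n):
    """Assemble a simple 1-D bar stiffness matrix (tridiagonal, symmetric positive definite)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def conjugate_gradient(K, f, tol=1e-10, max_iter=1000):
    """Solve K u = f for symmetric positive definite K."""
    u = np.zeros_like(f)
    r = f - K @ u            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Kp = K @ p           # matrix-vector product: the parallelizable kernel
        alpha = rs_old / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return u

K = assemble_stiffness(100)
f = np.ones(100)
u = conjugate_gradient(K, f)
print("residual norm:", np.linalg.norm(f - K @ u))
```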


Related Research Articles

Supercomputer

A supercomputer is a type of computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
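As a rough illustration of what a FLOPS figure means in practice, the short Python sketch below estimates a machine's floating-point rate by timing a dense matrix multiply, which costs about 2n^3 operations; this is only a back-of-the-envelope measurement, not the LINPACK benchmark used to rank the TOP500.

```python
# Back-of-the-envelope FLOPS estimate (illustrative, not the LINPACK/TOP500 methodology):
# an n x n matrix multiply performs roughly 2*n**3 floating-point operations.
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

flops = 2.0 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} gigaFLOPS ({flops:.2e} FLOPS)")
```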

David H. Bailey

David Harold Bailey is a mathematician and computer scientist. He received his B.S. in mathematics from Brigham Young University in 1972 and his Ph.D. in mathematics from Stanford University in 1976. He worked for 14 years as a computer scientist at NASA Ames Research Center, and then from 1998 to 2013 as a Senior Scientist at the Lawrence Berkeley National Laboratory. He is now retired from the Berkeley Lab.

Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with flexible hardware platforms like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to add custom computational blocks using FPGAs. On the other hand, the main difference from custom hardware, i.e., application-specific integrated circuits (ASICs), is the ability to adapt the hardware during runtime by "loading" a new circuit onto the reconfigurable fabric, thus providing new computational blocks without the need to manufacture and add new chips to the existing system.

Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world.

Jack Dongarra

Jack Joseph Dongarra is an American computer scientist and mathematician. He is the American University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. He received the Turing Award in 2021.

High-performance computing

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

Narendra Krishna Karmarkar is an Indian mathematician. Karmarkar developed Karmarkar's algorithm. He is listed as an ISI highly cited researcher.

NASA Advanced Supercomputing Division

The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field in the heart of Silicon Valley in Mountain View, California. It has been the major supercomputing and modeling and simulation resource for NASA missions in aerodynamics, space exploration, studies in weather patterns and ocean currents, and space shuttle and aircraft design and development for almost forty years.

Edinburgh Parallel Computing Centre

EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.

Finite Element Machine

The Finite Element Machine (FEM) was a late-1970s to early-1980s NASA project to build and evaluate the performance of a parallel computer for structural analysis. The FEM was completed and successfully tested at the NASA Langley Research Center in Hampton, Virginia. The motivation for FEM arose from the merger of two concepts: the finite element method of structural analysis and the introduction of relatively low-cost microprocessors.
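To make the "merger of two concepts" concrete, the sketch below shows, in Python, the finite element step involved: assembling a global stiffness matrix from per-element matrices. It is purely illustrative (the function names are made up for this sketch) and says nothing about the actual FEM hardware, which distributed such computations across an array of low-cost microprocessors.

```python
# Minimal sketch (illustrative, not the NASA FEM hardware or software):
# assemble a global stiffness matrix from per-element 1-D bar stiffness matrices.
import numpy as np

def element_stiffness(EA=1.0, L=1.0):
    """2x2 stiffness matrix of a single 1-D bar element."""
    k = EA / L
    return np.array([[ k, -k],
                     [-k,  k]])

def assemble_global(n_elements):
    """Sum element contributions into the global stiffness matrix."""
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):
        ke = element_stiffness()
        dofs = [e, e + 1]                 # the two nodes of element e
        for i, gi in enumerate(dofs):
            for j, gj in enumerate(dofs):
                K[gi, gj] += ke[i, j]
    return K

print(assemble_global(4))
```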

The United States Department of Defense High Performance Computing Modernization Program (HPCMP) was initiated in 1992 in response to Congressional direction to modernize the Department of Defense (DoD) laboratories’ high performance computing capabilities. The HPCMP provides supercomputers, a national research network, high-end software tools, a secure environment, and computational science experts that together enable the Defense laboratories and test centers to conduct research, development, test and technology evaluation activities.

Marc Snir is an Israeli-American computer scientist. He holds a Michael Faiman and Saburo Muroga Professorship in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He currently pursues research in parallel computing. He was the principal investigator (PI) for the software of the petascale Blue Waters system and co-director of the Intel and Microsoft-funded Universal Parallel Computing Research Center (UPCRC).

Charbel Farhat is the Vivian Church Hoff Professor of Aircraft Structures in the School of Engineering at Stanford University, where, from 2008 to 2023, he chaired the Department of Aeronautics and Astronautics. From 2022 to 2023, he chaired this department as the inaugural James and Anna Marie Spilker Chair of Aeronautics and Astronautics. He is also Professor in the Institute for Computational and Mathematical Engineering, and Director of the Stanford-King Abdulaziz City for Science and Technology Center of Excellence for Aeronautics and Astronautics. From 2017 to 2023, he served on the Space Technology Industry-Government-University Roundtable; from 2015 to 2019, he served on the United States Air Force Scientific Advisory Board (SAB); from 2008 to 2018, he served on the United States Bureau of Industry and Security's Emerging Technology and Research Advisory Committee (ETRAC) at the United States Department of Commerce; and from 2007 to 2018, he served as the Director of the Army High Performance Computing Research Center at Stanford University. He was designated by US Navy recruiters as a Primary Key-Influencer and flew with the Blue Angels during Fleet Week 2014.

The Sidney Fernbach Award, established in 1992 by the IEEE Computer Society, honors the memory of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for the solution of large computational problems, who served as Division Chief for the Computation Division at Lawrence Livermore Laboratory from the late 1950s through the 1970s. A certificate and $2,000 are awarded for outstanding contributions in the application of high-performance computers using innovative approaches. The nomination deadline is 1 July each year.

MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions using adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations.

Supercomputer architecture

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Supercomputing in Pakistan

The high-performance supercomputing program started in Pakistan in the mid-to-late 1980s. Supercomputing is a recent area of computer science in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. The indigenous supercomputer program began in the 1980s, when the deployment of Cray supercomputers was initially denied.

Horst D. Simon

Horst D. Simon is a computer scientist known for his contributions to high-performance computing (HPC) and computational science. He is director of ADIA Lab in Abu Dhabi, UAE and editor of TOP500.

The Center for Supercomputing Research and Development (CSRD) at the University of Illinois (UIUC) was a research center funded from 1984 to 1993. It built the shared memory Cedar computer system, which included four hardware multiprocessor clusters, as well as parallel system and applications software. It was distinguished from the four earlier UIUC Illiac systems by starting with commercial shared memory subsystems that were based on an earlier paper published by the CSRD founders. Thus CSRD was able to avoid many of the hardware design issues that slowed the Illiac series work. Over its 9 years of major funding, plus follow-on work by many of its participants, CSRD pioneered many of the shared memory architectural and software technologies upon which all 21st century computation is based.

References

1. Olaf Storaasli at the Mathematics Genealogy Project.
2. "State-of-the-Art in Heterogeneous Computing", Scientific Programming 18, pp. 1–33, IOS Press, 2010 (+PARA10). Archived 2016-05-06 at the Wayback Machine.
3. "High-Performance Mixed-Precision Linear Solver for FPGAs", IEEE Transactions on Computers 57(12), pp. 1614–1623, 2008.
4. "Accelerating Science Applications up to 100X with FPGAs", PARA08 Proceedings, Trondheim, Norway, May 2008.
5. "Computation Speed-up of Complex Durability Analysis of Large-Scale Composite Structures", AIAA 49th SDM Proceedings, 2008.
6. "Accelerating Genome Sequencing 100–1000X", MRSC Proceedings, Queen's University, Belfast, UK, April 1–3, 2008. Archived 2018-06-12 at the Wayback Machine.
7. "Exploring Accelerating Science Applications with FPGAs", NCSA/RSSI Proceedings, Urbana, IL, July 20, 2007.
8. "Performance Evaluation of FPGA-Based Biological Applications", Cray Users Group Proceedings, Seattle, May 2007.
9. "Sparse Matrix-Vector Multiplication Design on FPGAs", IEEE 15th Symposium on Field-Programmable Custom Computing Machines (FCCM) Proceedings, pp. 349–352, 2007.
10. "Computing at the Speed of Thought", Aerospace America, pp. 35–38, Oct. 2004.
11. "Preface: A Computational Scientist's Perspective on Appellate Technology", 15 J. App. Prac. & Process 39–46 (2014).
12. Interview with Astronaut Charlie Camarda.