Michela Taufer

Born: 23 April 1971
Nationality: American
Alma mater: ETH Zurich (Swiss Federal Institute of Technology in Zürich)
Awards:
  • R&D 100 Award in the Software/Service Category (2021) [5]
  • IEEE Senior Member (2020) [1]
  • IBM Faculty Award (2019, 2021) [6]
  • ACM Distinguished Scientist (2015) [7]
  • ACM Senior Member (2014) [8]
Website: https://globalcomputing.group/about.html
Michela Taufer (born 23 April 1971) [9] is an Italian-American computer scientist who holds the Jack Dongarra Professorship in High Performance Computing in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. [10] She is an ACM Distinguished Scientist [7] and an IEEE Senior Member. [1] In 2021, together with a team at Lawrence Livermore National Laboratory, she earned an R&D 100 Award in the Software/Services category for the Flux workload management software framework.

Education

Taufer attended the University of Padua, where she obtained a Laurea in Computer Engineering in 1996. She went on to earn her Ph.D. in computer science at ETH Zurich (Swiss Federal Institute of Technology in Zürich) in 2002. [9] Her dissertation, titled Inverting Middleware: Performance Analysis of Layered Application Codes in High Performance Distributed Computing, was supervised by Thomas M. Stricker and Daniel A. Reed. [9]

Research

Her current research interests [11] include high-performance computing, [12] scientific applications, and their programmability on multi-core and many-core platforms. [13] She applies advances in computational and algorithmic solutions for high-performance computing technologies (e.g., volunteer computing, accelerators and GPUs, and in situ analytics workflows) [14] to multidisciplinary fields including molecular dynamics, [15] ecoinformatics, seismology, and biology.

References

  1. "Interview with 2019 Person to Watch Michela Taufer", by HPCwire Editorial Team, HPCwire; published 18 April 2019; retrieved 26 April 2020.
  2. T. Estrada, M. Taufer, and K. Reed, "Modeling Job Lifespan Delays in Volunteer Computing Projects", CCGRID '09: Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid; pp. 331–338 (2009).
  3. Julie Stewart, "Computer Scientist Wins NSF Grant", UDaily; published 11 December 2017; retrieved 26 April 2020.
  4. V. Stodden, M. McNutt, D.H. Bailey, E. Deelman, Y. Gil, B. Hanson, M.A. Heroux, J.P.A. Ioannidis, and M. Taufer, "Enhancing reproducibility for computational methods", Science; 354(6317), pp. 1240–1241 (2016).
  5. "R&D 100 Award Winners", R&D 100 Awards; published 2021.
  6. "IBM Faculty Awards Recipients", IBM; published 6 February 2019; retrieved 26 April 2020.
  7. "ACM's Distinguished Members Cited for Advances in Computing that Will Yield Real World Impact", by Jim Ormond; published 19 November 2015; retrieved 26 April 2020.
  8. "ACM-W – Supporting, Celebrating, and Advocating for Women in Computing"; published 24 August 2014; retrieved 26 April 2020.
  9. Michela Taufer (2002). Inverting Middleware: Performance Analysis of Layered Application Codes in High Performance Distributed Computing (PhD thesis). ETH Zurich (Swiss Federal Institute of Technology in Zürich); retrieved 26 April 2020.
  10. "Min H. Kao Department of Electrical Engineering & Computer Science – The University of Tennessee, Knoxville". Eecs.utk.edu; retrieved 26 April 2020.
  11. "Dr. Michela Taufer, research profile – personal details (GCLab)"; retrieved 27 April 2020.
  12. M. Taufer, C. An, A. Kerstens, and C.L. Brooks III, "Predictor@Home: A 'Protein Structure Prediction Supercomputer' Based on Global Computing", IEEE Transactions on Parallel and Distributed Systems; 17(8), pp. 786–796 (2006).
  13. M. Taufer, B. Mohr, and J.M. Kunkel (2016). High Performance Computing: ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS, June 19–23, 2016, Revised Selected Papers, Springer, Frankfurt, Germany; ISBN 9783319460796.
  14. M. Taufer, O. Padron, P. Saponaro, and S. Patel, "Improving numerical reproducibility and stability in large-scale numerical simulations on GPUs", 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS), Atlanta, GA; pp. 1–9 (2010).
  15. M. Taufer, D.P. Anderson, P. Cicotti, and C.L. Brooks, "Homogeneous Redundancy: a Technique to Ensure Integrity of Molecular Simulation Results Using Public Computing", 19th IEEE International Parallel and Distributed Processing Symposium (2005).