Pittsburgh Supercomputing Center

The Pittsburgh Supercomputing Center (PSC) is a high performance computing and networking center founded in 1986 and one of the original five NSF Supercomputing Centers. [1] [2] PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania, United States. [2]

In addition to providing a family of Big Data-optimized supercomputers with unique shared memory architectures, PSC features the National Institutes of Health-sponsored National Resource for Biomedical Supercomputing, [3] an Advanced Networking Group that conducts research on network performance and analysis, [4] and a STEM education and outreach program supporting K-20 education. [5] In 2012, PSC established a new Public Health Applications Group to apply supercomputing resources to problems in preventing, monitoring, and responding to epidemics and other public health needs. [6]

Mission

The Pittsburgh Supercomputing Center provides university, government, and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data-handling and analysis available nationwide for unclassified research. [7] As a resource provider in the Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation's network of integrated advanced digital resources, PSC works with its XSEDE partners to harness the full range of information technologies to enable discovery in U.S. science and engineering. [8]

Partnerships

PSC is a leading partner in XSEDE. [8] PSC scientific co-director Ralph Roskies is a co-principal investigator of XSEDE and co-leads its Extended Collaborative Support Services. Other PSC staff lead XSEDE efforts in Networking, Incident Response, Systems & Software Engineering, Outreach, Allocations Coordination, and Novel & Innovative Projects. This NSF-funded program provides U.S. academic researchers with support for and access to leadership-class computing infrastructure. [7] [8]

The National Resource for Biomedical Supercomputing, sponsored by the National Institutes of Health, develops new algorithms, performs original research, and conducts training workshops, in addition to fostering collaborative projects and providing access to supercomputing resources to the national biomedical research community. [9]

In partnership with the DOE National Energy Technology Laboratory, Carnegie Mellon University, the University of Pittsburgh, West Virginia University, and Waynesburg College, PSC provides resources to the SuperComputing Science Consortium, a regional partnership to advance energy and environment technologies through the application of high performance computing and communications. [10]

Sponsors

Current high-performance computing capabilities

Related Research Articles

Supercomputer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
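
The unit arithmetic above can be checked with a short, purely illustrative Python sketch (the specific machine figures are the ones quoted in the paragraph, not new data):

```python
# Illustrative sketch of the FLOPS unit relationships described above.
GIGA = 10**9   # gigaFLOPS
TERA = 10**12  # teraFLOPS
PETA = 10**15  # petaFLOPS

# 100 petaFLOPS is a hundred quadrillion operations per second, i.e. 10^17 FLOPS.
supercomputer_flops = 100 * PETA
assert supercomputer_flops == 10**17

# A desktop sits roughly between hundreds of gigaFLOPS and tens of teraFLOPS.
desktop_high = 10 * TERA
ratio = supercomputer_flops // desktop_high
print(f"A 100 PFLOPS machine is {ratio:,}x a 10 TFLOPS desktop")  # → 10,000x
```

This is only a restatement of the SI prefixes involved; it makes the scale gap between a top-ranked system and a desktop concrete.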

Cornell University Center for Advanced Computing

The Cornell University Center for Advanced Computing (CAC), housed at Frank H. T. Rhodes Hall on the campus of Cornell University, is one of five original centers in the National Science Foundation's Supercomputer Centers Program. It was formerly called the Cornell Theory Center.

High-performance computing

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

MareNostrum

MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.

David Bader (computer scientist)

David A. Bader is a Distinguished Professor and Director of the Institute for Data Science at the New Jersey Institute of Technology. Previously, he served as the Chair of the Georgia Institute of Technology School of Computational Science & Engineering, where he was also a founding professor, and the executive director of High-Performance Computing at the Georgia Tech College of Computing. In 2007, he was named the first director of the Sony Toshiba IBM Center of Competence for the Cell Processor at Georgia Tech.

NASA Advanced Supercomputing Division

The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field in the heart of Silicon Valley in Mountain View, California. It has been the major supercomputing and modeling and simulation resource for NASA missions in aerodynamics, space exploration, studies in weather patterns and ocean currents, and space shuttle and aircraft design and development for almost forty years.

TeraGrid

TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.

The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the U.S. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable the research activities of faculty, staff, and students of UT Austin. TACC also provides consulting, technical documentation, and training to support researchers who use these resources. TACC staff members conduct research and development in applications and algorithms, computing systems design/architecture, and programming tools and environments.

Irish Centre for High-End Computing

The Irish Centre for High-End Computing (ICHEC) is the national high-performance computing centre in Ireland. It was established in 2005 and provides supercomputing resources, support, training and related services. ICHEC is involved in education and training, including providing courses for researchers.

University of Minnesota Supercomputing Institute

The Minnesota Supercomputing Institute (MSI) in Minneapolis, Minnesota is a core research facility of the University of Minnesota that provides hardware and software resources, as well as technical user support, to faculty and researchers at the university and at other institutions of higher education in Minnesota. MSI is located in Walter Library, on the university's Twin Cities campus.

Tsubame (supercomputer)

Tsubame is a series of supercomputers that operates at the GSIC Center at the Tokyo Institute of Technology in Japan, designed by Satoshi Matsuoka.

Supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Yellowstone (supercomputer)

Yellowstone was the inaugural supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. It was installed, tested, and readied for production in the summer of 2012. The Yellowstone supercomputing cluster was decommissioned on December 31, 2017, being replaced by its successor Cheyenne.

Appro

Appro was a developer of supercomputers for high-performance computing (HPC) markets, focused on medium- to large-scale deployments. Appro was based in Milpitas, California, with a computing center in Houston, Texas, and manufacturing and support subsidiaries in South Korea and Japan.

NCAR-Wyoming Supercomputing Center

The NCAR-Wyoming Supercomputing Center (NWSC) is a high-performance computing (HPC) and data archival facility located in Cheyenne, Wyoming, that provides advanced computing services to researchers in the Earth system sciences.

The Cheyenne supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming operated from 2017 to 2024 as one of the world's most powerful and energy-efficient computers. Top500 ranked it the 20th most powerful computer in the world in November 2016 and 160th in November 2023. The 5.34-petaflops system was capable of more than triple the amount of scientific computing performed by NCAR's previous supercomputer, Yellowstone. It was also three times more energy efficient than Yellowstone, with a peak computation rate of more than 3 billion calculations per second for every watt of energy consumed. It is currently up for auction.

Ilkay Altintas

Ilkay Altintas is a Turkish-American data and computer scientist, and researcher in the domain of supercomputing and high-performance computing applications. Since 2015, Altintas has served as chief data science officer of the San Diego Supercomputer Center (SDSC), at the University of California, San Diego (UCSD), where she has also served as founder and director of the Workflows for Data Science Center of Excellence (WorDS) since 2014, as well as founder and director of the WIFIRE lab. Altintas is also the co-initiator of the Kepler scientific workflow system, an open-source platform that endows research scientists with the ability to readily collaborate, share, and design scientific workflows.

Cerebras

Cerebras Systems Inc. is an American artificial intelligence company with offices in Sunnyvale and San Diego, Toronto, Tokyo and Bangalore, India. Cerebras builds computer systems for complex artificial intelligence deep learning applications.

JUWELS

JUWELS is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of the Forschungszentrum Jülich. It is capable of a theoretical peak of 70.980 petaflops and serves as the replacement for the now-retired JUQUEEN supercomputer. The JUWELS Booster Module was ranked the seventh fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The Booster Module is part of a modular system architecture; a second, Xeon-based JUWELS Cluster Module ranked separately as the 44th fastest supercomputer in the world on the same list.

Selene is a supercomputer developed by Nvidia, capable of achieving 63.460 petaflops, which ranked it the fifth fastest supercomputer in the world when it entered the TOP500 list. Selene is based on the Nvidia DGX SuperPod, a turnkey supercomputer solution that Nvidia provides using DGX hardware: tightly integrated DGX compute nodes built from AMD CPUs and Nvidia A100 GPUs, combined with fast storage and high-bandwidth Mellanox HDR networking. It aims to provide a turnkey solution for high-demand machine learning workloads. Selene was built in three months and is the fastest industrial system in the US, as well as the second most energy-efficient supercomputing system ever.

References

  1. Worlton, John. "Pittsburgh Supercomputing Center Celebrates Its 15th Birthday." Carnegie Mellon University. 6 June 2003. Retrieved 31 Mar. 2004. <http://www.cmu.edu/cmnews/extra/060615_psc.html>
  2. The Pennsylvania Center for the Book - PGH Supercomputing Center. Pabook.libraries.psu.edu. Retrieved on 2013-07-17.
  3. Stimulus funds bring supercomputer to Pittsburgh area - Pittsburgh Post-Gazette. Post-gazette.com (2010-05-05). Retrieved on 2013-07-17.
  4. "Archived copy" (PDF). Archived from the original (PDF) on 2013-11-25. Retrieved on 2013-04-17.
  5. PSC to Develop Pilot Program in Math and Science Teaching. HPCwire (2011-03-15). Retrieved on 2013-07-17.
  6. Shawn Brown to Direct New Public Health Group at PSC. Psc.edu (2013-02-05). Retrieved on 2013-07-17.
  7. Mission/History. Psc.edu (2013-07-12). Retrieved on 2013-07-17.
  8. UT Given $18 Million to Link Nation's Supercomputers. HPCwire (2011-07-26). Retrieved on 2013-07-17.
  9. CMU, PSC Awarded $9.3 Million for Bio Systems Modeling. HPCwire (2012-11-30). Retrieved on 2013-07-17.
  10. Pittsburgh Supercomputing Center teams up with WVU and DOE. Old.post-gazette.com (1999-09-01). Retrieved on 2013-07-17.
  11. PSC: Bridges-2 Overview. psc.edu. Retrieved on 2021-12-08.
  12. Bridges User Guide. portal.xsede.org. Retrieved on 2019-05-07.
  13. PSC: Bridges to Bridges-2 Transition. psc.edu. Retrieved on 2021-12-08.
  14. Pittsburgh Supercomputing Center Boots Up 'Blacklight'. HPCwire (2010-10-11). Retrieved on 2013-07-17.
  15. The Pittsburgh Supercomputing Center Presents Sherlock, a YarcData uRiKa System for Unlocking the Secrets of Big Data - Yahoo! Finance. Finance.yahoo.com (2012-11-07). Retrieved on 2013-07-17.
  16. Pittsburgh Supercomputing Center Deploys Disk-Based Data Repository. HPCwire (2012-08-21). Retrieved on 2013-07-17.
  17. Pittsburgh Supercomputing Center Receives $1.5M Network Infrastructure Award. HPCwire (2010-09-09). Retrieved on 2013-07-17.