The Journal of Supercomputing


Related Research Articles

Supercomputer: Type of extremely powerful computer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
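
As a rough, illustrative comparison of the orders of magnitude above, the short Python sketch below converts prefixed FLOPS figures to raw FLOPS and divides them; the 500 GFLOPS desktop figure is an assumed example, not a measurement.

    # Illustrative unit arithmetic for the FLOPS figures above.
    # The desktop value (500 GFLOPS) is an assumed example.
    PREFIXES = {"giga": 1e9, "tera": 1e12, "peta": 1e15, "exa": 1e18}

    def to_flops(value, prefix):
        """Convert a prefixed figure such as (100, 'peta') to raw FLOPS."""
        return value * PREFIXES[prefix]

    supercomputer = to_flops(100, "peta")  # 100 PFLOPS = 1e17 FLOPS
    desktop = to_flops(500, "giga")        # assumed desktop-class figure

    print(f"supercomputer: {supercomputer:.0e} FLOPS")
    print(f"desktop:       {desktop:.0e} FLOPS")
    print(f"ratio:         {supercomputer / desktop:,.0f}x")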

National Center for Supercomputing Applications: Illinois-based applied supercomputing research organization

The National Center for Supercomputing Applications (NCSA) is a state-federal partnership, based in the United States, that develops and deploys national-scale cyberinfrastructure to advance research, science, and engineering. NCSA operates as a unit of the University of Illinois Urbana-Champaign and provides high-performance computing resources to researchers across the country. Support for NCSA comes from the National Science Foundation, the state of Illinois, the University of Illinois, business and industry partners, and other federal agencies.

Hunan University: Public university in Changsha, Hunan, China

Hunan University is a public university in Yuelu, Changsha, Hunan, China. It is affiliated with the Ministry of Education. The university is part of Project 211, Project 985, and the Double First-Class Construction.

High-performance computing: Computing with supercomputers and clusters

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

SUPER-UX was a version of the Unix operating system from NEC that was used on its SX series of supercomputers.

Larry Smarr: American computer scientist (b. 1948)

Larry Lee Smarr is a physicist and leading pioneer in scientific computing, supercomputer applications, and Internet infrastructure. He is currently a Distinguished Professor Emeritus at the University of California, San Diego, and was the founding director of the California Institute for Telecommunications and Information Technology, as well as the Harry E. Gruber Endowed Chair Professor of Computer Science and Information Technologies at the Jacobs School of Engineering.

The Pittsburgh Supercomputing Center (PSC) is a high-performance computing and networking center founded in 1986 and one of the original five NSF Supercomputing Centers. PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania, United States.

Within cluster and parallel computing, a cluster manager is usually a backend graphical user interface (GUI) or command-line interface (CLI) application that runs on the set of cluster nodes it manages. The cluster manager works together with a cluster management agent; these agents run on each node of the cluster to manage and configure individual services, a set of services, or the complete cluster itself. In some cases the cluster manager is mostly used to dispatch work for the cluster to perform; in that case a subset of the cluster manager can be a remote desktop application used not for configuration but simply to submit work and retrieve results from the cluster. In other cases the cluster is oriented more toward availability and load balancing than toward computational or service-specific clustering.
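
As a toy illustration of the dispatch pattern described above, the following Python sketch shows a manager handing work items to per-node agents and collecting their results. Every class and method name here is invented for the example; this is not any particular cluster manager's API.

    # Toy sketch of the manager/agent dispatch pattern described above.
    # All class and method names are illustrative inventions.
    from concurrent.futures import ThreadPoolExecutor

    class NodeAgent:
        """Stands in for the agent process running on one cluster node."""
        def __init__(self, name):
            self.name = name

        def run(self, task):
            # A real agent would configure services or execute a job here.
            return f"{self.name} finished {task}"

    class ClusterManager:
        """Dispatches work to the node agents and gathers the results."""
        def __init__(self, agents):
            self.agents = agents

        def dispatch(self, tasks):
            with ThreadPoolExecutor(max_workers=len(self.agents)) as pool:
                futures = [
                    pool.submit(self.agents[i % len(self.agents)].run, task)
                    for i, task in enumerate(tasks)
                ]
                return [f.result() for f in futures]

    manager = ClusterManager([NodeAgent(f"node{i}") for i in range(3)])
    for line in manager.dispatch(["job-a", "job-b", "job-c", "job-d"]):
        print(line)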

The IBM HPC Systems Scientific Computing User Group (ScicomP) was a non-profit, user-led group for scientific and technical users of IBM high-performance computing (HPC) systems. It was part of the SPXXL organization. It held yearly meetings with presentations that allowed users to share expertise and collaborate on the development of efficient and scalable scientific applications. Though not affiliated with the IBM Corporation, the group's meetings provided an opportunity to give feedback to IBM that would influence the design of future systems.

Hamid Reza Arabnia is a professor of computer science at the University of Georgia. He has been the editor-in-chief of The Journal of Supercomputing since 1997.

Distributed European Infrastructure for Supercomputing Applications: Organization

Distributed European Infrastructure for Supercomputing Applications (DEISA) was a consortium of major national supercomputing centres in Europe. Initiated in 2002, it became a European Union-funded supercomputer project. The consortium of eleven national supercomputing centres from seven countries promoted pan-European research on high-performance computing systems by creating a collaborative European environment for supercomputing.

SC, the International Conference for High Performance Computing, Networking, Storage and Analysis, is an annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. In 2019, about 13,950 people participated overall; by 2022, attendance had rebounded to 11,830 across in-person and online participation. The not-for-profit conference is run by a committee of approximately 600 volunteers who spend roughly three years organizing each conference.

Dr. Subhash Saini is a senior computer scientist at NASA. He is a member of the Ames Research and Technology Council.

Supercomputing in China: Overview of supercomputing in China

China operates a number of supercomputer centers. In the mid-2010s, Chinese supercomputers occupied top spots on the TOP500. Since 2019, after the U.S. began levying sanctions on several Chinese companies involved with supercomputing, less public information has been available on the state of supercomputing in China.

SAGA-220 is a supercomputer built by the Indian Space Research Organisation (ISRO).

Supercomputing in India has a history going back to the 1980s. The Government of India created an indigenous development programme because it had difficulty purchasing foreign supercomputers. As of June 2023, the AIRAWAT supercomputer is the fastest supercomputer in India, ranked 75th in the world on the TOP500 supercomputer list. AIRAWAT is installed at the Centre for Development of Advanced Computing (C-DAC) in Pune.

History of supercomputing

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC (1954) in the 1950s and, in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.

The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering.
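
A minimal sketch of that measurement idea, using NumPy rather than the official HPL implementation: time a dense solve of Ax = b and convert the conventional LINPACK operation count, (2/3)n^3 + 2n^2, into a FLOPS estimate. The problem size n is an arbitrary choice for the example.

    # Minimal LINPACK-style measurement: time a dense solve of Ax = b
    # and estimate FLOPS from the conventional (2/3)*n^3 + 2*n^2 count.
    # Illustrates the principle only; this is not the HPL benchmark.
    import time
    import numpy as np

    n = 2000                               # arbitrary problem size
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)              # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flop_count = (2 / 3) * n**3 + 2 * n**2
    print(f"n={n}: {elapsed:.3f} s, about {flop_count / elapsed / 1e9:.1f} GFLOPS")
    print("residual norm:", np.linalg.norm(A @ x - b))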

Supercomputer architecture: Design of high-performance computers

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Gridcoin: Cryptocurrency rewarding work on BOINC

Gridcoin is an open-source cryptocurrency that securely rewards volunteer computing performed on the BOINC network. Originally developed to support SETI@home, it became the platform for many other applications in areas as diverse as medicine, molecular biology, mathematics, linguistics, climatology, environmental science, and astrophysics.
