Zettascale computing

Zettascale computing refers to computing systems capable of calculating at least "10^21 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (zettaFLOPS)". [1] It is a measure of supercomputer performance, and as of July 2022 it is a hypothetical performance barrier. [2] A zettascale computer system could generate more single-precision floating-point data in one second than was stored by all digital means on Earth in the first quarter of 2011. [citation needed]

Definitions

Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded at different levels of precision; however, the standard measure used by the TOP500 supercomputer list counts 64-bit (double-precision floating-point format) operations per second on the High Performance LINPACK (HPLinpack) benchmark. [3] [4]
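
To relate these prefixes to the HPLinpack workload, the short Python sketch below is purely illustrative (it is not benchmark code, and the problem size and machine rate are assumed values); it uses the standard HPL operation-count approximation of 2/3·n^3 + 2·n^2 for solving an n × n dense system.

    # Illustrative only: FLOPS prefixes and the HPL operation-count estimate.
    PREFIXES = {"tera": 1e12, "peta": 1e15, "exa": 1e18, "zetta": 1e21}

    def hpl_ops(n: int) -> float:
        """Approximate floating-point operations HPL performs for problem size n."""
        return (2.0 / 3.0) * n**3 + 2.0 * n**2

    def time_to_solve(n: int, sustained_flops: float) -> float:
        """Seconds to complete an HPL run of size n at the given sustained rate."""
        return hpl_ops(n) / sustained_flops

    # Assumed, illustrative numbers: a hypothetical zettascale machine (10^21 FLOPS)
    # running a dense problem with n = 10^8 unknowns.
    print(f"{time_to_solve(10**8, PREFIXES['zetta']):.0f} s")   # about 667 s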

Forecasts

In 2018, Chinese scientists predicted that the first zettascale system would be assembled in 2035. [5] This forecast looks plausible from a historical point of view, as it took some 12 years to progress from terascale machines (10^12 FLOPS) to petascale systems (10^15 FLOPS) and then 14 more years to move to exascale computers (10^18 FLOPS). [5]
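
As a back-of-the-envelope illustration of this pacing (an editorial sketch, not a figure from the cited forecast), a thousandfold jump from exaFLOPS to zettaFLOPS spread over roughly 13 years implies sustained performance growth of about 1.7x per year:

    # Back-of-the-envelope: implied annual growth for a 1000x jump (10^18 -> 10^21 FLOPS)
    # over an assumed ~13-year span, roughly matching the tera->peta and peta->exa steps.
    years = 13
    factor = 1000
    annual_growth = factor ** (1 / years)
    print(f"~{annual_growth:.2f}x per year")   # ~1.70x per year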

Scientists forecast that zettascale systems are likely to be data-centric, meaning that system components will move to the data rather than the other way around, since data volumes are anticipated to be so large that moving the data would be too expensive. Zettascale systems are also expected to be decentralized, because such a model may be the shortest route to zettascale performance, with millions of less powerful components linked and working together to form a collective hypercomputer more powerful than any single machine. [5] Such decentralized systems may be designed to mimic complex biological systems, and the next cybernetic paradigm may be based on liquid cybernetic systems with embodied intelligence solutions. [6] [clarification needed]

Potential configuration

China’s National University of Defense Technology proposes the following metrics: [7]

Problems

As Moore's law nears its natural limits, supercomputing will face serious physical problems in moving from exascale to zettascale systems, making the decade after 2020 a vital period for developing key high-performance computing techniques. [8] Many forecasters, including Gordon Moore himself, [9] expect Moore's law to end by around 2025. [10] [11] Another challenge to reaching zettascale performance is enormous energy consumption. [12] [13]
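
The scale of the energy challenge can be illustrated with simple arithmetic. In the sketch below, the efficiency figure is an assumption chosen to be in the vicinity of the most efficient systems of the early 2020s (on the order of tens of GFLOPS per watt); it is not a value taken from the cited sources.

    # Rough power estimate for a sustained zettaFLOPS machine at an assumed efficiency.
    zetta_flops = 1e21        # target sustained rate, FLOPS
    efficiency = 60e9         # assumed FLOPS per watt (illustrative, not a cited figure)
    power_watts = zetta_flops / efficiency
    print(f"~{power_watts / 1e9:.0f} GW")   # ~17 GW, i.e. several large power stations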

Applications

See also

Related Research Articles

Supercomputer: Type of extremely powerful computer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Floating point operations per second (FLOPS) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations.

IBM Blue Gene: Series of supercomputers by IBM

Blue Gene was an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.

MareNostrum: Supercomputer in the Barcelona Supercomputing Center

MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.

Roadrunner (supercomputer): Former supercomputer built by IBM

Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, to become the world's first TOP500 LINPACK sustained 1.0 petaflops system.

MDGRAPE-3 is an ultra-high performance petascale supercomputer system developed by the Riken research institute in Japan. It is a special purpose system built for molecular dynamics simulations, especially protein structure prediction.

TOP500: Database project devoted to the ranking of computers

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.

Petascale computing refers to computing systems capable of calculating at least 10^15 floating point operations per second (1 petaFLOPS). Petascale computing allowed faster processing of traditional supercomputer applications. The first system to reach this milestone was the IBM Roadrunner in 2008. Petascale supercomputers were succeeded by exascale computers.

Tianhe-1: Supercomputer

Tianhe-I, Tianhe-1, or TH-1 is a supercomputer capable of an Rmax of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world.

Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.

This list compares various amounts of computing power in FLOPS, organized by order of magnitude.

History of supercomputing

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC in 1954, and in the early 1960s the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.

Supercomputing in Europe: Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering.
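
As a minimal illustration of this idea (not the HPL implementation itself), the sketch below times a dense solve with NumPy and converts the standard 2/3·n^3 + 2·n^2 operation estimate into a GFLOPS figure; the problem size is an arbitrary assumption.

    import time
    import numpy as np

    # LINPACK-style measurement in miniature: time a dense solve of Ax = b and
    # convert the standard operation-count estimate into GFLOPS.
    n = 4000                               # arbitrary, illustrative problem size
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)
    elapsed = time.perf_counter() - start

    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"{ops / elapsed / 1e9:.1f} GFLOPS")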

Mira (supercomputer)

Mira is a retired petascale Blue Gene/Q supercomputer. As of November 2017, it is listed on TOP500 as the 11th fastest supercomputer in the world, while it debuted June 2012 in 3rd place. It has a performance of 8.59 petaflops (LINPACK) and consumes 3.9 MW. The supercomputer was constructed by IBM for Argonne National Laboratory's Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira was used for scientific research, including studies in the fields of material science, climatology, seismology, and computational chemistry. The supercomputer was used initially for sixteen projects selected by the Department of Energy.

Cray XC40: Supercomputer manufactured by Cray

The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.

The Sunway TaihuLight is a Chinese supercomputer which, as of November 2023, is ranked 11th in the TOP500 list, with a LINPACK benchmark rating of 93 petaflops. The name is translated as divine power, the light of Taihu Lake. This is nearly three times as fast as the previous Tianhe-2, which ran at 34 petaflops. As of June 2017, it is ranked as the 16th most energy-efficient supercomputer in the Green500, with an efficiency of 6.1 GFlops/watt. It was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi in the city of Wuxi, in Jiangsu province, China.

The European High-Performance Computing Joint Undertaking is a public-private partnership in high-performance computing (HPC), enabling the pooling of European Union–level resources with the resources of participating EU Member States and participating associated states of the Horizon Europe and Digital Europe programmes, as well as private stakeholders. The Joint Undertaking has the twin stated aims of developing a pan-European supercomputing infrastructure, and supporting research and innovation activities. Located in Luxembourg City, Luxembourg, the Joint Undertaking started operating in November 2018 under the control of the European Commission and became autonomous in 2020.

Fugaku (supercomputer): Japanese supercomputer

Fugaku (Japanese: 富岳) is a petascale supercomputer at the Riken Center for Computational Science in Kobe, Japan. It started development in 2014 as the successor to the K computer and made its debut in 2020. It is named after an alternative name for Mount Fuji.

Aurora (supercomputer): US DOE supercomputer by Intel and Cray

Aurora is an exascale supercomputer that was sponsored by the United States Department of Energy (DOE) and designed by Intel and Cray for the Argonne National Laboratory. It has been the second fastest supercomputer in the world since 2023. It is expected that after optimizing its performance it will exceed 2 exaFLOPS, making it the fastest computer ever.

References

  1. "What is zettaflops? - Definition from WhatIs.com". WhatIs.com. Retrieved 24 August 2021.
  2. Feldman, Michael (11 December 2018). "Supercomputing Is Heading Toward an Existential Crisis". top500.org . Retrieved 24 August 2021.
  3. "FREQUENTLY ASKED QUESTIONS". www.top500.org. Retrieved 23 June 2020.
  4. Kogge, Peter, ed. (1 May 2008). ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems (PDF). United States Government. Retrieved 28 September 2008.
  5. Khalili, Joel (29 August 2020). "I confess, I'm scared of the next generation of supercomputers". TechRadar. Retrieved 24 August 2021.
  6. Chiolerio, Alessandro; Draper, Thomas C.; Jost, Carsten; Adamatzky, Andrew (2019). "Electrical Properties of Solvated Tectomers: Toward Zettascale Computing". Advanced Electronic Materials. 5 (12): 1900202. doi:10.1002/aelm.201900202. S2CID   204646269.
  7. "Will 1000 ExaFlop Supercomputers Come from Brute Force Scaling or New Technology? | NextBigFuture.com". nextbigfuture.com. Retrieved 6 October 2021.
  8. Liao, Xiang-ke; Lu, Kai; Yang, Can-qun; Li, Jin-wen; Yuan, Yuan; Lai, Ming-che; Huang, Li-bo; Lu, Ping-jing; Fang, Jian-bin; Ren, Jing; Shen, Jie (1 October 2018). "Moving from exascale to zettascale computing: challenges and techniques". Frontiers of Information Technology & Electronic Engineering . 19 (10): 1236–1244. doi:10.1631/FITEE.1800494. ISSN   2095-9230. S2CID   53819223 . Retrieved 24 August 2021.
  9. Cross, Tim. "After Moore's Law". The Economist Technology Quarterly. Retrieved 13 March 2016. chart: "Faith no Moore" Selected predictions for the end of Moore's law
  10. Kumar, Suhas (2012). "Fundamental Limits to Moore's Law". arXiv: 1511.05956 [cond-mat.mes-hall].
  11. McBride, Stephen (23 April 2019). "These 3 Computing Technologies Will Beat Moore's Law". Forbes . Retrieved 24 August 2021.
  12. Morgan, James (18 October 2013). "IBM unveils computer fed by 'electronic blood'". BBC News . Retrieved 4 October 2021.
  13. Hayes, Brian (22 July 2014). "Built for speed: Designing exascale computers". Harvard University . Retrieved 4 October 2021.
  14. DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. ACM Press. pp. 391–402. ISBN   1-59593-019-1.
  15. "Суперкомпьютеры достигают производительности в зеттафлопс | "Будущее сейчас"" (in Russian). futurenow.ru. Retrieved 29 September 2021.
  16. Kirkpatrick, Kay (2019). "BIO LOGIC: Biological Computation" (PDF). University of Illinois.