Zettascale computing refers to computing systems capable of calculating at least "10²¹ IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (zettaFLOPS)". [1] It is a measure of supercomputer performance, and as of July 2022 it remains a hypothetical performance barrier. [2] A zettascale computer system could generate more floating-point data in one second than was stored by all digital means on Earth in the first quarter of 2011. [citation needed]
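The performance milestones discussed in this article sit on a simple ladder, each step a factor of 1,000 in sustained double-precision operations per second; a minimal sketch:

```python
# The performance milestones named in this article, each a factor
# of 1,000 apart in double-precision operations per second.
SCALES = {
    "teraFLOPS": 1e12,
    "petaFLOPS": 1e15,
    "exaFLOPS": 1e18,
    "zettaFLOPS": 1e21,
}
for name, rate in SCALES.items():
    print(f"1 {name} = {rate:.0e} FLOPS")
```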
Floating point operations per second (FLOPS) is one measure of computer performance. FLOPS can be recorded at different levels of precision; however, the standard measure (used by the TOP500 supercomputer list) is 64-bit (double-precision floating-point format) operations per second on the High Performance LINPACK (HPLinpack) benchmark. [3] [4]
In 2018, Chinese scientists predicted that the first zettascale system would be assembled in 2035. [5] This forecast looks plausible from a historical point of view, as it took some 12 years to progress from terascale machines (10¹²) to petascale systems (10¹⁵) and then 14 more years to move to exascale computers (10¹⁸). [5]
Scientists forecast that zettascale systems are likely to be data-centric, meaning that system components will move to the data rather than vice versa, because data volumes in the future are anticipated to be so large that moving the data will be too expensive. Zettascale systems are also expected to be decentralized, because such a model may be the shortest route to zettascale performance: millions of less powerful components linked and working together to form a collective hypercomputer more powerful than any single machine. [5] Such decentralized systems may be designed to mimic complex biological systems, and the next cybernetic paradigm may be based on liquid cybernetic systems with embodied intelligence solutions. [6] [clarification needed]
China’s National University of Defense Technology proposes the following metrics: [7]
As Moore's law nears its natural limits, supercomputing will face serious physical problems in moving from exascale to zettascale systems, making the decade after 2020 a vital period for developing key high-performance computing techniques. [8] Many forecasters, including Gordon Moore himself, [9] expect Moore's law to end by around 2025. [10] [11] Another challenge for reaching zettascale performance is the enormous energy consumption involved. [12] [13]
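A back-of-the-envelope calculation illustrates the scale of the energy problem. Assuming an efficiency of roughly 50 GFLOPS per watt (an assumed figure in the broad range of today's most efficient large systems, not a reported specification), a machine sustaining one zettaFLOPS would draw tens of gigawatts:

```python
# Rough power estimate for a hypothetical zettascale machine.
# The ~50 GFLOPS/W efficiency is an assumed figure; an actual
# zettascale design would need to do far better to be practical.
ZETTA_FLOPS = 1e21                  # 10^21 operations per second
EFFICIENCY_FLOPS_PER_WATT = 50e9    # assumed ~50 GFLOPS per watt

power_watts = ZETTA_FLOPS / EFFICIENCY_FLOPS_PER_WATT
print(f"estimated draw: {power_watts / 1e9:.0f} GW")  # → 20 GW
```

At that assumed efficiency, the result is on the order of 20 GW, comparable to the output of many large power plants combined, which is why efficiency gains are treated as a precondition for zettascale systems.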
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
Floating point operations per second is a measure of computer performance in computing, useful in fields of scientific computations that require floating-point calculations.
Blue Gene was an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.
MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.
Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, to become the world's first TOP500 LINPACK sustained 1.0 petaflops system.
MDGRAPE-3 is an ultra-high performance petascale supercomputer system developed by the Riken research institute in Japan. It is a special purpose system built for molecular dynamics simulations, especially protein structure prediction.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
Petascale computing refers to computing systems capable of calculating at least 10¹⁵ floating point operations per second (1 petaFLOPS). Petascale computing allowed faster processing of traditional supercomputer applications. The first system to reach this milestone was the IBM Roadrunner in 2008. Petascale supercomputers were succeeded by exascale computers.
Tianhe-I, Tianhe-1, or TH-1 is a supercomputer capable of an Rmax of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world.
Exascale computing refers to computing systems capable of calculating at least "10¹⁸ IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.
This list compares various amounts of computing power in instructions per second organized by order of magnitude in FLOPS.
The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC in 1954, and, in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.
Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.
The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering.
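As a toy illustration of what the benchmark measures (not the actual HPL implementation, which is a tuned code for distributed-memory machines), the sketch below solves a small dense system Ax = b by Gaussian elimination and reports a FLOP rate using HPL's conventional operation count of roughly 2/3·n³ + 2·n²:

```python
import random
import time

def linpack_like(n=150, seed=0):
    """Toy LINPACK-style measurement: solve a dense n x n system
    Ax = b by Gaussian elimination with partial pivoting and report
    an approximate FLOP rate. Illustrative only; the real HPL
    benchmark is a highly tuned distributed-memory code."""
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(n)] for _ in range(n)]
    b = [rng.random() for _ in range(n)]

    start = time.perf_counter()
    for k in range(n):                       # forward elimination
        pivot = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[pivot] = a[pivot], a[k]      # partial pivoting
        b[k], b[pivot] = b[pivot], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    elapsed = time.perf_counter() - start

    # HPL's conventional operation count for solving Ax = b.
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return x, flops / elapsed

x, rate = linpack_like()
print(f"approx. {rate:.2e} FLOPS")
```

Interpreted Python will report a rate many orders of magnitude below even a single modern core's peak, which itself illustrates why HPL implementations lean so heavily on optimized linear-algebra kernels.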
Mira is a retired petascale Blue Gene/Q supercomputer. As of November 2017, it was listed on TOP500 as the 11th fastest supercomputer in the world, having debuted in June 2012 in 3rd place. It had a performance of 8.59 petaflops (LINPACK) and consumed 3.9 MW. The supercomputer was constructed by IBM for Argonne National Laboratory's Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira was used for scientific research, including studies in the fields of material science, climatology, seismology, and computational chemistry. The supercomputer was used initially for sixteen projects selected by the Department of Energy.
The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.
The Sunway TaihuLight is a Chinese supercomputer which, as of November 2023, is ranked 11th in the TOP500 list, with a LINPACK benchmark rating of 93 petaflops. The name is translated as "divine power, the light of Taihu Lake". This is nearly three times as fast as the previous Tianhe-2, which ran at 34 petaflops. As of June 2017, it is ranked as the 16th most energy-efficient supercomputer in the Green500, with an efficiency of 6.1 GFlops/watt. It was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, Jiangsu province, China.
The European High-Performance Computing Joint Undertaking is a public-private partnership in high-performance computing (HPC), enabling the pooling of European Union–level resources with the resources of participating EU Member States and participating associated states of the Horizon Europe and Digital Europe programmes, as well as private stakeholders. The Joint Undertaking has the twin stated aims of developing a pan-European supercomputing infrastructure, and supporting research and innovation activities. Located in Luxembourg City, Luxembourg, the Joint Undertaking started operating in November 2018 under the control of the European Commission and became autonomous in 2020.
Fugaku (Japanese: 富岳) is a petascale supercomputer at the Riken Center for Computational Science in Kobe, Japan. It started development in 2014 as the successor to the K computer and made its debut in 2020. It is named after an alternative name for Mount Fuji.
Aurora is an exascale supercomputer sponsored by the United States Department of Energy (DOE) and designed by Intel and Cray for the Argonne National Laboratory. It has been the second fastest supercomputer in the world since 2023. It is expected that, after its performance is optimized, it will exceed 2 exaFLOPS, making it the fastest computer ever.