Computer performance by orders of magnitude

This list compares various amounts of computing power, measured in floating-point operations per second (FLOPS), organized by order of magnitude.
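As a rough illustration, the scale names in this list follow directly from a value's order of magnitude; the helper below is a hypothetical sketch (not part of any standard) that maps a FLOPS figure to the corresponding scale name:

```python
import math

# Scale names used in this list, keyed by the exponent of their threshold 10^k.
SCALES = {
    -3: "milliscale", -1: "deciscale", 0: "scale", 1: "decascale",
    2: "hectoscale", 3: "kiloscale", 6: "megascale", 9: "gigascale",
    12: "terascale", 15: "petascale", 18: "exascale", 21: "zettascale",
}

def scale_name(flops: float) -> str:
    """Return the name of the largest scale whose threshold the value reaches."""
    exponent = math.floor(math.log10(flops))
    # Pick the largest defined exponent not exceeding the value's magnitude.
    k = max(e for e in SCALES if e <= exponent)
    return SCALES[k]

print(scale_name(148.6e15))  # Summit's LINPACK score -> "petascale"
```

For example, a desktop GPU delivering a few tens of teraFLOPS falls under terascale, while any machine at or above 10¹⁸ FLOPS counts as exascale.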

Contents

Milliscale computing (10⁻³)

Deciscale computing (10⁻¹)

Scale computing (10⁰)

Decascale computing (10¹)

Hectoscale computing (10²)

Kiloscale computing (10³)

Megascale computing (10⁶)

Gigascale computing (10⁹)

Terascale computing (10¹²)

Petascale computing (10¹⁵)

Exascale computing (10¹⁸)

Zettascale computing (10²¹)

A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth in the first quarter of 2011.[citation needed]

Beyond zettascale computing (>10²¹)

See also

Related Research Articles

<span class="mw-page-title-main">Supercomputer</span> Type of extremely powerful computer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 10¹⁸ FLOPS, so-called exascale supercomputers. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers.

Floating-point operations per second (FLOPS) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations.

<span class="mw-page-title-main">IBM Blue Gene</span> Series of supercomputers by IBM

Blue Gene was an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with relatively low power consumption.

<span class="mw-page-title-main">MareNostrum</span> Supercomputer in the Barcelona Supercomputing Center

MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.

<span class="mw-page-title-main">TOP500</span> Database project devoted to the ranking of computers

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing, and bases its rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.

The Green500 is a biannual ranking of supercomputers, drawn from the TOP500 list, in terms of energy efficiency. The list measures performance per watt using the TOP500 measure of high-performance LINPACK benchmarks in double-precision floating-point format.
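Both lists compare a machine's measured HPL result (Rmax) against its theoretical peak (Rpeak). A minimal sketch of the usual Rpeak arithmetic, with purely illustrative hardware figures (not any specific TOP500 entry):

```python
def rpeak_flops(nodes: int, cores_per_node: int,
                clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: every core retiring its maximum FLOPs each cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Illustrative figures only: 1,000 nodes x 64 cores x 2.0 GHz
# x 32 FLOPs per cycle (e.g. two 512-bit fused multiply-add units).
peak = rpeak_flops(1000, 64, 2.0e9, 32)
print(f"{peak / 1e15:.3f} petaFLOPS")  # prints "4.096 petaFLOPS"
```

In practice the measured Rmax falls below this figure, since HPL cannot keep every functional unit busy on every cycle.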

<span class="mw-page-title-main">Sequoia (supercomputer)</span> IBM supercomputer at Lawrence Livermore National Laboratory

IBM Sequoia was a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012. Sequoia was dismantled in 2020; its last position on the top500.org list was #22, in the November 2019 list.

Petascale computing refers to computing systems capable of performing at least 1 quadrillion (10¹⁵) floating-point operations per second (FLOPS). These systems are often called petaflops systems and represent a significant leap from traditional supercomputers in terms of raw performance, enabling them to handle vast datasets and complex computations.

The National Center for Computational Sciences (NCCS) is a United States Department of Energy (DOE) Leadership Computing Facility that houses the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility charged with helping researchers solve challenging scientific problems of global interest with a combination of leading high-performance computing (HPC) resources and international expertise in scientific computing.

<span class="mw-page-title-main">Tianhe-1</span> Supercomputer

Tianhe-I, Tianhe-1, or TH-1 is a supercomputer capable of an Rmax of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world.

Nebulae is a petascale supercomputer located at the National Supercomputing Center in Shenzhen, Guangdong, China. Built from a Dawning TC3600 blade system with Intel Xeon X5650 processors and Nvidia Tesla C2050 GPUs, it achieved 1.271 petaFLOPS on the LINPACK benchmark, against a theoretical peak of 2.9843 petaFLOPS. Nebulae was ranked the second most powerful computer in the world on the June 2010 TOP500 list of the fastest supercomputers, and ranked 10th on the June 2012 list. The computer is used for multiple applications requiring advanced processing capabilities.

<span class="mw-page-title-main">K computer</span> Supercomputer in Kobe, Japan

The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10¹⁶) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.

<span class="mw-page-title-main">Supercomputing in Europe</span> Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

<span class="mw-page-title-main">Cray XC40</span> Supercomputer manufactured by Cray

The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.

<span class="mw-page-title-main">Summit (supercomputer)</span> Supercomputer developed by IBM

Summit or OLCF-4 is a supercomputer developed by IBM for use at the Oak Ridge Leadership Computing Facility (OLCF), a facility at the Oak Ridge National Laboratory, United States. As of June 2024, it is the 9th fastest supercomputer in the world on the TOP500 list. It held the number 1 position on this list from November 2018 to June 2020. Its LINPACK benchmark score is 148.6 petaFLOPS.

<span class="mw-page-title-main">Nvidia DGX</span> Line of Nvidia produced servers and workstations

The Nvidia DGX represents a series of servers and workstations designed by Nvidia, primarily geared towards enhancing deep learning applications through the use of general-purpose computing on graphics processing units (GPGPU). These systems typically come in a rackmount format featuring high-performance x86 server CPUs on the motherboard.

<span class="mw-page-title-main">Sierra (supercomputer)</span> Supercomputer developed by IBM

Sierra or ATS-2 is a supercomputer built for the Lawrence Livermore National Laboratory for use by the National Nuclear Security Administration as the second Advanced Technology System. It is primarily used for predictive applications in nuclear weapon stockpile stewardship, helping to assure the safety, reliability, and effectiveness of the United States' nuclear weapons.

Christofari (2019) and Christofari Neo (2021) are supercomputers of Sberbank of Russia, built on Nvidia hardware. Their main purpose is neural-network training; they are also used for scientific research and commercial calculations.

<span class="mw-page-title-main">Leonardo (supercomputer)</span> Supercomputer in Italy

Leonardo is a petascale supercomputer located at the CINECA datacenter in Bologna, Italy. The system consists of an Atos BullSequana XH2000 computer, with close to 14,000 Nvidia Ampere GPUs and 200 Gbit/s Nvidia Mellanox HDR InfiniBand connectivity. Inaugurated in November 2022, Leonardo is capable of 250 petaFLOPS, making it one of the top five fastest supercomputers in the world. It debuted on the TOP500 in November 2022, ranking fourth in the world and second in Europe.

Zettascale computing refers to computing systems capable of calculating at least 10²¹ IEEE 754 double-precision (64-bit) operations (multiplications and/or additions) per second (zettaFLOPS). It is a measure of supercomputer performance, and as of July 2022 is a hypothetical performance barrier. A zettascale computer system could generate more single-precision floating-point data in one second than was stored by all digital means on Earth in the first quarter of 2011.
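The storage comparison above can be checked with back-of-the-envelope arithmetic. The 2011 global-storage figure below is an assumed round number (contemporary estimates ranged from a few hundred exabytes to roughly a zettabyte), not a value from this article:

```python
ZETTA = 1e21

flops = 1 * ZETTA            # zettascale threshold: 10^21 operations per second
bytes_per_value = 4          # one IEEE 754 single-precision float is 4 bytes
data_per_second = flops * bytes_per_value  # bytes generated in one second

# Assumed global digital-storage figure for early 2011 (round number for
# illustration; estimates at the time varied widely).
global_storage_2011 = 1e21   # bytes, i.e. one zettabyte

print(data_per_second / global_storage_2011)  # prints 4.0
```

Even against this generous one-zettabyte storage assumption, a single second of zettascale output would exceed it several times over, which is the substance of the claim.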
