Computer performance by orders of magnitude

This list compares various amounts of computing power, measured in floating-point operations per second (FLOPS), organized by order of magnitude.
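As a rough illustration of how a raw FLOPS figure maps onto the scales listed below, the following Python sketch classifies a value by its decimal exponent. It is illustrative only: the scale names and thresholds are taken from the section headings, and the example inputs are figures quoted later in this article.

    import math

    # Scale names used in this list, keyed by the exponent of the lower
    # bound of each range (10^exp FLOPS and up).
    SCALES = [
        (-3, "milliscale"), (-1, "deciscale"), (0, "scale"),
        (1, "decascale"), (2, "hectoscale"), (3, "kiloscale"),
        (6, "megascale"), (9, "gigascale"), (12, "terascale"),
        (15, "petascale"), (18, "exascale"), (21, "zettascale"),
    ]

    def classify(flops):
        """Return the order-of-magnitude scale a FLOPS figure falls into."""
        exponent = math.floor(math.log10(flops))
        # Largest scale whose lower bound does not exceed the value.
        _, name = max((e, n) for e, n in SCALES if e <= exponent)
        return "%.3g FLOPS is of order 10^%d FLOPS: %s computing" % (flops, exponent, name)

    print(classify(148.6e15))  # Summit's LINPACK result -> petascale computing
    print(classify(2.5e15))    # Tianhe-1's Rmax         -> petascale computing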

Contents

Scientific E notation index: 2 | 3 | 6 | 9 | 12 | 15 | 18 | 21 | 24 | >24

Milliscale computing (10⁻³)

Deciscale computing (10⁻¹)

Scale computing (10⁰)

Decascale computing (10¹)

Hectoscale computing (10²)

Kiloscale computing (10³)

Megascale computing (10⁶)

Gigascale computing (10⁹)

Terascale computing (10¹²)

Petascale computing (10¹⁵)

Exascale computing (10¹⁸)

Zettascale computing (10²¹)

A zettascale computer system could generate more single floating point data in one second than was stored by any digital means on Earth in the first quarter of 2011.[citation needed]

Beyond zettascale computing (>10²¹)

Related Research Articles

Supercomputer – Type of extremely powerful computer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

In computing, floating point operations per second (FLOPS) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations. For such cases, it is a more accurate measure than instructions per second.
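For a single machine, the theoretical peak FLOPS is commonly estimated by multiplying the number of processor sockets, cores per socket, clock rate, and floating-point operations each core can issue per cycle. A minimal sketch of that estimate, with purely hypothetical hardware numbers:

    def peak_flops(sockets, cores_per_socket, clock_hz, flops_per_cycle):
        """Theoretical peak = sockets x cores x clock rate x FLOPs per core per cycle."""
        return sockets * cores_per_socket * clock_hz * flops_per_cycle

    # Hypothetical dual-socket node: 2 x 32 cores at 2.5 GHz, each core able to
    # issue 32 double-precision FLOPs per cycle (e.g. two 512-bit FMA units).
    node_peak = peak_flops(2, 32, 2.5e9, 32)
    print("%.2f teraFLOPS per node" % (node_peak / 1e12))  # 5.12 teraFLOPS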

IBM Blue Gene – Series of supercomputers by IBM

Blue Gene was an IBM project aimed at designing supercomputers that could reach operating speeds in the petaFLOPS (PFLOPS) range with low power consumption.

Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world.

MareNostrum – Supercomputer in the Barcelona Supercomputing Center

MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.

TOP500 – Database project devoted to the ranking of computers

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
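HPL reports sustained performance (Rmax) by dividing a conventional operation count for solving a dense n × n linear system, roughly 2/3·n³ + 2·n², by the measured wall-clock time. A minimal sketch of that calculation, with a hypothetical problem size and run time:

    def hpl_flops(n):
        """Conventional operation count for solving an n x n dense linear system."""
        return (2.0 / 3.0) * n**3 + 2.0 * n**2

    def rmax(n, seconds):
        """Sustained performance reported by HPL: operations / wall-clock time."""
        return hpl_flops(n) / seconds

    # Hypothetical run: a 10,000,000-unknown system solved in 3.5 hours.
    print("%.1f petaFLOPS" % (rmax(10_000_000, 3.5 * 3600) / 1e15))  # ~52.9 petaFLOPS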

The Green500 is a biannual ranking of supercomputers, drawn from the TOP500 list, in terms of energy efficiency. The list ranks systems by performance per watt, using the TOP500's high-performance LINPACK benchmark results in double-precision floating-point format.
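The efficiency figure behind that ranking is simply the sustained LINPACK performance divided by the power drawn during the run. A small sketch, again with hypothetical numbers:

    def gflops_per_watt(rmax_flops, power_watts):
        """Green500-style efficiency: sustained FLOPS per watt, in gigaFLOPS/W."""
        return rmax_flops / power_watts / 1e9

    # Hypothetical system: 60 petaFLOPS sustained within a 2 MW power envelope.
    print("%.0f GFLOPS/W" % gflops_per_watt(60e15, 2e6))  # 30 GFLOPS/W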

Pleiades (supercomputer) – NASA supercomputer at Ames Research Center/NAS

Pleiades is a petascale supercomputer housed at the NASA Advanced Supercomputing (NAS) facility at NASA's Ames Research Center located at Moffett Field near Mountain View, California. It is maintained by NASA and partners Hewlett Packard Enterprise and Intel.

Sequoia (supercomputer) – IBM supercomputer at Lawrence Livermore National Laboratory

IBM Sequoia was a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012. Sequoia was dismantled in 2020; its last position on the TOP500 list was 22nd, in November 2019.

The National Center for Computational Sciences (NCCS) is a United States Department of Energy (DOE) Leadership Computing Facility that houses the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility charged with helping researchers solve challenging scientific problems of global interest with a combination of leading high-performance computing (HPC) resources and international expertise in scientific computing.

Tianhe-1 – Supercomputer

Tianhe-I, Tianhe-1, or TH-1 is a supercomputer capable of an Rmax of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world.

Nebulae is a petascale supercomputer located at the National Supercomputing Center in Shenzhen, Guangdong, China. Built from a Dawning TC3600 Blade system with Intel Xeon X5650 processors and Nvidia Tesla C2050 GPUs, it achieved a sustained performance of 1.271 petaFLOPS on the LINPACK benchmark, against a theoretical peak of 2.9843 petaFLOPS. Nebulae was ranked the second most powerful computer in the world on the June 2010 TOP500 list and 10th on the June 2012 list. The system is used for multiple applications requiring advanced processing capabilities.

K computer – Supercomputer in Kobe, Japan

The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10¹⁶) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.

Supercomputing in Europe – Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Summit (supercomputer) – Supercomputer developed by IBM

Summit or OLCF-4 is a supercomputer developed by IBM for use at the Oak Ridge Leadership Computing Facility (OLCF), a facility at the Oak Ridge National Laboratory. Capable of 200 petaFLOPS, it is the fifth fastest supercomputer in the world after Frontier (OLCF-5), Fugaku, LUMI, and Leonardo, with Frontier being the fastest. It held the number 1 position from November 2018 to June 2020. Its LINPACK benchmark performance is 148.6 petaFLOPS.

Nvidia DGX – Line of Nvidia-produced servers and workstations

Nvidia DGX is a line of Nvidia-produced servers and workstations which specialize in using GPGPU to accelerate deep learning applications. The typical design of a DGX system is based upon a rackmount chassis with a motherboard that carries high-performance x86 server CPUs. The main component of a DGX system is a set of 4 to 16 Nvidia Tesla GPU modules on an independent system board. DGX systems have large heatsinks and powerful fans to adequately cool thousands of watts of thermal output. The GPU modules are typically integrated into the system using a version of the SXM socket or a PCIe x16 slot.

Sierra (supercomputer) – Supercomputer developed by IBM

Sierra or ATS-2 is a supercomputer built for the Lawrence Livermore National Laboratory for use by the National Nuclear Security Administration as the second Advanced Technology System. It is primarily used for predictive applications in nuclear weapon stockpile stewardship, helping to assure the safety, reliability, and effectiveness of the United States' nuclear weapons.

Christofari (2019) and Christofari Neo (2021) are supercomputers of Sberbank of Russia built on Nvidia hardware. Their main purpose is training neural networks. They are also used for scientific research and commercial calculations.

Leonardo (supercomputer) – Supercomputer in Italy

Leonardo is a petascale supercomputer located at the CINECA datacenter in Bologna, Italy. The system consists of an Atos BullSequana XH2000 computer, with close to 14,000 Nvidia Ampere GPUs and 200 Gbit/s Nvidia Mellanox HDR InfiniBand connectivity. Inaugurated in November 2022, Leonardo is capable of 250 petaflops, making it one of the top five fastest supercomputers in the world. It debuted on the TOP500 in November 2022, ranking fourth in the world and second in Europe.

Zettascale computing refers to computing systems capable of calculating at least "10²¹ IEEE 754 double-precision (64-bit) operations (multiplications and/or additions) per second (zettaFLOPS)". It is a measure of supercomputer performance, and as of July 2022 is a hypothetical performance barrier. A zettascale computer system could generate more single floating point data in one second than was stored by all digital means on Earth in the first quarter of 2011.
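The comparison in the last sentence rests on simple arithmetic: a machine emitting 10²¹ results per second would produce data at a rate of several zettabytes per second. A minimal sketch of that arithmetic, assuming each operation yields one 4-byte single-precision value (that assumption is ours; the 2011 storage estimate itself is not reproduced here):

    # Assumption (ours): each operation emits one IEEE 754 single-precision result.
    operations_per_second = 1e21   # zettascale threshold
    bytes_per_result = 4           # 32-bit single precision
    output_rate = operations_per_second * bytes_per_result
    print("%.0f zettabytes per second" % (output_rate / 1e21))  # 4 zettabytes per second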
