Petascale computing refers to computing systems capable of performing at least one quadrillion (10^15) floating-point operations per second (FLOPS). [1] Such systems, often called petaFLOPS systems, represent a significant leap over earlier terascale supercomputers in raw performance, enabling them to handle vast datasets and complex computations.
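To put the 10^15 figure in perspective, the following back-of-the-envelope sketch in Python compares a one-petaFLOPS system against a desktop processor; the 100-gigaFLOPS desktop figure is an assumed ballpark for illustration, not a sourced value.
 # Illustrative arithmetic only: relating petaFLOPS to a more familiar scale.
 PETAFLOPS = 1e15            # floating-point operations per second
 desktop_flops = 100e9       # assumed ~100 gigaFLOPS for a desktop CPU (ballpark, not sourced)
 workload = 1e18             # an arbitrary workload of 10^18 operations
 print(workload / PETAFLOPS, "seconds on a 1 petaFLOPS system")        # 1000.0
 print(workload / desktop_flops / 86400, "days on the desktop CPU")    # about 115.7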
Petascale computing is typically applied to large-scale scientific problems such as climate and weather modeling, astrophysics simulations, advanced materials design, and real-time medical imaging. [1]
Floating-point operations per second (FLOPS) is a measure of computer performance, including that of supercomputers. FLOPS can be recorded at different levels of precision; the standard measure, used by the TOP500 supercomputer list, counts 64-bit (double-precision floating-point format) operations per second on the High Performance LINPACK (HPLinpack) benchmark. [2] [3]
The metric typically refers to single computing systems, although it can also be applied to distributed computing systems for comparison. Alternative precision measures exist for the LINPACK benchmarks, but they are not part of the standard metric. [3] HPLinpack is recognized as an imperfect general measure of supercomputer utility in real-world applications, but it remains the common standard for performance measurement. [4] [5]
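As a rough illustration of the idea behind the metric (not the HPLinpack benchmark itself), a sustained FLOPS figure can be estimated by timing a double-precision dense matrix multiplication, which performs roughly 2n^3 operations. The sketch below uses Python and NumPy.
 # Rough FLOPS estimate from timing a 64-bit (double-precision) matrix multiply.
 # Illustrative only: the TOP500 figure comes from the full HPLinpack benchmark.
 import time
 import numpy as np
 n = 2000
 a = np.random.rand(n, n)    # float64 by default
 b = np.random.rand(n, n)
 start = time.perf_counter()
 c = a @ b                   # dense matrix multiply, about 2 * n**3 operations
 elapsed = time.perf_counter() - start
 print(f"~{2 * n**3 / elapsed / 1e9:.1f} gigaFLOPS sustained on this machine")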
The petaFLOPS barrier was first broken by the RIKEN MDGRAPE-3 supercomputer in 2006, [6] [7] and then on 16 September 2007 by the distributed computing Folding@home project. [8] The first single petascale system, Roadrunner, built by IBM, entered operation in 2008 with a sustained performance of 1.026 petaFLOPS. [9] Jaguar became the next computer to break the petaFLOPS milestone later in 2008, and reached 1.759 petaFLOPS after a 2009 update. [10]
In June 2020, Fugaku became the fastest supercomputer in the world, reaching 415 petaFLOPS; it went on to achieve an Rmax of 442 petaFLOPS in November of the same year.
In 2022, exascale computing (10^18 FLOPS) overtook petascale computing with the development of Frontier, which surpassed Fugaku with an Rmax of 1.102 exaFLOPS in June 2022. [11]
As of January 2026, El Capitan is the world's fastest supercomputer. Built by Hewlett Packard Enterprise (HPE) for Lawrence Livermore National Laboratory (LLNL) in Livermore, California, it achieves 1.809 exaFLOPS. [12]
Modern artificial intelligence (AI) systems require large amounts of computational power to train model parameters. OpenAI used 25,000 Nvidia A100 GPUs to train GPT-4, performing a total of 133 septillion floating-point operations. [13]
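Relating that figure back to petascale capability, the arithmetic below uses the cited operation count; the per-GPU throughput is an assumed round number for illustration, not a sourced specification.
 # Back-of-the-envelope only.  133 septillion = 1.33e26 operations (the figure cited above);
 # the per-GPU throughput is an assumed value, not sourced.
 total_ops = 133e24
 print(total_ops / 1e15 / (3600 * 24 * 365), "petaFLOPS-years of computation")       # about 4,217
 gpus = 25_000
 per_gpu = 300e12            # assumed ~300 teraFLOPS mixed-precision peak per GPU
 print(gpus * per_gpu / 1e15, "petaFLOPS aggregate peak for the assumed cluster")    # 7500.0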