The Graph500 is a rating of supercomputer systems focused on data-intensive workloads. The project was announced at the International Supercomputing Conference in June 2010, and the first list was published at the ACM/IEEE Supercomputing Conference in November 2010. New versions of the list are published twice a year. The main performance metric used to rank the supercomputers is GTEPS (giga-traversed edges per second, i.e., billions of traversed edges per second).
Richard Murphy from Sandia National Laboratories says that "the Graph500's goal is to promote awareness of complex data problems", rather than to focus on computer benchmarks like HPL (High Performance Linpack), on which the TOP500 is based. [1]
Despite its name, the rating has never contained 500 systems; the number of ranked systems grew to 174 in the June 2014 release. [2]
The algorithm and implementation that won the championship are described in the paper "Extreme scale breadth-first search on supercomputers". [3]
There is also a Green Graph 500 list, which uses the same performance metric but orders systems by performance per watt, analogous to how the Green 500 complements the TOP500 (HPL).
The benchmark used in Graph500 stresses the communication subsystem of the system, instead of counting double-precision floating-point operations. [1] It is based on a breadth-first search in a large undirected graph (a Kronecker graph model with an average degree of 16). The benchmark has three computation kernels: the first generates the graph and compresses it into sparse structures such as CSR or CSC (Compressed Sparse Row/Column); the second performs a parallel BFS from random source vertices (64 search iterations per run); the third runs a single-source shortest paths (SSSP) computation. Six problem sizes (scales) are defined: toy (2^26 vertices; 17 GB of RAM), mini (2^29; 137 GB), small (2^32; 1.1 TB), medium (2^36; 17.6 TB), large (2^39; 140 TB), and huge (2^42; 1.1 PB of RAM). [4]
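The kernel structure above can be sketched at a tiny scale. This is an illustrative, hypothetical mock-up only: the real benchmark uses an R-MAT/Kronecker generator, MPI-parallel kernels, and a precisely specified TEPS formula, while here a small random graph and a simple reached-vertex count stand in for them.

```python
# Toy sketch of the Graph500 pipeline: generate edges, build CSR (kernel 1),
# run BFS producing a parent array (kernel 2), and report a rough GTEPS figure.
import random
import time
from collections import deque

def make_edges(scale, edgefactor=16, seed=1):
    """Stand-in generator: 2^scale vertices, edgefactor * 2^scale random edges
    (the real benchmark uses a Kronecker generator with average degree 16)."""
    rng = random.Random(seed)
    n = 1 << scale
    return [(rng.randrange(n), rng.randrange(n)) for _ in range(edgefactor * n)]

def build_csr(n, edges):
    """Kernel 1: compress the edge list into CSR (row offsets + column indices)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1                    # undirected: store both directions
    offsets = [0] * (n + 1)
    for i in range(n):
        offsets[i + 1] = offsets[i] + deg[i]
    cols = [0] * offsets[n]
    cursor = offsets[:-1].copy()       # next free slot per row
    for u, v in edges:
        cols[cursor[u]] = v; cursor[u] += 1
        cols[cursor[v]] = u; cursor[v] += 1
    return offsets, cols

def bfs(n, offsets, cols, root):
    """Kernel 2: BFS producing a parent array, the output the benchmark validates."""
    parent = [-1] * n
    parent[root] = root
    q = deque([root])
    while q:
        u = q.popleft()
        for v in cols[offsets[u]:offsets[u + 1]]:
            if parent[v] == -1:
                parent[v] = u
                q.append(v)
    return parent

scale = 10                             # far below the "toy" scale of 26
n = 1 << scale
edges = make_edges(scale)
offsets, cols = build_csr(n, edges)
root = 0
t0 = time.perf_counter()
parent = bfs(n, offsets, cols, root)
elapsed = time.perf_counter() - t0
reached = sum(1 for p in parent if p != -1)   # crude proxy for traversed edges
print(f"reached {reached} of {n} vertices in {elapsed:.4f} s")
```

The official metric divides the number of input edges in the traversed component by the BFS time; at real scales the graph no longer fits in cache, which is why the benchmark stresses memory and network rather than arithmetic.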
The reference implementation of the benchmark is provided in several versions. [5]
The implementation strategy that won the championship on the Japanese K computer is described in [6].
According to the June 2023 release of the list, the new Wuhan supercomputer holds the top SSSP result at 19039.1 GTEPS (with Fugaku 4th), while on the BFS list it ranks 2nd with a different, lower GTEPS measurement: [7]
Rank | Country | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|---|
1 | Japan | RIKEN Advanced Institute for Computational Science | Supercomputer Fugaku (Fujitsu A64FX) | 152064 | 7299072 | 42 | 137096 |
2 | China | Wuhan | Kunpeng 920+Tesla A100 | 252 | 6999552 | 40 | 121804.3 |
3 | USA | Frontier | HPE Cray EX235a | 9248 | 8730112 | 40 | 29654.6 |
4 | China | Pengcheng Lab | Pengcheng Cloudbrain-II (Kunpeng 920+Ascend 910) | 488 | 93696 | 40 | 25242.9 |
5 | China | National Supercomputing Center in Wuxi | Sunway TaihuLight (Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
Japan also has a new computer ranked 8th.
According to the November 2022 release of the list: [8]
Rank | Country | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|---|
1 | Japan | RIKEN Advanced Institute for Computational Science | Supercomputer Fugaku (Fujitsu A64FX) | 158976 | 7630848 | 41 | 102955 |
2 | China | Pengcheng Lab | Pengcheng Cloudbrain-II (Kunpeng 920+Ascend 910) | 488 | 93696 | 40 | 25242.9 |
3 | China | National Supercomputing Center in Wuxi | Sunway TaihuLight (Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
4 | Japan | Information Technology Center, University of Tokyo | Wisteria/BDEC-01 (PRIMEHPC FX1000) | 7680 | 368640 | 37 | 16118 |
5 | Japan | Japan Aerospace Exploration Agency | TOKI-SORA (PRIMEHPC FX1000) | 5760 | 276480 | 36 | 10813 |
6 | EU | EuroHPC/CSC | LUMI-C (HPE Cray EX) | 1492 | 190976 | 38 | 8467.71 |
7 | US | Oak Ridge National Laboratory | OLCF Summit (IBM POWER9) | 2048 | 86016 | 40 | 7665.7 |
8 | Germany | Leibniz Rechenzentrum | SuperMUC-NG (ThinkSystem SD530 Xeon Platinum 8174 24C 3.1GHz Intel Omni-Path) | 4096 | 196608 | 39 | 6279.47 |
9 | Germany | Zuse Institute Berlin | Lise (Intel Omni-Path) | 1270 | 121920 | 38 | 5423.94 |
10 | China | National Engineering Research Center for Big Data Technology and System | DepGraph Supernode (DepGraph (+GPU Tesla A100)) | 1 | 128 | 33 | 4623.379 |
According to the June 2016 release of the list: [10]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | Riken Advanced Institute for Computational Science | K computer (Fujitsu custom) | 82944 | 663552 | 40 | 38621.4 |
2 | National Supercomputing Center in Wuxi | Sunway TaihuLight (NRCPC - Sunway MPP) | 40768 | 10599680 | 40 | 23755.7 |
3 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 98304 | 1572864 | 41 | 23751 |
4 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14982 |
5 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
6 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
7 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
8 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | Science and Technology Facilities Council – Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
8 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
According to the June 2014 release of the list: [2]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | RIKEN Advanced Institute for Computational Science | K computer (Fujitsu custom) | 65536 | 524288 | 40 | 17977.1 |
2 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 65536 | 1048576 | 40 | 16599 |
3 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14328 |
4 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
5 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
6 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
7 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Science and Technology Facilities Council - Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
According to the June 2013 release of the list: [11]
Rank | Site | Machine (architecture) | Number of nodes | Number of cores | Problem scale | GTEPS |
---|---|---|---|---|---|---|
1 | Lawrence Livermore National Laboratory | IBM Sequoia (Blue Gene/Q) | 65536 | 1048576 | 40 | 15363 |
2 | Argonne National Laboratory | IBM Mira (Blue Gene/Q) | 49152 | 786432 | 40 | 14328 |
3 | Forschungszentrum Jülich | JUQUEEN (Blue Gene/Q) | 16384 | 262144 | 38 | 5848 |
4 | RIKEN Advanced Institute for Computational Science | K computer (Fujitsu custom) | 65536 | 524288 | 40 | 5524.12 |
5 | CINECA | Fermi (Blue Gene/Q) | 8192 | 131072 | 37 | 2567 |
6 | Changsha, China | Tianhe-2 (NUDT custom) | 8192 | 196608 | 36 | 2061.48 |
7 | CNRS/IDRIS-GENCI | Turing (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Science and Technology Facilities Council - Daresbury Laboratory | Blue Joule (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | University of Edinburgh | DIRAC (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | EDF R&D | Zumbrota (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
7 | Victorian Life Sciences Computation Initiative | Avoca (Blue Gene/Q) | 4096 | 65536 | 36 | 1427 |
A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.
Message Passing Interface (MPI) is a standardized and portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines that are useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.
ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help the maintenance of the United States nuclear arsenal after the 1992 moratorium on nuclear testing.
David A. Bader is a Distinguished Professor and Director of the Institute for Data Science at the New Jersey Institute of Technology. Previously, he served as the Chair of the Georgia Institute of Technology School of Computational Science & Engineering, where he was also a founding professor, and the executive director of High-Performance Computing at the Georgia Tech College of Computing. In 2007, he was named the first director of the Sony Toshiba IBM Center of Competence for the Cell Processor at Georgia Tech.
NAS Parallel Benchmarks (NPB) are a set of benchmarks targeting performance evaluation of highly parallel supercomputers. They are developed and maintained by the NASA Advanced Supercomputing (NAS) Division based at the NASA Ames Research Center. NAS solicits performance results for NPB from all sources.
The Parallel Virtual File System (PVFS) is an open-source parallel file system. A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. PVFS was designed for use in large scale cluster computing. PVFS focuses on high performance access to large data sets. It consists of a server process and a client library, both of which are written entirely of user-level code. A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities. The client library provides for high performance access via the message passing interface (MPI). PVFS is being jointly developed between The Parallel Architecture Research Laboratory at Clemson University and the Mathematics and Computer Science Division at Argonne National Laboratory, and the Ohio Supercomputer Center. PVFS development has been funded by NASA Goddard Space Flight Center, The DOE Office of Science Advanced Scientific Computing Research program, NSF PACI and HECURA programs, and other government and private agencies. PVFS is now known as OrangeFS in its newest development branch.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.
HPC Challenge Benchmark combines several benchmarks to test a number of independent attributes of the performance of high-performance computer (HPC) systems. The project has been co-sponsored by the DARPA High Productivity Computing Systems program, the United States Department of Energy and the National Science Foundation.
The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10^16) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.
The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering.
Pakistan's high-performance supercomputing program started in the mid-to-late 1980s. Supercomputing is a recent area of computer science in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. The indigenous supercomputer program began in the 1980s, when the deployment of Cray supercomputers was initially denied.
Summit or OLCF-4 is a supercomputer developed by IBM for use at Oak Ridge Leadership Computing Facility (OLCF), a facility at the Oak Ridge National Laboratory, capable of 200 petaFLOPS thus making it the 5th fastest supercomputer in the world after Frontier (OLCF-5), Fugaku, LUMI, and Leonardo, with Frontier being the fastest. It held the number 1 position from November 2018 to June 2020. Its current LINPACK benchmark is clocked at 148.6 petaFLOPS.
The HPCG benchmark is a supercomputing benchmark test proposed by Michael Heroux from Sandia National Laboratories, and Jack Dongarra and Piotr Luszczek from the University of Tennessee. It is intended to model the data access patterns of real-world applications such as sparse matrix calculations, thus testing the effect of limitations of the memory subsystem and internal interconnect of the supercomputer on its computing performance. Because it is internally I/O bound, HPCG testing generally achieves only a tiny fraction of the peak FLOPS the computer could theoretically deliver.
Torus fusion (tofu) is a proprietary computer network topology for supercomputers developed by Fujitsu. It is a variant of the torus interconnect. The system has been used in the K computer and the Fugaku supercomputer.
The breadth-first-search algorithm is a way to explore the vertices of a graph layer by layer. It is a basic algorithm in graph theory which can be used as a part of other graph algorithms. For instance, BFS is used by Dinic's algorithm to find maximum flow in a graph. Moreover, BFS is also one of the kernel algorithms in Graph500 benchmark, which is a benchmark for data-intensive supercomputing problems. This article discusses the possibility of speeding up BFS through the use of parallel computing.
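The layer-by-layer structure described above is what parallel BFS implementations exploit: all vertices in one frontier can be expanded concurrently. A minimal level-synchronous sketch (sequential here, with the parallelizable loop marked; the adjacency-dict representation is illustrative, not the benchmark's):

```python
# Level-synchronous BFS: visit vertices layer by layer; the per-frontier
# expansion loop is the part a parallel BFS splits across workers.
def bfs_levels(adj, source):
    level = {source: 0}          # vertex -> BFS depth
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:       # independent iterations: parallelizable
            for v in adj.get(u, ()):
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Small example graph (undirected, as an adjacency dict):
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

In a distributed setting the graph and frontier are partitioned across nodes, and each level boundary becomes a communication step, which is why BFS at Graph500 scales stresses the interconnect.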
Fugaku (Japanese: 富岳) is a petascale supercomputer at the Riken Center for Computational Science in Kobe, Japan. It started development in 2014 as the successor to the K computer and made its debut in 2020. It is named after an alternative name for Mount Fuji.
JUWELS is a supercomputer developed by Atos for Forschungszentrum Jülich, capable of 70.980 petaFLOPS. It replaced the now-disused JUQUEEN supercomputer. The JUWELS Booster Module is ranked as the eighth-fastest supercomputer in the world; it is part of a modular system architecture, and a second, Xeon-based JUWELS Module ranks separately as the 52nd-fastest supercomputer in the world.