Tsubame (supercomputer)

[Image: Networking racks of the TSUBAME 3.0 supercomputer]

Tsubame is a series of supercomputers operated by the Global Scientific Information and Computing Center (GSIC) at the Tokyo Institute of Technology in Japan, designed by Satoshi Matsuoka.

Versions

Tsubame 1.0

The Sun Microsystems-built Tsubame 1.0 began operation in 2006. Achieving 85 TFLOPS, it was the most powerful supercomputer in Japan at the time. [1][2] The system consisted of 655 InfiniBand-connected nodes, each with eight dual-core AMD Opteron 880 and 885 CPUs and 32 GB of memory. [3][4] Tsubame 1.0 also included 600 ClearSpeed X620 Advance accelerator cards. [5]

Tsubame 1.2

In 2008, Tsubame was upgraded with 170 Nvidia Tesla S1070 server racks, adding a total of 680 Tesla T10 GPU processors for GPGPU computing. [1] This increased performance to 170 TFLOPS, making it at the time the second most powerful supercomputer in Japan and the 29th in the world.

Tsubame 2.0

Tsubame 2.0 was built in 2010 by HP and NEC as a replacement for Tsubame 1.0. [2][6] With a peak of 2,288 TFLOPS, it was ranked 5th in the world in June 2011. [7][8] It had 1,400 nodes using six-core Xeon 5600 and eight-core Xeon 7500 processors. The system also included 4,200 Nvidia Tesla M2050 GPGPU compute modules. In total the system had 80.6 TB of DRAM, in addition to 12.7 TB of GDDR memory on the GPU devices. [9]

Tsubame 2.5

Tsubame 2.0 was further upgraded to Tsubame 2.5 in 2014, replacing all of the Nvidia Tesla M2050 GPGPU compute modules with Nvidia Tesla Kepler K20X compute modules. [10][11] This yielded 17.1 PFLOPS of single-precision performance.

Tsubame-KFC

Tsubame-KFC added oil-based liquid cooling to reduce power consumption. [12][13] This allowed the system to achieve a then world-best performance efficiency of 4.5 gigaflops per watt. [14][15][16]
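The efficiency figures used in rankings such as the Green500 are simply the sustained benchmark performance divided by the average power drawn during the run. A minimal sketch of that calculation (the specific numbers below are illustrative, not official Tsubame-KFC benchmark data):

```python
def gflops_per_watt(linpack_gflops: float, avg_power_watts: float) -> float:
    """Efficiency metric: sustained benchmark GFLOPS divided by average power draw."""
    return linpack_gflops / avg_power_watts

# Illustrative figures only: a machine sustaining 125,100 GFLOPS while
# drawing 27,800 W averages 4.5 GFLOPS/W, matching Tsubame-KFC's
# reported efficiency class.
print(gflops_per_watt(125100, 27800))
```

Because the metric rewards power savings as much as raw speed, techniques like Tsubame-KFC's oil immersion cooling improve the score even when peak FLOPS are unchanged.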

Tsubame 3.0

In February 2017, Tokyo Institute of Technology announced a new system, Tsubame 3.0. [17][18] Developed with SGI, it is focused on artificial intelligence and targets 12.2 PFLOPS of double precision performance. The design is reported to use 2,160 Nvidia Tesla P100 GPGPU modules alongside Intel Xeon E5-2680 v4 processors.

Tsubame 3.0 ranked 13th at 8,125 TFLOPS on the November 2017 TOP500 supercomputer ranking. [19] It ranked 1st on the June 2017 Green500 energy efficiency ranking at 14.110 GFLOPS/watt. [20]

References

  1. Toshiaki, Konishi (3 December 2008). "The world's first GPU supercomputer! Tokyo Institute of Technology TSUBAME 1.2 released". ASCII.jp. Retrieved 20 February 2017.
  2. Morgan, Timothy Prickett (31 May 2010). "Tokyo Tech dumps Sun super iron for HP, NEC". The Register. Retrieved 20 February 2017.
  3. Endo, Toshio; Nukada, Akira; Matsuoka, Satoshi; Maruyama, Naoya (May 2010). Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators. pp. 1–8. CiteSeerX 10.1.1.456.3880. doi:10.1109/IPDPS.2010.5470353. ISBN 978-1-4244-6442-5. S2CID 2215916.
  4. Takenouchi, Kensuke; Yokoi, Shintaro; Muroi, Chiashi; Ishida, Junichi; Aranami, Kohei. "Research on Computational Techniques for JMA's NWP Models" (PDF). World Climate Research Program. Retrieved 20 February 2017.
  5. Tanabe, Noriyuki; Ichihashi, Yasuyuki; Nakayama, Hirotaka; Masuda, Nobuyuki; Ito, Tomoyoshi (October 2009). "Speed-up of hologram generation using ClearSpeed Accelerator board". Computer Physics Communications. 180 (10): 1870–1873. Bibcode:2009CoPhC.180.1870T. doi:10.1016/j.cpc.2009.06.001.
  6. "Acquisition of next-generation supercomputer by Tokyo Institute of Technology NEC · HP Union receives order". Global Scientific Information and Computing Center, Tokyo Institute of Technology. Tokyo Institute of Technology. Retrieved 20 February 2017.
  7. HPCWire May 2011 Archived 2011-05-08 at the Wayback Machine
  8. Hui Pan 'Research Initiatives with HP Servers', Gigabit/ATM Newsletter, December 2010, page 11
  9. Feldman, Michael (14 October 2010). "The Second Coming of TSUBAME". HPC Wire. Retrieved 20 February 2017.
  10. "TSUBAME 2.0 Upgraded to TSUBAME 2.5: Aiming Ever Higher". Tokyo Institute of Technology. Tokyo Institute of Technology. Retrieved 20 February 2017.
  11. Brueckner (14 January 2014). "Being Very Green with Tsubame 2.5 Towards 3.0 and Beyond to Exascale". Inside HPC. Retrieved 20 February 2017.
  12. Rath, John (2 July 2014). "Tokyo's Tsubame-KFC Remains World's Most Energy Efficient Supercomputer". Data Center Knowledge. Retrieved 20 February 2017.
  13. Brueckner, Rich (2 December 2015). "Green Revolution Cooling Helps Tsubame-KFC Supercomputer Top the Green500". Inside HPC. Retrieved 20 February 2017.
  14. "Heterogeneous Systems Dominate the Green500". HPCWire. November 20, 2013. Retrieved 28 December 2013.
  15. Millington, George (November 19, 2013). "Japan's Oil-Cooled "KFC" Tsubame Supercomputer May Be Headed for Green500 Greatness". NVidia. Retrieved 28 December 2013.
  16. Rath, John (November 21, 2013). "Submerged Supercomputer Named World's Most Efficient System in Green 500". datacenterknowledge.com. Retrieved 28 December 2013.
  17. Armasu, Lucian (17 February 2017). "Nvidia To Power Japan's 'Fastest AI Supercomputer' This Summer". Tom's Hardware. Retrieved 20 February 2017.
  18. Morgan, Timothy Prickett (17 February 2017). "Japan Keeps Accelerating With Tsubame 3.0 AI Supercomputer". The Next Platform. Retrieved 20 February 2017.
  19. "TOP500 List - November 2017". Retrieved 2 October 2018.
  20. "Green500 List for June 2017". Retrieved 2 October 2018.