Tsubame (supercomputer)

Networking racks of TSUBAME 3.0 supercomputer

Tsubame is a series of supercomputers that operates at the GSIC Center at the Tokyo Institute of Technology in Japan, designed by Satoshi Matsuoka.


Versions

Tsubame 1.0

The Sun Microsystems-built Tsubame 1.0 began operation in 2006, achieving 85 TFLOPS of performance and making it the most powerful supercomputer in Japan at the time. [1] [2] The system consisted of 655 InfiniBand-connected nodes, each with eight dual-core AMD Opteron 880 or 885 CPUs and 32 GB of memory. [3] [4] Tsubame 1.0 also included 600 ClearSpeed X620 Advance accelerator cards. [5]

Tsubame 1.2

In 2008, Tsubame was upgraded with 170 Nvidia Tesla S1070 servers, adding a total of 680 Tesla T10 GPUs for GPGPU computing. [1] This increased performance to 170 TFLOPS, making it at the time the second most powerful supercomputer in Japan and the 29th most powerful in the world.

Tsubame 2.0

Tsubame 2.0 was built in 2010 by HP and NEC as a replacement for Tsubame 1.0. [2] [6] With a peak of 2,288 TFLOPS, it was ranked 5th in the world in June 2011. [7] [8] It had 1,400 nodes using six-core Xeon 5600 and eight-core Xeon 7500 processors, along with 4,200 Nvidia Tesla M2050 GPGPU compute modules. In total the system had 80.6 TB of DRAM, in addition to 12.7 TB of GDDR memory on the GPU devices. [9]
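The peak figure can be roughly reconciled from the component counts. A back-of-the-envelope sketch in Python: the per-GPU peak is Nvidia's published double-precision number for the M2050, while the CPU share is simply inferred as the remainder, so treat this as an approximation rather than the official accounting:

```python
# Rough sanity check of Tsubame 2.0's ~2,288 TFLOPS peak figure.
# The per-device peak is the vendor-published double-precision
# number; the CPU contribution is inferred as the remainder.
M2050_PEAK_TFLOPS = 0.5152          # double precision, per GPU

gpu_peak = 4200 * M2050_PEAK_TFLOPS  # total GPU contribution
cpu_share = 2288 - gpu_peak          # what the Xeon nodes must supply

print(f"GPU contribution: {gpu_peak:.0f} TFLOPS")   # ~2164 TFLOPS
print(f"Implied CPU share: {cpu_share:.0f} TFLOPS")
```

The GPUs thus account for roughly 95% of the system's peak, which is consistent with the machine being described as a GPU-centric design.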

Tsubame 2.5

Tsubame 2.0 was further upgraded to Tsubame 2.5 in 2014, replacing all of the Nvidia Tesla M2050 GPGPU compute modules with Nvidia Tesla K20X (Kepler) modules. [10] [11] This yielded 17.1 PFLOPS of single-precision performance.

Tsubame-KFC

Tsubame-KFC added oil-based liquid immersion cooling to reduce power consumption. [12] [13] This allowed the system to achieve a then world-best efficiency of 4.5 gigaflops per watt. [14] [15] [16]
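The Green500 efficiency metric behind this figure is simply sustained LINPACK throughput divided by measured power draw. A minimal sketch, using illustrative round numbers rather than the actual Tsubame-KFC measurement:

```python
def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Green500-style efficiency: sustained GFLOPS divided by watts."""
    return (rmax_tflops * 1000.0) / (power_kw * 1000.0)

# Illustrative example (not the real measurement): a system sustaining
# 90 TFLOPS while drawing 20 kW achieves 4.5 GFLOPS/W.
print(gflops_per_watt(90.0, 20.0))
```

Because both numerator and denominator scale with machine size, the metric rewards cooling and architectural efficiency rather than raw scale, which is why the modest Tsubame-KFC could outrank far larger systems.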

Tsubame 3.0

In February 2017, the Tokyo Institute of Technology announced a new system, Tsubame 3.0. [17] [18] Developed with SGI, it is focused on artificial intelligence and targets 12.2 PFLOPS of double-precision performance. The design is reported to use 2,160 Nvidia Tesla P100 GPGPU modules alongside Intel Xeon E5-2680 v4 processors.

Tsubame 3.0 ranked 13th at 8,125 TFLOPS on the November 2017 TOP500 supercomputer ranking. [19] It ranked 1st on the June 2017 Green500 energy-efficiency ranking at 14.110 GFLOPS/watt. [20]


References

  1. Toshiaki, Konishi (3 December 2008). "The world's first GPU supercomputer! Tokyo Institute of Technology TSUBAME 1.2 released". ASCII.jp. Retrieved 20 February 2017.
  2. Morgan, Timothy Pricket (31 May 2010). "Tokyo Tech dumps Sun super iron for HP, NEC". The Register. Retrieved 20 February 2017.
  3. Endo, Toshio; Nukada, Akira; Matsuoka, Satoshi; Maruyama, Naoya (May 2010). Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators. pp. 1–8. CiteSeerX 10.1.1.456.3880. doi:10.1109/IPDPS.2010.5470353. ISBN 978-1-4244-6442-5. S2CID 2215916.
  4. Takenouchi, Kensuke; Yokoi, Shintaro; Muroi, Chiashi; Ishida, Junichi; Aranami, Kohei. "Research on Computational Techniques for JMA's NWP Models" (PDF). World Climate Research Program. Retrieved 20 February 2017.
  5. Tanabe, Noriyuki; Ichihashi, Yasuyuki; Nakayama, Hirotaka; Masuda, Nobuyuki; Ito, Tomoyoshi (October 2009). "Speed-up of hologram generation using ClearSpeed Accelerator board". Computer Physics Communications. 180 (10): 1870–1873. Bibcode:2009CoPhC.180.1870T. doi:10.1016/j.cpc.2009.06.001.
  6. "Acquisition of next-generation supercomputer by Tokyo Institute of Technology NEC · HP Union receives order". Global Scientific Information and Computing Center, Tokyo Institute of Technology. Tokyo Institute of Technology. Retrieved 20 February 2017.
  7. HPCWire May 2011 Archived 2011-05-08 at the Wayback Machine
  8. Hui Pan 'Research Initiatives with HP Servers', Gigabit/ATM Newsletter, December 2010, page 11
  9. Feldman, Michael (14 October 2010). "The Second Coming of TSUBAME". HPC Wire. Retrieved 20 February 2017.
  10. "TSUBAME 2.0 Upgraded to TSUBAME 2.5: Aiming Ever Higher". Tokyo Institute of Technology. Tokyo Institute of Technology. Retrieved 20 February 2017.
  11. Brueckner (14 January 2014). "Being Very Green with Tsubame 2.5 Towards 3.0 and Beyond to Exascale". Inside HPC. Retrieved 20 February 2017.
  12. Rath, John (2 July 2014). "Tokyo's Tsubame-KFC Remains World's Most Energy Efficient Supercomputer". Data Center Knowledge. Retrieved 20 February 2017.
  13. Brueckner, Rich (2 December 2015). "Green Revolution Cooling Helps Tsubame-KFC Supercomputer Top the Green500". Inside HPC. Retrieved 20 February 2017.
  14. "Heterogeneous Systems Dominate the Green500". HPCWire. November 20, 2013. Retrieved 28 December 2013.
  15. Millington, George (November 19, 2013). "Japan's Oil-Cooled "KFC" Tsubame Supercomputer May Be Headed for Green500 Greatness". NVidia. Retrieved 28 December 2013.
  16. Rath, John (November 21, 2013). "Submerged Supercomputer Named World's Most Efficient System in Green 500". datacenterknowledge.com. Retrieved 28 December 2013.
  17. Armasu, Lucian (17 February 2017). "Nvidia To Power Japan's 'Fastest AI Supercomputer' This Summer". Tom's Hardware. Retrieved 20 February 2017.
  18. Morgan, Timothy Pricket (17 February 2017). "Japan Keeps Accelerating With Tsubame 3.0 AI Supercomputer". The Next Platform. Retrieved 20 February 2017.
  19. "TOP500 List - November 2017" . Retrieved 2 October 2018.
  20. "Green500 List for June 2017" . Retrieved 2 October 2018.