Fat tree

Figure: A fat tree.
Figure: A 2-level fat tree with 8-port switches.

The fat tree network is a universal network for provably efficient communication.[1] It was invented by Charles E. Leiserson of the Massachusetts Institute of Technology (MIT) in 1985.[1] k-ary n-trees, the type of fat-tree commonly used in high-performance networks, were formalized in 1997.[2]


In an ordinary tree data structure, every branch has the same thickness (bandwidth), regardless of its place in the hierarchy; all branches are "skinny" (low-bandwidth). In a fat tree, branches nearer the top of the hierarchy are "fatter" (thicker) than branches further down the hierarchy. In a telecommunications network, the branches are data links; the varied bandwidth of the data links allows for more efficient and technology-specific use, as the sketch below illustrates.[citation needed]
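
As a rough illustration (a hypothetical example, not drawn from the cited sources), the following Python sketch models a binary fat tree in which each link's bandwidth doubles at every level toward the root, so per-link capacity gets "fatter" up the tree while the aggregate bandwidth crossing each level stays constant. The function name, the three-level depth, and the 1 Gbit/s leaf links are all illustrative assumptions.

```python
# Illustrative sketch of fat-tree link bandwidth (hypothetical parameters):
# a binary fat tree whose upward links double in bandwidth at every level,
# so the total bandwidth crossing each level of the tree is the same.

def binary_fat_tree_levels(levels: int, leaf_link_gbps: float = 1.0):
    """Return (level, links, per_link_gbps, aggregate_gbps) from leaves to root."""
    leaves = 2 ** levels
    summary = []
    for level in range(levels):
        links = leaves // (2 ** level)            # upward links leaving this level
        per_link = leaf_link_gbps * (2 ** level)  # "fatter" links nearer the root
        summary.append((level, links, per_link, links * per_link))
    return summary

if __name__ == "__main__":
    for level, links, per_link, total in binary_fat_tree_levels(levels=3):
        print(f"level {level}: {links} links x {per_link:g} Gbit/s "
              f"= {total:g} Gbit/s aggregate")
```

For three levels the sketch prints 8x1, 4x2, and 2x4 Gbit/s, i.e. the same 8 Gbit/s of aggregate capacity at every level, the usual goal when a fat tree is provisioned for full bisection bandwidth.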

Mesh and hypercube topologies have communication requirements that follow a rigid algorithm and cannot be tailored to specific packaging technologies.[3]

Applications in supercomputers

Supercomputers that use a fat tree network[4] include the two fastest as of late 2018,[5] Summit[6] and Sierra,[7] as well as Tianhe-2,[8] the Meiko Scientific CS-2, Yellowstone, the Earth Simulator, the Cray X2, the Connection Machine CM-5, and various Altix supercomputers.[citation needed]

Mercury Computer Systems applied a variant of the fat tree topology, the hypertree network, to their multicomputers.[citation needed] In this architecture, 2 to 360 compute nodes were arranged in a circuit-switched fat tree network.[citation needed] Each node had local memory that could be mapped by any other node.[vague] Each node in this heterogeneous system could be an Intel i860, a PowerPC, or a group of three SHARC digital signal processors.[citation needed]

The fat tree network was particularly well suited to fast Fourier transform computations, which customers used for such signal processing tasks as radar, sonar, and medical imaging.[citation needed]

In August 2008, a team of computer scientists at the University of California, San Diego (UCSD) published a scalable data center network architecture[9] that uses a topology inspired by the fat tree to realize networks that scale better than earlier hierarchical designs. The architecture uses commodity switches that are cheaper and more power-efficient than high-end modular data center switches.

This topology is actually a special instance of a Clos network, rather than a fat-tree as described above, because the edges near the root are emulated by many links to separate parents instead of a single high-capacity link to a single parent. Nevertheless, many authors continue to call this topology a fat tree.
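
The scaling of this commodity-switch approach can be made concrete with a short sketch. Assuming the standard three-tier construction from identical k-port switches described in the UCSD paper[9] (k pods, each with k/2 edge and k/2 aggregation switches, plus (k/2)^2 core switches), the element counts follow directly from the port count. The Python below is a minimal sketch of that arithmetic; the 48-port example is illustrative.

```python
# Sketch of how a three-tier fat tree built from identical k-port commodity
# switches scales, following the pod/core structure of the UCSD design[9].
# k (the switch port count) must be even.

def fat_tree_sizes(k: int) -> dict:
    """Return element counts for a three-tier fat tree of k-port switches."""
    if k % 2 != 0:
        raise ValueError("switch port count k must be even")
    pods = k                                  # number of pods
    edge_per_pod = k // 2                     # edge switches per pod
    agg_per_pod = k // 2                      # aggregation switches per pod
    core = (k // 2) ** 2                      # core switches
    hosts = pods * edge_per_pod * (k // 2)    # k^3 / 4 hosts in total
    switches = pods * (edge_per_pod + agg_per_pod) + core  # 5k^2 / 4 switches
    return {"pods": pods, "core_switches": core,
            "total_switches": switches, "hosts": hosts}

if __name__ == "__main__":
    # With commodity 48-port switches this works out to 27,648 hosts
    # interconnected by 2,880 switches, all of the same inexpensive type.
    print(fat_tree_sizes(48))
```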

Related Research Articles

Supercomputer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 10¹⁸ FLOPS, so-called exascale supercomputers. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

PARAM is a series of Indian supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC) in Pune. PARAM means "supreme" in the Sanskrit language, whilst also creating an acronym for "PARAllel Machine". As of November 2022, the fastest machine in the series is the PARAM Siddhi-AI, which ranks 63rd in the world, with an Rpeak of 5.267 petaflops.

Quadrics (company)

Quadrics was a supercomputer company formed in 1996 as a joint venture between Alenia Spazio and the technical team from Meiko Scientific. They produced hardware and software for clustering commodity computer systems into massively parallel systems. Their high point was in June 2003, when six out of the ten fastest supercomputers in the world were based on Quadrics' interconnect. They officially closed on June 29, 2009.

ASCI Red

ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help the maintenance of the United States nuclear arsenal after the 1992 moratorium on nuclear testing.

Charles E. Leiserson

Charles Eric Leiserson is a computer scientist and professor at Massachusetts Institute of Technology (M.I.T.). He specializes in the theory of parallel computing and distributed computing.

NEC SX

NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. This computer series is notable for providing the first computer to exceed 1 gigaflop, as well as the fastest supercomputer in the world from 1992 to 1993 and from 2002 to 2004. The current model, as of 2018, is the SX-Aurora TSUBASA.

GPFS is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List. For example, it is the filesystem of the Summit at Oak Ridge National Laboratory which was the #1 fastest supercomputer in the world in the November 2019 Top 500 List. Summit is a 200 Petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. The storage filesystem is called Alpine.

SGI Origin 2000

The SGI Origin 2000 is a family of mid-range and high-end server computers developed and manufactured by Silicon Graphics (SGI). They were introduced in 1996 to succeed the SGI Challenge and POWER Challenge. At the time of introduction, these ran the IRIX operating system, originally version 6.4 and later, 6.5. A variant of the Origin 2000 with graphics capability is known as the Onyx2. An entry-level variant based on the same architecture but with a different hardware implementation is known as the Origin 200. The Origin 2000 was succeeded by the Origin 3000 in July 2000, and was discontinued on June 30, 2002.

Computer cluster

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

Slurm Workload Manager

The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.

K computer

The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10¹⁶) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.

Supercomputing in Japan

Japan operates a number of centers for supercomputing which hold world records in speed, with the K computer being the world's fastest from June 2011 to June 2012, and Fugaku holding the lead from June 2020 until June 2022.

Supercomputer architecture

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Supercomputer operating system

A supercomputer operating system is an operating system intended for supercomputers. Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as fundamental changes have occurred in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been moving away from in-house operating systems and toward some form of Linux, which as of November 2017 runs all the supercomputers on the TOP500 list. In 2021, the top 10 computers ran, for instance, Red Hat Enterprise Linux (RHEL) or a variant of it, or another Linux distribution such as Ubuntu.

A data center is a pool of resources interconnected using a communication network. A data center network (DCN) holds a pivotal role in a data center, as it interconnects all of the data center resources together. DCNs need to be scalable and efficient to connect tens or even hundreds of thousands of servers to handle the growing demands of cloud computing. Today's data centers are constrained by the interconnection network.

In computing, energy proportionality is a measure of the relationship between power consumed in a computer system, and the rate at which useful work is done. If the overall power consumption is proportional to the computer's utilization, then the machine is said to be energy proportional. Equivalently stated, for an idealized energy proportional computer, the overall energy per operation is constant for all possible workloads and operating conditions.

Summit (supercomputer)

Summit or OLCF-4 is a supercomputer developed by IBM for use at Oak Ridge Leadership Computing Facility (OLCF), a facility at the Oak Ridge National Laboratory, United States of America. As of June 2024, it is the 9th fastest supercomputer in the world on the TOP500 list. It held the number 1 position on this list from November 2018 to June 2020. Its current LINPACK benchmark is clocked at 148.6 petaFLOPS.

An AI accelerator, deep learning processor or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.

In the high-performance computing environment, a burst buffer is a fast intermediate storage layer positioned between the front-end computing processes and the back-end storage systems. It bridges the performance gap between the processing speed of the compute nodes and the input/output (I/O) bandwidth of the storage systems. Burst buffers are often built from arrays of high-performance storage devices, such as NVRAM and SSDs. They typically offer one to two orders of magnitude higher I/O bandwidth than the back-end storage systems.

References

  1. Leiserson, Charles E. (October 1985). "Fat-trees: universal networks for hardware-efficient supercomputing" (PDF). IEEE Transactions on Computers. 34 (10): 892–901. doi:10.1109/TC.1985.6312192. S2CID 8927584.
  2. Petrini, Fabrizio (1997). "K-ary n-trees: High performance networks for massively parallel architectures". Proceedings 11th International Parallel Processing Symposium. pp. 87–93. doi:10.1109/IPPS.1997.580853. ISBN 0-8186-7793-7. S2CID 6608892.
  3. Leiserson, Charles E.; Abuhamdeh, Zahi S.; Douglas, David C.; Feynman, Carl R.; Ganmukhi, Mahesh N.; Hill, Jeffrey V.; Daniel Hillis, W.; Kuszmaul, Bradley C.; St. Pierre, Margaret A.; Wells, David S.; Wong, Monica C.; Yang, Shaw-Wen; Zak, Robert (1992). "The Network Architecture of the Connection Machine CM-5". SPAA '92 Proceedings of the fourth annual ACM symposium on Parallel algorithms and architectures. ACM. pp. 272–285. doi:10.1145/140901.141883. ISBN 978-0-89791-483-3. S2CID 6307237.
  4. Yuefan Deng (2013). "3.2.1 Hardware systems: Network Interconnections: Topology". Applied Parallel Computing. World Scientific. p. 25. ISBN 978-981-4307-60-4.
  5. "November 2018 TOP500". TOP500. November 2018. Retrieved 2019-02-11.
  6. "Summit - Oak Ridge National Laboratory's next High Performance Supercomputer". Oak Ridge Leadership Computing Facility. Retrieved 2019-02-11.
  7. Barney, Blaise (2019-01-18). "Using LC's Sierra Systems - Hardware - Mellanox EDR InfiniBand Network - Topology and LC Sierra Configuration". Lawrence Livermore National Laboratory. Retrieved 2019-02-11.
  8. Dongarra, Jack (2013-06-03). "Visit to the National University for Defense Technology Changsha, China" (PDF). Netlib. Retrieved 2013-06-17.
  9. Al-Fares, Mohammad; Loukissas, Alexander; Vahdat, Amin (2008). "A scalable, commodity data center network architecture" (PDF). Proceedings of the ACM SIGCOMM 2008 conference on Data communication. ACM. pp. 63–74. doi:10.1145/1402958.1402967. ISBN 978-1-60558-175-0. S2CID 65842.

Further reading