Flash mob computing

Flash mob computing is the practice of assembling a flash mob computer: a temporary ad hoc computer cluster running specific software to coordinate the individual computers into a single supercomputer. A flash mob computer is distinct from other types of computer clusters in that it is set up and broken down within a single day, or a similarly brief period, and involves many independent computer owners coming together at a central physical location to work on a specific problem and/or social event.

Flash mob computing derives its name from the more general term flash mob, which can mean any activity in which many people, coordinated through virtual communities, come together briefly for a specific task or event. Flash mob computing is a more specific type of flash mob, bringing people and their computers together to work on a single task or event.

History

The first flash mob computer was created on April 3, 2004 at the University of San Francisco using software written at USF called FlashMob (not to be confused with the more general term flash mob).

The event, called FlashMob I, was a success. A call for computers was posted on the technology news website Slashdot, and an article in The New York Times, "Hey, Gang, Let's Make Our Own Supercomputer", brought further attention to the effort. More than 700 computers were brought to the gym at the University of San Francisco and wired to a network donated by Foundry Networks.

At FlashMob I the participants were able to run a benchmark on 256 of the computers and achieved a peak rate of 180 Gflops (billions of floating-point operations per second), though this computation stopped three-quarters of the way through because of a node failure.

The best complete run used 150 computers and resulted in 77 Gflops. FlashMob I ran from a bootable CD-ROM containing Morphix Linux, which was available only for the x86 platform.

Despite these efforts, the project was unable to achieve its original goal of momentarily running a cluster fast enough to enter the (November 2003) Top 500 list of supercomputers. The system would have had to provide at least 402.5 Gflops to match a Chinese cluster of 256 Intel Xeon nodes. For comparison, the fastest supercomputer at the time, the Earth Simulator, provided 35,860 Gflops.
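The gap between a cluster's theoretical peak and its measured benchmark rate, which dominates the numbers above, can be illustrated with simple arithmetic. In the sketch below, the node count is taken from the article, but the clock speed and flops-per-cycle figures are illustrative assumptions for 2004-era commodity x86 hardware, not measured FlashMob values.

```python
def peak_gflops(nodes, ghz, flops_per_cycle):
    """Theoretical peak in Gflops: nodes x clock cycles per second (in GHz)
    x floating-point operations per cycle."""
    return nodes * ghz * flops_per_cycle

# e.g. 256 nodes at an assumed 2.0 GHz doing 2 flops/cycle:
print(peak_gflops(256, 2.0, 2))  # 1024.0 Gflops theoretical peak
```

Measured LINPACK rates fall well below such theoretical peaks, because commodity networks add communication overhead and a single node failure can abort the run, as happened at FlashMob I.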

Creators of flash mob computing

Pat Miller was a research scientist at a national lab and adjunct professor at USF. His class on Do-It-Yourself Supercomputers evolved into FlashMob I from the original idea of every student bringing a commodity CPU or an Xbox to class to make an evanescent cluster at each meeting. Pat worked on all aspects of the FlashMob software.

Greg Benson, USF Associate Professor of Computer Science, invented the name "flash mob computing", and proposed the first idea of wireless flash mob computers. Greg worked on the core infrastructure of the FlashMob run time environment.

John Witchel (Stuyvesant High School '86) was a USF graduate student in computer science during the spring of 2004. After talking to Greg about the issues of networking a stadium of wireless computers, and listening to Pat lecture on what it takes to break the Top 500, John asked the simple question: "Couldn't we just invite people off the street and get enough power to break the Top 500?" And flash mob supercomputing was born. FlashMob I and the FlashMob software were the subject of John's master's thesis.

Related Research Articles

Supercomputer: Type of extremely powerful computer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
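The unit prefixes in the paragraph above are plain powers of ten, so its comparisons can be checked directly; the dictionary below simply restates the standard SI prefixes.

```python
# Standard SI prefixes for FLOPS, as powers of ten.
PREFIX = {"giga": 10**9, "tera": 10**12, "peta": 10**15, "exa": 10**18}

# 100 petaFLOPS expressed in raw FLOPS:
print(100 * PREFIX["peta"])  # 100000000000000000, i.e. 10^17

# How many 100-gigaFLOPS desktops match one 100-petaFLOPS supercomputer:
print((100 * PREFIX["peta"]) // (100 * PREFIX["giga"]))  # 1000000
```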

Beowulf cluster: Type of computing cluster

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.
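The shared-processing idea behind a Beowulf cluster can be sketched in miniature on one machine: split a job into chunks, let independent workers compute partial results, and combine them. The sketch below uses Python's standard library as a stand-in for a real message-passing layer such as MPI; the function and variable names are illustrative, not from any Beowulf toolkit.

```python
from multiprocessing import Pool


def partial_sum(chunk):
    """Work done independently on one 'node': a partial sum of squares."""
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 4  # stands in for the cluster's node count
    step = len(data) // workers
    chunks = [data[i * step:(i + 1) * step] for i in range(workers)]
    # Scatter the chunks to the workers, then reduce the partial results.
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # matches the serial sum(x * x for x in data)
```

On a real Beowulf cluster the workers are separate machines on a local network and the scatter/reduce steps travel over the interconnect, but the division of labor is the same.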

IBM Blue Gene: Series of supercomputers by IBM

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.

PARAM is a series of Indian supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC) in Pune. PARAM means "supreme" in the Sanskrit language, while also forming an acronym for "PARAllel Machine". As of November 2022, the fastest machine in the series is the PARAM Siddhi AI, which ranks 120th in the world, with an Rpeak of 5.267 petaflops.

Cell is a 64-bit multi-core microprocessor microarchitecture that combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.

ASCI Red: Supercomputer

ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help the maintenance of the United States nuclear arsenal after the 1992 moratorium on nuclear testing.

MareNostrum: Supercomputer in the Barcelona Supercomputing Center

MareNostrum is the main supercomputer in the Barcelona Supercomputing Center. It is the most powerful supercomputer in Spain, one of thirteen supercomputers in the Spanish Supercomputing Network and one of the seven supercomputers of the European infrastructure PRACE.

Advanced Simulation and Computing Program

The Advanced Simulation and Computing Program is a super-computing program run by the National Nuclear Security Administration, in order to simulate, test, and maintain the United States nuclear stockpile. The program was created in 1995 in order to support the Stockpile Stewardship Program. The goal of the initiative is to extend the lifetime of the current aging stockpile.

NEC SX

NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. The series is notable for including the first computer to exceed 1 gigaflop, as well as the fastest supercomputer in the world in 1992–1993 and 2002–2004. The current model, as of 2018, is the SX-Aurora TSUBASA.

IBM Scalable POWERparallel: Series of supercomputers by IBM

Scalable POWERparallel (SP) is a series of supercomputers from IBM. SP systems were part of the IBM RISC System/6000 (RS/6000) family, and were also called the RS/6000 SP. The first model, the SP1, was introduced in February 1993, and new models were introduced throughout the 1990s until the RS/6000 was succeeded by eServer pSeries in October 2000. The SP is a distributed memory system, consisting of multiple RS/6000-based nodes interconnected by an IBM-proprietary switch called the High Performance Switch (HPS). The nodes are clustered using software called PSSP, which is mainly written in Perl.

Patrick J. Miller is a computer scientist and high performance parallel applications developer with a Ph.D. in Computer Science from University of California, Davis, in run-time error detection and correction. Until recently he was with Lawrence Livermore National Laboratory.

Computer cluster: Set of computers configured in a distributed computing system

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

SiCortex was a supercomputer manufacturer founded in 2003 and headquartered in Clock Tower Place, Maynard, Massachusetts. On 27 May 2009, HPCwire reported that the company had shut down its operations, laid off most of its staff, and was seeking a buyer for its assets. The Register reported that Gerbsman Partners was hired to sell SiCortex's intellectual properties. While SiCortex had some sales, selling at least 75 prototype supercomputers to several large customers, the company had never produced an operating profit and ran out of venture capital. New funding could not be found.

Coates is a supercomputer installed at Purdue University on July 21, 2009. The high-performance computing cluster is operated by Information Technology at Purdue (ITaP), the university's central information technology organization. ITaP also operates clusters named Steele built in 2008, Rossmann built in 2010, and Hansen and Carter built in 2011. Coates was the largest campus supercomputer in the Big Ten outside a national center when built. It was the first native 10 Gigabit Ethernet (10GigE) cluster to be ranked in the TOP500 and placed 102nd on the June 2010 list.

Supercomputer architecture: Design of high-performance computers

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Supercomputer operating system

A supercomputer operating system is an operating system intended for supercomputers. Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as fundamental changes have occurred in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has moved away from in-house operating systems toward some form of Linux, which ran all of the supercomputers on the TOP500 list in November 2017. As of 2021, the top 10 computers run, for instance, Red Hat Enterprise Linux (RHEL) or a variant of it, or another Linux distribution such as Ubuntu.

Isra Vision Parsytec AG is a company of Isra Vision, founded in 1985 as Parsytec in Aachen, Germany.

Supercomputing in Pakistan

The high-performance supercomputing program started in the mid-to-late 1980s in Pakistan. Supercomputing is a recent area of computer science in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. Development of the indigenous supercomputer program began in the 1980s, when the deployment of Cray supercomputers was initially denied.

The Holland Computing Center, often abbreviated HCC, is the high-performance computing core for the University of Nebraska System. HCC has locations in both the University of Nebraska-Lincoln June and Paul Schorr III Center for Computer Science & Engineering and the University of Nebraska Omaha Peter Kiewit Institute. The center was named after Omaha businessman Richard Holland who donated considerably to the university for the project.

The Sunway TaihuLight is a Chinese supercomputer which, as of November 2021, is ranked fourth in the TOP500 list, with a LINPACK benchmark rating of 93 petaflops. The name is translated as "divine power, the light of Taihu Lake". This is nearly three times as fast as the previous Tianhe-2, which ran at 34 petaflops. As of June 2017, it is ranked as the 16th most energy-efficient supercomputer in the Green500, with an efficiency of 6.1 GFlops/watt. It was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, in Jiangsu province, China.