Connection Machine

Thinking Machines CM-2 at the Computer History Museum in Mountain View, California. One of the face plates has been partly removed to show the circuit boards inside.

A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence (AI) and symbolic processing, but later versions found greater success in the field of computational science.


Origin of idea

Danny Hillis and Sheryl Handler founded Thinking Machines Corporation (TMC) in Waltham, Massachusetts, in 1983, moving in 1984 to Cambridge, Massachusetts. At TMC, Hillis assembled a team to develop what would become the CM-1 Connection Machine, a massively parallel, hypercube-based arrangement of thousands of microprocessors that sprang from his PhD thesis work at MIT in Electrical Engineering and Computer Science (1985). [1] The dissertation won the ACM Distinguished Dissertation prize in 1985, [2] and was presented as a monograph surveying the philosophy, architecture, and software of the first Connection Machine, including information on its data routing between central processing unit (CPU) nodes, its memory handling, and the use of the programming language Lisp in the parallel machine. [1] [3] Very early concepts contemplated just over a million processors connected in a 20-dimensional hypercube, [4] which was later scaled down.

Designs

Thinking Machines Connection Machine models, 1984–1994:
  Mainstream: CM-1, CM-2 (custom architecture); CM-5, CM-5E (RISC-based, SPARC)
  Entry-level: CM-2a
  High-end: CM-200
  Storage expansion: DataVault
External design of the CM-1 and CM-2 model (Computer Museum of America).

Each CM-1 microprocessor has its own 4 kilobits of random-access memory (RAM), and the hypercube-based array of them was designed to perform the same operation on multiple data points simultaneously, i.e., to execute tasks in single instruction, multiple data (SIMD) fashion. The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time.

CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes. Each subcube contains 16 printed circuit boards and a main processor called a sequencer. Each circuit board contains 32 chips, and each chip contains a router, 16 processors, and 16 RAMs. The CM-1 as a whole has a 12-dimensional hypercube-based routing network connecting the 2¹² = 4,096 chips, a main RAM, and an input-output processor (a channel controller). Each chip is connected to a switching device called a nexus. Each router contains five buffers to store data in transit when a clear channel is not available. The engineers had originally calculated that seven buffers per chip would be needed, but this made the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman had previously calculated that five buffers would be enough, using a differential equation involving the average number of 1 bits in an address. They resubmitted the design of the chip with only five buffers, and when they put the machine together, it worked fine.

The CM-1 uses an algorithm for computing logarithms that Feynman had developed at Los Alamos National Laboratory for the Manhattan Project. It is well suited to the CM-1 because it uses only shifting and adding, with a small table shared by all the processors. Feynman also discovered that the CM-1 would compute Feynman diagrams for quantum chromodynamics (QCD) calculations faster than an expensive special-purpose machine developed at Caltech. [5] [6]
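Two of the hardware ideas above lend themselves to a short illustration: hypercube addressing, in which directly wired chips differ in exactly one address bit, and a logarithm built from only shifts, adds, and a small shared table. The following Python sketch is illustrative only, not TMC microcode; the 32-term table size is an assumption chosen for floating-point precision.

```python
import math

# Hypercube routing: the CM-1's 4,096 router chips form a 12-dimensional
# hypercube, so two chips are directly wired together exactly when their
# 12-bit addresses differ in a single bit.
def hypercube_neighbors(address, dims=12):
    return [address ^ (1 << bit) for bit in range(dims)]

# Shift-and-add logarithm in the spirit of Feynman's algorithm: build up
# y as a product of factors (1 + 2**-k) that stays <= x, while summing
# the matching precomputed logarithms from a small shared table.
# Multiplying by (1 + 2**-k) is just a shift and an add in binary hardware.
LOG_TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 33)]

def shift_add_log(x):
    """Approximate ln(x) for x in [1.0, 2.0)."""
    assert 1.0 <= x < 2.0
    y, result = 1.0, 0.0
    for k in range(1, 33):
        candidate = y + y * 2.0 ** -k   # y * (1 + 2^-k)
        if candidate <= x:
            y, result = candidate, result + LOG_TABLE[k - 1]
    return result

print(hypercube_neighbors(0b000000000101)[:3])  # [4, 7, 1]
print(shift_add_log(1.7), math.log(1.7))        # both ~0.5306
```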

To improve its commercial viability, TMC launched the CM-2 in 1987, adding Weitek 3132 floating-point numeric coprocessors and more RAM to the system. Thirty-two of the original one-bit processors shared each numeric processor. The CM-2 can be configured with up to 512 MB of RAM, and a redundant array of independent disks (RAID) hard disk system, called a DataVault, of up to 25 GB. Two later variants of the CM-2 were also produced, the smaller CM-2a with either 4096 or 8192 single-bit processors, and the faster CM-200.

The light panels of FROSTBURG, a CM-5, on display at the National Cryptologic Museum. The panels were used to check the usage of the processing nodes, and to run diagnostics.

Due to its origins in AI research, the software for the single-bit processors of the CM-1/2/200 was influenced by the Lisp programming language, and a version of Common Lisp, *Lisp (pronounced "Star Lisp"), was implemented on the CM-1. Other early languages included Karl Sims' IK and Cliff Lasser's URDU. Much system utility software for the CM-1/2 was written in *Lisp. Many applications for the CM-2, however, were written in C*, a data-parallel superset of ANSI C.
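The data-parallel style these languages expressed can be suggested with a sketch in which one array element stands in for each virtual processor. This is plain Python with NumPy, not actual *Lisp or C* syntax; the variable names and values are invented for illustration.

```python
import numpy as np

# One array element per virtual processor; 65,536 matches a fully
# configured CM-1/CM-2.
values = np.arange(65536, dtype=np.float64)

# A single data-parallel statement: every "processor" applies the same
# operation to its own element simultaneously, the essence of SIMD and
# of the *Lisp / C* programming model.
values = values * 2.0 + 1.0

# Conditional execution: processors where the test fails are simply
# deactivated for the operation, as on the CM hardware.
mask = values > 1000.0
values[mask] = 1000.0

print(values[:5], values.max())  # [1. 3. 5. 7. 9.] 1000.0
```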

With the CM-5, announced in 1991, TMC switched from the CM-2's hypercubic architecture of simple processors to a new and different multiple instruction, multiple data (MIMD) architecture based on a fat tree network of reduced instruction set computing (RISC) SPARC processors. To make programming easier, the machine was designed to simulate a SIMD design. The later CM-5E replaces the SPARC processors with faster SuperSPARCs. A 1,024-processor CM-5 was the fastest computer in the world on the June 1993 TOP500 list, with a peak performance (Rpeak) of 131.0 GFLOPS, and for several years many of the top 10 fastest computers were CM-5s. [7]
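How MIMD hardware can present a SIMD-like model to the programmer can be sketched with ordinary threads: each node runs the same program on its own data, and a barrier after every step restores SIMD-style lockstep. This is a minimal Python sketch under a made-up four-node configuration, not CM-5 system software.

```python
import threading

N = 4                        # stand-in for CM-5 processing nodes (illustrative)
barrier = threading.Barrier(N)
data = [1.0, 2.0, 3.0, 4.0]  # one value per node (illustrative)

def node(rank):
    # Every node runs this same program on its own element (SPMD);
    # waiting at a barrier after each step recreates the lockstep
    # behavior of a SIMD machine on MIMD hardware.
    data[rank] = data[rank] * 2.0   # step 1, executed by all nodes
    barrier.wait()
    data[rank] = data[rank] + 1.0   # step 2, executed by all nodes
    barrier.wait()

threads = [threading.Thread(target=node, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)  # [3.0, 5.0, 7.0, 9.0]
```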

Visual design

The CM-5 LED panels could show randomly generated moving patterns that served purely as eye candy, as seen in Jurassic Park.

Connection Machines were noted for their striking visual design. The CM-1 and CM-2 design teams were led by Tamiko Thiel. [8] [9] The physical form of the CM-1, CM-2, and CM-200 chassis was a cube of cubes, referencing the machine's internal 12-dimensional hypercube network, with red light-emitting diodes (LEDs), by default indicating processor status, visible through the doors of each cube.

By default, when a processor is executing an instruction, its LED is on. In a SIMD program, the goal is to have as many processors as possible working on the program at the same time, indicated by all LEDs being steadily on. Those unfamiliar with the use of the LEDs wanted to see them blink, or even spell out messages to visitors. The result is that finished programs often had superfluous operations just to blink the LEDs.

The CM-5, in plan view, had a staircase-like shape, with large panels of red blinking LEDs. Prominent sculptor-architect Maya Lin contributed to the CM-5 design. [10]

Exhibits

The very first CM-1 is on permanent display at the Computer History Museum in Mountain View, California, which also holds two other CM-1s and a CM-5. [11] Other Connection Machines survive in the collections of the Museum of Modern Art in New York [12] and the Living Computers: Museum + Labs in Seattle (CM-2s with LED grids simulating the processor status LEDs), as well as in the Smithsonian Institution's National Museum of American History, the Computer Museum of America in Roswell, Georgia, [13] and the Swedish National Museum of Science and Technology (Tekniska Museet) in Stockholm. [14]

A CM-5 was featured in the film Jurassic Park in the control room for the island (instead of a Cray X-MP supercomputer as in the novel). Two banks could be seen in the control room: one bank of four units, and a single unit off to the right of the set. [15]

The computer mainframes in Fallout 3 were heavily inspired by the CM-5. [16]

Cyberpunk 2077 features numerous CM-1/CM-2 style units in various portions of the game.

Related Research Articles

Supercomputer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Single instruction, multiple data

Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD may be implemented internally in hardware and may be exposed through an instruction set architecture (ISA), but it should not be confused with an ISA itself. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.

Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of the time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were eventually acquired by Sun Microsystems.

nCUBE was a series of parallel computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for its machines, but used server-class chips manufactured by a third party in massively parallel hardware deployments, primarily for the purposes of on-demand video.

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Multiple instruction, multiple data

In computing, multiple instruction, multiple data (MIMD) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data.

Danny Hillis

William Daniel Hillis is an American inventor, entrepreneur, and computer scientist, who pioneered parallel computers and their use in artificial intelligence. He founded Thinking Machines Corporation, a parallel supercomputer manufacturer, and subsequently was Vice President of Research and Disney Fellow at Walt Disney Imagineering.

Fat tree

The fat tree network is a universal network for provably efficient communication. It was invented by Charles E. Leiserson of the Massachusetts Institute of Technology in 1985. k-ary n-trees, the type of fat-trees commonly used in most high-performance networks, were initially formalized in 1997.

ILLIAC was a series of supercomputers built at a variety of locations, some at the University of Illinois at Urbana–Champaign. In all, five computers were built in this series between 1951 and 1974. Some more modern projects also use the name.

The Cray-3/SSS was a pioneering massively parallel supercomputer project that bonded a two-processor Cray-3 to a new SIMD processing unit based entirely in the computer's main memory. It was later considered as an add-on for the Cray T90 series in the form of the T94/SSS, but there is no evidence this was ever built.

NEC SX

NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. The series is notable for including the first computer to exceed 1 gigaflops, as well as the fastest supercomputer in the world in 1992–1993 and 2002–2004. The current model, as of 2018, is the SX-Aurora TSUBASA.

*Lisp is a programming language, a dialect of the language Lisp. It was conceived of in 1985 by two employees of the Thinking Machines Corporation, Cliff Lasser and Steve Omohundro, as a way to provide an efficient yet high-level language for programming the nascent Connection Machine (CM).

MasPar

MasPar Computer Corporation was a minisupercomputer vendor that was founded in 1987 by Jeff Kalb. The company was based in Sunnyvale, California.

Goodyear MPP

The Goodyear Massively Parallel Processor (MPP) was a massively parallel processing supercomputer built by Goodyear Aerospace for the NASA Goddard Space Flight Center. It was designed to deliver enormous computational power at lower cost than other existing supercomputer architectures, by using thousands of simple processing elements, rather than one or a few highly complex CPUs. Development of the MPP began circa 1979; it was delivered in May 1983, and was in general use from 1985 until 1991.

The Caltech Cosmic Cube was a parallel computer, developed by Charles Seitz and Geoffrey C. Fox from 1981 onward. It was the first working hypercube built.

In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers x₀, x₁, x₂, … is a second sequence of numbers y₀, y₁, y₂, …, the sums of prefixes of the input sequence: y₀ = x₀, y₁ = x₀ + x₁, y₂ = x₀ + x₁ + x₂, and so on.
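Scans of this kind were an important data-parallel primitive on the Connection Machine. Below is a minimal Python sketch of the logarithmic-depth scheme commonly associated with Hillis and Steele, written in plain Python rather than *Lisp; the list comprehension stands in for all processors acting at once.

```python
def hillis_steele_scan(xs):
    """Inclusive prefix sum via the data-parallel (Hillis-Steele) scheme:
    in each round, every element adds in the value 2**step positions to
    its left, so n processors finish in O(log n) rounds."""
    ys = list(xs)
    step = 1
    while step < len(ys):
        # All "processors" update simultaneously from the previous round.
        ys = [ys[i] + (ys[i - step] if i >= step else 0)
              for i in range(len(ys))]
        step *= 2
    return ys

print(hillis_steele_scan([1, 2, 3, 4]))  # [1, 3, 6, 10]
```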

Intel iPSC

The Intel Personal SuperComputer was a product line of parallel computers in the 1980s and 1990s. The iPSC/1 was superseded by the Intel iPSC/2, and then the Intel iPSC/860.

The QCDOC is a supercomputer technology focusing on using relatively cheap low power processing elements to produce a massively parallel machine. The machine is custom-made to solve small but extremely demanding problems in the field of quantum physics.

Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores. Manycore processors are used extensively in embedded computers and high-performance computing.

Tamiko Thiel

Tamiko Thiel is an American artist, known for her digital art. Her work often explores "the interplay of place, space, the body and cultural identity," and uses augmented reality (AR) as her platform. Thiel is based in Munich, Germany.

References

  1. Hillis, W. Daniel (1986). The Connection Machine. MIT Press. ISBN 0262081571.
  2. "William Daniel Hillis - Award Winner". ACM Awards. Retrieved 30 April 2015.
  3. Kahle, Brewster; Hillis, W. Daniel (1989). The Connection Machine Model CM-1 Architecture (Technical report). Cambridge, MA: Thinking Machines Corp. 7 pp. Retrieved 25 April 2015.
  4. Hillis, W. Daniel (1989a). "Richard Feynman and the Connection Machine". Physics Today. 42 (2): 78. Bibcode:1989PhT....42b..78H. doi:10.1063/1.881196. Retrieved 30 June 2021.
  5. Hillis, W. Daniel (1989b). "Richard Feynman and The Connection Machine". Physics Today. Institute of Physics. 42 (2): 78–83. Bibcode:1989PhT....42b..78H. doi:10.1063/1.881196. Archived from the original on 28 July 2009.
  6. Hillis 1989a: text of Hillis's Physics Today article on Feynman and the Connection Machine; also videos of Hillis, "How I met Feynman" and "Feynman's last days".
  7. "November 1993". www.top500.org. Retrieved 16 January 2015.
  8. Design Issues, Vol. 10, No. 1, Spring 1994. ISSN 0747-9360. MIT Press, Cambridge, MA.
  9. Thiel, Tamiko (Spring 1994). "The Design of the Connection Machine". Design Issues. 10 (1). Retrieved 16 January 2015.
  10. "Bloodless Beige Boxes: The Story of an Artist and a Thinking Machine". IT History Society. 2 September 2014. Retrieved 16 January 2015.
  11. "Computer History Museum, Catalog Search Connection Machine supercomputer". Retrieved 16 August 2019.
  12. "Museum of Modern Art, CM-2 Supercomputer". Retrieved 16 August 2019.
  13. "Computer Museum of America". Retrieved 16 August 2019.
  14. "Swedish National Museum of Science and Technology, Parallelldator". Retrieved 16 August 2019.
  15. Movie Quotes Database.
  16. Linus Tech Tips.


Records
Thinking Machines CM-5/1024: world's most powerful supercomputer, June 1993
Preceded by: NEC SX-3/44 (20.0 gigaflops)
Succeeded by: Numerical Wind Tunnel (124.0 gigaflops)