Amorphous computing

Amorphous computing refers to computational systems that use very large numbers of identical, parallel processors, each having limited computational ability and only local interactions. The term amorphous computing was coined at MIT in 1996 in the paper "Amorphous Computing Manifesto" by Abelson, Knight, Sussman, et al.

Examples of naturally occurring amorphous computations can be found in many fields, including developmental biology (the development of multicellular organisms from a single cell), molecular biology (the organization of sub-cellular compartments and intra-cell signaling), neural networks, and chemical engineering (non-equilibrium systems). The study of amorphous computation is hardware agnostic: it is not concerned with the physical substrate (biological, electronic, nanotech, etc.), but rather with the characterization of amorphous algorithms as abstractions, with the goal of both understanding existing natural examples and engineering novel systems.

Amorphous computers tend to have many of the following properties:

  * They are built from vast numbers of redundant, potentially faulty, massively parallel devices.
  * Individual devices have limited memory and limited computational ability.
  * Devices operate asynchronously, without a global clock.
  * Devices have no a priori knowledge of their position or orientation.
  * Devices communicate only locally, with nearby neighbors.
  * Useful behavior emerges globally and must tolerate the failure or misbehavior of individual devices.
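
A minimal simulation sketch may make this model concrete (it illustrates the general idea only and is not code from any of the papers below; the names Particle, neighbors, and step are invented for the example): many identical devices are scattered at random, each sees only the devices within a small radius, and devices update in an arbitrary order.

```python
import math
import random

class Particle:
    """One tiny processor: small local memory, no global knowledge."""
    def __init__(self, x, y):
        self.x, self.y = x, y      # position is used only to decide who is "nearby"
        self.state = {}            # limited local memory
        self.inbox = []            # messages received from neighbors

def neighbors(p, particles, radius=0.1):
    """Local communication: a particle only sees others within `radius`."""
    return [q for q in particles
            if q is not p and math.hypot(p.x - q.x, p.y - q.y) <= radius]

def step(particles, update):
    """Asynchronous update: particles act one at a time, in random order."""
    for p in random.sample(particles, len(particles)):
        update(p, neighbors(p, particles))

# 500 identical, randomly placed particles form the "amorphous medium".
particles = [Particle(random.random(), random.random()) for _ in range(500)]

# Example local rule: each particle records how crowded its neighborhood is.
step(particles, lambda p, nbrs: p.state.update(crowding=len(nbrs)))
```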

Algorithms, tools, and patterns

(Some of these algorithms have no known names. Where a name is not known, a descriptive one is given.)

Researchers and labs

See also

Documents

  1. The Amorphous Computing Home Page
    A collection of papers and links at the MIT AI lab
  2. Amorphous Computing (Communications of the ACM, May 2000)
    A review article showing examples from Coore's Growing Point Language as well as patterns created from Weiss's rule triggering language.
  3. "Amorphous computing in the presence of stochastic disturbances"
    A paper investigating the ability of Amorphous computers to deal with failing components.
  4. Amorphous Computing Slides from DARPA talk in 1998
    An overview of ideas and proposals for implementations
  5. Amorphous and Cellular Computing PPT from 2002 NASA Lecture
    Almost the same as above, in PPT format
  6. Infrastructure for Engineered Emergence on Sensor/Actuator Networks, Beal and Bachrach, 2006.
    An amorphous computing language called "Proto".
  7. Self-repairing Topological Patterns, Clement and Nagpal.
    Algorithms for self-repairing and self-maintaining lines.
  8. Robust Methods of Amorphous Synchronization, Joshua Grochow
    Methods for inducing global temporal synchronization.
  9. Programmable Self-Assembly: Constructing Global Shape Using Biologically-Inspired Local Interactions and Origami Mathematics, with associated slides, Nagpal PhD thesis
    A language to compile local-interaction instructions from a high-level description of an origami-like folded structure.
  10. Towards a Programmable Material, Nagpal, with associated slides
    Similar in outline to the previous paper.
  11. Self-Healing Structures in Amorphous Computing, Zucker
    Methods for detecting and maintaining topologies inspired by biological regeneration.
  12. Resilient serial execution on amorphous machines, Sutherland Master's Thesis
    A language for running serial processes on amorphous computers
  13. Paradigms for Structure in an Amorphous Computer, Coore, Nagpal, and Weiss, 1997
    Techniques for creating hierarchical order in amorphous computers.
  14. Organizing a Global Coordinate System from Local Information on an Amorphous Computer, Nagpal, 1999.
    Techniques for creating coordinate systems by gradient formation, with an analysis of their precision limits; a minimal gradient-formation sketch appears after this list.
  15. Amorphous Computing: examples, mathematics and theory, W. Richard Stark, 2013.
    This paper presents nearly twenty examples ranging from simple to complex, uses standard mathematical tools to prove theorems and compute expected behavior, identifies and explores four programming styles, proves three uncomputability results, and sketches the computational foundations of a complex, dynamic intelligence system.
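
Many of the documents above rely on gradient formation, the primitive behind the coordinate systems of document 14: a source broadcasts a count that grows by one hop per retransmission, so every device learns its approximate distance to the source. The following is a generic hop-count sketch on a random network of locally connected devices, not code from any of the papers; all identifiers are invented for the illustration.

```python
import math
import random
from collections import deque

def make_network(n=500, radius=0.08, seed=1):
    """Random unit-square network: devices are points, and two devices are
    neighbors when they lie within communication `radius` of each other."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n)]
    return [[j for j, q in enumerate(points)
             if j != i and math.dist(p, q) <= radius]
            for i, p in enumerate(points)]

def gradient(nbrs, source):
    """Hop-count gradient: each device stores the fewest hops to the source.
    The breadth-first wave mimics repeated local 'my value + 1' broadcasts;
    devices the wave never reaches simply store no value."""
    hops = {source: 0}
    frontier = deque([source])
    while frontier:
        i = frontier.popleft()
        for j in nbrs[i]:
            if j not in hops:              # the first message to arrive wins
                hops[j] = hops[i] + 1
                frontier.append(j)
    return hops

nbrs = make_network()
hops = gradient(nbrs, source=0)
print(max(hops.values()))   # farthest reached device, in hops from the source
# Hop counts approximate geometric distance; gradients from a few anchor
# devices can be combined into rough coordinates, as in document 14.
```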

Related Research Articles

<span class="mw-page-title-main">Client–server model</span> Distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
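
A minimal sketch of this structure using Python's standard socket module (the address, port, and message format are arbitrary choices for the example): the server listens and awaits a request, while the client initiates the session and asks for content.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050    # arbitrary local address for this example

srv = socket.socket()
srv.bind((HOST, PORT))
srv.listen()                       # the server now awaits incoming requests

def serve_one():
    conn, _ = srv.accept()         # a client has initiated a session
    with conn:
        request = conn.recv(1024)              # e.g. b"GET greeting"
        conn.sendall(b"hello, " + request)     # share the requested resource

threading.Thread(target=serve_one).start()

# The client initiates communication and requests content from the server.
with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"GET greeting")
    print(cli.recv(1024))          # b'hello, GET greeting'
srv.close()
```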

A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Distributed computing is the field of computer science that studies distributed systems.

<span class="mw-page-title-main">Encryption</span> Process of converting plaintext to ciphertext

In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
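
As a toy illustration of the vocabulary only (the XOR scheme below shows the roles of plaintext, key, and ciphertext, but it is not a secure cipher):

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR each byte with the key, repeating the key as needed.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                       # secret shared by the two parties
plaintext = b"attack at dawn"
ciphertext = xor_bytes(plaintext, key)     # unintelligible without the key
recovered = xor_bytes(ciphertext, key)     # XOR is its own inverse
assert recovered == plaintext
```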

<span class="mw-page-title-main">Gerald Jay Sussman</span> American computer scientist

Gerald Jay Sussman is the Panasonic Professor of Electrical Engineering at the Massachusetts Institute of Technology (MIT). He received his S.B. and Ph.D. degrees in mathematics from MIT in 1968 and 1973 respectively. He has been involved in artificial intelligence (AI) research at MIT since 1964. His research has centered on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing it to provide more effective methods of science and engineering education. Sussman has also worked in computer languages, in computer architecture and in Very Large Scale Integration (VLSI) design.

In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm that can perform multiple operations at the same time. It has been a tradition of computer science to describe serial algorithms in terms of abstract machine models, most often the random-access machine. Similarly, many researchers describe parallel algorithms in terms of a shared-memory parallel abstract machine, the parallel random-access machine (PRAM).
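
A rough sketch of the PRAM way of thinking, written here as ordinary sequential Python: summing sixteen numbers by repeatedly combining disjoint pairs takes four "parallel steps", because a PRAM could assign one processor to each pair.

```python
def pram_sum(values):
    """Tree reduction: each 'parallel step' combines disjoint pairs.
    A PRAM would assign one processor per pair, giving O(log n) steps."""
    vals = list(values)
    while len(vals) > 1:
        # One parallel step: all of these pairs could be combined at once.
        pairs = [vals[i:i + 2] for i in range(0, len(vals), 2)]
        vals = [sum(pair) for pair in pairs]
    return vals[0]

print(pram_sum(range(16)))   # 120, computed in 4 "parallel steps"
```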

<span class="mw-page-title-main">Evolutionary computation</span> Trial and error problem solvers with a metaheuristic or stochastic optimization character

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
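
A minimal example of this population-based, trial-and-error character (a toy setup invented for illustration: mutate candidate solutions at random and keep the fittest):

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2            # toy objective, maximized at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    # Variation: each survivor produces a randomly mutated offspring.
    offspring = [x + random.gauss(0, 0.5) for x in population]
    # Selection: keep the fittest half of parents and children combined.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print(round(max(population, key=fitness), 2))   # close to 3.0
```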

<span class="mw-page-title-main">Theoretical computer science</span> Subfield of computer science and mathematics

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.

<span class="mw-page-title-main">Ehud Shapiro</span> Israeli computer scientist

Ehud Shapiro is a multi-disciplinary scientist, artist, entrepreneur and Professor of Computer Science and Biology at the Weizmann Institute of Science. He has made fundamental contributions to many scientific disciplines. Shapiro was also an Internet pioneer, a successful Internet entrepreneur, and a pioneer and proponent of e-democracy. Shapiro is the founder of the Ba Rock Band and conceived its original artistic program. He is a winner of two ERC Advanced Grants.

<span class="mw-page-title-main">Concurrency (computer science)</span> Ability to execute a task in a non-serial manner

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially-ordered components or units of computation.

Social computing is an area of computer science that is concerned with the intersection of social behavior and computational systems. It is based on creating or recreating social conventions and social contexts through the use of software and technology. Thus, blogs, email, instant messaging, social network services, wikis, social bookmarking and other instances of what is often called social software illustrate ideas from social computing.

<span class="mw-page-title-main">Neural network</span> Structure in biology and artificial intelligence

A neural network is a network or circuit of biological neurons or, in a modern sense, an artificial neural network composed of artificial neurons or nodes. Thus, a neural network is either a biological neural network, made up of biological neurons, or an artificial neural network, used for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled in artificial neural networks as weights between nodes. A positive weight reflects an excitatory connection, while a negative weight reflects an inhibitory one. All inputs are modified by a weight and summed; this activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output; for example, an acceptable range of output is usually between 0 and 1, or between −1 and 1.
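
In code, the weighted sum and activation function described above reduce to a few lines (a plain-Python sketch of one artificial neuron, using a sigmoid to keep the output between 0 and 1):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs (linear combination) passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))        # output squashed into (0, 1)

# Positive weights act as excitatory connections, negative weights as inhibitory.
print(neuron([0.5, 1.0, 0.25], [0.8, -0.4, 1.2]))
```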

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a sequential language, as an extension to an existing language, or as an entirely new language.
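
The "library invoked from a sequential language" form is probably the most familiar in Python: the program below is ordinary sequential code, and the standard multiprocessing library supplies the parallel execution model.

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # Sequential Python code; the Pool maps the work onto worker processes.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```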

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

Indeterminacy in concurrent computation is concerned with the effects of nondeterminism in concurrent systems. Indeterminacy is becoming increasingly important because of the massive increase in concurrency brought about by networking and the advent of many-core computer architectures. These computer systems make use of arbiters, which give rise to indeterminacy.
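
A small demonstration of the effect (two threads update a shared counter without synchronization; because the interleaving of reads and writes is arbitrary, the result varies from run to run and is usually less than the deterministic answer):

```python
import threading
import time

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        current = counter        # read the shared value
        time.sleep(0)            # invite a thread switch between read and write
        counter = current + 1    # write back, possibly losing another thread's update

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministic execution would give 2000; the arbitrary interleaving
# usually gives less, and a different value on each run.
print(counter)
```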

<span class="mw-page-title-main">Unconventional computing</span> Computing by a wide range of new or unusual method

Unconventional computing is computing by any of a wide range of new or unusual methods. It is also known as alternative computing.

<span class="mw-page-title-main">Spiking neural network</span>

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
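
A minimal sketch of this threshold behaviour (a leaky integrate-and-fire neuron, with parameters invented for the example): the membrane potential accumulates input, leaks toward rest each step, and a spike is emitted only when the threshold is crossed.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Leaky integrate-and-fire: the potential accumulates input, leaks each step,
    and a spike (1) is emitted only when the threshold is crossed."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # fire...
            potential = reset                    # ...and reset the membrane
        else:
            spikes.append(0)                     # silent this step
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.2, 0.9, 0.1]))   # [0, 0, 1, 0, 0, 1, 0]
```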

Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking was popularized by Edward de Bono as a technique for generating creative ideas and solving problems. Similarly, applying lateral-computing techniques to a problem can make it much easier to arrive at a computationally inexpensive, easy-to-implement, efficient, innovative, or unconventional solution.

Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.

Radhika Nagpal is an American computer scientist and researcher in the fields of self-organising computer systems, biologically-inspired robotics, and biological multi-agent systems. She is the Fred Kavli Professor of Computer Science at Harvard University and the Harvard School of Engineering and Applied Sciences. She is also a Core Faculty Member of the Harvard Wyss Institute for Biologically Inspired Engineering. In 2017, Nagpal co-founded Root Robotics, an educational robotics company that creates opportunities for people with no programming experience to learn to code.

Robotic materials are composite materials that combine sensing, actuation, computation, and communication in a repeatable or amorphous pattern. Robotic materials can be considered computational metamaterials in that they extend the original definition of a metamaterial as "macroscopic composites having a man-made, three-dimensional, periodic cellular architecture designed to produce an optimized combination, not available in nature, of two or more responses to specific excitation" by being fully programmable. That is, unlike in a conventional metamaterial, the relationship between a specific excitation and response is governed by sensing, actuation, and a computer program that implements the desired logic.