Amorphous computing

Amorphous computing refers to computational systems that use very large numbers of identical processors, each with limited computational ability, operating in parallel and interacting only locally. The term amorphous computing was coined at MIT in 1996 in a paper entitled "Amorphous Computing Manifesto" by Abelson, Knight, Sussman, et al.

Examples of naturally occurring amorphous computations can be found in many fields, such as developmental biology (the development of multicellular organisms from a single cell), molecular biology (the organization of sub-cellular compartments and intra-cell signaling), neural networks, and chemical engineering (non-equilibrium systems). The study of amorphous computation is hardware agnostic—it is not concerned with the physical substrate (biological, electronic, nanotech, etc.) but rather with the characterization of amorphous algorithms as abstractions with the goal of both understanding existing natural examples and engineering novel systems.

Amorphous computers tend to have many of the following properties: they are built from massively parallel, redundant, and potentially faulty devices; individual devices have limited memory and computational ability, operate asynchronously, have no a priori knowledge of their position, and communicate only locally; and the system as a whole exhibits emergent, self-organizing behavior and tolerates the failure of individual devices.
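
As a rough illustration of these properties, the sketch below (in Python; names and parameters such as COMM_RADIUS and NUM_DEVICES are illustrative assumptions, not taken from any published system) simulates hop-count gradient propagation, a basic amorphous primitive: identical devices scattered at random repeatedly relax their hop counts using only values heard from neighbours within a fixed communication radius.

```python
import random
import math

COMM_RADIUS = 0.15      # local communication range (assumed for the sketch)
NUM_DEVICES = 500       # number of identical, randomly placed devices

# Scatter identical devices uniformly at random in the unit square.
devices = [{"x": random.random(), "y": random.random(), "hops": math.inf}
           for _ in range(NUM_DEVICES)]
devices[0]["hops"] = 0  # an arbitrary device acts as the gradient source

def neighbours(i):
    """Indices of devices within communication range of device i (local-only)."""
    xi, yi = devices[i]["x"], devices[i]["y"]
    return [j for j, d in enumerate(devices)
            if j != i and math.hypot(d["x"] - xi, d["y"] - yi) <= COMM_RADIUS]

# Each round, every device relaxes its hop count toward one more than the
# smallest value heard from a neighbour; the visiting order is randomized
# to mimic asynchronous, unsynchronized devices.
changed = True
while changed:
    changed = False
    for i in random.sample(range(NUM_DEVICES), NUM_DEVICES):
        best = min((devices[j]["hops"] for j in neighbours(i)), default=math.inf)
        if best + 1 < devices[i]["hops"]:
            devices[i]["hops"] = best + 1
            changed = True

print("max hop count:", max(d["hops"] for d in devices if d["hops"] < math.inf))
```

Because every device runs the same rule and uses only local information, the gradient re-forms if devices are added, removed, or fail, which is the kind of robustness amorphous algorithms aim for.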

Algorithms, tools, and patterns

(Some of these algorithms have no known names. Where a name is not known, a descriptive one is given.)

Researchers and labs

See also

Documents

  1. The Amorphous Computing Home Page
    A collection of papers and links at the MIT AI lab
  2. Amorphous Computing (Communications of the ACM, May 2000)
    A review article showing examples from Coore's Growing Point Language as well as patterns created from Weiss's rule triggering language.
  3. "Amorphous computing in the presence of stochastic disturbances"
    A paper investigating the ability of Amorphous computers to deal with failing components.
  4. Amorphous Computing Slides from DARPA talk in 1998
    An overview of ideas and proposals for implementations
  5. Amorphous and Cellular Computing PPT from 2002 NASA Lecture
    Almost the same as above, in PPT format
  6. Infrastructure for Engineered Emergence on Sensor/Actuator Networks, Beal and Bachrach, 2006.
    An amorphous computing language called "Proto".
  7. Self-repairing Topological Patterns, Clement and Nagpal.
    Algorithms for self-repairing and self-maintaining line structures.
  8. Robust Methods of Amorphous Synchronization, Joshua Grochow
    Methods for inducing global temporal synchronization.
  9. Programmable Self-Assembly: Constructing Global Shape Using Biologically-Inspired Local Interactions and Origami Mathematics (and associated slides), Nagpal PhD Thesis
    A language to compile local-interaction instructions from a high-level description of an origami-like folded structure.
  10. Towards a Programmable Material (and associated slides), Nagpal
    Similar in outline to the previous paper.
  11. Self-Healing Structures in Amorphous Computing, Zucker
    Methods for detecting and maintaining topologies inspired by biological regeneration.
  12. Resilient serial execution on amorphous machines, Sutherland Master's Thesis
    A language for running serial processes on amorphous computers
  13. Paradigms for Structure in an Amorphous Computer, 1997 Coore, Nagpal, Weiss
    Techniques for creating hierarchical order in amorphous computers.
  14. Organizing a Global Coordinate System from Local Information on an Amorphous Computer, 1999 Nagpal.
    Techniques for creating coordinate systems by gradient formation, with an analysis of precision limits (see the sketch after this list).
  15. Amorphous Computing: examples, mathematics and theory, 2013 W Richard Stark.
    This paper presents nearly twenty examples ranging from simple to complex; uses standard mathematical tools to prove theorems and compute expected behavior; identifies and explores four programming styles; proves three uncomputability results; and sketches the computational foundations of a complex, dynamic intelligence system.
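
As a rough companion to item 14 above, the following hypothetical sketch estimates a device's coordinates from hop-count gradients emitted by three anchor devices: hop counts are converted to crude distances and a linearized trilateration system is solved. The anchor positions, the COMM_RADIUS distance scale, and the least-squares step are assumptions made for illustration, not Nagpal's actual construction.

```python
import numpy as np

COMM_RADIUS = 0.15  # assumed per-hop distance scale

# Known positions of three anchor (gradient-source) devices, assumed for illustration.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def estimate_position(hops):
    """Estimate a device's (x, y) from hop counts to the three anchors.

    Each hop count is converted to a crude distance, then the classic
    linearized trilateration system is solved in the least-squares sense.
    """
    d = np.asarray(hops, dtype=float) * COMM_RADIUS          # rough distances
    # Subtract the first circle equation from the others to linearize.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

print(estimate_position([4, 5, 3]))  # hop counts measured by one hypothetical device
```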

Related Research Articles

<span class="mw-page-title-main">Algorithm</span> Sequence of operations for a task

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes and deduce valid inferences.

<span class="mw-page-title-main">Client–server model</span> Distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
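
A minimal sketch of the client–server pattern, assuming a local loopback address and an echo service invented for this example: the server owns the shared service and waits, while the client initiates the session and requests it.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # assumed local address and port for this sketch

def server():
    """The server shares a resource (here, an echo service) and awaits requests."""
    with socket.socket() as s:
        s.bind((HOST, PORT))
        s.listen()
        conn, _ = s.accept()                  # block until a client connects
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"echo: " + request)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                               # crude wait for the server to start listening

# The client initiates the communication session and requests the service.
with socket.socket() as c:
    c.connect((HOST, PORT))
    c.sendall(b"hello")
    print(c.recv(1024).decode())              # prints "echo: hello"
```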

Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.

<span class="mw-page-title-main">Encryption</span> Process of converting plaintext to ciphertext

In cryptography, encryption is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
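
As a toy illustration of the plaintext-to-ciphertext transformation, the sketch below uses a one-time-pad-style XOR with a random key; it is an illustrative example only, not a production cipher or a specific library's API.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key (one-time-pad style toy cipher)."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))   # random key as long as the message

ciphertext = xor_bytes(plaintext, key)      # unintelligible without the key
print(ciphertext.hex())
print(xor_bytes(ciphertext, key))           # XOR again with the key recovers the plaintext
```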

<span class="mw-page-title-main">Gerald Jay Sussman</span> American computer scientist

Gerald Jay Sussman is the Panasonic Professor of Electrical Engineering at the Massachusetts Institute of Technology (MIT). He has been involved in artificial intelligence (AI) research at MIT since 1964. His research has centered on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing it to provide more effective methods of science and engineering education. Sussman has also worked in computer languages, in computer architecture, and in Very Large Scale Integration (VLSI) design.

In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can do multiple operations in a given time. It has been a tradition of computer science to describe serial algorithms in abstract machine models, often the one known as random-access machine. Similarly, many computer science researchers have used a so-called parallel random-access machine (PRAM) as a parallel abstract machine (shared-memory).
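
A small sketch of the idea, under the assumption that splitting a summation across several worker processes is an adequate stand-in for a PRAM-style parallel step; the chunking scheme and worker count are arbitrary choices for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Work done independently by one 'processor'."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]             # split the work across 4 workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(partial_sum, chunks))  # the parallel step
    print(sum(partials))                                # cheap sequential combine
```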

<span class="mw-page-title-main">Evolutionary computation</span> Trial and error problem solvers with a metaheuristic or stochastic optimization character

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
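
A minimal population-based trial-and-error loop, using the toy "OneMax" objective (maximize the number of 1-bits) as an assumed stand-in for a real fitness function; population size, mutation rate, and selection scheme are arbitrary illustrative choices.

```python
import random

def fitness(bits):
    """Toy objective: count of 1-bits (the 'OneMax' problem)."""
    return sum(bits)

POP, LENGTH, GENERATIONS, MUT = 20, 30, 100, 0.05
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Select: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    # Vary: clone random parents and flip bits with small probability (mutation).
    children = [[1 - b if random.random() < MUT else b
                 for b in random.choice(parents)]
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population), "of", LENGTH)
```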

Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation.

<span class="mw-page-title-main">Concurrency (computer science)</span> Ability to execute a task in a non-serial manner

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially-ordered components or units of computation.

In computer science, message passing is a technique for invoking behavior on a computer. The invoking program sends a message to a process and relies on that process and its supporting infrastructure to then select and run some appropriate code. Message passing differs from conventional programming where a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and object-oriented programming.
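
A minimal sketch of message passing using a queue as the channel; the message format and the "greet"/"stop" operations are invented for the example. The sender never calls the receiver's code by name, it only posts messages and lets the receiving process decide what to run.

```python
import threading
import queue

mailbox = queue.Queue()   # the channel between sender and receiver

def receiver():
    """Selects behaviour based on the message received, not on a direct call."""
    while True:
        msg = mailbox.get()
        if msg["op"] == "stop":
            break
        if msg["op"] == "greet":
            print(f"hello, {msg['name']}")

worker = threading.Thread(target=receiver)
worker.start()

# The invoking program only sends messages; the receiver runs the appropriate code.
mailbox.put({"op": "greet", "name": "world"})
mailbox.put({"op": "stop"})
worker.join()
```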

Social computing is an area of computer science that is concerned with the intersection of social behavior and computational systems. It is based on creating or recreating social conventions and social contexts through the use of software and technology. Thus, blogs, email, instant messaging, social network services, wikis, social bookmarking and other instances of what is often called social software illustrate ideas from social computing.

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a programming language, or an extension to an existing language.

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

Indeterminacy in concurrent computation is concerned with the effects of nondeterminism in concurrent systems. Indeterminacy is becoming increasingly important in computation because of the massive increase in concurrency due to networking and the advent of many-core computer architectures. These computer systems make use of arbiters, which give rise to indeterminacy.

Unconventional computing is computing by any of a wide range of new or unusual methods.

The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. It is similar to the parallel random access machine (PRAM) model, but unlike PRAM, BSP does not take communication and synchronization for granted. In fact, quantifying the requisite synchronization and communication is an important part of analyzing a BSP algorithm.
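
A minimal sketch of the BSP superstep structure, using Python threads and a barrier purely to illustrate the compute/communicate/synchronize pattern rather than true parallel execution; the per-processor values and the message routing are assumptions made for the example.

```python
from threading import Barrier, Thread

P = 4                            # number of BSP processors (assumed)
barrier = Barrier(P)             # the barrier synchronization ending each superstep
inbox = [[] for _ in range(P)]   # messages delivered between supersteps

def worker(pid, value):
    # Superstep 1: local computation, then communication, then the barrier.
    partial = value * value                  # local computation on local data
    inbox[(pid + 1) % P].append(partial)     # send the result to the next processor
    barrier.wait()                           # messages become visible only after the barrier
    # Superstep 2: use the data received during the previous superstep.
    received = sum(inbox[pid])
    print(f"processor {pid} received {received}")

threads = [Thread(target=worker, args=(pid, pid + 1)) for pid in range(P)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```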

Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking was popularized by Edward de Bono as a technique for generating creative ideas and solving problems. Similarly, applying lateral-computing techniques to a problem can make it much easier to arrive at a computationally inexpensive, easy-to-implement, efficient, innovative, or unconventional solution.

Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.

Radhika Nagpal is an Indian-American computer scientist and researcher in the fields of self-organising computer systems, biologically-inspired robotics, and biological multi-agent systems. She is the Augustine Professor in Engineering in the Departments of Mechanical and Aerospace Engineering and Computer Science at Princeton University. Formerly, she was the Fred Kavli Professor of Computer Science at Harvard University and the Harvard School of Engineering and Applied Sciences. In 2017, Nagpal co-founded a robotics company under the name Root Robotics. This educational company works to create opportunities for people who cannot yet code to learn how.

Robotic materials are composite materials that combine sensing, actuation, computation, and communication in a repeatable or amorphous pattern. Robotic materials can be considered computational metamaterials in that they extend the original definition of a metamaterial as "macroscopic composites having a man-made, three-dimensional, periodic cellular architecture designed to produce an optimized combination, not available in nature, of two or more responses to specific excitation" by being fully programmable. That is, unlike in a conventional metamaterial, the relationship between a specific excitation and response is governed by sensing, actuation, and a computer program that implements the desired logic.