Artificial life

Artificial life (ALife or A-Life) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. [1] The discipline was named by Christopher Langton, an American computer scientist, in 1986. [2] In 1987, Langton organized the first conference on the field, in Los Alamos, New Mexico. [3] There are three main kinds of alife, [4] named for their approaches: soft, [5] from software; hard, [6] from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena. [7] [8]

A Braitenberg vehicle simulation, programmed in breve, an artificial life simulator

Overview

Artificial life studies the fundamental processes of living systems in artificial environments in order to gain a deeper understanding of the complex information processing that defines such systems. These topics are broad, but often include evolutionary dynamics, emergent properties of collective systems, biomimicry, as well as related issues about the philosophy of the nature of life and the use of lifelike properties in artistic works.[ citation needed ]

Philosophy

The modeling philosophy of artificial life strongly differs from traditional modeling by studying not only "life-as-we-know-it" but also "life-as-it-might-be". [9]

A traditional model of a biological system focuses on capturing its most important parameters. In contrast, an alife modeling approach generally seeks to decipher the simplest and most general principles underlying life and implement them in a simulation. The simulation then offers the possibility to analyse new and different lifelike systems.

Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes-as-we-know-them" and "processes-as-they-could-be". [10]

At present, the commonly accepted definition of life does not consider any current alife simulations or software to be alive, and they do not constitute part of the evolutionary process of any ecosystem. However, different opinions about artificial life's potential have arisen.

Software-based ("soft")

Techniques

Program-based

Program-based simulations contain organisms with a "genome" language. This language is more often in the form of a Turing-complete computer program than actual biological DNA. Assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are usually various methods allowing self-replication. Mutations are generally implemented as random changes to the code. Use of cellular automata is common but not required. Other examples include artificial intelligence and multi-agent systems/programs.
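The idea can be sketched in a few lines of Python. The opcode set and rules below are purely illustrative and not taken from any particular alife system: the genome is a small program, "living" means executing it, and replication copies the code with random point mutations.

```python
import random

# Hypothetical toy "genome" language: each gene is one of a few opcodes
# acting on an accumulator. Opcode names and semantics are illustrative.
OPCODES = ("inc", "dec", "dbl", "nop")

def execute(genome):
    """Run the genome as a program; the organism 'lives' while its code executes."""
    acc = 0
    for op in genome:
        if op == "inc":
            acc += 1
        elif op == "dec":
            acc -= 1
        elif op == "dbl":
            acc *= 2
        # "nop" does nothing
    return acc

def replicate(genome, mutation_rate=0.1, rng=random):
    """Copy the genome, randomly rewriting each gene with some probability."""
    return [rng.choice(OPCODES) if rng.random() < mutation_rate else op
            for op in genome]

parent = ["inc", "inc", "dbl", "inc"]  # executes to (0 + 1 + 1) * 2 + 1 = 5
child = replicate(parent)              # same length, possibly mutated genes
print(execute(parent), len(child))
```

Real systems such as Tierra and Avida use richer assembly-like instruction sets in which self-replication itself is encoded in the genome, rather than provided by the simulator as it is here.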

Module-based

Individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion). Generally, these are simulators that emphasize user creation and accessibility over mutation and evolution.
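The indirect, emergent case can be illustrated with a toy Python sketch in which the leg module and its parameters are hypothetical: each leg contributes a phase-shifted stroke, and net motion arises only from how the modules combine.

```python
import math

# Illustrative module-based creature: each "leg" module contributes a
# sinusoidal stroke; net forward motion emerges from how modules interact.
class Leg:
    def __init__(self, frequency, phase):
        self.frequency = frequency
        self.phase = phase

    def stroke(self, t):
        return math.sin(self.frequency * t + self.phase)

def body_velocity(legs, t):
    # Direct effects could be hard-coded ("leg type A adds speed"); here
    # motion emerges indirectly from summed, phase-shifted strokes.
    return sum(leg.stroke(t) for leg in legs)

# Two identical legs in perfect antiphase cancel out: no net motion.
pair = [Leg(1.0, 0.0), Leg(1.0, math.pi)]
print(abs(body_velocity(pair, 0.7)) < 1e-9)
```

Changing a single module's phase or frequency changes the whole-body gait, which is exactly the kind of indirect interaction the paragraph describes.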

Parameter-based

Organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or other finite parameters. Each parameter controls one or several aspects of an organism in a well-defined way.
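A minimal parameter-based organism might look like the following Python sketch (the parameter names are illustrative): behaviors are fixed functions, and only the numbers mutate.

```python
import random

# Parameter-based organism: fixed behavior rules controlled by a few
# numbers that mutate. Parameter names here are hypothetical.
def make_organism():
    return {"speed": 1.0, "size": 1.0, "metabolism": 1.0}

def mutate(organism, sigma=0.1, rng=random):
    """Each parameter gets small Gaussian noise; the behavior rules stay fixed."""
    return {k: max(0.0, v + rng.gauss(0.0, sigma)) for k, v in organism.items()}

def energy_cost(org):
    # A well-defined mapping from parameters to one aspect of behavior.
    return org["speed"] * org["metabolism"] + 0.5 * org["size"]

parent = make_organism()
child = mutate(parent)
print(sorted(child) == sorted(parent))  # same parameter set, new values
```

The contrast with program-based systems is that mutation here can never invent a new behavior, only retune existing ones.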

Neural net–based

These simulations have creatures that learn and grow using neural nets or a close derivative. Emphasis is often, although not always, on learning rather than on natural selection.
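A deliberately tiny Python sketch of the idea follows; the sensors, weights, and hill-climbing "learning" rule are illustrative stand-ins for the richer neural nets these simulations actually use.

```python
import random

# Minimal neural-net creature (illustrative): two sensors feed one motor
# neuron; "learning" is random weight perturbation kept only when it does
# not worsen behavior, a stand-in for richer learning rules.
def motor(weights, sensors):
    activation = sum(w * s for w, s in zip(weights, sensors))
    return 1 if activation > 0 else -1  # turn one way or the other

def fitness(weights):
    # Reward turning toward the stronger food scent on a few test inputs.
    trials = [((1.0, 0.2), 1), ((0.1, 0.9), -1), ((0.8, 0.3), 1)]
    return sum(motor(weights, s) == target for s, target in trials)

def learn(weights, steps=200, rng=random):
    best = list(weights)
    for _ in range(steps):
        candidate = [w + rng.gauss(0.0, 0.3) for w in best]
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

trained = learn([0.0, 0.0])
print(fitness(trained))
```

Note that the emphasis here is on an individual improving within its lifetime, as opposed to the population-level selection used by evolutionary approaches.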

Complex systems modeling

Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). [12] [13] In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden.

Mathematical models for complex systems

Black-box models are completely nonmechanistic. They are phenomenological and ignore the composition and internal structure of a complex system. Due to the non-transparent nature of the model, interactions of subsystems cannot be investigated. In contrast, a white-box model of a complex dynamic system has 'transparent walls' and directly shows underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of a white-box model's evolution. In most cases, mathematical modelers rely on heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate, combining black-box and white-box approaches.

Logical deterministic individual-based cellular automata model of single species population growth

Creating a white-box model of a complex system requires a priori basic knowledge of the subject being modeled. Deterministic logical cellular automata are a necessary but not sufficient condition for a white-box model. The second necessary prerequisite is a physical ontology of the object under study. White-box modeling represents an automatic hyper-logical inference from first principles, because it is based entirely on deterministic logic and the axiomatic theory of the subject. Its purpose is to derive from the basic axioms more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The necessity of formulating an intrinsic axiomatic system of the subject before creating its white-box model distinguishes white-box cellular automata models from cellular automata models based on arbitrary logical rules. If cellular automata rules have not been formulated from the first principles of the subject, the model may have weak relevance to the real problem. [13]
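The single-species growth model pictured above can be illustrated, in heavily simplified form, by the following Python sketch. The colonization axiom used here is an assumption chosen for brevity, not the published Kalmykov model; it nonetheless shows the white-box property that every micro-event follows a stated rule, and it yields S-shaped growth that saturates at the grid's carrying capacity.

```python
# Simplified deterministic, logical cellular automaton producing S-shaped
# single-species growth on a finite grid. Every micro-event follows the
# stated axiom; nothing is stochastic or hidden.
def step(grid):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1:
                # Axiom (illustrative): an individual deterministically
                # colonizes every free cell in its von Neumann neighborhood.
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        new[ni][nj] = 1
    return new

def population(grid):
    return sum(map(sum, grid))

n = 21
grid = [[0] * n for _ in range(n)]
grid[n // 2][n // 2] = 1  # single founder individual
counts = []
for _ in range(25):
    counts.append(population(grid))
    grid = step(grid)
# Growth accelerates, then saturates at the carrying capacity n * n.
print(counts[0], counts[-1], population(grid) == n * n)
```

Because every state transition follows an explicit axiom, the full micro-level history of the population can be inspected at any step, which is the "transparent walls" property described above.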

Logical deterministic individual-based cellular automata model of interspecific competition for a single limited resource

Notable simulators

This is a list of artificial life and digital organism simulators:

List of notable simulators
Name | Driven by | Started | Ended
Polyworld | neural net | 1990 | ongoing
Tierra | evolvable code | 1991 | 2004
Avida | evolvable code | 1993 | ongoing
TechnoSphere | modules | 1995 |
Framsticks | evolvable code | 1996 | ongoing
Creatures | neural net and simulated biochemistry & genetics | 1996–2001 | fandom still active to this day; some abortive attempts at new products[ citation needed ]
GenePool | evolvable code | 1997 | ongoing
Aevol [14] | evolvable code, with steps that mimic the central dogma | 2006 | ongoing
3D Virtual Creature Evolution | neural net | 2008 | NA
EcoSim | Fuzzy Cognitive Map | 2009 | ongoing
OpenWorm | Geppetto | 2011 | ongoing
The Bibites | neural net | 2015 | ongoing
Lenia | continuous cellular automata | 2019 | ongoing

Hardware-based ("hard")

Hardware-based artificial life mainly consists of robots, that is, automatically guided machines able to perform tasks on their own.

Biochemical-based ("wet")

Biochemical-based life is studied in the field of synthetic biology. It involves research such as the creation of synthetic DNA. The term "wet" is an extension of the term "wetware". Efforts toward "wet" artificial life focus on engineering minimal living cells from existing bacteria (such as Mycoplasma laboratorium) and on building non-living biochemical cell-like systems from scratch.

In May 2019, researchers reported a new milestone in the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids. [15] [16]

Open problems

How does life arise from the nonliving? [17] [18]
What are the potentials and limits of living systems?
How is life related to mind, machines, and culture?
Related subjects

  1. Agent-based modeling is used in artificial life and other fields to explore emergence in systems.
  2. Artificial intelligence has traditionally used a top-down approach, while alife generally works from the bottom up. [19]
  3. Artificial chemistry started as a method within the alife community to abstract the processes of chemical reactions.
  4. Evolutionary algorithms are a practical application of the weak alife principle applied to optimization problems. Many optimization algorithms have been crafted which borrow from or closely mirror alife techniques. The primary difference lies in explicitly defining the fitness of an agent by its ability to solve a problem, instead of its ability to find food, reproduce, or avoid death.[ citation needed ]
  5. Multi-agent system – A multi-agent system is a computerized system composed of multiple interacting intelligent agents within an environment.
  6. Evolutionary art uses techniques and methods from artificial life to create new forms of art.
  7. Evolutionary music uses similar techniques, but applied to music instead of visual art.
  8. Abiogenesis and the origin of life sometimes employ alife methodologies as well.
  9. Quantum artificial life applies quantum algorithms to artificial life systems.
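The evolutionary-algorithm connection noted in item 4 can be sketched in Python; the bit-string matching task below is an arbitrary illustrative problem, chosen only so that fitness is explicitly the agent's ability to solve it.

```python
import random

# Minimal evolutionary algorithm in the "weak alife" spirit: fitness is
# the agent's ability to solve a problem (match a target bit-string),
# not its ability to find food, reproduce, or avoid death.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=100, mutation_rate=0.05, rng=random):
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        pop = [[1 - g if rng.random() < mutation_rate else g
                for g in rng.choice(parents)]     # copy a parent with mutation
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), len(TARGET))
```

In a strong-alife simulation the same selection loop would run implicitly, with survival and reproduction themselves playing the role of the explicit fitness function.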

History

Criticism

Artificial life has had a controversial history. John Maynard Smith criticized certain artificial life work in 1994 as "fact-free science". [20]

See also

Related Research Articles

Hugo de Garis

Hugo de Garis is an Australian retired researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve artificial neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains" which would quickly surpass human levels of intelligence.

Conway's Game of Life

The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
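The rules translate directly into a short Python sketch; representing live cells as a set of coordinates is one common implementation style, not the only one.

```python
from collections import Counter

# One step of Conway's Game of Life, directly from the stated rules:
# a live cell survives with 2 or 3 live neighbors; a dead cell becomes
# live with exactly 3 live neighbors.
def life_step(cells):
    """cells: set of (x, y) live coordinates; returns the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in cells)
    }

# The "blinker" is a period-2 oscillator: a horizontal bar of three cells
# becomes a vertical bar, then flips back.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # three cells forming a vertical bar
print(life_step(life_step(blinker)) == blinker)
```

Counting neighbors once per candidate cell, rather than scanning a full grid, keeps the step proportional to the number of live cells, which suits sparse patterns.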

Cellular automaton

A cellular automaton is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling.

Langton's ant

Langton's ant is a two-dimensional universal Turing machine with a very simple set of rules but complex emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells. The universality of Langton's ant was proven in 2000. The idea has been generalized in several different ways, such as turmites which add more colors and more states.
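The two rules fit in a few lines of Python; the grid orientation convention used here is an arbitrary choice, and mirrored conventions give mirrored but equivalent behavior.

```python
# Langton's ant from its two rules: on a white cell turn right, on a
# black cell turn left; flip the cell's color and move forward one square.
def run_ant(steps):
    black = set()                # cells currently black; all start white
    x, y, dx, dy = 0, 0, 0, -1  # position and heading ("up", y increasing down)
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx     # turn left, flip cell back to white
            black.discard((x, y))
        else:
            dx, dy = -dy, dx     # turn right, flip cell to black
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

# After roughly 10,000 chaotic steps the ant settles into the recurrent
# 104-step "highway" pattern mentioned above.
print(len(run_ant(100)))
```

The simplicity of this loop against the complexity of the resulting trajectory is precisely the emergent behavior the paragraph describes.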

Computational biology

Computational biology refers to the use of data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and big data, the field also has foundations in applied mathematics, chemistry, and genetics. It differs from biological computing, a subfield of computer science and engineering which uses bioengineering to build computers.

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

Evolutionary computation

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.

Multi-agent system

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.

An artificial society is an agent-based computational model for computer simulation in social analysis. It is mostly connected to the themes of complex systems, emergence, the Monte Carlo method, computational sociology, multi-agent systems, and evolutionary programming. While the concept is simple, actually realizing it took some time. Complex mathematical models have been, and remain, common; deceptively simple models have their roots only in the late 1940s, and took the advent of the microcomputer to really get up to speed.

An artificial chemistry is a chemical-like system that usually consists of objects, called molecules, that interact according to rules resembling chemical reaction rules. Artificial chemistries are created and studied in order to understand fundamental properties of chemical systems, including prebiotic evolution, as well as for developing chemical computing systems. In this field, within computer science, chemical reactions (often biochemical ones) are computer-simulated, yielding insights on evolution, self-assembly, and other biochemical phenomena. The field does not use actual chemicals, and should not be confused with either synthetic chemistry or computational chemistry. Rather, bits of information are used to represent the starting molecules, and the end products are examined along with the processes that led to them. The field originated in artificial life but has proven to be a versatile method with applications in many fields such as chemistry, economics, sociology and linguistics.

In silico

In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation software. The phrase is pseudo-Latin for 'in silicon', referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology. The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature.

Christopher Langton

Christopher Gale Langton is an American computer scientist and one of the founders of the field of artificial life. He coined the term in the late 1980s when he organized the first "Workshop on the Synthesis and Simulation of Living Systems" at the Los Alamos National Laboratory in 1987. Following his time at Los Alamos, Langton joined the Santa Fe Institute (SFI), to continue his research on artificial life. He left SFI in the late 1990s, and abandoned his work on artificial life, publishing no research since that time.

Artificial creation is a field of research that studies the primary synthesis of complex lifelike structures from primordial lifeless origins.

Langton's loops

Langton's loops are a particular "species" of artificial life in a cellular automaton created in 1984 by Christopher Langton. They consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm", which will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent.

Humans have considered and tried to create non-biological life for at least 3000 years. As seen in tales ranging from Pygmalion to Frankenstein, humanity has long been intrigued by the concept of artificial life.

Von Neumann universal constructor

John von Neumann's universal constructor is a self-replicating machine in a cellular automaton (CA) environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death. While typically not as well known as von Neumann's other work, it is regarded as foundational for automata theory, complex systems, and artificial life. Indeed, Nobel Laureate Sydney Brenner considered Von Neumann's work on self-reproducing automata central to biological theory as well, allowing us to "discipline our thoughts about machines, both natural and artificial."

Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.

OpenWorm is an international open science project for the purpose of simulating the roundworm Caenorhabditis elegans at the cellular level. Although the long-term goal is to model all 959 cells of C. elegans, the first stage is to model the worm's locomotion by simulating the 302 neurons and 95 muscle cells. This bottom-up simulation is being pursued by the OpenWorm community.

References

  1. "Dictionary.com definition" . Retrieved 2007-01-19.
  2. The MIT Encyclopedia of the Cognitive Sciences, The MIT Press, p. 37. ISBN 978-0-262-73144-7.
  3. "The Game Industry's Dr. Frankenstein". Next Generation . No. 35. Imagine Media. November 1997. p. 10.
  4. Mark A. Bedau (November 2003). "Artificial life: organization, adaptation and complexity from the bottom up" (PDF). Trends in Cognitive Sciences. Archived from the original (PDF) on 2008-12-02. Retrieved 2007-01-19.
  5. Maciej Komosinski and Andrew Adamatzky (2009). Artificial Life Models in Software. New York: Springer. ISBN   978-1-84882-284-9.
  6. Andrew Adamatzky and Maciej Komosinski (2009). Artificial Life Models in Hardware. New York: Springer. ISBN   978-1-84882-529-1.
  7. Langton, Christopher. "What is Artificial Life?". Archived from the original on 2007-01-17. Retrieved 2007-01-19.
  8. Aguilar, W., Santamaría-Bonfil, G., Froese, T., and Gershenson, C. (2014). The past, present, and future of artificial life. Frontiers in Robotics and AI, 1(8). https://dx.doi.org/10.3389/frobt.2014.00008
  9. See Langton, C. G. (1992). Artificial Life. Addison-Wesley. Section 1. Archived March 11, 2007, at the Wayback Machine.
  10. See Red'ko, V. G. 1999. Mathematical Modeling of Evolution. in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels). For the importance of ALife modeling from a cosmic perspective, see also Vidal, C. 2008.The Future of Scientific Simulations: from Artificial Life to Artificial Cosmogenesis. In Death And Anti-Death, ed. Charles Tandy, 6: Thirty Years After Kurt Gödel (1906–1978) p. 285-318. Ria University Press.)
  11. Ray, Thomas (1991). Taylor, C. C.; Farmer, J. D.; Rasmussen, S (eds.). "An approach to the synthesis of life". Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity. XI: 371–408. Archived from the original on 2015-07-11. Retrieved 24 January 2016. The intent of this work is to synthesize rather than simulate life.
  12. Kalmykov, Lev V.; Kalmykov, Vyacheslav L. (2015), "A Solution to the Biodiversity Paradox by Logical Deterministic Cellular Automata", Acta Biotheoretica, 63 (2): 1–19, doi:10.1007/s10441-015-9257-9, PMID   25980478, S2CID   2941481
  13. Kalmykov, Lev V.; Kalmykov, Vyacheslav L. (2015), "A white-box model of S-shaped and double S-shaped single-species population growth", PeerJ, 3: e948, doi:10.7717/peerj.948, PMC 4451025, PMID 26038717.
  14. Aevol
  15. Zimmer, Carl (15 May 2019). "Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? – In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature". The New York Times . Retrieved 16 May 2019.
  16. Fredens, Julius; et al. (15 May 2019). "Total synthesis of Escherichia coli with a recoded genome". Nature . 569 (7757): 514–518. Bibcode:2019Natur.569..514F. doi:10.1038/s41586-019-1192-5. PMC   7039709 . PMID   31092918.
  17. "Libarynth" . Retrieved 2015-05-11.
  18. "Caltech" (PDF). Retrieved 2015-05-11.
  19. "AI Beyond Computer Games". Archived from the original on 2008-07-01. Retrieved 2008-07-04.
  20. Horgan, J. (1995). "From Complexity to Perplexity". Scientific American. p. 107.