Artificial life (ALife or A-Life) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. [1] The discipline was named by Christopher Langton, an American computer scientist, in 1986. [2] In 1987, Langton organized the first conference on the field, in Los Alamos, New Mexico. [3] There are three main kinds of alife, [4] named for their approaches: soft, [5] from software; hard, [6] from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena. [7] [8]
Artificial life studies the fundamental processes of living systems in artificial environments in order to gain a deeper understanding of the complex information processing that defines such systems. These topics are broad, often including evolutionary dynamics, emergent properties of collective systems, and biomimicry, as well as related questions about the philosophy of the nature of life and the use of lifelike properties in artistic works.
The modeling philosophy of artificial life strongly differs from traditional modeling by studying not only "life as we know it" but also "life as it could be". [9]
A traditional model of a biological system focuses on capturing its most important parameters. In contrast, an alife modeling approach generally seeks to decipher the simplest and most general principles underlying life and implement them in a simulation. The simulation then makes it possible to analyse new and different lifelike systems.
Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes as we know them" and "processes as they could be". [10]
At present, the commonly accepted definition of life does not consider any current alife simulations or software to be alive, and they do not constitute part of the evolutionary process of any ecosystem. However, different opinions about artificial life's potential have arisen.
Program-based simulations contain organisms with a "genome" language. This language is more often in the form of a Turing complete computer program than actual biological DNA. Assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are usually various methods allowing self-replication. Mutations are generally implemented as random changes to the code. Use of cellular automata is common but not required. Other examples include artificial intelligence and multi-agent systems/programs.
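The following is a minimal sketch of this idea, assuming a hypothetical four-opcode instruction set rather than the richer assembly-like languages of systems such as Tierra or Avida: the genome is a list of instructions, "living" means executing them, and reproduction copies the genome with random point mutations.

```python
import random

# Hypothetical toy instruction set; real systems (e.g. Tierra, Avida) use
# richer, Turing-complete assembly-like languages.
INSTRUCTIONS = ["NOP", "INC", "DEC", "JUMP"]

def run(genome, steps=100):
    """'Living' means executing the genome; here each opcode updates one register."""
    register, pc = 0, 0
    for _ in range(steps):
        op = genome[pc % len(genome)]
        if op == "INC":
            register += 1
        elif op == "DEC":
            register -= 1
        elif op == "JUMP":
            pc += register            # crude control flow
        pc += 1
    return register                   # stand-in for the organism's phenotype

def reproduce(genome, mutation_rate=0.01):
    """Self-replication with copy errors: each opcode may be randomly replaced."""
    return [random.choice(INSTRUCTIONS) if random.random() < mutation_rate else op
            for op in genome]

parent = [random.choice(INSTRUCTIONS) for _ in range(32)]
child = reproduce(parent)
print(run(parent), run(child))
```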
In module-based simulations, individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion). Generally, these are simulators that emphasize user creation and accessibility over mutation and evolution.
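A small sketch of the direct, hard-coded style is shown below; the module names and attribute values are made up. An emergent variant would instead simulate each leg's motion and derive speed from how the legs interact.

```python
from dataclasses import dataclass, field

# Sketch of the direct (hard-coded) module style: each module contributes a
# fixed, explicit effect to the creature. Names and numbers are illustrative.
@dataclass
class LegModule:
    name: str
    speed_bonus: float        # direct, hard-coded effect
    metabolic_cost: float

@dataclass
class Creature:
    modules: list = field(default_factory=list)

    @property
    def speed(self):
        return sum(m.speed_bonus for m in self.modules)

    @property
    def metabolism(self):
        return sum(m.metabolic_cost for m in self.modules)

creature = Creature([LegModule("leg type A", speed_bonus=1.5, metabolic_cost=0.4),
                     LegModule("leg type B", speed_bonus=0.8, metabolic_cost=0.1)])
print(creature.speed, creature.metabolism)
```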
In parameter-based simulations, organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or other finite parameters, and each parameter controls one or several aspects of the organism in a well-defined way.
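For example (a sketch with made-up parameter names), an organism can be represented as nothing more than a dictionary of numbers that mutate between generations, while the behaviours those numbers control remain fixed.

```python
import random

# Sketch of a parameter-based organism: behaviours are fixed, only these
# numbers mutate. Parameter names and ranges are illustrative.
def random_organism():
    return {"speed": random.uniform(0.0, 2.0),
            "size": random.uniform(0.1, 1.0),
            "aggression": random.uniform(0.0, 1.0)}

def mutate(organism, sigma=0.05):
    """Each parameter is perturbed slightly; the mapping from parameter to
    behaviour is well-defined and does not change."""
    return {key: value + random.gauss(0, sigma) for key, value in organism.items()}

parent = random_organism()
offspring = mutate(parent)
print(parent, offspring)
```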
Neural-net-based simulations have creatures that learn and grow using neural nets or a close derivative. The emphasis is often, although not always, on learning rather than on natural selection.
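A minimal sketch of this approach, assuming a creature whose two sensors are mapped to one motor output by a single-layer network and whose weights are tuned by simple trial-and-error during its lifetime (learning rather than selection):

```python
import random

# Sketch of a neural-net-driven creature: two sensor inputs are mapped to one
# motor output; the weights are adjusted by hill climbing during the
# creature's lifetime. The task (match the sum of the sensors) is a toy.
def act(weights, sensors):
    return sum(w * s for w, s in zip(weights, sensors))

def reward(weights):
    sensors = [random.random(), random.random()]
    return -abs(act(weights, sensors) - sum(sensors))

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
for _ in range(500):                         # lifetime learning loop
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    if reward(candidate) > reward(weights):
        weights = candidate                  # keep the better wiring
print(weights)                               # drifts toward [1.0, 1.0]
```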
Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). [12] [13] In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden.
Black-box models are completely nonmechanistic. They are phenomenological and ignore the composition and internal structure of a complex system. Due to the non-transparent nature of the model, interactions of subsystems cannot be investigated. In contrast, a white-box model of a complex dynamic system has ‘transparent walls’ and directly shows the underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of a white-box model's evolution. In most cases, mathematical modelers use heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate, combining black-box and white-box approaches.
Creating a white-box model of a complex system requires a priori basic knowledge of the subject being modeled. Deterministic logical cellular automata are a necessary but not sufficient condition for a white-box model; the second necessary prerequisite is a physical ontology of the object under study. White-box modeling represents an automatic hyper-logical inference from first principles because it is based entirely on the deterministic logic and axiomatic theory of the subject. The purpose of white-box modeling is to derive from the basic axioms more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The need to formulate an intrinsic axiomatic system of the subject before creating its white-box model distinguishes cellular automata models of the white-box type from cellular automata models based on arbitrary logical rules. If cellular automata rules have not been formulated from the first principles of the subject, such a model may have only weak relevance to the real problem. [13]
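As an illustration (a sketch only, not taken from the cited work), a white-box-style cellular automaton makes every individual-level rule explicit, so each change of state can be traced to a stated assumption about the system:

```python
import random

# A deterministic logical cellular automaton in the white-box spirit: every
# transition rule is an explicit, individual-level statement. The three
# states and three rules below are illustrative.
EMPTY, OCCUPIED, DYING = 0, 1, 2
SIZE = 20

def step(grid):
    new = grid[:]
    for i, state in enumerate(grid):
        neighbours = [grid[(i - 1) % SIZE], grid[(i + 1) % SIZE]]
        if state == EMPTY and OCCUPIED in neighbours:
            new[i] = OCCUPIED      # rule 1: colonisation from an occupied neighbour
        elif state == OCCUPIED and neighbours.count(OCCUPIED) == 2:
            new[i] = DYING         # rule 2: overcrowding
        elif state == DYING:
            new[i] = EMPTY         # rule 3: death frees the site
    return new

grid = [OCCUPIED if random.random() < 0.2 else EMPTY for _ in range(SIZE)]
for _ in range(10):
    grid = step(grid)
print(grid)
```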
This is a list of artificial life and digital organism simulators:
Name | Driven By | Started | Ended |
---|---|---|---|
Polyworld | neural net | 1990 | ongoing |
Tierra | evolvable code | 1991 | 2004 |
Avida | evolvable code | 1993 | ongoing |
TechnoSphere | modules | 1995 | |
Framsticks | evolvable code | 1996 | ongoing |
Creatures | neural net and simulated biochemistry & genetics | 1996–2001 | Fandom still active; some abortive attempts at new products |
GenePool | evolvable code | 1997 | ongoing |
Aevol [14] | evolvable code, with steps that mimic the central dogma | 2006 | ongoing |
3D Virtual Creature Evolution | neural net | 2008 | NA |
EcoSim | Fuzzy Cognitive Map | 2009 | ongoing |
OpenWorm | Geppetto | 2011 | ongoing |
The Bibites [15] | neural net | 2015 | ongoing |
Lenia | continuous cellular automata | 2019 | ongoing |
Hardware-based artificial life mainly consists of robots, that is, automatically guided machines able to do tasks on their own.
Biochemical-based life is studied in the field of synthetic biology. It involves research such as the creation of synthetic DNA. The term "wet" is an extension of the term "wetware". Efforts toward "wet" artificial life focus on engineering live minimal cells from living bacteria (e.g., Mycoplasma laboratorium) and on building non-living biochemical cell-like systems from scratch.
In May 2019, researchers reported a new milestone in the creation of a synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode the 20 amino acids. [16] [17]
Artificial life has had a controversial history. John Maynard Smith criticized certain artificial life work in 1994 as "fact-free science". [21]
Hugo de Garis is an Australian retired researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve artificial neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains" which would quickly surpass human levels of intelligence.
The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
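A compact sketch of one update step under the standard rules (a live cell survives with two or three live neighbours; a dead cell becomes live with exactly three):

```python
# One generation of Conway's Game of Life on a small toroidal grid.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbours(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and live_neighbours(r, c) in (2, 3))
             or (not grid[r][c] and live_neighbours(r, c) == 3) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "glider": a pattern that translates itself diagonally across the grid.
glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
for _ in range(4):   # after four steps the glider has moved one cell diagonally
    glider = step(glider)
print(glider)
```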
A cellular automaton is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling.
Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
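A minimal sketch of this population-based trial-and-error idea, using a toy fitness function (maximise the number of ones in a bit string):

```python
import random

# Minimal evolutionary algorithm sketch: keep a population of candidate
# solutions, score them, and breed the best with random mutation.
def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection
    population = [mutate(random.choice(parents))   # variation
                  for _ in range(30)]
print(max(fitness(individual) for individual in population))
```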
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models, and abstractions of living organisms to investigate the principles that govern the structure, development, and behavior of such systems, in contrast to experimental biology, which conducts experiments to test scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology, while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.
Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent intelligence may include methodic, functional, or procedural approaches, algorithmic search, or reinforcement learning.
An artificial society is an agent-based computational model for computer simulation in social analysis. It is mostly connected to the themes of complex systems, emergence, the Monte Carlo method, computational sociology, multi-agent systems, and evolutionary programming. While the concept is simple, realizing it took time. Complex mathematical models have long been common; deceptively simple models have their roots in the late 1940s, and it took the advent of the microcomputer for them to really get up to speed.
An artificial chemistry is a chemical-like system that usually consists of objects, called molecules, that interact according to rules resembling chemical reaction rules. Artificial chemistries are created and studied in order to understand fundamental properties of chemical systems, including prebiotic evolution, as well as for developing chemical computing systems. Simulating chemical reactions, often biochemical ones, in this way yields insights on evolution, self-assembly, and other biochemical phenomena. The field does not use actual chemicals, and should not be confused with either synthetic chemistry or computational chemistry. Rather, bits of information are used to represent the starting molecules, and the end products are examined along with the processes that led to them. The field originated in artificial life but has proven to be a versatile method with applications in many fields such as chemistry, economics, sociology, and linguistics.
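A minimal sketch, assuming made-up "molecules" and a single made-up reaction rule, gives the flavour: bits of information collide in a well-stirred reactor and the resulting composition is examined.

```python
import random
from collections import Counter

# Sketch of an artificial chemistry: symbolic "molecules" collide at random
# and react by a single illustrative rule, A + B -> C + C.
reactor = ["A"] * 50 + ["B"] * 50

def react(x, y):
    if {x, y} == {"A", "B"}:
        return ["C", "C"]          # the reaction rule
    return [x, y]                  # elastic collision: nothing happens

for _ in range(10_000):
    i, j = random.sample(range(len(reactor)), 2)
    reactor[i], reactor[j] = react(reactor[i], reactor[j])

print(Counter(reactor))            # composition after many collisions
```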
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation software. The phrase is pseudo-Latin for 'in silicon', referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology. The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature.
Christopher Gale Langton is an American computer scientist and one of the founders of the field of artificial life. He coined the term in the late 1980s when he organized the first "Workshop on the Synthesis and Simulation of Living Systems" at the Los Alamos National Laboratory in 1987. Following his time at Los Alamos, Langton joined the Santa Fe Institute (SFI), to continue his research on artificial life. He left SFI in the late 1990s, and abandoned his work on artificial life, publishing no research since that time.
Artificial creation is a field of research that studies the primary synthesis of complex lifelike structures from primordial lifeless origins.
Langton's loops are a particular "species" of artificial life in a cellular automaton created in 1984 by Christopher Langton. They consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm", which will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent.
Humans have considered and tried to create non-biological life for at least 3,000 years. As seen in tales ranging from Pygmalion to Frankenstein, humanity has long been intrigued by the concept of artificial life.
John von Neumann's universal constructor is a self-replicating machine in a cellular automaton (CA) environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death. It is regarded as foundational for automata theory, complex systems, and artificial life. Indeed, Nobel Laureate Sydney Brenner considered Von Neumann's work on self-reproducing automata central to biological theory as well, allowing us to "discipline our thoughts about machines, both natural and artificial."
Artificial development, also known as artificial embryogeny or machine intelligence or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype–phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models.
Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.
OpenWorm is an international open science project for the purpose of simulating the roundworm Caenorhabditis elegans at the cellular level. Although the long-term goal is to model all 959 cells of C. elegans, the first stage is to model the worm's locomotion by simulating the 302 neurons and 95 muscle cells. This bottom-up simulation is being pursued by the OpenWorm community.
The intent of this work is to synthesize rather than simulate life.