Hypercube-based NEAT, or HyperNEAT,[1] is a generative encoding that evolves artificial neural networks (ANNs) with the principles of the widely used NeuroEvolution of Augmenting Topologies (NEAT) algorithm developed by Kenneth Stanley.[2] It is a technique for evolving large-scale neural networks using the geometric regularities of the task domain. It uses Compositional Pattern Producing Networks (CPPNs),[3] which are also used to generate the images for Picbreeder.org and the shapes for EndlessForms.com. HyperNEAT has more recently been extended to evolve plastic ANNs[4] and to evolve the location of every neuron in the network.[5]
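The core encoding can be illustrated with a short sketch (the function names and constants below are illustrative, not HyperNEAT's actual API): neurons are laid out on a geometric "substrate", and the weight of each candidate connection is obtained by querying a CPPN with the coordinates of the two endpoints, with weak outputs treated as "no connection".

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: any composition of simple functions
    # over the four endpoint coordinates serves for illustration.
    return math.sin(x1 * x2) * math.exp(-((y1 - y2) ** 2))

# Substrate: neurons placed on a small 2D grid. The weight of every
# candidate connection comes from querying the CPPN with both endpoints.
coords = [(x / 2.0, y / 2.0) for x in range(-2, 3) for y in range(-2, 3)]
weights = {
    (a, b): cppn(*a, *b)
    for a in coords
    for b in coords
    if abs(cppn(*a, *b)) > 0.3  # threshold expresses "no connection"
}
```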
In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place through the repeated application of these operators.
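A minimal sketch of such a loop, assuming bit-string individuals and the toy OneMax objective (fitness = number of 1 bits); all names here are illustrative:

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100, p_mut=0.05):
    """Minimal generational EA: selection, crossover, and mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve(fitness=sum)  # OneMax: fitness is the count of 1 bits
```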
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for generating evolving artificial neural networks, developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with historical markers (innovation numbers) to allow crossover among topologies, applying speciation to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying").
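A small sketch of the first technique, with illustrative field names: each connection gene carries a global innovation number, which lets two genomes with different topologies be aligned gene-by-gene during crossover.

```python
from dataclasses import dataclass

@dataclass
class ConnGene:
    innovation: int   # historical marker assigned when the gene first appears
    src: int
    dst: int
    weight: float
    enabled: bool = True

def align(parent_a, parent_b):
    """Match genes by innovation number; unmatched genes are disjoint/excess."""
    a = {g.innovation: g for g in parent_a}
    b = {g.innovation: g for g in parent_b}
    matching = sorted(a.keys() & b.keys())
    disjoint_or_excess = sorted(a.keys() ^ b.keys())
    return matching, disjoint_or_excess
```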
Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANNs), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation with a fixed topology.
Avida is an artificial life software platform to study the evolutionary biology of self-replicating and evolving computer programs. Avida is under active development by Charles Ofria's Digital Evolution Lab at Michigan State University; the first version of Avida was designed in 1993 by Ofria, Chris Adami and C. Titus Brown at Caltech, and has been fully reengineered by Ofria on multiple occasions since then. The software was originally inspired by the Tierra system.
Generative science is an area of research that explores the natural world and its complex behaviours. It explores ways "to generate apparently unanticipated and infinite behaviour based on deterministic and finite rules and parameters reproducing or resembling the behavior of natural and social phenomena". By modelling such interactions, it can suggest that properties exist in the system that had not been noticed in the real world situation. An example field of study is how unintended consequences arise in social processes.
In natural and artificial evolution, the fitness of a schema is rescaled to give its effective fitness, which takes into account the disruptive effects of crossover and mutation.
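One common formalisation follows Holland's schema theorem; the notation below is assumed for illustration rather than taken from this text. For a schema H of order o(H) and defining length \delta(H) over strings of length l, with crossover probability p_c and per-bit mutation probability p_m, the effective fitness is approximately

\[ f_{\mathrm{eff}}(H) \approx f(H)\left[1 - p_c\,\frac{\delta(H)}{l-1} - o(H)\,p_m\right], \]

i.e. the raw fitness f(H) discounted by the probability that crossover or mutation disrupts the schema.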
Recurrent neural networks (RNNs) are a class of artificial neural networks for sequential data processing. Unlike feedforward neural networks, which process data in a single pass, RNNs process data across multiple time steps, making them well-adapted for modelling and processing text, speech, and time series.
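A minimal sketch of one vanilla (Elman-style) recurrent step, with illustrative names and dimensions: the hidden state h carries information from earlier inputs into later time steps.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: new hidden state from input and old state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
W_xh, W_hh, b_h = rng.normal(size=(3, 5)), rng.normal(size=(5, 5)), np.zeros(5)
h = np.zeros(5)
for x_t in rng.normal(size=(10, 3)):   # a sequence of 10 three-dimensional inputs
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```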
In data compression and the theory of formal languages, the smallest grammar problem is the problem of finding the smallest context-free grammar that generates a given string of characters. The size of a grammar is defined by some authors as the number of symbols on the right side of the production rules. Others also add the number of rules to that. A grammar that generates only a single string, as required for the solution to this problem, is called a straight-line grammar.
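As a small worked example: the string abcabcabc is generated by the straight-line grammar with rules S → AAA and A → abc. Its right-hand sides contain 6 symbols in total, so its size is 6 under the first measure, or 8 if one unit is also added per rule.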
Autoconstructive evolution is a process in which the entities undergoing evolutionary change are themselves responsible for the construction of their own offspring and thus for aspects of the evolutionary process itself. Because biological evolution is always autoconstructive, this term mainly occurs in evolutionary computation, to distinguish artificial life type systems from conventional genetic algorithms where the GA performs replication artificially. The term was coined by Lee Spector.
Dr. Charles A. Ofria is a Professor in the Department of Computer Science and Engineering at Michigan State University, the director of the Digital Evolution (DEvo) Lab there, and Director of the BEACON Center for the Study of Evolution in Action. He is the son of the late Charles Ofria, who developed the first fully integrated shop management program for the automotive repair industry. Ofria attended Stuyvesant High School and graduated from Ward Melville High School in 1991. He obtained a B.S. in Computer Science, Pure Mathematics, and Applied Mathematics from Stony Brook University in 1994, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology in 1999. Ofria's research focuses on the interplay between computer science and Darwinian evolution.
Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks (ANNs) whose architecture is evolved by genetic algorithms. Unlike typical ANNs, which generally use one activation function throughout, each node in a CPPN may apply a different function (such as a sigmoid, sine, or Gaussian), so the composed function tends to exhibit regularities such as symmetry and repetition.
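A toy illustration of that activation-function mixing (all names and constants here are made up for the example):

```python
import math

def gaussian(z):
    return math.exp(-z * z)

def tiny_cppn(x, y):
    h1 = math.sin(2.0 * x)        # sine node: repetition along x
    h2 = gaussian(3.0 * (x - y))  # Gaussian node: symmetry about x = y
    return math.tanh(h1 + h2)     # sigmoid-like output node

# Sampling the composed function over a 2D grid yields a regular pattern.
pattern = [[tiny_cppn(x / 10.0, y / 10.0) for x in range(-10, 11)]
           for y in range(-10, 11)]
```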
Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. and Stanley and Miikkulainen. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming, in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.
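A generic sketch of that kind of self-adaptive parametric mutation, in the evolution-strategies style rather than EANT's exact operator: each weight carries its own step size, which is itself perturbed before being applied.

```python
import math, random

def self_adaptive_mutate(weights, sigmas):
    """ES-style self-adaptive mutation (a sketch, not the EANT operator)."""
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(len(weights)))  # common learning rate
    new_sigmas = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigmas]
    new_weights = [w + s * random.gauss(0.0, 1.0)
                   for w, s in zip(weights, new_sigmas)]
    return new_weights, new_sigmas
```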
Artificial development, also known as artificial embryogeny or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype–phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models.
Dr Peter John Bentley is a British author and computer scientist based at University College London.
In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
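A toy sketch of the pattern, with made-up names and objective: an outer random search tunes the step size of an inner hill climber, scoring each candidate by the average result of a few inner runs.

```python
import random

def inner_hill_climb(step_size, iters=200):
    # Toy inner optimizer: hill-climbing on f(x) = -(x - 3)^2.
    x = 0.0
    for _ in range(iters):
        cand = x + random.gauss(0.0, step_size)
        if -(cand - 3.0) ** 2 > -(x - 3.0) ** 2:
            x = cand
    return -(x - 3.0) ** 2  # final objective value reached

# Meta-level: random search over the inner optimizer's step-size parameter.
best_step = max(
    (random.uniform(0.01, 2.0) for _ in range(30)),
    key=lambda s: sum(inner_hill_climb(s) for _ in range(5)),
)
```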
There are many types of artificial neural networks (ANNs).
A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly continuous representations of phrases and sentences based on word embeddings. RvNNs were first introduced to learn distributed representations of structure, such as logical terms. Models and general frameworks have been developed in further works since the 1990s.
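A minimal sketch of the idea, with illustrative names and dimensions: one weight matrix is applied recursively to combine child representations bottom-up over a tree.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # representation dimension
W, b = rng.normal(size=(2 * d, d)), np.zeros(d)

def encode(tree):
    """tree is either a leaf embedding (np.ndarray) or a (left, right) pair."""
    if isinstance(tree, np.ndarray):
        return tree
    left, right = (encode(t) for t in tree)
    return np.tanh(np.concatenate([left, right]) @ W + b)  # shared weights

def leaf():
    return rng.normal(size=d)

root = encode(((leaf(), leaf()), leaf()))    # encodes a 3-leaf tree to one vector
```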
Professor Emma Hart, FRSE, is an English computer scientist known for her work in artificial immune systems (AIS), evolutionary computation and optimisation. She is a professor of computational intelligence at Edinburgh Napier University, editor-in-chief of the journal Evolutionary Computation, and coordinator of the Future & Emerging Technologies (FET) Proactive Initiative, Fundamentals of Collective Adaptive Systems.
Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling that period an "AI winter".
Kenneth Owen Stanley is an artificial intelligence researcher, author, and former professor of computer science at the University of Central Florida, known for creating the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective with Joel Lehman, which argues for the existence of the "objective paradox": as soon as you create an objective, you ruin your ability to reach it. While a professor at the University of Central Florida, he was the director of the Evolutionary Complexity Research Group (EPlex), which led the development of Galactic Arms Race. He also developed the HyperNEAT, CPPN, and novelty search algorithms, and co-founded Geometric Intelligence, an AI research firm, in 2015.