HyperNEAT

Querying the CPPN to determine the connection weight between two neurons as a function of their positions in space. Note that the distance between them is sometimes also passed as an additional argument.

Hypercube-based NEAT, or HyperNEAT, [1] is a generative encoding that evolves artificial neural networks (ANNs) using the principles of the widely used NeuroEvolution of Augmenting Topologies (NEAT) algorithm developed by Kenneth Stanley. [2] It is a technique for evolving large-scale neural networks that exploits the geometric regularities of the task domain. It uses Compositional Pattern Producing Networks [3] (CPPNs), the same encoding used to generate the images for Picbreeder.org and the shapes for EndlessForms.com. HyperNEAT has since been extended to evolve plastic ANNs [4] and to evolve the location of every neuron in the network. [5]
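The core idea, illustrated by the figure above, can be sketched in a few lines: the CPPN is queried with the coordinates of every pair of neurons on a geometric substrate, and its output becomes the connection weight. The sketch below is illustrative only; `toy_cppn` is a fixed stand-in for an evolved CPPN, and the threshold and substrate layout are hypothetical parameters, not part of any particular HyperNEAT implementation.

```python
import math

def toy_cppn(x1, y1, x2, y2):
    """Stand-in for an evolved CPPN: maps the coordinates of two
    substrate neurons (plus their distance, an optional extra input)
    to a connection weight."""
    d = math.hypot(x2 - x1, y2 - y1)  # optional distance argument
    return math.sin(2.0 * x1 * x2) * math.exp(-d)

def build_weights(substrate, threshold=0.2):
    """Query the CPPN for every ordered neuron pair; outputs whose
    magnitude falls below the threshold mean 'no connection'."""
    weights = {}
    for i, (x1, y1) in enumerate(substrate):
        for j, (x2, y2) in enumerate(substrate):
            w = toy_cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                weights[(i, j)] = w
    return weights

# A 3x3 grid of neuron positions in [-1, 1]^2.
substrate = [(x - 1.0, y - 1.0) for x in range(3) for y in range(3)]
weights = build_weights(substrate)
```

Because the weights are a function of geometry rather than stored per connection, the same small CPPN can describe a substrate with millions of connections, which is what makes the encoding scale.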

Applications to date

Related Research Articles

In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place through the repeated application of these operators.
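The selection-mutation-reproduction loop described above can be sketched minimally as follows. Everything here is illustrative: the toy fitness function, truncation selection, and Gaussian mutation are one simple choice among many EA variants.

```python
import random

def fitness(x):
    """Toy fitness landscape with a single peak at x = 3."""
    return -(x - 3.0) ** 2

def evolve(generations=200, pop_size=20, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # Initial population of random candidate solutions.
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction with Gaussian mutation.
        children = [p + rng.gauss(0, sigma) for p in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # converges near the peak at x = 3
```

Keeping the parents alongside their mutated children (elitism) guarantees the best fitness never decreases between generations.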

NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying").
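The first of NEAT's key techniques, tracking genes with historical markers, can be illustrated with a deliberately simplified gene representation: each connection gene carries the innovation number assigned when it first appeared, so genomes with different topologies can still be aligned gene-by-gene during crossover. The dict-of-weights representation below is a sketch, not NEAT's actual genome format.

```python
def crossover(parent_a, parent_b):
    """parent_a / parent_b: dicts mapping innovation number -> weight.
    Genes with matching innovation numbers are averaged; disjoint and
    excess genes are inherited from parent_a (assumed fitter here)."""
    child = {}
    for innov, w in parent_a.items():
        if innov in parent_b:
            child[innov] = 0.5 * (w + parent_b[innov])  # matching gene
        else:
            child[innov] = w  # disjoint/excess gene from fitter parent
    return child

a = {1: 0.5, 2: -1.0, 4: 0.2}  # fitter parent
b = {1: 0.1, 3: 0.9, 4: 0.6}
child = crossover(a, b)  # genes 1 and 4 averaged; gene 2 copied from a
```

Without the innovation numbers there would be no principled way to decide which genes in two structurally different networks correspond, which is the problem the historical markings solve.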

Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation with a fixed topology.

Avida (software): artificial life software platform

Avida is an artificial life software platform to study the evolutionary biology of self-replicating and evolving computer programs. Avida is under active development by Charles Ofria's Digital Evolution Lab at Michigan State University; the first version of Avida was designed in 1993 by Ofria, Chris Adami and C. Titus Brown at Caltech, and has been fully reengineered by Ofria on multiple occasions since then. The software was originally inspired by the Tierra system.

Generative science: study of how complex behaviour can be generated by deterministic and finite rules and parameters

Generative science is an area of research that explores the natural world and its complex behaviours. It explores ways "to generate apparently unanticipated and infinite behaviour based on deterministic and finite rules and parameters reproducing or resembling the behavior of natural and social phenomena". By modelling such interactions, it can suggest that properties exist in the system that had not been noticed in the real world situation. An example field of study is how unintended consequences arise in social processes.

In natural and artificial evolution, the fitness of a schema is rescaled to give its effective fitness, which takes into account the effects of crossover and mutation.

Recurrent neural networks (RNNs) are a class of artificial neural networks for sequential data processing. Unlike feedforward neural networks, which process data in a single pass, RNNs process data across multiple time steps, making them well-adapted for modelling and processing text, speech, and time series.

In data compression and the theory of formal languages, the smallest grammar problem is the problem of finding the smallest context-free grammar that generates a given string of characters. The size of a grammar is defined by some authors as the number of symbols on the right side of the production rules. Others also add the number of rules to that. A grammar that generates only a single string, as required for the solution to this problem, is called a straight-line grammar.
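A straight-line grammar and the symbol-counting size measure described above can be made concrete with a small example. The grammar below is a hypothetical illustration for the string "abababab"; counting right-hand-side symbols gives size 6, versus 8 for the trivial one-rule grammar that spells the string out directly.

```python
# A straight-line grammar: each nonterminal has exactly one rule,
# so the grammar generates exactly one string.
grammar = {
    "S": ["A", "A"],  # S -> A A
    "A": ["B", "B"],  # A -> B B
    "B": ["a", "b"],  # B -> a b
}

def expand(symbol, grammar):
    """Recursively expand a symbol; symbols with no rule are terminals."""
    if symbol not in grammar:
        return symbol
    return "".join(expand(s, grammar) for s in grammar[symbol])

def grammar_size(grammar):
    """Size measure: total number of symbols on the right-hand sides."""
    return sum(len(rhs) for rhs in grammar.values())

# expand("S", grammar) yields "abababab"; grammar_size(grammar) is 6.
```

The smallest grammar problem asks for the grammar minimizing this size over all grammars that generate the given string, which is what connects it to data compression.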

Autoconstructive evolution is a process in which the entities undergoing evolutionary change are themselves responsible for the construction of their own offspring and thus for aspects of the evolutionary process itself. Because biological evolution is always autoconstructive, this term mainly occurs in evolutionary computation, to distinguish artificial life type systems from conventional genetic algorithms where the GA performs replication artificially. The term was coined by Lee Spector.

Dr. Charles A. Ofria is a Professor in the Department of Computer Science and Engineering at Michigan State University, the director of the Digital Evolution (DEvo) Lab there, and Director of the BEACON Center for the Study of Evolution in Action. He is the son of the late Charles Ofria, who developed the first fully integrated shop management program for the automotive repair industry. Ofria attended Stuyvesant High School and graduated from Ward Melville High School in 1991. He obtained a B.S. in Computer Science, Pure Mathematics, and Applied Mathematics from Stony Brook University in 1994, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology in 1999. Ofria's research focuses on the interplay between computer science and Darwinian evolution.

Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks (ANNs) that have an architecture whose evolution is guided by genetic algorithms.

Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. and Stanley and Miikkulainen. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming, in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.

Artificial development, also known as artificial embryogeny or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype–phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models.

Dr Peter John Bentley is a British author and computer scientist based at University College London.

Meta-optimization

In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as in the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
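The idea of one optimizer tuning another can be sketched as a self-contained toy: an outer search evaluates candidate step sizes for an inner hill-climber by averaging the inner optimizer's performance over several runs. All functions and parameter values here are illustrative, not any published meta-optimization method.

```python
import random

def inner_optimize(step, seed):
    """Inner optimizer: hill-climber on f(x) = -(x - 5)^2.
    Returns the best fitness found; 'step' is the parameter being tuned."""
    rng = random.Random(seed)
    x = 0.0
    best = -(x - 5.0) ** 2
    for _ in range(100):
        cand = x + rng.gauss(0, step)
        f = -(cand - 5.0) ** 2
        if f > best:  # accept only improvements
            x, best = cand, f
    return best

def meta_optimize(candidate_steps):
    """Outer optimizer: pick the step size whose inner runs perform
    best on average over a handful of seeds."""
    def score(step):
        return sum(inner_optimize(step, seed) for seed in range(5)) / 5
    return max(candidate_steps, key=score)

best_step = meta_optimize([0.01, 0.1, 1.0, 10.0])
```

The outer loop here is exhaustive over four candidates, but any optimizer (including a genetic algorithm, as in Mercer and Sampson's early work) can play that role; the expense is that each outer evaluation costs several full inner runs.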

There are many types of artificial neural networks (ANN).

A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embedding. RvNNs have first been introduced to learn distributed representations of structure, such as logical terms. Models and general frameworks have been developed in further works since the 1990s.
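The defining property above, one shared set of weights applied at every node of a structured input, can be shown with a scalar toy model (real RvNNs combine vectors with weight matrices; the single shared weight here is a deliberate simplification).

```python
import math

W = 0.5  # the single shared composition weight, reused at every node

def encode(tree):
    """tree: a float leaf, or a (left, right) pair of subtrees.
    Traverses the structure bottom-up, applying the same weight W
    at every internal node to combine child representations."""
    if isinstance(tree, tuple):
        left, right = tree
        return math.tanh(W * (encode(left) + encode(right)))
    return tree  # a leaf value is its own representation

# Encode the tree ((1.0, -1.0), 0.5): the inner pair collapses to
# tanh(0) = 0, so the root is tanh(0.5 * (0 + 0.5)) = tanh(0.25).
rep = encode(((1.0, -1.0), 0.5))
```

Because the same parameters are reused at every node, the network handles inputs of any tree shape and size, which is what makes RvNNs suited to parse trees in natural language processing.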

<span class="mw-page-title-main">Emma Hart (computer scientist)</span> English computer scientist

Professor Emma Hart, FRSE is an English computer scientist known for her work in artificial immune systems (AIS), evolutionary computation and optimisation. She is a professor of computational intelligence at Edinburgh Napier University, editor-in-chief of the journal Evolutionary Computation, and Coordinator of the Future & Emerging Technologies (FET) Proactive Initiative, Fundamentals of Collective Adaptive Systems.

Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling that period an "AI winter".

Kenneth Owen Stanley is an artificial intelligence researcher, author, and former professor of computer science at the University of Central Florida known for creating the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective with Joel Lehman, which argues for the existence of the "objective paradox", a paradox which states that "as soon as you create an objective, you ruin your ability to reach it". While a professor at the University of Central Florida, he was the director of the Evolutionary Complexity Research Group (EPlex), which led the development of Galactic Arms Race. He also developed the HyperNEAT, CPPN, and novelty search algorithms, and co-founded Geometric Intelligence, an AI research firm, in 2015.

References

  1. Stanley, Kenneth O.; D'Ambrosio, David B.; Gauci, Jason (2009-01-14). "A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks". Artificial Life. 15 (2): 185–212. doi:10.1162/artl.2009.15.2.15202. ISSN   1064-5462. PMID   19199382. S2CID   26390526.
  2. Stanley, Kenneth O.; Miikkulainen, Risto (2002-06-01). "Evolving Neural Networks through Augmenting Topologies". Evolutionary Computation. 10 (2): 99–127. CiteSeerX   10.1.1.638.3910 . doi:10.1162/106365602320169811. ISSN   1063-6560. PMID   12180173. S2CID   498161.
  3. Stanley, Kenneth O. (2007-05-10). "Compositional pattern producing networks: A novel abstraction of development". Genetic Programming and Evolvable Machines. 8 (2): 131–162. CiteSeerX   10.1.1.643.8179 . doi:10.1007/s10710-007-9028-8. ISSN   1389-2576. S2CID   2535195.
  4. Risi, Sebastian; Stanley, Kenneth O. (2010-08-25). "Indirectly Encoding Neural Plasticity as a Pattern of Local Rules". In Doncieux, Stéphane; Girard, Benoît; Guillot, Agnès; Hallam, John; Meyer, Jean-Arcady; Mouret, Jean-Baptiste (eds.). From Animals to Animats 11. Lecture Notes in Computer Science. Vol. 6226. Springer Berlin Heidelberg. pp. 533–543. CiteSeerX   10.1.1.365.5589 . doi:10.1007/978-3-642-15193-4_50. ISBN   9783642151927.
  5. Risi, Sebastian; Stanley, Kenneth O. (2012-08-31). "An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons". Artificial Life. 18 (4): 331–363. doi: 10.1162/ARTL_a_00071 . ISSN   1064-5462. PMID   22938563. S2CID   3256786.
  6. D'Ambrosio, David B.; Stanley, Kenneth O. (2008-01-01). "Generative encoding for multiagent learning". Proceedings of the 10th annual conference on Genetic and evolutionary computation. GECCO '08. New York, NY, USA: ACM. pp. 819–826. doi:10.1145/1389095.1389256. ISBN   9781605581309. S2CID   11507017.
  7. J. Gauci and K. O. Stanley, “A case study on the critical role of geometric regularity in machine learning,” in AAAI (D. Fox and C. P. Gomes, eds.), pp. 628–633, AAAI Press, 2008.
  8. Risi, Sebastian; Stanley, Kenneth O. (2013-01-01). "Confronting the challenge of learning a flexible neural controller for a diversity of morphologies". Proceedings of the 15th annual conference on Genetic and evolutionary computation. GECCO '13. New York, NY, USA: ACM. pp. 255–262. CiteSeerX   10.1.1.465.5068 . doi:10.1145/2463372.2463397. ISBN   9781450319638. S2CID   10308013.
  9. Clune, J.; Beckmann, B. E.; Ofria, C.; Pennock, R. T. (2009-05-01). "Evolving coordinated quadruped gaits with the HyperNEAT generative encoding". 2009 IEEE Congress on Evolutionary Computation. pp. 2764–2771. CiteSeerX   10.1.1.409.3868 . doi:10.1109/CEC.2009.4983289. ISBN   978-1-4244-2958-5. S2CID   17566887.
  10. Clune, Jeff; Ofria, Charles; Pennock, Robert T. (2009-01-01). "The sensitivity of HyperNEAT to different geometric representations of a problem". Proceedings of the 11th Annual conference on Genetic and evolutionary computation. GECCO '09. New York, NY, USA: ACM. pp. 675–682. doi:10.1145/1569901.1569995. ISBN   9781605583259. S2CID   16054567.
  11. Yosinski, J.; Clune, J.; Hidalgo, D.; Nguyen, S.; Cristobal Zagal, J.; Lipson, H. (2011). "Evolving Robot Gaits in Hardware: The HyperNEAT Generative Encoding vs. Parameter Optimization". Proceedings of the European Conference on Artificial Life.
  12. Lee, S.; Yosinski, J.; Glette, K.; Lipson, H.; Clune, J. (2013). "Evolving Gaits for Physical Robots with the HyperNEAT Generative Encoding: The Benefits of Simulation". Applications of Evolutionary Computing. Springer.
  13. Lee, Suchan; Yosinski, Jason; Glette, Kyrre; Lipson, Hod; Clune, Jeff (2013-04-03). "Evolving Gaits for Physical Robots with the HyperNEAT Generative Encoding: The Benefits of Simulation". In Esparcia-Alcázar, Anna I. (ed.). Applications of Evolutionary Computation. Lecture Notes in Computer Science. Vol. 7835. Springer Berlin Heidelberg. pp. 540–549. CiteSeerX   10.1.1.364.8979 . doi:10.1007/978-3-642-37192-9_54. ISBN   9783642371912.
  14. Clune, J.; Stanley, K. O.; Pennock, R. T.; Ofria, C. (2011-06-01). "On the Performance of Indirect Encoding Across the Continuum of Regularity". IEEE Transactions on Evolutionary Computation. 15 (3): 346–367. CiteSeerX   10.1.1.375.6731 . doi:10.1109/TEVC.2010.2104157. ISSN   1089-778X. S2CID   3008628.
  15. Clune, Jeff; Ofria, Charles; Pennock, Robert T. (2008-09-13). "How a Generative Encoding Fares as Problem-Regularity Decreases". In Rudolph, Günter; Jansen, Thomas; Beume, Nicola; Lucas, Simon; Poloni, Carlo (eds.). Parallel Problem Solving from Nature – PPSN X. Lecture Notes in Computer Science. Vol. 5199. Springer Berlin Heidelberg. pp. 358–367. doi:10.1007/978-3-540-87700-4_36. ISBN   9783540876991.
  16. Clune, Jeff; Beckmann, Benjamin E.; Pennock, Robert T.; Ofria, Charles (2009-09-13). "HybrID: A Hybridization of Indirect and Direct Encodings for Evolutionary Computation". In Kampis, George; Karsai, István; Szathmáry, Eörs (eds.). Advances in Artificial Life. Darwin Meets von Neumann. Lecture Notes in Computer Science. Vol. 5778. Springer Berlin Heidelberg. pp. 134–141. CiteSeerX   10.1.1.409.741 . doi:10.1007/978-3-642-21314-4_17. ISBN   9783642213137.
  17. Clune, Jeff; Beckmann, Benjamin E.; McKinley, Philip K.; Ofria, Charles (2010-01-01). "Investigating whether hyperNEAT produces modular neural networks". Proceedings of the 12th annual conference on Genetic and evolutionary computation. GECCO '10. New York, NY, USA: ACM. pp. 635–642. CiteSeerX   10.1.1.409.4870 . doi:10.1145/1830483.1830598. ISBN   9781450300728. S2CID   14826185.
  18. Suchorzewski, Marcin; Clune, Jeff (2011-01-01). "A novel generative encoding for evolving modular, regular and scalable networks". Proceedings of the 13th annual conference on Genetic and evolutionary computation. GECCO '11. New York, NY, USA: ACM. pp. 1523–1530. CiteSeerX   10.1.1.453.5744 . doi:10.1145/2001576.2001781. ISBN   9781450305570. S2CID   2542736.
  19. Verbancsics, Phillip; Stanley, Kenneth O. (2011-01-01). "Constraining connectivity to encourage modularity in HyperNEAT". Proceedings of the 13th annual conference on Genetic and evolutionary computation. GECCO '11. New York, NY, USA: ACM. pp. 1483–1490. CiteSeerX   10.1.1.379.1188 . doi:10.1145/2001576.2001776. ISBN   9781450305570. S2CID   1442181.
  20. Clune, Jeff; Lipson, Hod (2011-11-01). "Evolving 3D Objects with a Generative Encoding Inspired by Developmental Biology". ACM SIGEVOlution. 5 (4): 2–12. doi:10.1145/2078245.2078246. ISSN   1931-8499. S2CID   9566239.
  21. Risi, S.; Stanley, K. O. (2012-06-01). "A unified approach to evolving plasticity and neural geometry". The 2012 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. CiteSeerX   10.1.1.467.8366 . doi:10.1109/IJCNN.2012.6252826. ISBN   978-1-4673-1490-9. S2CID   14268194.