Memetic computing

Memetic computing is a computational paradigm that incorporates the notion of memes [1] as basic units of transferable information, encoded in computational representations, for boosting the performance of artificial evolutionary systems in the domain of search and optimization. [2] [3] [4]

The term memetic computing is often mistakenly taken to mean the same thing as memetic algorithms (MAs), [5] which typically hybridize population-based global search algorithms with one or more local search schemes. Memetic computing, however, has a much broader scope, extending the idea of memes towards approaches that perform simultaneous problem learning and optimization.

Methods

Two lines of development describe the rise of memetics in computing: human-crafted memes and machine-crafted memes.

Human-crafted memes

One of the most widely recognised instantiations of the memetic computing paradigm is the first generation of memetic algorithms (MAs). These MAs are hybrid algorithms that couple a population-based global search with one or more local search schemes (interpreted as computational manifestations of memes), such as heuristic solution refinements or gradient descent procedures. The specific local search heuristics are handcrafted (manually specified) by a domain expert and often require a reasonably deep understanding of the problem at hand.
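
A minimal sketch of a first-generation memetic algorithm is shown below: a simple genetic algorithm on a continuous test function whose offspring are refined by a hand-crafted local-search meme (a greedy hill climber). The function names, parameter values, and the choice of test problem are illustrative assumptions, not taken from the cited works.

```python
# First-generation memetic algorithm sketch: genetic algorithm + hand-crafted
# local-search meme, minimising the sphere function (illustrative example).
import random

DIM, POP_SIZE, GENERATIONS = 5, 30, 50

def fitness(x):                      # sphere function: minimise sum of squares
    return sum(v * v for v in x)

def hill_climb(x, steps=20, sigma=0.1):
    """Hand-crafted local-search meme: greedy Gaussian perturbation."""
    best = list(x)
    for _ in range(steps):
        cand = [v + random.gauss(0, sigma) for v in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

def crossover(a, b):                 # uniform crossover
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.2, sigma=0.3):
    return [v + random.gauss(0, sigma) if random.random() < rate else v for v in x]

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness)
    parents = pop[:POP_SIZE // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    children = [hill_climb(c) for c in children]       # memetic refinement step
    pop = parents + children

print("best fitness:", fitness(min(pop, key=fitness)))
```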

Second-generation MAs focus on adaptive, data-driven selection and integration of memes from a manually specified catalogue of multiple memes (a meme pool), [6] gleaning patterns (knowledge) from the data generated during a search/optimization run so as to ascertain promising combinations of memes at runtime. [7] [8]
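
The following sketch illustrates, under simplifying assumptions, the adaptive meme-selection idea in the spirit of meta-Lamarckian learning [8]: each meme in a manually specified pool is chosen with probability proportional to the improvement it has delivered so far during the run. The meme definitions and credit-assignment rule here are illustrative, not the specific schemes of the cited papers.

```python
# Adaptive meme selection from a hand-specified pool (illustrative sketch).
import random

def fitness(x):
    return sum(v * v for v in x)

def meme_small_step(x):   # meme 1: fine-grained Gaussian perturbation
    return [v + random.gauss(0, 0.05) for v in x]

def meme_large_step(x):   # meme 2: coarse Gaussian perturbation
    return [v + random.gauss(0, 0.5) for v in x]

memes = [meme_small_step, meme_large_step]
reward = [1.0 for _ in memes]          # running credit per meme

x = [random.uniform(-5, 5) for _ in range(5)]
for _ in range(500):
    i = random.choices(range(len(memes)), weights=reward)[0]  # roulette selection
    cand = memes[i](x)
    gain = fitness(x) - fitness(cand)
    if gain > 0:
        x = cand
        reward[i] += gain              # credit the meme that produced improvement

print("best fitness:", fitness(x), "meme credits:", reward)
```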

Machine-crafted memes

It is only recently that the concept of memes has been set free from the narrow scope of merely hand-crafted local search heuristics, paving the path towards fully automated extraction, dispersal and exploitation of knowledge memes. In this era of data democratization, with access to modern computing platforms, an unmanned multi-meme setting emerges: one in which memes capturing diverse forms of higher-order problem-solving knowledge are uncovered by machines and thereafter made available for reuse across various problems. This makes it possible for advanced optimizers to automatically harness the transmitted memes and orchestrate custom search behaviours on the fly, without human intervention.
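
As a toy illustration of a machine-crafted knowledge meme, the sketch below extracts a simple statistical model (the per-dimension mean of good solutions) from a source task and reuses it to bias the initial population of a related target task. The meme representation and transfer mechanism are deliberately simplified assumptions, not the specific formulations used in the memetic computation literature.

```python
# Toy knowledge-meme extraction and transfer between two related tasks.
import random

def solve(fit, pop=None, size=30, gens=40, dim=5):
    """Simple evolutionary search; optionally seeded with a starting population."""
    pop = pop or [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
    for _ in range(gens):
        pop.sort(key=fit)
        pop = pop[:size // 2] + [[v + random.gauss(0, 0.2)
                                  for v in random.choice(pop[:size // 2])]
                                 for _ in range(size - size // 2)]
    return sorted(pop, key=fit)

source_fit = lambda x: sum((v - 2.0) ** 2 for v in x)   # source task: optimum near 2.0
target_fit = lambda x: sum((v - 2.5) ** 2 for v in x)   # related target task

elite = solve(source_fit)[:10]                          # good source solutions
meme = [sum(col) / len(col) for col in zip(*elite)]     # extracted "meme": mean vector

seeded = [[m + random.gauss(0, 0.3) for m in meme] for _ in range(30)]  # transfer as bias
print("target best (with meme):", target_fit(solve(target_fit, pop=seeded)[0]))
print("target best (from scratch):", target_fit(solve(target_fit)[0]))
```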

Applications

The concept of memes has been exploited in various research fields, for example robotics, multi-agent systems, optimization, [9] software engineering, and the social sciences.

Related Research Articles

<span class="mw-page-title-main">Genetic algorithm</span> Competitive algorithm for searching a problem space

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference.
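
Below is a minimal genetic-algorithm sketch on the classic OneMax bit-string problem (maximise the number of ones), showing the selection, crossover, and mutation operators mentioned above; all parameter settings are illustrative.

```python
# Minimal genetic algorithm for OneMax (illustrative parameters).
import random

N_BITS, POP, GENS = 40, 50, 60

def fitness(bits):
    return sum(bits)

def tournament(pop, k=3):                      # selection
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                           # one-point crossover
    cut = random.randint(1, N_BITS - 1)
    return a[:cut] + b[cut:]

def mutate(bits, rate=1.0 / N_BITS):           # bit-flip mutation
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]

print("best:", fitness(max(pop, key=fitness)), "of", N_BITS)
```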

In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place after the repeated application of the above operators.

<span class="mw-page-title-main">Evolutionary computation</span> Trial and error problem solvers with a metaheuristic or stochastic optimization character

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

<span class="mw-page-title-main">Particle swarm optimization</span> Iterative simulation method

In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
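
The sketch below implements the position and velocity updates described above, using the common inertia-weight formulation; the coefficient values and test function are illustrative defaults, not canonical settings.

```python
# Minimal particle swarm optimisation on the sphere function (illustrative).
import random

DIM, SWARM, ITERS = 5, 30, 100
W, C1, C2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social weights

def f(x):                                       # objective: sphere function
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                     # each particle's best known position
gbest = min(pbest, key=f)[:]                    # swarm's best known position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

print("best fitness:", f(gbest))
```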

<span class="mw-page-title-main">Ant colony optimization algorithms</span> Optimization algorithm

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
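
A minimal ant colony optimisation sketch for a tiny symmetric travelling salesman instance follows, showing pheromone-biased path construction followed by evaporation and deposit; the city layout and parameter values are illustrative.

```python
# Minimal ACO for a small symmetric TSP (illustrative parameters).
import math
import random

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(8)]
N = len(CITIES)
dist = [[math.dist(a, b) or 1e-9 for b in CITIES] for a in CITIES]
tau = [[1.0] * N for _ in range(N)]             # pheromone matrix
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.5, 10, 50

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

best = list(range(N))
for _ in range(ITERS):
    tours = []
    for _ in range(ANTS):
        tour, unvisited = [0], set(range(1, N))
        while unvisited:
            i = tour[-1]
            cands = list(unvisited)
            # probability ~ pheromone^ALPHA * (1/distance)^BETA
            weights = [tau[i][j] ** ALPHA * (1.0 / dist[i][j]) ** BETA for j in cands]
            j = random.choices(cands, weights=weights)[0]
            tour.append(j)
            unvisited.remove(j)
        tours.append(tour)
    best = min(tours + [best], key=tour_length)
    tau = [[(1 - RHO) * t for t in row] for row in tau]     # evaporation
    for tour in tours:                                      # deposit by tour quality
        for i in range(N):
            a, b = tour[i], tour[(i + 1) % N]
            tau[a][b] += 1.0 / tour_length(tour)
            tau[b][a] = tau[a][b]

print("best tour length:", tour_length(best))
```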

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems. Their use is always of interest when exact or other (approximate) methods are not available or are not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise.

<span class="mw-page-title-main">No free lunch in search and optimization</span> Average solution cost is the same with any method

In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This holds under the assumption that the search space is a probability density function. It does not apply where the search space has underlying structure that can be exploited more efficiently than random search, or even has closed-form solutions that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning. Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the then-current state of machine learning research on the problem of induction.

In evolutionary algorithms (EA), the term premature convergence means that a population for an optimization problem has converged too early, resulting in a suboptimal outcome. In this context, the parental solutions, through the aid of genetic operators, are not able to generate offspring that are superior to, or outperform, their parents. Premature convergence is a common problem in evolutionary algorithms in general and genetic algorithms in particular, as it leads to a loss, or convergence, of a large number of alleles, subsequently making it very difficult to search for a specific gene in which the alleles were present. An allele is considered lost if all individuals in a population share the same value for that particular gene. An allele is, as defined by De Jong, considered converged when 95% of a population shares the same value for a certain gene.
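
A small illustrative check of the De Jong criterion mentioned above: a gene counts as converged when at least 95% of the population shares the same value at that position. The helper name and toy population are hypothetical.

```python
# Detect converged genes under the 95% criterion (illustrative helper).
def converged_genes(population, threshold=0.95):
    """Return indices of genes whose most common value covers >= threshold of the population."""
    n = len(population)
    converged = []
    for gene in range(len(population[0])):
        values = [individual[gene] for individual in population]
        if max(values.count(v) for v in set(values)) / n >= threshold:
            converged.append(gene)
    return converged

# Example: gene 0 has fully converged, gene 1 has not.
pop = [[1, 0], [1, 1], [1, 0], [1, 1]]
print(converged_genes(pop))   # -> [0]
```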

In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm (GA) or, more generally, of an evolutionary algorithm (EA). It may provide a sufficiently good solution to an optimization problem. It uses a suitable heuristic or local search technique to improve the quality of solutions generated by the EA and to reduce the likelihood of premature convergence.

Cultural algorithms (CA) are a branch of evolutionary computation where there is a knowledge component that is called the belief space in addition to the population component. In this sense, cultural algorithms can be seen as an extension to a conventional genetic algorithm. Cultural algorithms were introduced by Reynolds (see references).

A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.

<span class="mw-page-title-main">Meta-optimization</span>

In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
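
A sketch of the meta-optimization idea under simple assumptions: an outer random search tunes the mutation step size of an inner hill climber, judging each setting by the average fitness the inner optimiser reaches. The optimiser pairing, parameter ranges, and budgets are illustrative assumptions.

```python
# Outer optimiser tunes a parameter of an inner optimiser (illustrative sketch).
import random

def inner_optimiser(sigma, evals=200, dim=5):
    """Simple (1+1)-style hill climber whose behaviour depends on sigma."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = sum(v * v for v in x)
    for _ in range(evals):
        cand = [v + random.gauss(0, sigma) for v in x]
        fc = sum(v * v for v in cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

best_sigma, best_score = None, float("inf")
for _ in range(20):                               # outer optimiser: random search over sigma
    sigma = 10 ** random.uniform(-3, 1)
    score = sum(inner_optimiser(sigma) for _ in range(5)) / 5   # average over repeats
    if score < best_score:
        best_sigma, best_score = sigma, score

print("tuned sigma:", round(best_sigma, 4), "mean inner fitness:", best_score)
```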

In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions of a problem, as opposed to a single best solution. Evolutionary multimodal optimization is a branch of evolutionary computation, which is closely related to machine learning. Wong provides a short survey, while the chapter by Shir and the book by Preuss cover the topic in more detail.

Kimeme is an open platform for multi-objective optimization and multidisciplinary design optimization. It is intended to be coupled with external numerical software such as computer-aided design (CAD), finite element analysis (FEM), structural analysis and computational fluid dynamics tools. It was developed by Cyber Dyne Srl and provides both a design environment for problem definition and analysis and a software network infrastructure to distribute the computational load.

The Genetic and Evolutionary Computation Conference (GECCO) is the premier conference in the area of genetic and evolutionary computation. GECCO has been held every year since 1999, when it was first established as a recombination of the International Conference on Genetic Algorithms (ICGA) and the Annual Genetic Programming Conference (GP).

Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics.

The brain storm optimization algorithm is a heuristic algorithm proposed by Yuhui Shi and inspired by the human brainstorming process. It focuses on solving multi-modal problems, such as the radio antenna design problems worked on by Yahya Rahmat-Samii.

The table of metaheuristics is a chronological table of metaheuristic algorithms that only contains fundamental computational intelligence algorithms. Hybrid algorithms and multi-objective algorithms are not listed in the table.

A large-scale capacitated arc routing problem (LSCARP) is a variant of the capacitated arc routing problem that covers 300 or more edges to model complex arc routing problems at large scales.

References

  1. Dawkins, R. (1976). The selfish gene. Oxford University Press.
  2. Ong, Y. S., Lim, M. H., & Chen, X. (2010). Memetic computation—past, present & future [research frontier]. IEEE Computational Intelligence Magazine, 5(2), 24-31.
  3. Chen, X., Ong, Y. S., Lim, M. H., & Tan, K. C. (2011). A multi-facet survey on memetic computation. IEEE Transactions on Evolutionary Computation, 15(5), 591-607.
  4. Gupta, A., & Ong, Y. S. (2018). Memetic Computation: The Mainspring of Knowledge Transfer in a Data-Driven Optimization Era (Vol. 21). Springer.
  5. Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Caltech Concurrent Computation Program, C3P Report 826.
  6. Krasnogor, N., Blackburne, B. P., Burke, E. K., & Hirst, J. D. (2002, September). Multimeme algorithms for protein structure prediction. In International Conference on Parallel Problem Solving from Nature (pp. 769-778). Springer, Berlin, Heidelberg.
  7. Krasnogor, N., Blackburne, B. P., Burke, E. K., & Hirst, J. D. (2002, September). Multimeme algorithms for protein structure prediction. In International Conference on Parallel Problem Solving from Nature (pp. 769-778). Springer, Berlin, Heidelberg.
  8. Ong, Y. S., & Keane, A. J. (2004). Meta-Lamarckian learning in memetic algorithms. IEEE Transactions on Evolutionary Computation, 8(2), 99-110.
  9. Feng, L., Ong, Y. S., Lim, M. H., & Tsang, I. W. (2015). Memetic search with interdomain learning: A realization between CVRP and CARP. IEEE Transactions on Evolutionary Computation, 19(5), 644-658.