Mutation (genetic algorithm)

Mutation is a genetic operator used to maintain genetic diversity of the chromosomes of a population of a genetic or, more generally, an evolutionary algorithm (EA). It is analogous to biological mutation.

The classic example of a mutation operator of a binary coded genetic algorithm (GA) involves a probability that an arbitrary bit in a genetic sequence will be flipped from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence. This random variable tells whether or not a particular bit will be flipped. This mutation procedure, based on the biological point mutation, is called single point mutation. Other types of mutation operators are commonly used for representations other than binary, such as floating-point encodings or representations for combinatorial problems.

The purpose of mutation in EAs is to introduce diversity into the sampled population. Mutation operators are used in an attempt to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, which would slow or even stop convergence to the global optimum. This reasoning also leads most EAs to avoid taking only the fittest of the population when generating the next generation, instead selecting a random (or semi-random) set with a weighting toward those that are fitter. [1]

The following requirements apply to all mutation operators used in an EA: [2] [3]

  1. every point in the search space must be reachable by one or more mutations.
  2. there must be no preference for parts or directions in the search space (no drift).
  3. small mutations should be more probable than large ones.

For different genome types, different mutation types are suitable. Some examples are Gaussian, uniform, zigzag, scramble, insertion, inversion, and swap mutation. [4] [5] [6] An overview and more operators than those presented below can be found in the introductory book by Eiben and Smith [7] or in [8].

Bit string mutation

The mutation of bit strings occurs through bit flips at random positions.

Example:

1010010
1010110

The probability of a mutation of a bit is $1/l$, where $l$ is the length of the binary vector. Thus, a mutation rate of $1$ per mutation and individual selected for mutation is reached.
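
The following minimal Python sketch applies this per-bit flip with the $1/l$ rate as default; the function name and interface are chosen here for illustration only:

    import random

    def bit_flip_mutation(bits, rate=None):
        """Flip each bit independently with probability `rate`.

        With the common default rate = 1/len(bits), on average one bit is
        flipped per individual selected for mutation.
        """
        if rate is None:
            rate = 1.0 / len(bits)                 # the 1/l rate described above
        return [b ^ 1 if random.random() < rate else b for b in bits]

    # Example with the 7-bit string from the text:
    parent = [1, 0, 1, 0, 0, 1, 0]
    child = bit_flip_mutation(parent)              # e.g. [1, 0, 1, 0, 1, 1, 0]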

Mutation of real numbers

Many EAs, such as the evolution strategy [9] [10] or the real-coded genetic algorithms, [11] [12] [8] work with real numbers instead of bit strings. This is due to the good results that have been obtained with this type of coding. [8] [13]

The value of a real-valued gene can either be changed incrementally or redetermined completely. A mutation that implements the latter should only ever be used in conjunction with value-changing mutations, and then only with comparatively low probability, as it can lead to large changes.

In practical applications, the decision variables of the optimisation problem to be solved are usually restricted. Accordingly, the values of the associated genes are each restricted to an interval $[x_{\min}, x_{\max}]$. Mutations may or may not take these restrictions into account. In the latter case, suitable post-processing is required as described below.

Mutation without consideration of restrictions

Example of a normally distributed random variable. Note that the given proportions of the subranges add up to 99.8% and not 100% due to rounding.

A real number $x$ can be mutated using a normal distribution $\mathcal{N}(0, \sigma)$ by adding the generated random value to the old value of the gene, resulting in the mutated value $x'$:

$x' = x + \mathcal{N}(0, \sigma)$

In the case of genes with a restricted range of values $[x_{\min}, x_{\max}]$, it is a good idea to choose the step size of the mutation $\sigma$ so that it reasonably fits the range of the gene to be changed, e.g.:

$\sigma = \frac{x_{\max} - x_{\min}}{6}$

The step size can also be adjusted to the smaller permissible change range depending on the current value. In any case, however, it is likely that the new value of the gene will be outside the permissible range of values. Such a case must be considered a lethal mutation, since the obvious repair by using the respective violated limit as the new value of the gene would lead to a drift. This is because the limit value would then be selected with the entire probability of the values beyond the limit of the value range.
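
A minimal sketch of such a mutation, assuming the step-size heuristic $\sigma = (x_{\max} - x_{\min})/6$ from above and treating out-of-range results as lethal instead of clamping them to the boundary (names and interface are illustrative):

    import random

    def gaussian_mutation(x, x_min, x_max):
        """Return x' = x + N(0, sigma), or None if the mutant leaves [x_min, x_max].

        A None result should be treated as a lethal mutation; repairing it by
        setting the violated limit as the new value would concentrate probability
        mass on the boundary and cause drift.
        """
        sigma = (x_max - x_min) / 6.0              # step size fitted to the gene's range
        x_new = x + random.gauss(0.0, sigma)
        return x_new if x_min <= x_new <= x_max else None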

The evolution strategy works with real numbers and mutation based on normal distribution. The step sizes are part of the chromosome and are subject to evolution together with the actual decision variables.

Mutation with consideration of restrictions

One possible form of changing the value of a gene while taking its value range into account is the mutation relative parameter change of the evolutionary algorithm GLEAM (General Learning Evolutionary Algorithm and Method), [14] in which, as with the mutation presented earlier, small changes are more likely than large ones.

Distribution of probabilities for k = 10 sub-areas of the total change interval. The sub-areas each cover 1/k of the width of the total change interval.

First, an equally distributed decision is made as to whether the current value $x$ should be increased or decreased and then the corresponding total change interval is determined. Without loss of generality, an increase is assumed for the explanation and the total change interval is then $[x, x_{\max}]$. It is divided into $k$ sub-areas of equal size with the width $\delta = \frac{x_{\max} - x}{k}$, from which $k$ sub-change intervals of different size are formed:

$i$-th sub-change interval: $[x, x + \delta_i]$ with $\delta_i = i \cdot \delta$ and $i = 1, \dots, k$

Subsequently, one of the $k$ sub-change intervals is selected with equal probability and a uniformly distributed random number is drawn from it as the new value $x'$ of the gene. The resulting summed probabilities of the sub-change intervals yield the probability distribution of the $k$ sub-areas shown in the adjacent figure for the exemplary case of $k = 10$. This is not a normal distribution as before, but this distribution also clearly favours small changes over larger ones.
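
The scheme can be sketched as follows in Python; this is a simplified illustration of the selection mechanism described above, not GLEAM's actual implementation, and the function name and parameters are assumptions:

    import random

    def relative_parameter_change(x, x_min, x_max, k=10):
        """Mutate x within [x_min, x_max] so that small changes are more probable.

        One of k nested sub-change intervals is chosen with equal probability and
        the new value is drawn uniformly from it, which yields the decreasing
        probability distribution over the k sub-areas described in the text.
        """
        if random.random() < 0.5:                  # equally distributed direction decision
            delta = (x_max - x) / k                # width of one sub-area towards x_max
            i = random.randint(1, k)               # i-th sub-change interval [x, x + i*delta]
            return random.uniform(x, x + i * delta)
        else:
            delta = (x - x_min) / k                # width of one sub-area towards x_min
            i = random.randint(1, k)               # sub-change interval [x - i*delta, x]
            return random.uniform(x - i * delta, x)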

For larger values of $k$, such as 10, this mutation is less well suited for tasks where the optimum lies on one of the boundaries of the value range. This can be remedied by significantly reducing $k$ when a gene value approaches its limits very closely.

Mutation of permutations

Mutations of permutations are specially designed for genomes that are themselves permutations of a set. These are often used to solve combinatorial tasks. [8] [15] [16] In the two mutations presented, parts of the genome are rotated or inverted.

Rotation to the right

The procedure [16] is as follows (a code sketch is given after the list):

  1. Let a permutation $P = (p_0, \dots, p_{n-1})$ of length $n$ be given.
  2. Select a partial list, i.e. a start index $i_s$ and an end index $i_e$ in $P$, which are both natural numbers between $0$ and $n-1$. Choose the number of positions $s$ by which the partial list should be rotated, where $0 < s < n$. The start index can also be after the end index. Then the partial list simply starts again from the beginning (periodic boundary condition). This is necessary so that the mutation probability in the genome is the same everywhere and is not greater in the middle than at the edges.
  3. Copy $P$ to $P'$ and rotate the partial list by $s$ positions to the right.
  4. $P'$ is then the mutated genome.
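
A minimal Python sketch of this operator with wrap-around of the partial list; the function name and interface are chosen here for illustration:

    def rotate_partial_right(perm, i_s, i_e, s):
        """Rotate the partial list from index i_s to i_e (inclusive, wrapping
        around the end of the permutation) by s positions to the right."""
        n = len(perm)
        if i_s <= i_e:
            idx = list(range(i_s, i_e + 1))
        else:                                      # periodic boundary condition
            idx = list(range(i_s, n)) + list(range(0, i_e + 1))
        sub = [perm[j] for j in idx]
        s %= len(sub)
        sub = sub[-s:] + sub[:-s] if s else sub    # right rotation of the partial list
        child = list(perm)                         # copy P to P'
        for j, v in zip(idx, sub):
            child[j] = v
        return child

    # Example: rotate positions 2..5 by one position to the right
    print(rotate_partial_right([0, 1, 2, 3, 4, 5, 6, 7], 2, 5, 1))  # [0, 1, 5, 2, 3, 4, 6, 7]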

Inversion

The procedure [15] is as follows (a code sketch is given after the list):

  1. Let a permutation $P = (p_0, \dots, p_{n-1})$ of length $n$ be given.
  2. Select a partial list, i.e. a start index $i_s$ and an end index $i_e$ in $P$, which are both natural numbers between $0$ and $n-1$, where $i_s \neq i_e$. This condition causes the mutation to always produce a genotypically altered chromosome. The start index can also be after the end index. Then the partial list simply starts again from the beginning (periodic boundary condition). This is necessary so that the mutation probability in the genome is the same everywhere and is not greater in the middle than at the edges.
  3. Copy $P$ to $P'$ and invert the partial list.
  4. $P'$ is then the mutated genome.
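
A corresponding sketch for the inversion, again with wrap-around of the partial list (interface chosen for illustration):

    def invert_partial(perm, i_s, i_e):
        """Invert the partial list from index i_s to i_e (inclusive, wrapping
        around the end of the permutation); i_s != i_e guarantees a change."""
        n = len(perm)
        if i_s <= i_e:
            idx = list(range(i_s, i_e + 1))
        else:                                      # periodic boundary condition
            idx = list(range(i_s, n)) + list(range(0, i_e + 1))
        sub = [perm[j] for j in idx][::-1]         # reversed partial list
        child = list(perm)                         # copy P to P'
        for j, v in zip(idx, sub):
            child[j] = v
        return child

    # Example: invert the partial list that wraps from position 6 to position 1
    print(invert_partial([0, 1, 2, 3, 4, 5, 6, 7], 6, 1))  # [7, 6, 2, 3, 4, 5, 1, 0]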

Variants with preference for smaller changes

The requirement raised at the beginning for mutations, according to which small changes should be more probable than large ones, is only inadequately fulfilled by the two permutation mutations presented, since the lengths of the partial lists and the number of shift positions are determined in an equally distributed manner. However, the longer the partial list and the shift, the greater the change in gene order.

This can be remedied by the following modifications. The end index $i_e$ of the partial lists is determined as the distance $d$ to the start index $i_s$:

$i_e = (i_s + d) \bmod n$

where $d$ is determined randomly according to one of the two procedures for the mutation of real numbers from the interval $[0, n-1]$ and rounded.

For the rotation, the number of shift positions $s$ is determined similarly to the distance $d$, but the value $0$ is forbidden.

For the inversion, note that $i_s \neq i_e$ must hold, so the value $0$ must be excluded for $d$.
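
One possible way to put these modifications together in code; the helper below is illustrative, and the small-change-biased draw of $d$ simply reuses the sub-interval idea of the real-valued mutation above rather than a reference implementation:

    import random

    def biased_distance(n, k=10):
        """Draw a distance d from [0, n-1] with small values more probable,
        using the nested sub-interval scheme of the real-valued mutation."""
        i = random.randint(1, k)                       # pick one of k nested sub-intervals
        return round(random.uniform(0, i * (n - 1) / k))

    def choose_partial_list(n, for_inversion, k=10):
        """Return (i_s, i_e) with the end index placed at distance d from the
        start index; d = 0 is excluded for the inversion so that i_s != i_e."""
        i_s = random.randrange(n)
        d = biased_distance(n, k)
        while for_inversion and d == 0:
            d = biased_distance(n, k)
        i_e = (i_s + d) % n                            # wrap-around as in the operators above
        return i_s, i_e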

References

  1. "XI. Crossover and Mutation". Marek Obitko. Retrieved 2011-04-07.
  2. Eiben, A.E.; Smith, J.E. (2015). "Variation Operators (Mutation and Recombination)". Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. pp. 31–32. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932.
  3. Bäck, Thomas; Fogel, David B.; Whitley, Darrell; Angeline, Peter J. (1999). "Mutation operators". In Bäck, Thomas; Fogel, David B.; Michalewicz, Zbigniew (eds.). Evolutionary computation. Vol. 1, Basic algorithms and operators. Boca Raton: CRC Press. pp. 237–255. ISBN 0-585-30560-9. OCLC 45730387.
  4. Mirjalili, Seyedali (2019), Mirjalili, Seyedali (ed.), "Genetic Algorithm", Evolutionary Algorithms and Neural Networks: Theory and Applications, Studies in Computational Intelligence, Cham: Springer International Publishing, vol. 780, pp. 43–55, doi:10.1007/978-3-319-93025-1_4, ISBN 978-3-319-93025-1, S2CID 242047607, retrieved 2023-05-26.
  5. Harifi, Sasan; Mohamaddoust, Reza (2023-05-01). "Zigzag mutation: a new mutation operator to improve the genetic algorithm". Multimedia Tools and Applications. doi:10.1007/s11042-023-15518-3. ISSN 1573-7721. S2CID 258446829.
  6. Katoch, Sourabh; Chauhan, Sumit Singh; Kumar, Vijay (2021-02-01). "A review on genetic algorithm: past, present, and future". Multimedia Tools and Applications. 80 (5): 8091–8126. doi:10.1007/s11042-020-10139-6. ISSN 1573-7721. PMC 7599983. PMID 33162782.
  7. Eiben, A.E.; Smith, J.E. (2015). "Representation, Mutation, and Recombination". Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. pp. 49–78. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932.
  8. Michalewicz, Zbigniew (1992). Genetic Algorithms + Data Structures = Evolution Programs. Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-02830-8. ISBN 978-3-662-02832-2. S2CID 33272042.
  9. Rechenberg, Ingo (1973). Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis) (in German). Frommann-Holzboog. ISBN 3-7728-0373-3.
  10. Schwefel, Hans-Paul (1977). Numerische Optimierung von Computermodellen (PhD thesis) (in German). Basel: Birkhäuser Verlag. Translation: Numerical optimization of computer models, Wiley, Chichester, 1981. ISBN 0-471-09988-0. OCLC 8011455.
  11. Wright, Alden H. (1991), Rawlins, Gregory J. E. (ed.), Genetic Algorithms for Real Parameter Optimization, Foundations of Genetic Algorithms, vol. 1, Elsevier, pp. 205–218, doi:10.1016/b978-0-08-050684-5.50016-1, ISBN 9780080506845, retrieved 2023-01-02.
  12. Eshelman, Larry J.; Schaffer, J. David (1993), "Real-Coded Genetic Algorithms and Interval-Schemata", Foundations of Genetic Algorithms, Elsevier, vol. 2, pp. 187–202, doi:10.1016/b978-0-08-094832-4.50018-0, ISBN 978-0-08-094832-4, retrieved 2023-01-01.
  13. Herrera, F.; Lozano, M.; Verdegay, J.L. (1998). "Tackling Real-Coded Genetic Algorithms: Operators and Tools for Behavioural Analysis". Artificial Intelligence Review. 12 (4): 265–319. doi:10.1023/A:1006504901164. S2CID 6798965.
  14. Blume, Christian; Jakob, Wilfried (2002), "GLEAM - An Evolutionary Algorithm for Planning and Control Based on Evolution Strategy", Conf. Proc. of Genetic and Evolutionary Computation Conference (GECCO 2002), vol. Late Breaking Papers, pp. 31–38, retrieved 2023-01-01.
  15. Eiben, A.E.; Smith, J.E. (2015). "Mutation for Permutation Representation". Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. pp. 69–70. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932.
  16. Yu, Xinjie; Gen, Mitsuo (2010). "Mutation Operators". Introduction to Evolutionary Algorithms. Decision Engineering. London: Springer. pp. 286–288. doi:10.1007/978-1-84996-129-5. ISBN 978-1-84996-128-8.
