MCACEA

MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) is a general framework that uses a single evolutionary algorithm (EA) per agent and coordinates the evolution of the EA populations by sharing their best solutions and evaluating cooperation objectives. The framework can be used to optimize characteristics of multiple cooperating agents in mathematical optimization problems. More specifically, because both individual and cooperation objectives are optimized, MCACEA is used in multi-objective optimization problems.

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place after the repeated application of the above operators.
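
As background, the following is a minimal, illustrative sketch of the generic EA loop described above; the representation, operators, and parameters (population size, mutation rate, and the toy objective) are arbitrary choices made for the example and are not part of MCACEA itself.

```python
import random

def evolve(objective, dim=4, pop_size=30, generations=100, mutation_rate=0.1):
    """Minimal generational EA: evaluation, selection, recombination, mutation."""
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation (here: minimize the objective).
        scored = sorted(population, key=objective)
        parents = scored[: pop_size // 2]          # truncation selection
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 1) if random.random() < mutation_rate else g
                     for g in child]               # Gaussian mutation
            offspring.append(child)
        population = parents + offspring
    return min(population, key=objective)

# Toy usage: minimize the sphere function.
best = evolve(lambda x: sum(g * g for g in x))
```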

In mathematics, computer science and operations research, mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.

Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and pollutant emissions of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
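
Because no single point generally optimizes all objectives at once, multi-objective methods usually compare candidates by Pareto dominance, which is also the basis of the Pareto sets mentioned below. The snippet is a small illustrative helper (not part of MCACEA's published description) for checking dominance and filtering a set of objective vectors down to its non-dominated front, assuming all objectives are to be minimized.

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b in every
    objective (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example: (cost, -comfort) pairs for three car choices, both minimized.
print(pareto_front([(20000, -7), (25000, -9), (26000, -6)]))
# -> [(20000, -7), (25000, -9)]  (the third car costs more and is less
#    comfortable than the second, so it is dominated)
```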

Description and implementation

MCACEA uses multiple EAs (one per agent) that evolve their own populations to find the best solution to their associated sub-problems according to their individual and cooperation constraints and objectives. Each EA solves its optimization problem in parallel with the others and exchanges information with them during its evaluation step. This information is needed so that each EA can measure the coordination objectives of the solutions encoded in its own population, taking into account the likely optimal solutions of the other EAs' populations. For this purpose, each EA receives the best solutions of the others before evaluating the cooperative objectives of the candidate solutions in its own population.

Because the cooperation objective values depend on the best solutions of the other populations, and the optimality of a solution depends on both the individual and cooperation objectives, it is not strictly possible to select and send the best solution of each EA to the others. MCACEA therefore divides the evaluation step inside each EA into three parts: in the first part, each EA identifies the best solution considering only its individual objective values and sends it to the other EAs; in the second part, the cooperation objective values of all solutions are calculated taking into account the received information; and in the third part, each EA calculates the fitness of its solutions considering all the individual and cooperation objective values.

Although each population can only offer a single best solution at each generation, each EA maintains a Pareto set of optimal solutions and selects the unique optimal solution only at the end, once the last population has been obtained. Therefore, to be able to determine a unique best solution according to the individual objectives in each generation (and thus to use such an EA within the MCACEA framework), a step in charge of selecting that solution must also be included in the evaluation step of each EA.
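
A minimal sketch of this coordination scheme is given below. It assumes a simple synchronous implementation in which, at every generation, each agent's EA selects its individually best candidate, broadcasts it, completes the fitness computation with the cooperation objectives, and then applies its variation operators; all class and function names, and the scalarization of objectives by summation, are illustrative choices, not part of the published framework.

```python
import random

class AgentEA:
    """One EA per agent; only the parts relevant to MCACEA coordination are shown."""

    def __init__(self, population, individual_objs, cooperation_objs):
        self.population = population              # candidate solutions (lists of floats)
        self.individual_objs = individual_objs    # callables f(solution) -> float, minimized
        self.cooperation_objs = cooperation_objs  # callables g(solution, others_best) -> float

    def individual_values(self, s):
        return [f(s) for f in self.individual_objs]

    def best_individual(self):
        # Unique per-generation best according to the individual objectives only
        # (scalarized by summation purely for illustration; a multi-objective EA
        # would instead keep a Pareto set and pick one representative from it).
        return min(self.population, key=lambda s: sum(self.individual_values(s)))

    def complete_fitness(self, others_best):
        # Fitness combining the individual and the cooperation objectives.
        def fitness(s):
            coop = [g(s, others_best) for g in self.cooperation_objs]
            return sum(self.individual_values(s)) + sum(coop)
        return fitness

    def step(self, fitness):
        # Trivial variation operator: keep the better half, add mutated copies of it.
        survivors = sorted(self.population, key=fitness)[: len(self.population) // 2]
        children = [[x + random.gauss(0, 0.1) for x in s] for s in survivors]
        self.population = survivors + children


def mcacea_generation(agents):
    """One synchronous MCACEA generation: exchange, cooperative evaluation, variation."""
    best = [a.best_individual() for a in agents]      # each EA broadcasts its individually best
    for i, a in enumerate(agents):
        others = best[:i] + best[i + 1:]              # information received from the other EAs
        a.step(a.complete_fitness(others))
```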

Evaluation phase in MCACEA

The complete evaluation phase of the individual cooperating EAs is divided into six steps. When a single EA is run on its own, only the first two steps of this evaluation process are used; MCACEA extends these two steps into the following six (a sketch of the whole phase is given after the list):

1. Evaluating the individual objectives of each solution.

2. Calculating the fitness of each solution with the single evaluation function (containing only the individual objectives).

3. Finding the best solution of the population.

4. Sending the best solution to (and receiving the best solutions from) the other single EAs.

5. Calculating the cooperation objectives taking into account the received information from the other EAs.

6. Calculating the fitness of each solution with the complete evaluation function (containing both the individual and the cooperation objective values obtained in steps 1 and 5).
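
The following sketch writes the six steps above as one evaluation routine for a single cooperating EA. The communication of step 4 is modelled simply as a callable passed in as an argument, and the scalarization of objective values into a single fitness figure is an illustrative choice, not something prescribed by MCACEA.

```python
def evaluation_phase(population, individual_objs, cooperation_objs, exchange):
    """Evaluation phase of one cooperating EA, following MCACEA's six steps.

    `exchange` is a callable that takes this EA's best solution and returns
    the best solutions received from the other EAs (step 4); how it is
    implemented (shared memory, message passing, ...) is left open.
    """
    # 1. Evaluate the individual objectives of each solution.
    ind = {id(s): [f(s) for f in individual_objs] for s in population}

    # 2. Fitness with the single evaluation function (individual objectives only).
    single_fitness = {id(s): sum(ind[id(s)]) for s in population}

    # 3. Find the best solution of the population under that fitness.
    best = min(population, key=lambda s: single_fitness[id(s)])

    # 4. Send the best solution to, and receive the best solutions from, the other EAs.
    others_best = exchange(best)

    # 5. Cooperation objectives, using the information received from the other EAs.
    coop = {id(s): [g(s, others_best) for g in cooperation_objs] for s in population}

    # 6. Fitness with the complete evaluation function (individual + cooperation).
    return {id(s): sum(ind[id(s)]) + sum(coop[id(s)]) for s in population}
```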

Similar approaches

Although MCACEA may look similar to the usual parallelization of EAs, in this case, instead of distributing the solutions of the whole problem among different EAs that share their solutions periodically, the algorithm divides the problem into smaller sub-problems that are solved simultaneously by each EA, taking into account the solutions that the other EAs are obtaining for their parts of the problem.

Another possibility [1] is to send the best completely evaluated solutions of the previous generation to the other EAs, instead of the current best solution evaluated only on the individual objectives. Nevertheless, that approach introduces a bias toward outdated but completely evaluated trajectories, while MCACEA biases the search toward solutions that are currently good with respect to the individual objectives.
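
The difference between the two exchange policies can be made concrete with a small, purely hypothetical snippet: the alternative of [1] broadcasts the fully evaluated best of the previous generation, whereas MCACEA broadcasts the individually best candidate of the current generation before its cooperation objectives are known. The attribute and method names are assumptions for illustration only.

```python
def solution_to_broadcast(ea, policy="mcacea"):
    """Choose which solution an EA shares with the others (illustrative only)."""
    if policy == "mcacea":
        # Current generation, best according to the individual objectives only.
        return ea.best_individual()
    if policy == "previous_generation":
        # Previous generation, best according to the complete (individual +
        # cooperation) evaluation: fully evaluated, but one generation old.
        return ea.previous_best_complete
    raise ValueError(policy)
```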

Applications

MCACEA has been used for finding and optimizing the trajectories of unmanned aerial vehicles (UAVs) flying simultaneously in the same scenario. [2]
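
As an illustration of how the UAV application maps onto the framework, a cooperation objective might penalize the loss of separation between an agent's candidate trajectory and the best trajectories received from the other UAVs' EAs, along the lines of the sketch below. The waypoint representation, the safety distance and the penalty form are assumptions made for this example and are not taken from the cited work.

```python
import math

def separation_penalty(trajectory, others_best, safe_distance=100.0):
    """Cooperation objective: penalize near-misses between this UAV's candidate
    trajectory and the other UAVs' best trajectories (waypoints assumed
    time-aligned). Each trajectory is a list of (x, y) waypoints in metres."""
    penalty = 0.0
    for other in others_best:
        for (x1, y1), (x2, y2) in zip(trajectory, other):
            d = math.hypot(x1 - x2, y1 - y2)
            if d < safe_distance:
                penalty += (safe_distance - d) / safe_distance
    return penalty
```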

See also

Artificial development, also known as artificial embryogeny or machine intelligence or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype-phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models.

Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character.

References

  1. C. Zheng, L. Li, F. Xu, F. Sun, and M. Ding, Evolutionary route planner for unmanned air vehicles, IEEE Transactions on Robotics, vol. 21, no. 4, pp. 609–620, Aug. 2005.
  2. J. M. de la Cruz, E. Besada-Portas, L. de la Torre, B. Andrés-Toro, and J. A. Lopez-Orozco, Evolutionary path planner for UAVs in realistic environments, in Proceedings of the Genetic and Evolutionary Computation Conference, 2008, pp. 1447–1155.

Bibliography

L. de la Torre, J. M. de la Cruz, and B. Andrés-Toro. Evolutionary trajectory planner for multiple UAVs in realistic scenarios. IEEE Transactions on Robotics, vol. 26, no. 4, pp. 619–634, August 2010.