Constructive cooperative coevolution

The constructive cooperative coevolutionary algorithm (also called C3) is a global optimisation algorithm in artificial intelligence based on the multi-start architecture of the greedy randomized adaptive search procedure (GRASP). [1] [2] It incorporates the existing cooperative coevolutionary algorithm (CC). [3] The problem under consideration is decomposed into subproblems, which are optimised separately while exchanging information in order to solve the complete problem. An optimisation algorithm, usually but not necessarily an evolutionary algorithm, is embedded in C3 for optimising those subproblems. The nature of the embedded optimisation algorithm determines whether C3's behaviour is deterministic or stochastic.
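
The decomposition can be pictured with a short sketch. The following Python fragment is a minimal illustration, not taken from the cited papers: it splits a decision vector into equally sized, non-overlapping subcomponents, whereas the grouping used in practice is problem-dependent.

def decompose(x, n_sub):
    """Split a decision vector into n_sub equally sized subcomponents."""
    size = len(x) // n_sub
    return [x[i * size:(i + 1) * size] for i in range(n_sub)]

x = [0.5, -1.2, 3.3, 0.0, 2.1, -0.7]
print(decompose(x, n_sub=3))  # [[0.5, -1.2], [3.3, 0.0], [2.1, -0.7]]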

The C3 optimisation algorithm was originally designed for simulation-based optimisation [4] [5] but can be used for global optimisation problems in general. [6] Its main strength compared with other optimisation algorithms, in particular cooperative coevolution, is its ability to handle non-separable optimisation problems. [4] [7]

An improved version was proposed later, called the Improved Constructive Cooperative Coevolutionary Differential Evolution (C3iDE), which removes several limitations of the previous version. A novel element of C3iDE is the advanced initialisation of the subpopulations. C3iDE initially optimises the subpopulations in a partially co-adaptive fashion. During the initial optimisation of a subpopulation, only a subset of the other subcomponents is considered for the co-adaptation. This subset increases stepwise until all subcomponents are considered. This makes C3iDE very effective on large-scale global optimisation problems (up to 1000 dimensions) compared to the cooperative coevolutionary algorithm (CC) and differential evolution. [8]
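
The advanced initialisation can be sketched as follows. This Python fragment is a minimal illustration under simplifying assumptions (a sphere objective, a random-perturbation hill climber standing in for the embedded differential evolution, and a subset that grows by exactly one subcomponent per step); all identifiers are illustrative rather than from the C3iDE publication.

import random

def sphere(x):
    """Stand-in objective; any black-box function could be used."""
    return sum(v * v for v in x)

def optimise_subcomponent(pop, fitness, steps=50):
    """Hypothetical subcomponent optimiser: random-perturbation hill
    climbing stands in for the embedded differential evolution."""
    best = min(pop, key=fitness)
    for _ in range(steps):
        cand = [v + random.gauss(0, 0.1) for v in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

def advanced_initialisation(n_sub, sub_dim):
    """Initialise subcomponents one at a time; each new subcomponent is
    co-adapted only with the stepwise-growing subset already initialised."""
    partial = []  # concatenation of the subcomponents initialised so far
    for _ in range(n_sub):
        pop = [[random.uniform(-5, 5) for _ in range(sub_dim)]
               for _ in range(10)]
        fitness = lambda sub: sphere(partial + sub)  # partial co-adaptation
        partial = partial + optimise_subcomponent(pop, fitness)
    return partial

print(advanced_initialisation(n_sub=4, sub_dim=2))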

The improved algorithm has since been adapted for multi-objective optimisation. [9]

Algorithm

As shown in the pseudocode below, an iteration of C3 consists of two phases. In Phase I, the constructive phase, a feasible solution for the entire problem is constructed in a stepwise manner, considering a different subproblem in each step. After the final step, all subproblems have been considered and a solution for the complete problem has been constructed. This constructed solution is then used as the initial solution in Phase II, the local improvement phase, in which the CC algorithm is employed to optimise it further. A cycle of Phase II optimises the subproblems separately while keeping the parameters of the other subproblems fixed to a central blackboard solution. When this has been done for each subproblem, the solutions found are combined in a "collaboration" step, and the best of the produced combinations becomes the blackboard solution for the next cycle, in which the same procedure is repeated. Phase II, and with it the current iteration, terminates when the search of the CC algorithm stagnates and no significantly better solutions are found. Then the next iteration starts. At the start of the next iteration, a new feasible solution is constructed, utilising solutions that were found during Phase I of the previous iteration(s). This constructed solution is then used as the initial solution of Phase II, as in the first iteration. This is repeated until one of the termination criteria for the optimisation is reached, e.g. a maximum number of evaluations.

{S_phase1} ← ∅
while termination criteria not satisfied do
    if {S_phase1} = ∅ then
        {S_phase1} ← SubOpt(∅, 1)
    end if
    while p_phase1 not completely constructed do
        p_phase1 ← GetBest({S_phase1})
        {S_phase1} ← SubOpt(p_phase1, i_next_subproblem)
    end while
    p_phase2 ← GetBest({S_phase1})
    while not stagnate do
        {S_phase2} ← ∅
        for each subproblem i do
            {S_phase2} ← SubOpt(p_phase2, i)
        end for
        {S_phase2} ← Collab({S_phase2})
        p_phase2 ← GetBest({S_phase2})
    end while
end while
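
A runnable Python counterpart of one C3 iteration is given below, under strong simplifying assumptions: a sphere objective, a random-perturbation hill climber as the embedded optimiser, Phase I starting from a random point rather than reusing solutions from earlier iterations, and a collaboration step that merely splices and compares candidate subcomponents. It is a sketch of the control flow, not the authors' implementation.

import random

DIM, N_SUB = 8, 4       # 8-dimensional problem split into 4 subproblems
SUB = DIM // N_SUB      # each subproblem optimises 2 variables

def objective(x):
    """Stand-in objective; any black-box function could be used."""
    return sum(v * v for v in x)

def sub_opt(base, i, steps=100):
    """SubOpt: optimise subproblem i while the rest of `base` stays fixed.
    A random-perturbation hill climber stands in for the embedded optimiser."""
    best = list(base)
    for _ in range(steps):
        cand = list(best)
        for j in range(i * SUB, (i + 1) * SUB):
            cand[j] = best[j] + random.gauss(0, 0.2)
        if objective(cand) < objective(best):
            best = cand
    return best

def c3_iteration():
    # Phase I: construct a full solution one subproblem at a time
    sol = [random.uniform(-5, 5) for _ in range(DIM)]
    for i in range(N_SUB):
        sol = sub_opt(sol, i)
    # Phase II: cooperative-coevolution improvement around a blackboard solution
    blackboard, stagnant = sol, 0
    while stagnant < 3:  # crude stagnation criterion
        candidates = [sub_opt(blackboard, i) for i in range(N_SUB)]
        # Collab: splice each subproblem's improved block into one combination
        combined = []
        for i in range(N_SUB):
            combined += candidates[i][i * SUB:(i + 1) * SUB]
        best = min(candidates + [combined], key=objective)
        if objective(best) < objective(blackboard) - 1e-9:
            blackboard, stagnant = best, 0
        else:
            stagnant += 1
    return blackboard

print(objective(c3_iteration()))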

Multi-objective optimisation

The multi-objective version of the C3 algorithm [9] is a Pareto-based algorithm which uses the same divide-and-conquer strategy as the single-objective C3 optimisation algorithm. The algorithm again starts with the advanced constructive initial optimisations of the subpopulations, considering an increasing subset of subproblems. The subset increases until the entire set of subproblems is included. During these initial optimisations, the subpopulation of the most recently included subproblem is evolved by a multi-objective evolutionary algorithm. For the fitness calculation, each member of the subpopulation is combined with a collaborator solution from each of the previously optimised subpopulations. Once all subproblems' subpopulations have been initially optimised, the multi-objective C3 optimisation algorithm continues to optimise each subproblem in a round-robin fashion, but now collaborator solutions from all other subproblems' subpopulations are combined with the member of the subpopulation that is being evaluated. The collaborator solution is selected randomly from the solutions that make up the Pareto-optimal front of the subpopulation. The fitness assignment to the collaborator solutions is done in an optimistic fashion (i.e. an "old" fitness value is replaced when the new one is better).
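
The collaborator selection and the optimistic fitness replacement can be sketched as follows. In this minimal Python fragment, "better" is interpreted as Pareto dominance, which is an assumption on our part; members are plain dictionaries and all names are illustrative rather than from the cited paper.

import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(subpop):
    """Members whose objective vectors are not dominated by any other member."""
    return [p for p in subpop
            if not any(dominates(q["f"], p["f"]) for q in subpop if q is not p)]

def pick_collaborator(subpop):
    """Collaborators are drawn at random from the subpopulation's Pareto front."""
    return random.choice(pareto_front(subpop))

def optimistic_update(member, new_f):
    """Replace the stored objective vector only when the new one is better
    (here: dominates it)."""
    if dominates(new_f, member["f"]):
        member["f"] = list(new_f)

# Toy subpopulation: each member holds decision variables x and objectives f.
subpop = [{"x": [random.random()], "f": [random.random(), random.random()]}
          for _ in range(6)]
collab = pick_collaborator(subpop)
optimistic_update(collab, [0.0, 0.0])  # dominates everything, so it replaces
print(collab["f"])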

Applications

The constructive cooperative coevolution algorithm has been applied to different types of problems, e.g. a set of standard benchmark functions, [4] [6] optimisation of sheet metal press lines [4] [5] and interacting production stations. [5] The C3 algorithm has been embedded with, amongst others, the differential evolution algorithm [10] and the particle swarm optimiser [11] for the subproblem optimisations.

Related Research Articles

Genetic algorithm: competitive algorithm for searching a problem space

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. John Holland introduced genetic algorithms in 1960 based on the concept of Darwin’s theory of evolution, and his student David E. Goldberg further extended GA in 1989.

Mathematical optimization: study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element from some set of available alternatives. Optimization problems of various kinds arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

Greedy algorithm: algorithm that makes locally optimal choices in a sequence of steps with the goal of reaching a global optimum

A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. For many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.

Evolutionary computation: trial-and-error problem solvers with a metaheuristic or stochastic optimization character

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

Particle swarm optimization: iterative simulation method

In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
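
As an illustration of these update rules, here is a minimal, self-contained Python sketch of a canonical global-best PSO; the parameter values (inertia w, cognitive weight c1, social weight c2) are common textbook defaults, not prescribed by the C3 literature.

import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO: each particle is pulled toward its personal best
    (weight c1) and the swarm's global best (weight c2)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest, key=objective)
    return gbest

print(pso(lambda x: sum(v * v for v in x), dim=3))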

The greedy randomized adaptive search procedure is a metaheuristic algorithm commonly applied to combinatorial optimization problems. GRASP typically consists of iterations made up from successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search. The greedy randomized solutions are generated by adding elements to the problem's solution set from a list of elements ranked by a greedy function according to the quality of the solution they will achieve. To obtain variability in the candidate set of greedy solutions, well-ranked candidate elements are often placed in a restricted candidate list (RCL), and chosen at random when building up the solution. This kind of greedy randomized construction method is also known as a semi-greedy heuristic, first described in Hart and Shogan (1987).
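
The semi-greedy construction with a restricted candidate list can be illustrated on a toy 0/1 knapsack instance; the problem choice is ours for illustration only (the original GRASP papers address set covering). Candidates are ranked by value density, and the RCL keeps those within a fraction alpha of the best-ranked candidate.

import random

def semi_greedy_knapsack(items, capacity, alpha=0.3):
    """Build one greedy randomized solution: rank feasible items by value
    density, keep the well-ranked ones in the RCL, and pick from it at random."""
    solution, weight = [], 0
    candidates = list(items)  # (value, weight) pairs
    while True:
        feasible = [c for c in candidates if weight + c[1] <= capacity]
        if not feasible:
            break
        dens = [value / wt for value, wt in feasible]
        best, worst = max(dens), min(dens)
        threshold = best - alpha * (best - worst)
        rcl = [c for c, d in zip(feasible, dens) if d >= threshold]
        pick = random.choice(rcl)  # randomised greedy choice
        solution.append(pick)
        weight += pick[1]
        candidates.remove(pick)
    return solution

items = [(10, 4), (7, 3), (5, 2), (3, 2), (1, 1)]
print(semi_greedy_knapsack(items, capacity=7))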

Differential evolution

In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.
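
A minimal Python sketch of the classic DE/rand/1/bin scheme follows; the population size and the F and CR values are common defaults, not specific to any cited work.

import random

def differential_evolution(objective, dim, pop_size=20, f=0.8, cr=0.9, gens=100):
    """DE/rand/1/bin: mutate with a scaled difference of two random vectors,
    then apply binomial crossover against the target vector."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees one mutated coordinate
            trial = [a[d] + f * (b[d] - c[d])
                     if random.random() < cr or d == j_rand else pop[i][d]
                     for d in range(dim)]
            if objective(trial) <= objective(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=objective)

print(differential_evolution(lambda x: sum(v * v for v in x), dim=3))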

Extremal optimization (EO) is an optimization heuristic inspired by the Bak–Sneppen model of self-organized criticality from the field of statistical physics. This heuristic was designed initially to address combinatorial optimization problems such as the travelling salesman problem and spin glasses, although the technique has been demonstrated to function in optimization domains.

Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.

Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

ECJ is a freeware evolutionary computation research system written in Java. It is a framework that supports a variety of evolutionary computation techniques, such as genetic algorithms, genetic programming, evolution strategies, coevolution, particle swarm optimization, and differential evolution. The framework models iterative evolutionary processes using a series of pipelines arranged to connect one or more subpopulations of individuals with selection and breeding operators (such as crossover and mutation) that produce new individuals. The framework is open source and is distributed under the Academic Free License. ECJ was created by Sean Luke, a computer science professor at George Mason University, and is maintained by Sean Luke and a variety of contributors.

Parallel metaheuristic is a class of techniques that are capable of reducing both the numerical effort and the run time of a metaheuristic. To this end, concepts and technologies from the field of parallelism in computer science are used to enhance and even completely modify the behavior of existing metaheuristics. Just as there exists a long list of metaheuristics, such as evolutionary algorithms, particle swarm optimization, ant colony optimization and simulated annealing, there also exists a large set of techniques strongly or loosely based on them, whose behavior encompasses the parallel execution of multiple algorithm components that cooperate in some way to solve a problem on a given parallel hardware platform.

A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.

Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was originally developed by George Dantzig and Philip Wolfe and initially published in 1960. Many texts on linear programming have sections dedicated to discussing this decomposition algorithm.

Cooperative Coevolution (CC) is an evolutionary computation method that divides a large problem into subcomponents and solves these independently in order to solve the original problem.

In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions of a problem, as opposed to a single best solution. Evolutionary multimodal optimization is a branch of evolutionary computation, which is closely related to machine learning. Wong provides a short survey, while the chapter of Shir and the book of Preuss cover the topic in more detail.

MINOS is a Fortran software package for solving linear and nonlinear mathematical optimization problems. MINOS may be used for linear programming, quadratic programming, and more general objective functions and constraints, and for finding a feasible point for a set of linear or nonlinear equalities and inequalities.

In mathematics and computer science, an algorithmic technique is a general approach for implementing a process or computation.

References

  1. T.A. Feo and M.G.C. Resende (1989). "A probabilistic heuristic for a computationally difficult set covering problem". Operations Research Letters, 8:67–71.
  2. T.A. Feo and M.G.C. Resende (1995). "Greedy randomized adaptive search procedures". Journal of Global Optimization, 6:109–133.
  3. M. A. Potter and K. A. D. Jong, "A cooperative coevolutionary approach to function optimization", in PPSN III: Proceedings of the International Conference on Evolutionary Computation. The Third Conference on Parallel Problem Solving from Nature London, UK:Springer-Verlag, 1994, pp. 249–257.
  4. Glorieux E., Danielsson F., Svensson B., Lennartson B., "Optimisation of interacting production stations using a Constructive Cooperative Coevolutionary approach", 2014 IEEE International Conference on Automation Science and Engineering (CASE), pp. 322–327, August 2014, Taipei, Taiwan
  5. Glorieux E., Svensson B., Danielsson F., Lennartson B., "A Constructive Cooperative Coevolutionary Algorithm Applied to Press Line Optimisation", Proceedings of the 24th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM), pp. 909–917, May 2014, San Antonio, Texas, USA
  6. Glorieux E., Svensson B., Danielsson F., Lennartson B., "Constructive cooperative coevolution for large-scale global optimisation", Journal of Heuristics, 2017.
  7. Glorieux E., Danielsson F., Svensson B., Lennartson B.: "Constructive cooperative coevolutionary optimisation for interacting production stations", International Journal of Advanced Manufacturing Technology, 2015.
  8. Glorieux E., Svensson B., Danielsson F., Lennartson B., "Improved Constructive Cooperative Coevolutionary Differential Evolution for Large-Scale Optimisation", 2015 IEEE Symposium Series on Computational Intelligence, December 2015
  9. Glorieux E., Svensson B., Danielsson F., Lennartson B., "Multi-objective constructive cooperative coevolutionary optimization of robotic press-line tending", Engineering Optimization, Vol. 49, Iss. 10, 2017, pp. 1685–1703
  10. Storn, Rainer, and Kenneth Price. "Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces", Journal of global optimization 11.4 (1997): 341-359.
  11. Eberhart, Russ C., and James Kennedy. "A new optimizer using particle swarm theory", Proceedings of the sixth international symposium on micro machine and human science. Vol. 1. 1995.