Iterated local search

[Figure: Iterated local search kicks a solution out from a local optimum.]

Iterated Local Search [1] [2] (ILS) is a term in applied mathematics and computer science defining a modification of local search or hill climbing methods for solving discrete optimization problems.


Local search methods can get stuck in a local minimum, where no improving neighbors are available.

A simple modification consists of iterating calls to the local search routine, each time starting from a different initial configuration. This is called repeated local search, and implies that the knowledge obtained during the previous local search phases is not used. Learning, by contrast, implies that the previous history, for example the memory of previously found local minima, is mined to produce better and better starting points for local search.
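
As an illustration (not taken from the cited sources), the following minimal Python sketch shows repeated local search; cost, random_solution and local_search are hypothetical callables standing for the problem-specific pieces.

```python
def repeated_local_search(cost, random_solution, local_search, n_restarts=20):
    """Repeated (random-restart) local search: every run starts from scratch,
    so nothing learned in earlier runs is reused."""
    best = local_search(random_solution(), cost)
    for _ in range(n_restarts - 1):
        candidate = local_search(random_solution(), cost)  # independent restart
        if cost(candidate) < cost(best):
            best = candidate
    return best
```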

The implicit assumption is that of a clustered distribution of local minima: when minimizing a function, good local minima are easier to reach when starting from a local minimum with a low value than when starting from a random point. The only caveat is to avoid confinement in a single attraction basin: the kick that transforms a local minimizer into the starting point for the next run has to be appropriately strong, but not so strong that the method degenerates into memory-less random restarts.

Iterated Local Search is based on building a sequence of locally optimal solutions by:

  1. perturbing the current local minimum;
  2. applying local search starting from the modified solution.

The perturbation strength has to be sufficient to lead the trajectory into a different attraction basin, and hence to a different local optimum.
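
As a sketch of this scheme (an illustrative Python outline rather than the pseudocode of the cited references; cost, local_search and perturb are assumed problem-specific callables):

```python
def iterated_local_search(cost, initial, local_search, perturb, n_iterations=100):
    """Iterated local search: kick the current local optimum with perturb()
    and restart local search from the perturbed solution."""
    current = local_search(initial, cost)       # first local optimum
    best = current
    for _ in range(n_iterations):
        kicked = perturb(current)               # escape the current attraction basin
        candidate = local_search(kicked, cost)  # new local optimum
        if cost(candidate) < cost(best):
            best = candidate                    # remember the best local optimum seen
        if cost(candidate) < cost(current):     # simple "accept if better" rule
            current = candidate
    return best
```

Here the acceptance rule keeps only improving local optima; other acceptance criteria are possible within the same framework.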

Perturbation Algorithm

Finding a good perturbation algorithm for ILS is not an easy task. The main aim is not to get stuck at the same local minimum, and to ensure this property the undo operation is forbidden. Even so, the perturbation strength has to be chosen carefully, since there are two kinds of bad perturbations (a sketch follows the list below):

  1. too weak: the local search falls back to the same local minimum
  2. too strong: the kick behaves like a memory-less random restart
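
For example, for a permutation-encoded solution, a kick of tunable strength could be written as the following hypothetical sketch, where strength controls how many random swaps are applied:

```python
import random

def perturb(solution, strength=3):
    """Kick a permutation-encoded solution by `strength` random swaps.
    Too small a strength: local search falls back to the same local minimum.
    Too large a strength: the kick behaves like a memory-less random restart."""
    kicked = list(solution)
    for _ in range(strength):
        i, j = random.sample(range(len(kicked)), 2)
        kicked[i], kicked[j] = kicked[j], kicked[i]
    return kicked
```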

Benchmark Perturbation

The procedure consists of fixing a set of candidate values for the perturbation strength that are significant for the instance: typical values rather than rare ones. At run time, the benchmark plot for these candidates can then be checked to get an idea of the average behaviour over the instances processed.
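
A hypothetical sketch of such a benchmarking step, assuming a helper run_ils(instance, strength=...) that returns the best cost found on one instance:

```python
def benchmark_strengths(instances, run_ils, strengths=(2, 4, 8)):
    """Run ILS with each candidate perturbation strength on a set of benchmark
    instances and report the average final cost obtained for each strength."""
    averages = {}
    for s in strengths:
        costs = [run_ils(instance, strength=s) for instance in instances]
        averages[s] = sum(costs) / len(costs)  # average behaviour over the instances
    return averages
```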

Adaptive Perturbation

Since no a priori rule indicates the most suitable strength for a given perturbation, a good strategy is to make it adaptive. For instance, Battiti and Protasi proposed [3] a reactive search algorithm for MAX-SAT that fits perfectly into the ILS framework: they perform a "directed" perturbation scheme implemented by a tabu search algorithm, and after each perturbation they apply a standard local descent algorithm. Another way of adapting the perturbation is to change its strength deterministically during the search.
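
As a simplified illustration of this idea (not the actual Battiti-Protasi scheme), the kick strength can be increased whenever the search falls back into the same local optimum and decreased otherwise:

```python
def adapt_strength(previous_optimum, new_optimum, strength, s_min=1, s_max=50):
    """Reactive adjustment of the perturbation strength: strengthen the kick
    when the search keeps returning to the same local optimum, relax it when
    the kick already reaches a new attraction basin."""
    if new_optimum == previous_optimum:
        return min(s_max, strength + 1)  # too weak: the escape failed
    return max(s_min, strength - 1)      # strong enough: prefer gentler kicks
```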

Optimizing Perturbation

Another procedure is to perturb the solution by optimizing a sub-part of the problem, while keeping the not-undo property active. If such a procedure is available, the solutions generated after the perturbation tend to be very good, since the newly generated parts are themselves optimized.
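
A hypothetical sketch of such an optimizing perturbation for a sequence-encoded solution, assuming a sub-solver reoptimize_segment that re-optimizes a small slice in the context of the rest of the solution:

```python
import random

def optimizing_perturbation(solution, reoptimize_segment, segment_len=5):
    """Perturb by re-optimizing a randomly chosen sub-part of the solution.
    The untouched part fixes the context, so simply undoing the kick is
    prevented, and the newly generated part is itself optimized."""
    start = random.randrange(max(1, len(solution) - segment_len))
    segment = solution[start:start + segment_len]
    new_segment = reoptimize_segment(segment, solution)  # exact or heuristic sub-solver
    return solution[:start] + new_segment + solution[start + segment_len:]
```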

Applications

The method has been applied to several combinatorial optimization problems, including job-shop scheduling, [4] [5] flow-shop, [6] and vehicle routing problems, [7] as well as many others.

Related Research Articles

Mathematical optimization: Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

Simulated annealing: Probabilistic optimization technique and metaheuristic

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. For problems with large numbers of local optima, SA can find the global optimum. It is often used when the search space is discrete. For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent or branch and bound.

In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions by applying local changes, until a solution deemed optimal is found or a time bound has elapsed.

Hill climbing: Optimization algorithm

In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.
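
As a minimal illustration (a generic Python sketch, assuming a problem-specific neighbors generator and a cost function to be minimized):

```python
def hill_climbing(cost, initial, neighbors):
    """First-improvement hill climbing: keep applying an improving local change
    until no neighbor is better, i.e. a local optimum has been reached."""
    current = initial
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            if cost(candidate) < cost(current):
                current = candidate
                improved = True
                break  # rescan the neighborhood of the improved solution
    return current
```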

Tabu search (TS) is a metaheuristic search method employing local search methods used for mathematical optimization. It was created by Fred W. Glover in 1986 and formalized in 1989.

Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem, because the maximization of a real-valued function g(x) is equivalent to the minimization of the function f(x) := -g(x).

Ant colony optimization algorithms: Optimization algorithm

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants stand for multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.

The greedy randomized adaptive search procedure is a metaheuristic algorithm commonly applied to combinatorial optimization problems. GRASP typically consists of iterations made up from successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search. The greedy randomized solutions are generated by adding elements to the problem's solution set from a list of elements ranked by a greedy function according to the quality of the solution they will achieve. To obtain variability in the candidate set of greedy solutions, well-ranked candidate elements are often placed in a restricted candidate list (RCL), and chosen at random when building up the solution. This kind of greedy randomized construction method is also known as a semi-greedy heuristic, first described in Hart and Shogan (1987).

In computer science, the min-conflicts algorithm is a search algorithm or heuristic method to solve constraint satisfaction problems.

Boender-Rinnooy-Stougie-Timmer algorithm (BRST) is an optimization algorithm suitable for finding the global optimum of black box functions. In their paper Boender et al. describe their method as a stochastic method involving a combination of sampling, clustering and local search, terminating with a range of confidence intervals on the value of the global minimum.

Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.

Guided local search is a metaheuristic search method. A meta-heuristic method is a method that sits on top of a local search algorithm to change its behavior.

Roberto Battiti

Roberto Battiti is an Italian computer scientist, Professor of computer science at the University of Trento, director of the LIONlab, and deputy director of the DISI Department and delegate for technology transfer.

In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. In engineering use, LJ is not an algorithm that terminates with an optimal solution; nor is it an iterative method that generates a sequence of points that converges to an optimal solution. However, when applied to a twice continuously differentiable function, the LJ heuristic is a proper iterative method that generates a sequence with a convergent subsequence; for this class of problems, Newton's method is recommended and enjoys a quadratic rate of convergence, while no convergence rate analysis has been given for the LJ heuristic. In practice, the LJ heuristic has been recommended for functions that need be neither convex nor differentiable nor locally Lipschitz: the LJ heuristic does not use a gradient or subgradient even when one is available, which allows its application to non-differentiable and non-convex problems.

Variable neighborhood search (VNS), proposed by Mladenović & Hansen in 1997, is a metaheuristic method for solving a set of combinatorial optimization and global optimization problems. It explores distant neighborhoods of the current incumbent solution, and moves from there to a new one if and only if an improvement was made. The local search method is applied repeatedly to get from solutions in the neighborhood to local optima. VNS was designed for approximating solutions of discrete and continuous optimization problems and, accordingly, is aimed at solving linear programming, integer programming, mixed-integer programming, and nonlinear programming problems, among others.

Simulation-based optimization

Simulation-based optimization integrates optimization techniques into simulation modeling and analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques.

Basin-hopping

In applied mathematics, basin-hopping is a global optimization technique that iterates by performing a random perturbation of the coordinates, performing local optimization, and accepting or rejecting the new coordinates based on the minimized function value. The algorithm was described in 1997 by David J. Wales and Jonathan Doye. It is a particularly useful algorithm for global optimization in very high-dimensional landscapes, such as finding the minimum-energy structure for molecules. The method is inspired by the Monte-Carlo minimization approach first suggested by Li and Scheraga.
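
For example, SciPy provides an implementation as scipy.optimize.basinhopping; a minimal usage sketch on a one-dimensional multimodal function (the test function and parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import basinhopping

def f(x):
    # A simple multimodal test function with many local minima.
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

result = basinhopping(f, x0=[1.0], niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print(result.x, result.fun)  # best coordinates and function value found
```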

References

  1. Lourenço, H.R.; Martin, O.; Stützle, T. (2010). Iterated Local Search: Framework and Applications. pp. 363–397. CiteSeerX 10.1.1.187.2089. doi:10.1007/978-1-4419-1665-5_12. ISBN 978-1-4419-1663-1.
  2. Lourenço, H.R.; Martin, O.; Stützle, T. (2003). "Iterated Local Search". Handbook of Metaheuristics. International Series in Operations Research & Management Science. Vol. 57. Kluwer Academic Publishers. pp. 321–353.
  3. Battiti, Roberto; Protasi, Marco (1997). "Reactive search, a history-sensitive heuristic for MAX-SAT". ACM Journal of Experimental Algorithmics. 2: 2–es. doi:10.1145/264216.264220. ISSN 1084-6654.
  4. Lourenço, H.R.; Zwijnenburg, M. (1996). Combining the large-step optimization with tabu-search: application to the job-shop scheduling problem. pp. 219–236. doi:10.1007/978-1-4613-1361-8_14. ISBN 9780792397007.
  5. Lourenço, H.R. (1995). "Job-Shop Scheduling: computational study of local search and large-step optimization methods". European Journal of Operational Research. 83 (2): 347–364. doi:10.1016/0377-2217(95)00012-F.
  6. Juan, A.A.; Lourenço, H.; Mateo, M.; Luo, R.; Castella, Q. (2013). "Using Iterated Local Search for solving the Flow-Shop Problem: parametrization, randomization and parallelization issues". International Transactions in Operational Research.
  7. Penna, Puca H.V.; Satori Ochi, L.; Subramanian, A. (2013). "An Iterated Local Search heuristic for the Heterogeneous Fleet Vehicle Routing Problem". Journal of Heuristics. 19 (2): 201–232. doi:10.1007/s10732-011-9186-y.
