3-opt

In optimization, 3-opt is a simple local search algorithm for solving the travelling salesperson problem and related network optimization problems. Compared to the simpler 2-opt algorithm, it is slower but can generate higher-quality solutions.

3-opt analysis involves deleting 3 connections (or edges) in a network (or tour), to create 3 sub-tours. The 7 different ways of reconnecting the network are then analysed to find the optimal one. This process is repeated for a different set of 3 connections, until all possible combinations in the network have been tried. Because there are Θ(n³) ways to choose 3 edges from a tour of n cities, a single execution of 3-opt has a time complexity of O(n³). [1] Iterated 3-opt has a higher time complexity.

This is the mechanism by which the 3-opt swap manipulates a given route:

def reverse_segment_if_better(tour, i, j, k):
    """If reversing tour[i:j] would make the tour shorter, then do it."""
    # Given tour [...A-B...C-D...E-F...]
    A, B, C, D, E, F = tour[i-1], tour[i], tour[j-1], tour[j], tour[k-1], tour[k % len(tour)]
    d0 = distance(A, B) + distance(C, D) + distance(E, F)  # current length of the 3 edges
    d1 = distance(A, C) + distance(B, D) + distance(E, F)  # reverse tour[i:j]
    d2 = distance(A, B) + distance(C, E) + distance(D, F)  # reverse tour[j:k]
    d3 = distance(A, D) + distance(E, B) + distance(C, F)  # swap segments tour[i:j] and tour[j:k]
    d4 = distance(F, B) + distance(C, D) + distance(E, A)  # reverse tour[i:k]

    if d0 > d1:
        tour[i:j] = reversed(tour[i:j])
        return -d0 + d1
    elif d0 > d2:
        tour[j:k] = reversed(tour[j:k])
        return -d0 + d2
    elif d0 > d4:
        tour[i:k] = reversed(tour[i:k])
        return -d0 + d4
    elif d0 > d3:
        tmp = tour[j:k] + tour[i:j]  # pure 3-opt move: reconnect the segments without reversal
        tour[i:k] = tmp
        return -d0 + d3
    return 0
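
The code above relies on a distance function that is not defined here. A minimal sketch, assuming each city is an (x, y) coordinate pair and plain Euclidean distance is wanted:

import math

def distance(a, b):
    """Euclidean distance between two cities given as (x, y) pairs."""
    return math.hypot(a[0] - b[0], a[1] - b[1])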

The principle is simple: compute the original distance of the 3 deleted edges, then compute the cost of each possible reconnection. If a cheaper reconnection is found, apply it and return the (negative) change in tour length. This is the complete 3-opt swap making use of the above mechanism:

def three_opt(tour):
    """Iterative improvement based on the 3-opt exchange."""
    while True:
        delta = 0
        for (a, b, c) in all_segments(len(tour)):
            delta += reverse_segment_if_better(tour, a, b, c)
        if delta >= 0:  # no improving move was found in a full pass
            break
    return tour

def all_segments(n: int):
    """Generate all combinations of segment endpoints (i, j, k)."""
    return ((i, j, k)
            for i in range(n)
            for j in range(i + 2, n)
            for k in range(j + 2, n + (i > 0)))

For a given tour, you generate all segment combinations and, for each combination, try to improve the tour by reversing or reordering segments. As long as an improvement is found, the process restarts; otherwise it finishes.
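
A minimal usage sketch, assuming the Euclidean distance helper above and a small list of hypothetical random cities:

import random

random.seed(1)
# Hypothetical input: 8 random cities in the unit square, visited in list order.
tour = [(random.random(), random.random()) for _ in range(8)]

def tour_length(t):
    """Total length of the closed tour, including the edge back to the start."""
    return sum(distance(t[i - 1], t[i]) for i in range(len(t)))

print("length before:", tour_length(tour))
tour = three_opt(tour)
print("length after: ", tour_length(tour))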

See also

2-opt
Lin–Kernighan heuristic
Local search (optimization)

Related Research Articles

Brian Kernighan (Canadian computer scientist, co-creator of the Unix operating system)

Brian Wilson Kernighan is a Canadian computer scientist.

Travelling salesman problem (NP-hard problem in combinatorial optimization)

The travelling salesman problem asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

Dynamic programming (problem optimization method)

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.

Simulated annealing (probabilistic optimization technique and metaheuristic)

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. It is often used when the search space is discrete. For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent or branch and bound.

The bin packing problem is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media and technology mapping in FPGA semiconductor chip design.

In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions by applying local changes, until a solution deemed optimal is found or a time bound is elapsed.

Branch and bound is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.

Inverse kinematics (computing joint values of a kinematic chain from a known end position)

In computer animation and robotics, inverse kinematics is the mathematical process of calculating the variable joint parameters needed to place the end of a kinematic chain, such as a robot manipulator or animation character's skeleton, in a given position and orientation relative to the start of the chain. Given joint parameters, the position and orientation of the chain's end, e.g. the hand of the character or robot, can typically be calculated directly using multiple applications of trigonometric formulas, a process known as forward kinematics. However, the reverse operation is, in general, much more challenging.

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.

Ant colony optimization algorithms (optimization algorithm)

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants stand for multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.

In computer science, the Boyer–Moore string-search algorithm is an efficient string-searching algorithm that is the standard benchmark for practical string-search literature. It was developed by Robert S. Boyer and J Strother Moore in 1977. The original paper contained static tables for computing the pattern shifts without an explanation of how to produce them. The algorithm for producing the tables was published in a follow-on paper; this paper contained errors which were later corrected by Wojciech Rytter in 1980.

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.

In mathematics and computing, the Levenberg–Marquardt algorithm, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach.

In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time. Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time. Furthermore, depending on the problem and the algorithm that is used for solving the problem, it might be faster to find a local optimum instead of a global optimum.

2-opt

In optimization, 2-opt is a simple local search algorithm for solving the traveling salesman problem. The 2-opt algorithm was first proposed by Croes in 1958, although the basic move had already been suggested by Flood. The main idea behind it is to take a route that crosses over itself and reorder it so that it does not. A complete 2-opt local search will compare every possible valid combination of the swapping mechanism. This technique can be applied to the traveling salesman problem as well as many related problems. These include the vehicle routing problem (VRP) as well as the capacitated VRP, which require minor modification of the algorithm.
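
For comparison with the 3-opt code above, here is a minimal sketch of the basic 2-opt move, reusing the same distance function; this helper and its name are illustrative, not taken from the 2-opt article:

def reverse_segment_if_better_2opt(tour, i, j):
    """2-opt move: if reversing tour[i:j] shortens the tour, do it."""
    # Given tour [...A-B...C-D...], replacing edges A-B and C-D with
    # A-C and B-D is the same as reversing the segment B..C.
    A, B, C, D = tour[i - 1], tour[i], tour[j - 1], tour[j % len(tour)]
    d0 = distance(A, B) + distance(C, D)
    d1 = distance(A, C) + distance(B, D)
    if d0 > d1:
        tour[i:j] = reversed(tour[i:j])
        return -d0 + d1
    return 0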

In combinatorial optimization, Lin–Kernighan is one of the best heuristics for solving the symmetric travelling salesman problem. It belongs to the class of local search algorithms, which take a tour as part of the input and attempt to improve it by searching in the neighbourhood of the given tour for one that is shorter, and upon finding one repeats the process from that new one, until encountering a local minimum. As in the case of the related 2-opt and 3-opt algorithms, the relevant measure of "distance" between two tours is the number of edges which are in one but not the other; new tours are built by reassembling pieces of the old tour in a different order, sometimes changing the direction in which a sub-tour is traversed. Lin–Kernighan is adaptive and has no fixed number of edges to replace at a step, but favours small numbers such as 2 or 3.

In graph theory, the metric k-center problem is a combinatorial optimization problem studied in theoretical computer science. Given n cities with specified distances, one wants to build k warehouses in different cities and minimize the maximum distance of a city to a warehouse. In graph theory, this means finding a set of k vertices for which the largest distance of any point to its closest vertex in the k-set is minimum. The vertices must be in a metric space, providing a complete graph that satisfies the triangle inequality.

In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems. In short, a gap problem refers to one wherein the objective is to distinguish between cases where the best solution is above one threshold from cases where the best solution is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results: if a problem can be approximated to a factor better than the size of the gap, then the approximation algorithm can be used to solve the corresponding gap problem.

Time Warp Edit Distance (TWED) is a measure of similarity for discrete time series matching with time 'elasticity'. In comparison to other distance measures, TWED is a metric. Its computational time complexity is O(n²), but it can be drastically reduced in some specific situations by using a corridor to reduce the search space. Its memory space complexity can be reduced to O(n). It was first proposed in 2009 by P.-F. Marteau.

Spiral optimization algorithm (optimization algorithm)

In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature.

References

  1. Blazinskas, Andrius; Misevicius, Alfonsas (2011). Combining 2-OPT, 3-OPT and 4-OPT with K-SWAP-KICK perturbations for the traveling salesman problem (PDF). 17th International Conference on Information and Software Technologies. Kaunas, Lithuania. S2CID 15324387.