Bitonic tour

A bitonic tour

In computational geometry, a bitonic tour of a set of point sites in the Euclidean plane is a closed polygonal chain that has each site as one of its vertices, such that any vertical line crosses the chain at most twice.
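
Equivalently, for sites with distinct x-coordinates, a closed chain is bitonic exactly when its x-coordinates, read around the tour starting from the leftmost site, strictly increase up to the rightmost site and then strictly decrease. A minimal Python sketch of this test (the function name and point representation are our own choices):

    def is_bitonic(cycle):
        # cycle: the tour's points in cyclic visiting order, assumed to
        # have distinct x-coordinates.  Rotate so the leftmost point comes
        # first; the x-coordinates must then strictly rise to the
        # rightmost point and strictly fall afterwards.
        xs = [p[0] for p in cycle]
        if len(xs) < 3:
            return True
        lo = xs.index(min(xs))
        xs = xs[lo:] + xs[:lo]
        hi = xs.index(max(xs))
        rising = all(xs[t] < xs[t + 1] for t in range(hi))
        falling = all(xs[t] > xs[t + 1] for t in range(hi, len(xs) - 1))
        return rising and falling

For example, is_bitonic([(0, 0), (2, 2), (3, 1), (1, -1)]) returns True, while is_bitonic([(0, 0), (2, 2), (1, 1), (3, 0)]) returns False: a vertical line at x = 1.5 crosses the latter chain four times.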

Optimal bitonic tours

The optimal bitonic tour is a bitonic tour of minimum total length. It is a standard exercise in dynamic programming to devise a polynomial-time algorithm that constructs the optimal bitonic tour. [1] [2] Although the usual method for solving it in this way takes time O(n²), a faster algorithm with time O(n log² n) is known. [3]
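
In outline: after sorting the sites by x-coordinate, let b(i, j), for i < j, be the minimum total length of two x-monotone paths that both start at the leftmost site, end at sites i and j, and together visit sites 0 through j; the edge between the two rightmost sites then closes the tour. The following Python sketch of this standard O(n²) recurrence is ours (it assumes distinct x-coordinates and returns only the length; the tour itself can be recovered by recording the argmin choices):

    import math

    def optimal_bitonic_tour_length(points):
        # Sort sites left to right; distinct x-coordinates are assumed.
        pts = sorted(points)
        n = len(pts)
        if n < 2:
            return 0.0

        def d(i, j):
            return math.dist(pts[i], pts[j])

        INF = float("inf")
        # b[i][j], for i < j: minimum total length of two x-monotone paths
        # that start at pts[0], end at pts[i] and pts[j], and together
        # visit every site pts[0], ..., pts[j].
        b = [[INF] * n for _ in range(n)]
        b[0][1] = d(0, 1)
        for j in range(2, n):
            # If i < j - 1, site j - 1 must immediately precede site j,
            # since the path ending at i contains only sites up to i.
            for i in range(j - 1):
                b[i][j] = b[i][j - 1] + d(j - 1, j)
            # Otherwise site j extends the path that ended at some k < j - 1.
            b[j - 1][j] = min(b[k][j - 1] + d(k, j) for k in range(j - 1))

        # Close the tour with the edge between the two rightmost sites.
        return b[n - 2][n - 1] + d(n - 2, n - 1)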

The problem of constructing optimal bitonic tours is often credited to Jon L. Bentley, who published in 1990 an experimental comparison of many heuristics for the traveling salesman problem; [4] however, Bentley's experiments do not include bitonic tours. The first publication that describes the bitonic tour problem appears to be a different 1990 publication, the first edition of the textbook Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest, which lists Bentley as the originator of the problem.

Properties

The optimal bitonic tour has no self-crossings, because any two edges that cross can be replaced by an uncrossed pair of edges with shorter total length due to the triangle inequality. Therefore, it forms a polygonalization of the input.

Among all tours, including those that are not bitonic, the optimal bitonic tour is the one that minimizes the total amount of horizontal motion, with ties broken by total Euclidean length. [5]

For points in the plane with distinct integer x-coordinates and with real-number y-coordinates that lie within an interval of length 2√2 or less, the optimal bitonic tour is an optimal traveling salesperson tour. [6]

Other optimization criteria

The same dynamic programming algorithm that finds the optimal bitonic tour may be used to solve other variants of the traveling salesman problem that minimize lexicographic combinations of motion in a fixed number of coordinate directions. [5]
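
As a concrete illustration (our adaptation, not code from the cited paper), Python's lexicographic tuple comparison makes the substitution almost mechanical: replace the scalar edge length in the sketch above by a weight pair and sum weights componentwise.

    import math

    def lex_weight(p, q):
        # Weight of edge pq: (horizontal motion, Euclidean length).
        # Python compares these pairs lexicographically, which is
        # exactly the order the objective needs.
        return (abs(p[0] - q[0]), math.dist(p, q))

    def lex_add(a, b):
        # Componentwise sum of two weight pairs.
        return (a[0] + b[0], a[1] + b[1])

Substituting lex_weight for the scalar distance and lex_add for + in the earlier recurrence (with (math.inf, math.inf) as the unreachable sentinel) minimizes total horizontal motion first and total Euclidean length second; the recurrence is otherwise unchanged, because lexicographic order respects componentwise addition.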

At the 5th International Olympiad in Informatics, in Mendoza, Argentina in 1993, one of the contest problems involved bitonic tours: the contestants were to devise an algorithm that took as input a set of sites and a collection of allowed edges between sites and constructed a bitonic tour, using only those edges, that included as many sites as possible. As with the optimal bitonic tour, this problem may be solved by dynamic programming. [7] [8]
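
One way to set up that dynamic program (our sketch, not the contest's reference solution) indexes the sites 0 to n−1 from west to east and assumes, as in the airline telling of the problem, that the tour must start at the westernmost site and turn around at the easternmost one; it runs in O(n³) time:

    def max_sites_bitonic_tour(n, allowed):
        # Sites are indexed 0..n-1 from west to east; `allowed` is a set
        # of pairs (i, j) with i < j giving the permitted connections.
        if n == 1:
            return 1
        NEG = float("-inf")
        # f[i][j], for i < j: maximum number of distinct sites on two
        # vertex-disjoint eastbound paths that both start at site 0 and
        # currently end at sites i and j.
        f = [[NEG] * n for _ in range(n)]
        for j in range(1, n):
            if (0, j) in allowed:
                f[0][j] = 2  # the paths {0} and 0 -> j
        for j in range(1, n):
            for i in range(j):
                if f[i][j] == NEG:
                    continue
                for k in range(j + 1, n):
                    if (i, k) in allowed:  # extend the path ending at i
                        f[j][k] = max(f[j][k], f[i][j] + 1)
                    if (j, k) in allowed:  # extend the path ending at j
                        f[i][k] = max(f[i][k], f[i][j] + 1)
        # Close the tour by joining the trailing path to the easternmost
        # site; None means no bitonic tour exists with the given edges.
        best = max((f[i][n - 1] for i in range(n - 1)
                    if (i, n - 1) in allowed), default=NEG)
        return None if best == NEG else best

Here f[i][j] plays the same role as b(i, j) above, but it maximizes the number of sites visited instead of minimizing length; because sites may be skipped, site j − 1 is no longer forced to precede site j, which is why every earlier pair of endpoints must be considered.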

Related Research Articles

<span class="mw-page-title-main">Travelling salesman problem</span> NP-hard problem in combinatorial optimization

The travelling salesman problem, also known as the travelling salesperson problem (TSP), asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

<span class="mw-page-title-main">Greedy algorithm</span> Sequence of locally optimal choices

A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.

The Bottleneck traveling salesman problem is a problem in discrete or combinatorial optimization. The problem is to find the Hamiltonian cycle in a weighted graph which minimizes the weight of the highest-weight edge of the cycle. It was first formulated by Gilmore & Gomory (1964) with some additional constraints, and in its full generality by Garfinkel & Gilbert (1978).

In computer science, a soft heap is a variant on the simple heap data structure that has constant amortized time complexity for 5 types of operations. This is achieved by carefully "corrupting" (increasing) the keys of at most a constant number of values in the heap.

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.

In computer science, a problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems.

In computational complexity theory, the unique games conjecture is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time, but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

<span class="mw-page-title-main">Closest pair of points problem</span>

The closest pair of points problem or closest pair problem is a problem of computational geometry: given points in metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.

<span class="mw-page-title-main">Laman graph</span>

In graph theory, the Laman graphs are a family of sparse graphs describing the minimally rigid systems of rods and joints in the plane. Formally, a Laman graph is a graph on n vertices such that, for all k, every k-vertex subgraph has at most 2k − 3 edges, and such that the whole graph has exactly 2n − 3 edges. Laman graphs are named after Gerard Laman, of the University of Amsterdam, who in 1970 used them to characterize rigid planar structures. However, this characterization, the Geiringer–Laman theorem, had already been discovered in 1927 by Hilda Geiringer.

<span class="mw-page-title-main">Cartesian tree</span> Binary tree derived from a sequence of numbers

In computer science, a Cartesian tree is a binary tree derived from a sequence of distinct numbers. To construct the Cartesian tree, set its root to be the minimum number in the sequence, and recursively construct its left and right subtrees from the subsequences before and after this number. It is uniquely defined as a min-heap whose symmetric (in-order) traversal returns the original sequence.

<span class="mw-page-title-main">Knot tabulation</span> Attempt to classify and tabulate all possible knots

Ever since Sir William Thomson's vortex theory, mathematicians have tried to classify and tabulate all possible knots. As of May 2008, all prime knots up to 16 crossings have been tabulated. The major challenge of the process is that many apparently different knots may actually be different geometrical presentations of the same topological entity, and that proving or disproving knot equivalence is much more difficult than it at first seems.

The European Symposium on Algorithms (ESA) is an international conference covering the field of algorithms. It has been held annually since 1993, typically in early Autumn in a different European location each year. Like most theoretical computer science conferences its contributions are strongly peer-reviewed; the articles appear in proceedings published in Springer Lecture Notes in Computer Science. Acceptance rate of ESA is 24% in 2012 in both Design and Analysis and Engineering and Applications tracks.

In computer science, the method of contraction hierarchies is a speed-up technique for finding the shortest-path in a graph. The most intuitive applications are car-navigation systems: a user wants to drive from to using the quickest possible route. The metric optimized here is the travel time. Intersections are represented by vertices, the road sections connecting them by edges. The edge weights represent the time it takes to drive along this segment of the road. A path from to is a sequence of edges ; the shortest path is the one with the minimal sum of edge weights among all possible paths. The shortest path in a graph can be computed using Dijkstra's algorithm but, given that road networks consist of tens of millions of vertices, this is impractical. Contraction hierarchies is a speed-up method optimized to exploit properties of graphs representing road networks. The speed-up is achieved by creating shortcuts in a preprocessing phase which are then used during a shortest-path query to skip over "unimportant" vertices. This is based on the observation that road networks are highly hierarchical. Some intersections, for example highway junctions, are "more important" and higher up in the hierarchy than for example a junction leading into a dead end. Shortcuts can be used to save the precomputed distance between two important junctions such that the algorithm doesn't have to consider the full path between these junctions at query time. Contraction hierarchies do not know about which roads humans consider "important", but they are provided with the graph as input and are able to assign importance to vertices using heuristics.

<span class="mw-page-title-main">Blaschke–Lebesgue theorem</span> Plane geometry theorem on least area of all curves of given constant width

In plane geometry the Blaschke–Lebesgue theorem states that the Reuleaux triangle has the least area of all curves of given constant width. In the form that every curve of a given width has area at least as large as the Reuleaux triangle, it is also known as the Blaschke–Lebesgue inequality. It is named after Wilhelm Blaschke and Henri Lebesgue, who published it separately in the early 20th century.

In computer science, an optimal binary search tree (Optimal BST), sometimes called a weight-balanced binary tree, is a binary search tree which provides the smallest possible search time (or expected search time) for a given sequence of accesses (or access probabilities). Optimal BSTs are generally divided into two types: static and dynamic.

In automata theory, an unambiguous finite automaton (UFA) is a nondeterministic finite automaton (NFA) such that each word has at most one accepting path. Each deterministic finite automaton (DFA) is an UFA, but not vice versa. DFA, UFA, and NFA recognize exactly the same class of formal languages. On the one hand, an NFA can be exponentially smaller than an equivalent DFA. On the other hand, some problems are easily solved on DFAs and not on UFAs. For example, given an automaton A, an automaton A which accepts the complement of A can be computed in linear time when A is a DFA, whereas it is known that this cannot be done in polynomial time for UFAs. Hence UFAs are a mix of the worlds of DFA and of NFA; in some cases, they lead to smaller automata than DFA and quicker algorithms than NFA.

<span class="mw-page-title-main">Polygonalization</span> Polygon through a set of points

In computational geometry, a polygonalization of a finite set of points in the Euclidean plane is a simple polygon with the given points as its vertices. A polygonalization may also be called a polygonization, simple polygonalization, Hamiltonian polygon, non-crossing Hamiltonian cycle, or crossing-free straight-edge spanning cycle.

The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together twins, vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.

A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability.

Melanie Schmidt is a German computer scientist whose research involves algorithms for cluster analysis, including approximation algorithms, coresets, algorithmic fairness, and inapproximability. She holds the chair for Algorithms and Data Structures in the Computer Science Department at Heinrich Heine University Düsseldorf.

References

  1. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009), Introduction to Algorithms, 3rd ed., MIT Press, Problem 15-3, p. 405.
  2. Bird, Richard S.; De Moor, Oege (1997), The Algebra of Programming, Prentice Hall, p. 213, ISBN 9780135072455.
  3. de Berg, Mark; Buchin, Kevin; Jansen, Bart M. P.; Woeginger, Gerhard (2016), "Fine-Grained Complexity Analysis of Two Classic TSP Variants", in Chatzigiannakis, Ioannis; Mitzenmacher, Michael; Rabani, Yuval; Sangiorgi, Davide (eds.), 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), Leibniz International Proceedings in Informatics (LIPIcs), vol. 55, Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum für Informatik, pp. 5:1–5:14, doi:10.4230/LIPIcs.ICALP.2016.5, ISBN 978-3-95977-013-2.
  4. Bentley, Jon L. (1990), "Experiments on traveling salesman heuristics", Proc. 1st ACM-SIAM Symp. Discrete Algorithms (SODA), pp. 91–99, ISBN 9780898712513.
  5. Sourd, Francis (2010), "Lexicographically minimizing axial motions for the Euclidean TSP", Journal of Combinatorial Optimization, 19 (1): 1–15, doi:10.1007/s10878-008-9154-0, MR 2579501, S2CID 42168298.
  6. Alkema, Henk; de Berg, Mark; Kisfaludi-Bak, Sándor (2020), "Euclidean TSP in Narrow Strips", in Cabello, Sergio; Chen, Danny Z. (eds.), 36th International Symposium on Computational Geometry (SoCG 2020), Leibniz International Proceedings in Informatics (LIPIcs), vol. 164, Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum für Informatik, pp. 4:1–4:16, doi:10.4230/LIPIcs.SoCG.2020.4, ISBN 978-3-95977-143-6, S2CID 219554488.
  7. IOI'93 contest problems and report.
  8. Guerreiro, Pedro (December 2003), The Canadian Airline Problem and the Bitonic Tour: Is This Dynamic Programming?, Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa.