Klee's measure problem

A set of rectangular ranges ('trellis') whose area has to be measured.

In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of R^d.

The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem.

History and algorithms

In 1977, Victor Klee considered the following problem: given a collection of n intervals on the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") O(n log n); see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal.
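
The interval case is simple enough to sketch directly. The following Python fragment is an illustrative sketch, not Klee's original formulation (the name union_length is ours): it sorts the intervals by left endpoint and merges overlapping runs in a single pass, so the sort dominates and the total time is O(n log n).

```python
def union_length(intervals):
    """Length of the union of closed real intervals given as (l, r) pairs."""
    total = 0.0
    cur_l = cur_r = None              # current merged run of intervals
    for l, r in sorted(intervals):    # sort by left endpoint: O(n log n)
        if cur_r is None or l > cur_r:
            if cur_r is not None:     # previous run ended; add its length
                total += cur_r - cur_l
            cur_l, cur_r = l, r       # start a new run
        else:
            cur_r = max(cur_r, r)     # overlapping: extend the current run
    if cur_r is not None:
        total += cur_r - cur_l        # add the final run
    return total

# (0, 2) and (1, 3) merge into (0, 3); (5, 6) is separate: 3 + 1 = 4.
assert union_length([(0, 2), (1, 3), (5, 6)]) == 4.0
```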

Later in 1977, Jon Bentley considered the 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an O(n log n) algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the plane. Using this method, the area of the union can be computed without explicitly constructing the union itself. Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas.
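
A deliberately simplified sweep, reusing union_length from above, illustrates the reduction. This sketch recomputes the covered y-length from scratch inside each vertical slab, so it runs in O(n^2 log n); Bentley's actual algorithm instead maintains that length incrementally in a segment tree to reach O(n log n).

```python
def union_area(rects):
    """Area of the union of axis-parallel rectangles (x1, y1, x2, y2)."""
    # Event points: every left or right rectangle edge.
    xs = sorted({x for x1, _, x2, _ in rects for x in (x1, x2)})
    area = 0.0
    for xl, xr in zip(xs, xs[1:]):    # each vertical slab between events
        # y-intervals of the rectangles spanning this entire slab
        active = [(y1, y2) for x1, y1, x2, y2 in rects if x1 <= xl and xr <= x2]
        area += (xr - xl) * union_length(active)
    return area

# Two 2x2 squares overlapping in a unit square: 4 + 4 - 1 = 7.
assert union_area([(0, 0, 2, 2), (1, 1, 3, 3)]) == 7.0
```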

These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem.

When generalized to the d-dimensional case, Bentley's algorithm has a running time of O(n^(d-1) log n). This turns out not to be optimal, because it only decomposes the d-dimensional problem into n (d-1)-dimensional problems, and does not further decompose those subproblems. In 1981, Jan van Leeuwen and Derek Wood improved the running time of this algorithm to O(n^(d-1)) for d ≥ 3 by using dynamic quadtrees.

In 1988, Mark Overmars and Chee Yap proposed an O(n^(d/2) log n) algorithm for d ≥ 3. Their algorithm uses a particular data structure similar to a k-d tree to decompose the problem into 2-dimensional components and aggregate those components efficiently; the 2-dimensional problems themselves are solved efficiently using a trellis structure. Although asymptotically faster than Bentley's algorithm, its data structures use significantly more space, so it is only used in problems where either n or d is large. In 1998, Bogdan Chlebus proposed a simpler algorithm with the same asymptotic running time for the common special cases where d is 3 or 4.

In 2013, Timothy M. Chan developed a simpler algorithm that avoids the need for dynamic data structures and eliminates the logarithmic factor, lowering the best known running time for d ≥ 3 to O(n^(d/2)).

Known bounds

The only known lower bound for any d is Ω(n log n), and optimal algorithms with this running time are known for d = 1 and d = 2. Chan's algorithm provides an upper bound of O(n^(d/2)) for d ≥ 3, so for d ≥ 3 it remains an open question whether faster algorithms are possible, or alternatively whether tighter lower bounds can be proven. In particular, it remains open whether the algorithm's running time must depend on d. In addition, the question of whether there are faster algorithms that can deal with special cases (for example, when the input coordinates are integers within a bounded range) remains open.

The 1-dimensional Klee's measure problem (union of intervals) can be solved in O(n log p) time, where p denotes the minimum number of piercing points required to stab all intervals [1] (the union of intervals pierced by a common point can be computed in linear time by finding the extrema). The parameter p is an adaptive parameter that depends on the input configuration, so the piercing algorithm [2] yields an adaptive algorithm for Klee's measure problem.
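
The piercing idea can be sketched as follows (an illustrative simplification with names of our choosing: this version sorts all n intervals up front, so it runs in O(n log n), whereas the algorithm of [2] computes the piercing adaptively to reach O(n log p)). Each piercing point stabs a group of intervals whose union is the single interval between the group's extrema; the p resulting intervals may still overlap one another, so they are merged at the end with union_length from above.

```python
def union_length_by_piercing(intervals):
    """Union length via greedy piercing of intervals given as (l, r) pairs."""
    groups = []            # one merged interval per piercing point
    p = gl = gr = None     # piercing point and extrema of its group
    for l, r in sorted(intervals, key=lambda iv: iv[1]):  # by right endpoint
        if p is None or l > p:        # p does not stab (l, r): new point at r
            if p is not None:
                groups.append((gl, gr))
            p, gl, gr = r, l, r
        else:                          # p stabs (l, r): update group extrema
            gl, gr = min(gl, l), max(gr, r)
    if p is not None:
        groups.append((gl, gr))
    # Merge the remaining p intervals with the ordinary 1-D routine.
    return union_length(groups)

assert union_length_by_piercing([(0, 2), (1, 3), (5, 6)]) == 4.0
```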

References

  1. "Adaptive Computational Geometry", F. Nielsen, pdf
  2. "Fast stabbing of boxes in high dimensions", F. Nielsen, Theoretical Computer Science Volume 246, Issues 1–2, 6 September 2000, Pages 53-72 pdf