Bounding sphere

Some instances of the smallest bounding circle, the case of the bounding sphere in 2 dimensions.

In mathematics, given a non-empty set of objects of finite extension in d-dimensional space, for example a set of points, a bounding sphere, enclosing sphere or enclosing ball for that set is a d-dimensional solid sphere containing all of these objects.


Used in computer graphics and computational geometry, a bounding sphere is a special type of bounding volume. There are several fast and simple bounding sphere construction algorithms of high practical value in real-time computer graphics applications. [1]

In statistics and operations research, the objects are typically points, and generally the sphere of interest is the minimal bounding sphere, that is, the sphere with minimal radius among all bounding spheres. It may be proven that such a sphere is unique: if there were two of them, the objects in question would lie within their intersection, but an intersection of two non-coinciding spheres of equal radius is contained in a sphere of smaller radius. Indeed, if the two spheres have common radius r and their centers are a distance d apart, their intersection lies within the sphere of radius sqrt(r^2 - (d/2)^2) centered at the midpoint between the two centers.

The problem of computing the center of a minimal bounding sphere is also known as the "unweighted Euclidean 1-center problem".

Applications

Clustering

Such spheres are useful in clustering, where groups of similar data points are classified together.

In statistical analysis the scattering of data points within a sphere may be attributed to measurement error or natural (usually thermal) processes, in which case the cluster represents a perturbation of an ideal point. In some circumstances this ideal point may be used as a substitute for the points in the cluster, advantageous in reducing calculation time.

In operations research the clustering of values to an ideal point may also be used to reduce the number of inputs in order to obtain approximate values for NP-hard problems in a reasonable time. The point chosen is not usually the center of the sphere, as this can be biased by outliers, but instead some form of average location such as a least squares point is computed to represent the cluster.

Algorithms

There are exact and approximate algorithms for solving the bounding sphere problem.

Linear programming

Nimrod Megiddo studied the 1-center problem extensively and published on it at least five times in the 1980s. [2] In 1983, he proposed a "prune and search" algorithm which finds the optimum bounding sphere and runs in linear time if the dimension is fixed as a constant. When the dimension d is taken into account, the execution time complexity is O(2^(2^d) n), [3] [4] which is impractical for high-dimensional applications.

In 1991, Emo Welzl proposed a much simpler randomized algorithm, generalizing a randomized linear programming algorithm by Raimund Seidel. The expected running time of Welzl's algorithm is O((d+1)(d+1)! n), which again reduces to O(n) for any fixed dimension d. The paper provides experimental results demonstrating its practicality in higher dimensions. [5] A more recent deterministic algorithm of Timothy Chan also runs in O(n) time, with a smaller (but still exponential) dependence on the dimension. [4]
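The core of Welzl's method is a short recursion over a set of points that must lie on the boundary of the ball. The following is a minimal Python sketch of that recursion (assuming NumPy; it omits the move-to-front heuristic and the careful floating-point handling of a production implementation, and the helper names are illustrative):

    import random
    import numpy as np

    def ball_from_boundary(R, d):
        # Smallest ball having every point of R on its boundary; an "empty"
        # ball is encoded with radius -1 so that it contains no point.
        if not R:
            return np.zeros(d), -1.0
        R = np.asarray(R, dtype=float)
        p0, A = R[0], R[1:] - R[0]
        if len(A) == 0:
            return p0, 0.0
        # The center c lies in the affine hull of R and satisfies
        # (r_i - p0) . (c - p0) = |r_i - p0|^2 / 2 for each boundary point r_i.
        G = A @ A.T
        b = 0.5 * np.einsum('ij,ij->i', A, A)
        lam = np.linalg.lstsq(G, b, rcond=None)[0]
        center = p0 + lam @ A
        return center, float(np.linalg.norm(center - p0))

    def welzl(P, R, d):
        # Smallest ball enclosing the points P with the points R on its boundary.
        if not P or len(R) == d + 1:
            return ball_from_boundary(R, d)
        p = P[0]
        center, radius = welzl(P[1:], R, d)
        if np.linalg.norm(p - center) <= radius + 1e-12:
            return center, radius
        return welzl(P[1:], R + [p], d)

    def smallest_enclosing_ball(points):
        pts = [np.asarray(p, dtype=float) for p in points]
        random.shuffle(pts)  # randomization gives the expected linear running time
        return welzl(pts, [], len(pts[0]))

The plain recursion shown here can exhaust the recursion depth on large inputs; practical implementations use an iterative, move-to-front formulation.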

The open-source Computational Geometry Algorithms Library (CGAL) contains an implementation of Welzl's algorithm. [6]

Ritter's bounding sphere

In 1990, Jack Ritter proposed a simple algorithm to find a non-minimal bounding sphere. [7] It is widely used in various applications for its simplicity. The algorithm works in this way:

  1. Pick a point x from P, and search for the point y in P that has the largest distance from x;
  2. Search for the point z in P that has the largest distance from y. Set up an initial ball B, with its centre at the midpoint of y and z and its radius equal to half of the distance between y and z;
  3. If all points in P are within ball B, then we have a bounding sphere. Otherwise, let p be a point outside the ball and construct a new ball covering both the point p and the previous ball. Repeat this step until all points are covered.

Ritter's algorithm runs in time O(nd) on inputs consisting of n points in d-dimensional space, which makes it very efficient. However, it gives only a coarse result, which is usually 5% to 20% larger than the optimum. [7]
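As a concrete illustration, here is a minimal Python sketch of the three steps above (assuming the input is an array of n points in d dimensions; function and variable names are illustrative):

    import numpy as np

    def ritter_bounding_sphere(points):
        P = np.asarray(points, dtype=float)
        # Step 1: pick an arbitrary point x and find the point y farthest from x.
        x = P[0]
        y = P[np.argmax(np.linalg.norm(P - x, axis=1))]
        # Step 2: find the point z farthest from y; the initial ball spans y and z.
        z = P[np.argmax(np.linalg.norm(P - y, axis=1))]
        center = (y + z) / 2.0
        radius = np.linalg.norm(y - z) / 2.0
        # Step 3: enlarge the ball to cover every point that still lies outside it.
        for p in P:
            dist = np.linalg.norm(p - center)
            if dist > radius:
                new_radius = (radius + dist) / 2.0
                # Shift the center toward p by the amount the radius grew.
                center += (p - center) * (new_radius - radius) / dist
                radius = new_radius
        return center, radius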

Core-set based approximation

Bădoiu et al. presented a 1 + ε approximation to the bounding sphere problem, [8] where a 1 + ε approximation means that the constructed sphere has radius at most (1 + ε)r, where r is the smallest possible radius of a bounding sphere.

A coreset is a small subset such that a 1 + ε expansion of the solution computed on the subset is a bounding sphere of the whole set. The coreset is constructed incrementally by adding the farthest point to the set in each iteration.

Kumar et al. improved this approximation algorithm [9] so that it runs in time O(nd/ε + (1/ε^4.5) log(1/ε)).
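For intuition, the core-set flavour of these algorithms can be sketched as a simple farthest-point iteration in Python (a sketch in the spirit of the Bădoiu–Clarkson update rather than the exact procedures of the cited papers; roughly 1/ε^2 steps give a 1 + ε approximation):

    import numpy as np

    def coreset_style_bounding_sphere(points, eps=0.01):
        P = np.asarray(points, dtype=float)
        center = P[0].copy()
        # About 1/eps^2 farthest-point steps suffice for a (1 + eps)-approximation.
        for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
            farthest = P[np.argmax(np.linalg.norm(P - center, axis=1))]
            # Move the center a 1/(i + 1) fraction of the way toward the farthest point.
            center += (farthest - center) / (i + 1)
        radius = float(np.max(np.linalg.norm(P - center, axis=1)))
        return center, radius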

Fischer's exact solver

Fischer et al. (2003) proposed an exact solver, though the algorithm does not have a polynomial running time in the worst case. [10] The algorithm is purely combinatorial and implements a pivoting scheme similar to the simplex method for linear programming, used earlier in some heuristics. It starts with a large sphere that covers all points and gradually shrinks it until it cannot be shrunk further. The algorithm features correct termination rules in cases of degeneracies, which were overlooked by prior authors, and efficient handling of partial solutions, which produces a major speed-up. The authors verified that the algorithm is efficient in practice in low and moderately high dimensions (up to 10,000) and claim it does not exhibit numerical stability problems in its floating-point operations. [10] A C++ implementation of the algorithm is available as an open-source project. [11]

Extremal points optimal sphere

Larsson (2008) proposed the "extremal points optimal sphere" method, with a controllable trade-off between speed and accuracy, for approximately solving the bounding sphere problem. This method works by taking a set of s direction vectors and projecting all points onto each of them; s serves as a speed-accuracy trade-off variable. An exact solver is applied to the extremal points of these projections. The algorithm then iterates over the remaining points, if any, growing the sphere if necessary. For large n this method is orders of magnitude faster than exact methods, while giving comparable results. It has a worst-case time of O(sn). [1]
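A rough Python sketch of the extremal-points idea follows (the direction set is supplied by the caller; for brevity this sketch initializes the sphere from the two extremal points farthest apart and then grows it Ritter-style, whereas Larsson's method applies an exact solver to the extremal points):

    import numpy as np

    def extremal_points_bounding_sphere(points, directions):
        P = np.asarray(points, dtype=float)
        # Keep the extremal points (minimum and maximum projection) per direction.
        idx = set()
        for u in np.asarray(directions, dtype=float):
            proj = P @ u
            idx.update((int(np.argmin(proj)), int(np.argmax(proj))))
        ext = P[sorted(idx)]
        # Initial sphere from the two extremal points that are farthest apart.
        dists = np.linalg.norm(ext[:, None, :] - ext[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(dists), dists.shape)
        center = (ext[i] + ext[j]) / 2.0
        radius = dists[i, j] / 2.0
        # Grow the sphere over all points that still lie outside it.
        for p in P:
            dist = np.linalg.norm(p - center)
            if dist > radius:
                new_radius = (radius + dist) / 2.0
                center += (p - center) * (new_radius - radius) / dist
                radius = new_radius
        return center, radius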


References

  1. Larsson, Thomas (2008), "Fast and tight fitting bounding spheres", SIGRAD 2008: The Annual SIGRAD Conference, Special Theme: Interaction, November 27–28, 2008, Stockholm, Sweden, Linköping Electronic Conference Proceedings, vol. 34, Linköping, Sweden: Linköping University
  2. "Nimrod Megiddo's resume and publications".
  3. Megiddo, Nimrod (1984). "Linear programming in linear time when the dimension is fixed". Journal of the ACM. 31 (1): 114–127. doi:10.1145/2422.322418. S2CID 12686747.
  4. Chan, Timothy (2018). "Improved deterministic algorithms for linear programming in low dimensions". ACM Transactions on Algorithms. 14 (3): Article 30, 10 pages. doi:10.1145/3155312. S2CID 9345339.
  5. Welzl, Emo (1991), "Smallest enclosing disks (balls and ellipsoids)", in Maurer, Hermann (ed.), New Results and New Trends in Computer Science: Graz, Austria, June 20–21, 1991, Proceedings, Lecture Notes in Computer Science, vol. 555, Berlin, Germany: Springer, pp. 359–370, doi:10.1007/BFb0038202, ISBN   3-540-54869-6, MR   1254721
  6. CGAL 4.3 - Bounding Volumes - Min_sphere_of_spheres_d, retrieved 2014-03-30.
  7. Ritter, Jack (1990), "An efficient bounding sphere", in Glassner, Andrew S. (ed.), Graphics Gems, San Diego, CA, US: Academic Press Professional, Inc., pp. 301–303, ISBN 0-12-286166-3
  8. Bādoiu, Mihai; Har-Peled, Sariel; Indyk, Piotr (2002), "Approximate clustering via core-sets" (PDF), Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, New York, NY, US: ACM, pp. 250–257, CiteSeerX   10.1.1.4.9395 , doi:10.1145/509907.509947, MR   2121149, S2CID   5409535
  9. Kumar, Piyush; Mitchell, Joseph S. B.; Yıldırım, E. Alper (2003), "Computing core-sets and approximate smallest enclosing hyperspheres in high dimensions", in Ladner, Richard E. (ed.), Proceedings of the Fifth Workshop on Algorithm Engineering and Experiments, Baltimore, MD, USA, January 11, 2003, Philadelphia, PA, US: SIAM, pp. 45–55
  10. Fischer, Kaspar; Gärtner, Bernd; Kutz, Martin (2003), "Fast smallest-enclosing-ball computation in high dimensions" (PDF), in Di Battista, Giuseppe; Zwick, Uri (eds.), Algorithms: ESA 2003, 11th Annual European Symposium, Budapest, Hungary, September 16–19, 2003, Proceedings, Lecture Notes in Computer Science, vol. 2832, Berlin, Germany: Springer, pp. 630–641, doi:10.1007/978-3-540-39658-1_57, ISBN 978-3-540-20064-2
  11. miniball open-source project