In geometry, space partitioning is the process of dividing an entire space (usually a Euclidean space) into two or more disjoint subsets (see also partition of a set). In other words, space partitioning divides a space into non-overlapping regions. Any point in the space can then be identified to lie in exactly one of the regions.
Space-partitioning systems are often hierarchical, meaning that a space (or a region of space) is divided into several regions, and then the same space-partitioning system is recursively applied to each of the regions thus created. The regions can be organized into a tree, called a space-partitioning tree.
Most space-partitioning systems use planes (or, in higher dimensions, hyperplanes) to divide space: points on one side of the plane form one region, and points on the other side form another. Points exactly on the plane are usually arbitrarily assigned to one or the other side. Recursively partitioning space using planes in this way produces a BSP tree, one of the most common forms of space partitioning.
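As a concrete illustration, here is a minimal Python sketch of plane-based recursive partitioning (all function and parameter names are illustrative, not taken from any particular library). It uses axis-aligned splitting planes chosen at the median coordinate, which is the special case of plane-based partitioning that yields a k-d tree, and it assigns points lying exactly on a plane to the front side.

```python
import numpy as np

def side_of_plane(point, normal, offset):
    """Return +1 if the point lies on the front side of the hyperplane
    {x : normal . x = offset}, else -1; points exactly on the plane are
    arbitrarily assigned to the front side."""
    return 1 if np.dot(normal, point) >= offset else -1

def build_tree(points, depth=0, max_depth=8, leaf_size=4):
    """Recursively partition a point set with axis-aligned splitting planes
    chosen at the median coordinate, cycling through the axes.  Returns a
    nested dict acting as a simple space-partitioning tree."""
    points = np.asarray(points, dtype=float)
    if depth >= max_depth or len(points) <= leaf_size:
        return {"leaf": points}                     # small enough: stop subdividing
    axis = depth % points.shape[1]                  # splitting axis at this level
    offset = float(np.median(points[:, axis]))      # plane position along that axis
    normal = np.zeros(points.shape[1])
    normal[axis] = 1.0                              # axis-aligned plane normal
    front = np.array([side_of_plane(p, normal, offset) > 0 for p in points])
    return {
        "normal": normal,
        "offset": offset,
        "front": build_tree(points[front], depth + 1, max_depth, leaf_size),
        "back": build_tree(points[~front], depth + 1, max_depth, leaf_size),
    }

tree = build_tree(np.random.rand(100, 2))           # partition 100 random 2-D points
```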
Space partitioning is particularly important in computer graphics, especially in ray tracing, where it is frequently used to organize the objects in a virtual scene. A typical scene may contain millions of polygons, and performing a ray/polygon intersection test against each of them would be prohibitively expensive.
Storing objects in a space-partitioning data structure (a k-d tree or BSP tree, for example) makes it easy and fast to perform certain kinds of geometry queries. For example, when determining whether a ray intersects an object, space partitioning can reduce the number of intersection tests to just a few per primary ray, yielding a logarithmic time complexity with respect to the number of polygons. [1] [2] [3]
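For example, a traversal can skip a node's entire subtree whenever the ray misses that node's bounding box. The following is a minimal sketch of the standard "slab" ray/box test that such traversals rely on (illustrative code, not taken from any particular renderer):

```python
import numpy as np

def ray_intersects_aabb(origin, direction, box_min, box_max, eps=1e-12):
    """Slab test: does the ray origin + t * direction (t >= 0) hit the
    axis-aligned box [box_min, box_max]?  Spatial structures use this to
    skip whole subtrees whose bounding box the ray cannot reach."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    # Substitute a tiny value for zero components to avoid division by zero
    # (an approximation for rays exactly parallel to a box face).
    inv = 1.0 / np.where(np.abs(direction) < eps, eps, direction)
    t1 = (np.asarray(box_min, dtype=float) - origin) * inv
    t2 = (np.asarray(box_max, dtype=float) - origin) * inv
    t_near = np.max(np.minimum(t1, t2))   # latest entry across all slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across all slabs
    return t_far >= max(t_near, 0.0)

# A ray along +x from the origin hits the first box but misses one shifted upward.
print(ray_intersects_aabb([0, 0, 0], [1, 0, 0], [0.5, -0.5, -0.5], [1.5, 0.5, 0.5]))  # True
print(ray_intersects_aabb([0, 0, 0], [1, 0, 0], [0.5, 2.0, -0.5], [1.5, 3.0, 0.5]))   # False
```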
Space partitioning is also often used in scanline algorithms to eliminate polygons that lie outside the camera's viewing frustum, limiting the number of polygons processed by the pipeline. It is also used in collision detection: determining whether two objects are close to each other can be much faster using space partitioning.
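A hedged sketch of the collision-detection use, with a uniform grid rather than a tree (all names are illustrative): each object is hashed into a grid cell, and only objects in the same or neighboring cells are tested against each other.

```python
from collections import defaultdict

def broad_phase_pairs(positions, cell_size):
    """Uniform-grid broad phase for collision detection: hash each object's
    (x, y) position into a grid cell and only report pairs that fall in the
    same or neighboring cells, instead of testing all O(n^2) pairs."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)
    candidates = set()
    for (cx, cy), members in grid.items():
        # Gather occupants of this cell and its 8 neighbors.
        nearby = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for j in grid.get((cx + dx, cy + dy), [])]
        for i in members:
            for j in nearby:
                if i < j:
                    candidates.add((i, j))
    return candidates

# Only the first two objects are close enough to need an exact collision test.
print(broad_phase_pairs([(0.1, 0.2), (0.15, 0.25), (5.0, 5.0)], cell_size=1.0))
```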
In integrated circuit design, an important step is the design rule check. This step ensures that the completed design is manufacturable. The check involves rules that specify widths, spacings, and other geometric patterns. A modern design can have billions of polygons that represent wires and transistors, so efficient checking relies heavily on geometry queries. For example, a rule may specify that any polygon must be at least n nanometers from any other polygon. This is converted into a geometry query by enlarging each polygon by n/2 on all sides and querying for all intersecting polygons.
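Below is a simplified sketch of that conversion, assuming each shape is approximated by an axis-aligned bounding box; a production checker handles arbitrary polygons, refines corner-to-corner distances, and replaces the quadratic loop with a spatial index.

```python
def spacing_violations(boxes, min_spacing):
    """Sketch of a spacing design-rule check on axis-aligned boxes given as
    (x0, y0, x1, y1).  Each box is inflated by min_spacing / 2 on all sides;
    inflated boxes that intersect mark shapes that are closer than
    min_spacing along both axes."""
    half = min_spacing / 2.0
    inflated = [(x0 - half, y0 - half, x1 + half, y1 + half)
                for (x0, y0, x1, y1) in boxes]
    violations = []
    for i in range(len(inflated)):
        for j in range(i + 1, len(inflated)):
            ax0, ay0, ax1, ay1 = inflated[i]
            bx0, by0, bx1, by1 = inflated[j]
            if ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1:
                violations.append((i, j))
    return violations

# Boxes 0 and 1 are only 3 units apart, violating a 5-unit spacing rule.
print(spacing_violations([(0, 0, 10, 10), (13, 0, 20, 10), (30, 0, 40, 10)], 5))
```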
The number of components in a space partition plays a central role in some results in probability theory. See Growth function for more details.
There are many studies and applications in which a geographical space is partitioned by hydrological criteria, administrative criteria, mathematical criteria, or many others.
In the context of cartography and GIS (geographic information systems), it is common to identify cells of the partition by standard codes: for example, HUC codes identifying hydrographical basins and sub-basins, ISO 3166-2 codes identifying countries and their subdivisions, or arbitrary DGGs (discrete global grids) identifying quadrants or locations.
Common space-partitioning systems include BSP trees, quadtrees, octrees, k-d trees, R-trees, bins, and bounding volume hierarchies.
Suppose the n-dimensional Euclidean space is partitioned by $r$ hyperplanes that are $(n-1)$-dimensional. What is the number of components in the partition? The largest number of components is attained when the hyperplanes are in general position, i.e., no two are parallel and no three have the same intersection. Denote this maximum number of components by $\mathrm{Comp}(n,r)$. Then, the following recurrence relation holds: [4] [5]

$$\mathrm{Comp}(n,r) = \mathrm{Comp}(n,r-1) + \mathrm{Comp}(n-1,r-1),$$

with the base cases $\mathrm{Comp}(n,0) = 1$ (with no hyperplanes, the whole space is a single component) and $\mathrm{Comp}(0,r) = 1$ (a zero-dimensional space is a single point).

And its solution is:

$$\mathrm{Comp}(n,r) = \sum_{i=0}^{n} \binom{r}{i} \quad \text{if } r \ge n, \qquad \mathrm{Comp}(n,r) = 2^{r} \quad \text{if } r \le n,$$

which is upper-bounded as:

$$\mathrm{Comp}(n,r) \le r^{n} + 1.$$
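The recurrence and the closed form are easy to cross-check numerically; the short Python sketch below (names chosen to match the formulas above) does so for a small case.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def components(n, r):
    """Maximum number of components of n-dimensional space cut by r
    hyperplanes in general position, via the recurrence above."""
    if n == 0 or r == 0:
        return 1
    return components(n, r - 1) + components(n - 1, r - 1)

def components_closed_form(n, r):
    """Closed form: sum of binomial coefficients C(r, i) for i = 0..n
    (the terms with i > r vanish, so this equals 2**r when r <= n)."""
    return sum(comb(r, i) for i in range(n + 1))

# For example, 3 lines in general position cut the plane into 7 regions.
assert components(2, 3) == components_closed_form(2, 3) == 7
```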
In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik and colleagues, SVMs are one of the most robust prediction methods, being based on the statistical learning framework of VC theory proposed by Vapnik and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
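As a brief usage sketch, assuming scikit-learn is available (the data and variable names are illustrative only), a linear SVM separates two toy categories as follows:

```python
from sklearn.svm import SVC  # assumes scikit-learn is installed

# Two categories of 2-D training points (toy data).
X = [[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]]
y = [0, 0, 0, 1, 1, 1]

# A linear SVM finds the separating hyperplane with the widest margin.
clf = SVC(kernel="linear")
clf.fit(X, y)

# New points are classified by the side of the hyperplane they fall on.
print(clf.predict([[0.5, 0.5], [4.5, 4.8]]))   # expected: [0 1]
```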
In computer science, binary space partitioning (BSP) is a method for space partitioning which recursively subdivides a Euclidean space into two convex sets by using hyperplanes as partitions. This process of subdividing gives rise to a representation of objects within the space in the form of a tree data structure known as a BSP tree.
In geometry, a normal is an object that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the (infinite) line perpendicular to the tangent line to the curve at the point. A normal vector may have length one or its length may represent the curvature of the object. Multiplying a normal vector by -1 results in the opposite vector, which may be used for indicating sides.
In geometry, inversive geometry is the study of inversion, a transformation of the Euclidean plane that maps circles or lines to other circles or lines and that preserves the angles between crossing curves. Many difficult problems in geometry become much more tractable when an inversion is applied. Inversion seems to have been discovered by a number of people contemporaneously, including Steiner (1824), Quetelet (1825), Bellavitis (1836), Stubbs and Ingram (1842-3) and Kelvin (1845).
In Vapnik–Chervonenkis theory, the Vapnik–Chervonenkis (VC) dimension is a measure of the capacity of a set of functions that can be learned by a statistical binary classification algorithm. It is defined as the cardinality of the largest set of points that the algorithm can shatter, which means the algorithm can always learn a perfect classifier for any labeling of at least one configuration of those data points. It was originally defined by Vladimir Vapnik and Alexey Chervonenkis.
The point location problem is a fundamental topic of computational geometry. It finds applications in areas that deal with processing geometrical data: computer graphics, geographic information systems (GIS), motion planning, and computer aided design (CAD).
In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane.
In computer graphics and computational geometry, a bounding volume for a set of objects is a closed region that completely contains the union of the objects in the set. Bounding volumes are used to improve the efficiency of geometrical operations: because they are simple regions, they have simpler ways to test for overlap than the objects they enclose.
In geometry, an arrangement of lines is the subdivision of the plane formed by a collection of lines. Problems of counting the features of arrangements have been studied in discrete geometry, and computational geometers have found algorithms for the efficient construction of arrangements.
In geometry and combinatorics, an arrangement of hyperplanes is an arrangement of a finite set A of hyperplanes in a linear, affine, or projective space S. Questions about a hyperplane arrangement A generally concern geometrical, topological, or other properties of the complement, M(A), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these properties are related to the arrangement and its intersection semilattice. The intersection semilattice of A, written L(A), is the set of all subspaces that are obtained by intersecting some of the hyperplanes; among these subspaces are S itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These intersection subspaces of A are also called the flats of A. The intersection semilattice L(A) is partially ordered by reverse inclusion.
The concept of shattered sets plays an important role in Vapnik–Chervonenkis theory, also known as VC-theory. Shattering and VC-theory are used in the study of empirical processes as well as in statistical computational learning theory.
In computer science, a k-d tree is a space-partitioning data structure for organizing points in a k-dimensional space, that is, a space with exactly k orthogonal axes for any fixed number k of dimensions. k-d trees are a useful data structure for several applications, such as range searches and nearest neighbor searches.
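A brief usage sketch, assuming SciPy's scipy.spatial.KDTree is available (the points and query values are illustrative):

```python
import numpy as np
from scipy.spatial import KDTree   # assumes SciPy is installed

rng = np.random.default_rng(0)
points = rng.random((1000, 3))     # 1000 points in 3-D space

tree = KDTree(points)              # build the k-d tree (k = 3 here)

# Nearest-neighbor query: distance and index of the closest stored point.
dist, idx = tree.query([0.5, 0.5, 0.5])

# Range query: indices of all points within radius 0.1 of the query point.
nearby = tree.query_ball_point([0.5, 0.5, 0.5], r=0.1)
print(dist, idx, len(nearby))
```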
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the $n$-dimensional Euclidean space $\mathbb{R}^n$. Most texts use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary.
Geometry processing, or mesh processing, is an area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. As the name implies, many of the concepts, data structures, and algorithms are directly analogous to signal processing and image processing. For example, where image smoothing might convolve an intensity signal with a blur kernel formed using the Laplace operator, geometric smoothing might be achieved by convolving a surface geometry with a blur kernel formed using the Laplace-Beltrami operator.
In probability theory, an empirical measure is a random measure arising from a particular realization of a sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.
Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
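One classical construction partitions space with random hyperplanes through the origin: each hyperplane contributes one bit of the hash (which side of the plane the vector falls on), so vectors with high cosine similarity tend to receive similar signatures. A minimal sketch with illustrative names:

```python
import numpy as np

def lsh_signatures(vectors, n_planes=16, seed=0):
    """Random-hyperplane LSH for cosine similarity: each of n_planes random
    hyperplanes through the origin contributes one bit (which side of the
    plane the vector falls on), so similar vectors tend to share bits."""
    rng = np.random.default_rng(seed)
    normals = rng.normal(size=(n_planes, vectors.shape[1]))   # plane normals
    return (vectors @ normals.T >= 0).astype(int)             # one bit per plane

vecs = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.3]])
sigs = lsh_signatures(vecs)
# The first two (similar) vectors agree on most bits; the third mostly differs.
print((sigs[0] == sigs[1]).sum(), (sigs[0] == sigs[2]).sum())
```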
In combinatorial mathematics and extremal set theory, the Sauer–Shelah lemma states that every family of sets with small VC dimension consists of a small number of sets. It is named after Norbert Sauer and Saharon Shelah, who published it independently of each other in 1972. The same result was also published slightly earlier and again independently, by Vladimir Vapnik and Alexey Chervonenkis, after whom the VC dimension is named. In his paper containing the lemma, Shelah gives credit also to Micha Perles, and for this reason the lemma has also been called the Perles–Sauer–Shelah lemma.
In discrete geometry, an arrangement is the decomposition of the d-dimensional linear, affine, or projective space into connected cells of different dimensions, induced by a finite collection of geometric objects, which are usually of dimension one less than the dimension of the space, and often of the same type as each other, such as hyperplanes or spheres.
The growth function, also called the shatter coefficient or the shattering number, measures the richness of a set family. It is especially used in the context of statistical learning theory, where it measures the complexity of a hypothesis class. The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties. It is a basic concept in machine learning.
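As a small illustration, the labelings realized on a fixed point set can be counted by brute force over a finite sample of a hypothesis class; the sketch below (illustrative names) does this for threshold classifiers on the real line, where n points admit n + 1 distinct labelings.

```python
def realized_labelings(points, hypotheses):
    """Count the distinct labelings of `points` realized by `hypotheses`
    (a lower bound on the growth function at n = len(points), which is the
    maximum of this count over all point sets of that size)."""
    return len({tuple(h(x) for x in points) for h in hypotheses})

# Hypothesis class: thresholds on the real line, h_t(x) = 1 if x >= t else 0.
points = [0.5, 1.5, 2.5, 3.5]
thresholds = [0.0, 1.0, 2.0, 3.0, 4.0]
hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]

print(realized_labelings(points, hypotheses))   # 5 labelings for n = 4 points
```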