In mathematics, the Hutchinson metric is a function which measures "the discrepancy between two images for use in fractal image processing" and "can also be applied to describe the similarity between DNA sequences expressed as real or complex genomic signals". [1] [2]
Consider only nonempty, compact, and finite metric spaces. For such a space X, let P(X) denote the space of Borel probability measures on X, with

δ : X → P(X)

the embedding associating to each point x ∈ X the point measure δ_x. The support of a measure in P(X) is the smallest closed subset of measure 1.
If f : X → Y is Borel measurable, then the induced map

f_* : P(X) → P(Y)

associates to the measure μ the measure f_*(μ) defined by

f_*(μ)(B) = μ(f⁻¹(B))

for all Borel sets B in Y.
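For a discrete (purely atomic) measure the induced map can be computed directly by transporting each atom; a minimal Python sketch (the name `pushforward` and the dict representation of a measure are illustrative assumptions, not notation from the text):

```python
def pushforward(f, mu):
    """Pushforward f_*(mu) of a discrete measure under a map f.

    mu is represented as a dict {atom: mass}.  Since
    f_*(mu)(B) = mu(f^{-1}(B)), each atom x of mu simply
    contributes its mass to the atom f(x) of the image measure.
    """
    nu = {}
    for x, mass in mu.items():
        y = f(x)
        nu[y] = nu.get(y, 0.0) + mass
    return nu
```

For example, pushing the measure {−1: 0.25, 1: 0.75} forward under f(x) = x² collapses both atoms onto the single atom 1, yielding the point measure {1: 1.0}.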
Then the Hutchinson metric is given by

H(μ₁, μ₂) = sup { ∫ g dμ₁ − ∫ g dμ₂ }

where the supremum is taken over all real-valued functions g with Lipschitz constant ≤ 1, that is, |g(x) − g(y)| ≤ d(x, y) for all x, y.
Then δ is an isometric embedding of X into P(X), and if f : X → Y is Lipschitz then f_* : P(X) → P(Y) is Lipschitz with the same Lipschitz constant. [3]
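On the real line the Hutchinson metric coincides with the L¹ distance between cumulative distribution functions, which gives a simple way to evaluate it for discrete measures without solving the supremum directly; a minimal Python sketch under that assumption (the function name and the dict representation of a measure are illustrative):

```python
def hutchinson_1d(mu, nu):
    """Hutchinson (Wasserstein-1) distance between two discrete
    probability measures on the real line.

    mu, nu: dicts mapping points (floats) to probability masses.
    On R the metric equals the integral of |F_mu - F_nu|, where
    F denotes the cumulative distribution function.
    """
    points = sorted(set(mu) | set(nu))
    dist = 0.0
    f_mu = f_nu = 0.0  # running CDF values
    for a, b in zip(points, points[1:]):
        f_mu += mu.get(a, 0.0)
        f_nu += nu.get(a, 0.0)
        dist += abs(f_mu - f_nu) * (b - a)
    return dist
```

In particular, the distance between two point measures δ_x and δ_y computes to |x − y|, consistent with the isometric-embedding statement.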