In probability and statistics, a moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Moment measures generalize the idea of (raw) moments of random variables, hence arise often in the study of point processes and related fields. [1]
An example of a moment measure is the first moment measure of a point process, often called mean measure or intensity measure, which gives the expected or average number of points of the point process being located in some region of space. [2] In other words, if the number of points of a point process located in some region of space is a random variable, then the first moment measure corresponds to the first moment of this random variable. [3]
Moment measures feature prominently in the study of point processes [1] [4] [5] as well as the related fields of stochastic geometry [3] and spatial statistics [5] [6] whose applications are found in numerous scientific and engineering disciplines such as biology, geology, physics, and telecommunications. [3] [4] [7]
Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in physical space, time or both, the underlying space is usually d-dimensional Euclidean space denoted here by $\mathbb{R}^d$, but they can be defined on more abstract mathematical spaces. [1]
Point processes have a number of interpretations, which is reflected by the various types of point process notation. [3] [7] For example, if a point $x$ belongs to or is a member of a point process, denoted by $N$, then this can be written as: [3]

$$x \in N,$$

and represents the point process $N$ being interpreted as a random set. Alternatively, the number of points of $N$ located in some Borel set $B$ is often written as: [2] [3] [6]

$$N(B),$$
which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably. [2] [3] [6]
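To make the two readings concrete, here is a minimal sketch in Python with NumPy; the unit-square window, the intensity, and the test set $B$ are illustrative choices, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# One realization of a point process on the unit square [0, 1]^2,
# here a homogeneous Poisson point process with intensity 100
# (an illustrative choice).
num_points = rng.poisson(100)
points = rng.uniform(0.0, 1.0, size=(num_points, 2))

# Random-set view: a given point x either is or is not a point of N.
x = points[0]
x_in_N = any(np.array_equal(x, p) for p in points)   # "x is an element of N"

# Random-measure view: N(B) counts the points of N falling in a Borel
# set B, here the rectangle [0, 0.5] x [0, 0.5].
in_B = (points[:, 0] <= 0.5) & (points[:, 1] <= 0.5)
N_of_B = int(in_B.sum())

print(x_in_N, N_of_B)
```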
For some integer $n \geq 1$, the $n$-th power of a point process $N$ is defined as: [2]

$$N^n(B_1 \times \cdots \times B_n) = \prod_{i=1}^{n} N(B_i),$$

where $B_1, \dots, B_n$ is a collection of not necessarily disjoint Borel sets (in $\mathbb{R}^d$), which form an $n$-fold Cartesian product of sets denoted by $B_1 \times \cdots \times B_n$. The symbol $\prod$ denotes standard multiplication.
The notation $N^n$ reflects the interpretation of the point process $N$ as a random measure. [3]
The $n$-th power of a point process $N$ can be equivalently defined as: [3]

$$N^n(B_1 \times \cdots \times B_n) = \sum_{(x_1, \dots, x_n) \in N^n} \prod_{i=1}^{n} \mathbf{1}_{B_i}(x_i),$$

where the summation is performed over all $n$-tuples of (possibly repeating) points, and $\mathbf{1}_{B_i}$ denotes an indicator function, so that $\mathbf{1}_{B_i}(x_i) = \delta_{x_i}(B_i)$, where $\delta_{x_i}$ is a Dirac measure. This definition can be contrasted with that of the $n$-th factorial power of a point process, for which each $n$-tuple consists of $n$ distinct points.
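The equivalence of the two definitions can be checked numerically on a single realization. The following Python/NumPy sketch does so for $n = 2$ on the unit interval; the mean number of points and the two overlapping sets are illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# One realization of a point process on [0, 1]: a homogeneous Poisson
# sample with mean 8 points (an illustrative choice).
points = rng.uniform(0.0, 1.0, size=rng.poisson(8))

# Two overlapping (hence not disjoint) Borel sets B_1 and B_2.
B1, B2 = (0.0, 0.6), (0.4, 1.0)

def N(interval, pts):
    """Counting-measure view: number of points falling in the interval."""
    a, b = interval
    return int(np.sum((pts >= a) & (pts <= b)))

def indicator(interval, x):
    """Indicator function 1_{[a, b]}(x)."""
    a, b = interval
    return 1 if a <= x <= b else 0

# First definition: N^2(B_1 x B_2) as a product of counts.
power_as_product = N(B1, points) * N(B2, points)

# Second definition: sum over all ordered pairs (x_1, x_2) of (possibly
# equal) points of the product of indicators 1_{B_1}(x_1) * 1_{B_2}(x_2).
power_as_sum = sum(
    indicator(B1, x1) * indicator(B2, x2)
    for x1, x2 in itertools.product(points, repeat=2)
)

assert power_as_product == power_as_sum   # the two definitions agree
print(power_as_product)
```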
The $n$-th moment measure is defined as:

$$M^n(B_1 \times \cdots \times B_n) = \operatorname{E}\left[ N^n(B_1 \times \cdots \times B_n) \right],$$

where $\operatorname{E}$ denotes the expectation (operator) of the point process $N$. In other words, the $n$-th moment measure is the expectation of the $n$-th power of some point process.
The $n$-th moment measure of a point process $N$ is equivalently defined [3] as:

$$\int_{\mathbb{R}^{nd}} f(x_1, \dots, x_n) \, M^n(dx_1, \dots, dx_n) = \operatorname{E}\left[ \sum_{(x_1, \dots, x_n) \in N^n} f(x_1, \dots, x_n) \right],$$

where $f$ is any non-negative measurable function on $\mathbb{R}^{nd}$, and the sum is over $n$-tuples of points for which repetition is allowed.
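In particular, choosing $f$ to be the indicator function of a product set recovers the first definition:

$$f(x_1, \dots, x_n) = \prod_{i=1}^{n} \mathbf{1}_{B_i}(x_i) \quad \Longrightarrow \quad \operatorname{E}\left[ \sum_{(x_1, \dots, x_n) \in N^n} \prod_{i=1}^{n} \mathbf{1}_{B_i}(x_i) \right] = \operatorname{E}\left[ N^n(B_1 \times \cdots \times B_n) \right] = M^n(B_1 \times \cdots \times B_n),$$

so the two definitions of $M^n$ agree on product sets $B_1 \times \cdots \times B_n$.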
For some Borel set $B$, the first moment measure of a point process $N$ is:

$$M^1(B) = \operatorname{E}[N(B)],$$

where $M^1$ is known, among other terms, as the intensity measure [3] or mean measure, [8] and is interpreted as the expected or average number of points of $N$ found or located in the set $B$.
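As a concrete illustration, the first moment measure can be estimated by averaging $N(B)$ over independent realizations. The Python/NumPy sketch below does this for a binomial point process of $m$ uniform points on the unit square, for which $M^1(B) = m|B|$ is known; the process, the window and the rectangle $B$ are illustrative choices, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative point process: a binomial point process of m = 50 points
# placed independently and uniformly on the unit square W = [0, 1]^2,
# whose mean measure is M^1(B) = m * |B| for any B inside W.
m = 50
B = (0.2, 0.7, 0.1, 0.5)             # rectangle [0.2, 0.7] x [0.1, 0.5]
area_B = (0.7 - 0.2) * (0.5 - 0.1)   # Lebesgue measure |B| = 0.2

def count_in(pts, rect):
    """Counting measure N(B): number of points of one realization in B."""
    x0, x1, y0, y1 = rect
    return np.sum((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y1))

# Estimate M^1(B) = E[N(B)] by averaging N(B) over many realizations.
num_realizations = 10_000
counts = [count_in(rng.uniform(0.0, 1.0, size=(m, 2)), B)
          for _ in range(num_realizations)]

print(np.mean(counts), m * area_B)   # estimate should be close to m|B| = 10
```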
The second moment measure for two Borel sets $A$ and $B$ is:

$$M^2(A \times B) = \operatorname{E}[N(A) N(B)],$$

which for a single Borel set $B$ becomes

$$M^2(B \times B) = [M^1(B)]^2 + \operatorname{Var}[N(B)],$$

where $\operatorname{Var}[N(B)]$ denotes the variance of the random variable $N(B)$.
The previous variance term alludes to how moment measures, like moments of random variables, can be used to calculate quantities such as the variance of point processes. A further example is the covariance of a point process $N$ for two Borel sets $A$ and $B$, which is given by: [2]

$$\operatorname{Cov}[N(A), N(B)] = M^2(A \times B) - M^1(A) M^1(B).$$
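These identities hold for any point process and can be checked by simulation. The following Python/NumPy sketch estimates $M^1$, $M^2$, the variance and the covariance for a binomial point process on the unit square; the process and the rectangles $A$ and $B$ are again illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative (non-Poisson) process: a binomial point process of m = 40
# independent uniform points on the unit square; A and B are overlapping
# rectangles. The identities checked below hold for any point process.
m = 40
A = (0.0, 0.6, 0.0, 0.6)   # rectangle [0, 0.6] x [0, 0.6]
B = (0.4, 1.0, 0.4, 1.0)   # rectangle [0.4, 1] x [0.4, 1]

def count_in(pts, rect):
    """Counting measure: number of points of one realization in the rectangle."""
    x0, x1, y0, y1 = rect
    return np.sum((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y1))

num_realizations = 50_000
NA = np.empty(num_realizations)
NB = np.empty(num_realizations)
for i in range(num_realizations):
    pts = rng.uniform(0.0, 1.0, size=(m, 2))
    NA[i], NB[i] = count_in(pts, A), count_in(pts, B)

M1_A, M1_B = NA.mean(), NB.mean()   # first moment measures M^1(A), M^1(B)
M2_AB = (NA * NB).mean()            # second moment measure M^2(A x B)
M2_BB = (NB * NB).mean()            # M^2(B x B)

# M^2(B x B) = (M^1(B))^2 + Var[N(B)]
print(M2_BB, M1_B**2 + NB.var())

# Cov[N(A), N(B)] = M^2(A x B) - M^1(A) M^1(B)
print(M2_AB - M1_A * M1_B, np.cov(NA, NB, ddof=0)[0, 1])
```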
For a general Poisson point process with intensity measure $\Lambda$ the first moment measure is: [2]

$$M^1(B) = \Lambda(B),$$

which for a homogeneous Poisson point process with constant intensity $\lambda > 0$ means:

$$M^1(B) = \lambda |B|,$$

where $|B|$ is the length, area or volume (or more generally, the Lebesgue measure) of $B$.
For the Poisson case with measure $\Lambda$ the second moment measure defined on the product set $A \times B$ is: [5]

$$M^2(A \times B) = \Lambda(A)\Lambda(B) + \Lambda(A \cap B),$$

which in the homogeneous case reduces to

$$M^2(A \times B) = \lambda^2 |A| |B| + \lambda |A \cap B|.$$
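Both Poisson formulas can be checked by simulating a homogeneous Poisson point process and averaging over realizations. In the Python/NumPy sketch below, the intensity and the overlapping intervals $A$ and $B$ on the unit interval are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Homogeneous Poisson point process on [0, 1] with intensity 30
# (an illustrative choice); A and B are intervals overlapping on [0.4, 0.6].
lam = 30
A, B = (0.0, 0.6), (0.4, 1.0)
A_cap_B = (0.4, 0.6)

def length(iv):
    """Lebesgue measure (length) of an interval."""
    return iv[1] - iv[0]

def count_in(pts, iv):
    """Counting measure: number of points of one realization in the interval."""
    return np.sum((pts >= iv[0]) & (pts <= iv[1]))

num_realizations = 50_000
NA = np.empty(num_realizations)
NB = np.empty(num_realizations)
for i in range(num_realizations):
    # Poisson total number of points, scattered uniformly on [0, 1].
    pts = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
    NA[i], NB[i] = count_in(pts, A), count_in(pts, B)

# First moment measure: M^1(B) = lambda * |B|.
print(NB.mean(), lam * length(B))

# Second moment measure: M^2(A x B) = lambda^2 |A||B| + lambda |A ∩ B|.
print((NA * NB).mean(),
      lam**2 * length(A) * length(B) + lam * length(A_cap_B))
```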
In statistics and probability theory, a point process or point field is a collection of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Point processes can be used for spatial data analysis, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics and others.
In probability theory, a Laplace functional refers to one of two possible mathematical functions of functions or, more precisely, functionals that serve as mathematical tools for studying either point processes or concentration of measure properties of metric spaces. One type of Laplace functional, also known as a characteristic functional, is defined in relation to a point process, which can be interpreted as a random counting measure, and has applications in characterizing and deriving results on point processes. Its definition is analogous to a characteristic function for a random variable.
In mathematics, a determinantal point process is a stochastic point process, the probability distribution of which is characterized as a determinant of some function. Such processes arise as important tools in random matrix theory, combinatorics, physics, machine learning, and wireless network modeling.
In probability theory and statistics, Campbell's theorem or the Campbell–Hardy theorem is either a particular equation or a set of results relating the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of the expected value and variance of the random sum. One version of the theorem, also known as Campbell's formula, entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process. There also exist equations involving moment measures and factorial moment measures that are considered versions of Campbell's formula. All these results are employed in probability and statistics, with a particular importance in the theory of point processes and queueing theory as well as the related fields of stochastic geometry, continuum percolation theory, and spatial statistics.
In probability and statistics, point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes, which are used in related fields such as stochastic geometry, spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both.
In probability and statistics, a point process operation or point process transformation is a type of mathematical operation performed on a random object known as a point process, which is often used as a mathematical model of phenomena that can be represented as points randomly located in space. These operations can be purely random, deterministic or both, and are used to construct new point processes, which can then also be used as mathematical models. The operations may include removing or thinning points from a point process, combining or superimposing multiple point processes into one point process, or transforming the underlying space of the point process into another space. Point process operations and the resulting point processes are used in the theory of point processes and related fields such as stochastic geometry and spatial statistics.
In probability and statistics, a factorial moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Factorial moment measures generalize the idea of factorial moments, which are useful for studying non-negative integer-valued random variables.
In probability and statistics, a spherical contact distribution function, first contact distribution function, or empty space function is a mathematical function that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. More specifically, a spherical contact distribution function is defined as the probability distribution of the radius of a sphere when it first encounters or makes contact with a point in a point process. This function can be contrasted with the nearest neighbour function, which is defined in relation to some point in the point process as being the probability distribution of the distance from that point to its nearest neighbouring point in the same point process.
In probability theory, statistics and related fields, a Poisson point process is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.
In probability and statistics, a nearest neighbor function, nearest neighbor distance distribution, nearest-neighbor distribution function or nearest neighbor distribution is a mathematical function that is defined in relation to mathematical objects known as point processes, which are often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. More specifically, nearest neighbor functions are defined with respect to some point in the point process as being the probability distribution of the distance from this point to its nearest neighboring point in the same point process, hence they are used to describe the probability of another point existing within some distance of a point. A nearest neighbor function can be contrasted with a spherical contact distribution function, which is not defined in reference to some initial point but rather as the probability distribution of the radius of a sphere when it first encounters or makes contact with a point of a point process.