Factorial moment measure

In probability and statistics, a factorial moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Factorial moment measures generalize the idea of factorial moments, which are useful for studying non-negative integer-valued random variables. [1]

The first factorial moment measure of a point process coincides with its first moment measure or intensity measure, [2] which gives the expected or average number of points of the point process located in some region of space. In general, if the number of points in some region is considered as a random variable, then the factorial moment measure of this region is the factorial moment of this random variable. [3] Factorial moment measures completely characterize a wide class of point processes, which means they can be used to uniquely identify a point process.

If a factorial moment measure is absolutely continuous with respect to the Lebesgue measure, then it is said to have a density (which is a generalized form of a derivative), and this density is known by a number of names such as factorial moment density and product density, as well as coincidence density, [1] joint intensity, [4] correlation function or multivariate frequency spectrum. [5] The first and second factorial moment densities of a point process are used in the definition of the pair correlation function, which gives a way to statistically quantify the strength of interaction or correlation between points of a point process. [6]

Factorial moment measures serve as useful tools in the study of point processes [1] [6] [7] as well as the related fields of stochastic geometry [3] and spatial statistics, [6] [8] which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications. [1] [3] [9]

Point process notation

Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in space, time or both, the underlying space is usually d-dimensional Euclidean space denoted here by $\mathbf{R}^d$, but they can be defined on more abstract mathematical spaces. [7]

Point processes have a number of interpretations, which is reflected by the various types of point process notation. [3] [9] For example, if a point $x$ belongs to or is a member of a point process, denoted by $N$, then this can be written as: [3]

$$x \in N,$$
and represents the point process being interpreted as a random set. Alternatively, the number of points of $N$ located in some Borel set $B$ is often written as: [2] [3] [8]

$$N(B),$$
which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably. [3] [8] [2]

Definitions

n-th factorial power of a point process

For some positive integer $n$, the $n$-th factorial power of a point process $N$ on $\mathbf{R}^d$ is defined as: [2]

$$N^{(n)}(B_1\times\cdots\times B_n)=\sum_{x_1,\dots,x_n\in N}^{\neq}\prod_{i=1}^{n}\mathbf{1}_{B_i}(x_i),$$

where $B_1,\dots,B_n$ is a collection of not necessarily disjoint Borel sets in $\mathbf{R}^d$, which form an $n$-fold Cartesian product of sets denoted by:

$$B_1\times\cdots\times B_n.$$

The symbol $\mathbf{1}$ denotes an indicator function, so that $\mathbf{1}_{B_i}(x_i)$ equals one if $x_i\in B_i$ and zero otherwise; as a function of the set $B_i$, it is the Dirac measure $\delta_{x_i}(B_i)$. The summation in the above expression is performed over all $n$-tuples of distinct points, including permutations, which can be contrasted with the definition of the $n$-th power of a point process. The symbol $\Pi$ denotes multiplication, and the existence of various point process notation means that the $n$-th factorial power of a point process is sometimes defined using other notation. [2]
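
To make the counting in this definition concrete, the following Python sketch evaluates the $n$-th factorial power of a finite point pattern by brute force, summing indicator products over all ordered tuples of distinct points. The helper name factorial_power_count and the box representation are ad hoc choices for illustration, not from any particular library, and the enumeration is only practical for small patterns.

```python
from itertools import permutations

def factorial_power_count(points, boxes):
    """Brute-force evaluation of the n-th factorial power N^(n)(B_1 x ... x B_n)
    for a finite point pattern: count ordered tuples of *distinct* points with
    the i-th point falling in the i-th box.  Each box is a list of (low, high)
    pairs, one pair per coordinate."""
    n = len(boxes)

    def in_box(x, box):
        return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box))

    # Sum of indicator products over all n-tuples of distinct points,
    # permutations included, mirroring the defining summation.
    return sum(
        all(in_box(x, box) for x, box in zip(tup, boxes))
        for tup in permutations(points, n)
    )

# Example: three points in the plane, second factorial power of B x B.
pts = [(0.2, 0.3), (0.4, 0.1), (0.9, 0.8)]
B = [(0.0, 0.5), (0.0, 0.5)]               # a square box
print(factorial_power_count(pts, [B, B]))  # 2 points in B -> 2*1 = 2 ordered pairs
```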

n-th factorial moment measure

The $n$-th factorial moment measure or $n$-th order factorial moment measure is defined as:

$$M^{(n)}(B_1\times\cdots\times B_n)=E\big[N^{(n)}(B_1\times\cdots\times B_n)\big],$$

where $E$ denotes the expectation (operator) with respect to the point process $N$. In other words, the $n$-th factorial moment measure is the expectation of the $n$-th factorial power of some point process.

The $n$-th factorial moment measure of a point process $N$ is equivalently defined [3] by:

$$\int_{(\mathbf{R}^d)^n} f(x_1,\dots,x_n)\,M^{(n)}(dx_1,\dots,dx_n)=E\left[\sum_{x_1,\dots,x_n\in N}^{\neq} f(x_1,\dots,x_n)\right],$$

where $f$ is any non-negative measurable function on $(\mathbf{R}^d)^n$, and the above summation is performed over all $n$-tuples of distinct points, including permutations. Consequently, the factorial moment measure is defined such that no point is repeated in the product set, as opposed to the moment measure. [7]

First factorial moment measure

The first factorial moment measure coincides with the first moment measure: [2]

$$M^{(1)}(B)=M^{1}(B)=\Lambda(B)=E[N(B)],$$

where $\Lambda$ is known, among other terms, as the intensity measure [3] or mean measure, [10] and is interpreted as the expected number of points of $N$ found or located in the set $B$.

Second factorial moment measure

The second factorial moment measure for two Borel sets $B_1$ and $B_2$ is:

$$M^{(2)}(B_1\times B_2)=E\big[N(B_1)N(B_2)\big]-E\big[N(B_1\cap B_2)\big].$$
Name explanation

For some Borel set $B$, the namesake of this measure is revealed when the $n$-th factorial moment measure reduces to:

$$M^{(n)}(B\times\cdots\times B)=E\big[N(B)\big(N(B)-1\big)\cdots\big(N(B)-n+1\big)\big],$$

which is the $n$-th factorial moment of the random variable $N(B)$. [3]
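
For instance, taking $B_1=B_2=B$ in the second factorial moment measure above gives

$$M^{(2)}(B\times B)=E\big[N(B)^2\big]-E\big[N(B)\big]=E\big[N(B)\big(N(B)-1\big)\big],$$

which is the second factorial moment of the count $N(B)$, in agreement with the general reduction for $n=2$.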

Factorial moment density

If a factorial moment measure is absolutely continuous, then it has a density (or more precisely, a Radon–Nikodym derivative) with respect to the Lebesgue measure, and this density is known as the factorial moment density or product density, joint intensity, correlation function, or multivariate frequency spectrum. Denoting the $n$-th factorial moment density by $\mu^{(n)}(x_1,\dots,x_n)$, it is defined with respect to the equation: [3]

$$M^{(n)}(B_1\times\cdots\times B_n)=\int_{B_1}\cdots\int_{B_n}\mu^{(n)}(x_1,\dots,x_n)\,dx_1\cdots dx_n.$$

Furthermore, this implies the following expression:

$$E\left[\sum_{x_1,\dots,x_n\in N}^{\neq} f(x_1,\dots,x_n)\right]=\int_{\mathbf{R}^d}\cdots\int_{\mathbf{R}^d} f(x_1,\dots,x_n)\,\mu^{(n)}(x_1,\dots,x_n)\,dx_1\cdots dx_n,$$

where $f$ is any non-negative bounded measurable function defined on $(\mathbf{R}^d)^n$.

Pair correlation function

In spatial statistics and stochastic geometry, to measure the statistical correlation between points of a point process, the pair correlation function of a point process $N$ is defined as: [3] [6]

$$g(x_1,x_2)=\frac{\mu^{(2)}(x_1,x_2)}{\mu^{(1)}(x_1)\,\mu^{(1)}(x_2)},$$

where the points $x_1,x_2\in\mathbf{R}^d$. In general, $g(x_1,x_2)\geq 0$, whereas $g(x_1,x_2)=1$ corresponds to no correlation (between points) in the typical statistical sense. [6]
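
As an illustration of how this quantity is used in practice, the following Python sketch estimates $g(r)$ for a stationary, isotropic point pattern on a square window with a naive binned pair-count estimator: observed ordered pair counts per distance bin are divided by the counts expected under complete randomness. The function name pair_correlation_estimate and the binning choices are assumptions made here for illustration; no edge correction is applied, so the estimate is biased near the window boundary and for larger distances.

```python
import numpy as np

def pair_correlation_estimate(points, window_length, r_max, n_bins=20):
    """Naive estimate of g(r) for a point pattern observed on [0, L]^2.
    For each distance bin it compares the number of ordered pairs of distinct
    points at that distance with lambda^2 * |W| * 2*pi*r*dr, the value expected
    for a homogeneous Poisson process (no edge correction)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    area = window_length ** 2
    lam_hat = n / area                          # estimated intensity

    # Distances of all ordered pairs of distinct points.
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    dists = dists[~np.eye(n, dtype=bool)]       # drop the diagonal (i == j)

    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dists, bins=edges)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]

    expected = lam_hat ** 2 * area * 2.0 * np.pi * r_mid * dr
    return r_mid, counts / expected

# Homogeneous Poisson pattern on [0, 1]^2: the estimate should hover around 1.
rng = np.random.default_rng(0)
n_pts = rng.poisson(200.0)                      # intensity 200 on a unit window
pattern = rng.uniform(0.0, 1.0, size=(n_pts, 2))
r, g = pair_correlation_estimate(pattern, 1.0, r_max=0.2)
print(np.round(g, 2))
```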

Examples

Poisson point process

For a general Poisson point process $N$ with intensity measure $\Lambda$, the $n$-th factorial moment measure is given by the expression: [3]

$$M^{(n)}(B_1\times\cdots\times B_n)=\prod_{i=1}^{n}\Lambda(B_i),$$

where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by:

$$\Lambda(B)=M^{1}(B)=E[N(B)].$$
For a homogeneous Poisson point process with constant intensity $\lambda>0$, the $n$-th factorial moment measure is simply: [2]

$$M^{(n)}(B_1\times\cdots\times B_n)=\lambda^{n}\prod_{i=1}^{n}|B_i|,$$

where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is: [3]

$$\mu^{(n)}(x_1,\dots,x_n)=\lambda^{n}.$$
The pair-correlation function of the homogeneous Poisson point process is simply

$$g(x_1,x_2)=1,$$
which reflects the lack of interaction between points of this point process.
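
As a quick sanity check of the homogeneous case, the following Python sketch (a minimal illustration, not taken from the cited references) uses the fact that $N(B)$ is Poisson distributed with mean $\lambda|B|$ to verify by simulation that the $n$-th factorial moment of the count equals $(\lambda|B|)^n$, as the formula above predicts when $B_1=\cdots=B_n=B$.

```python
import numpy as np

# Monte Carlo check: for a homogeneous Poisson process with intensity lam,
# the n-th factorial moment of N(B) should equal (lam * |B|)^n.
rng = np.random.default_rng(42)

lam = 5.0                   # intensity (points per unit volume)
area_B = 2.0                # Lebesgue measure |B| of the test set B
n = 3                       # order of the factorial moment
n_runs = 200_000

counts = rng.poisson(lam * area_B, size=n_runs)   # N(B) ~ Poisson(lam * |B|)

# Empirical n-th factorial moment E[N(B)(N(B)-1)...(N(B)-n+1)].
falling_factorial = np.ones(n_runs)
for k in range(n):
    falling_factorial *= counts - k
empirical = falling_factorial.mean()

print(f"empirical  : {empirical:.1f}")
print(f"theoretical: {(lam * area_B) ** n:.1f}")   # (lam * |B|)^n = 1000.0
```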

Factorial moment expansion

The expectations of general functionals of simple point processes, provided certain mathematical conditions hold, have (possibly infinite) expansions or series consisting of the corresponding factorial moment measures. [11] [12] In comparison to the Taylor series, which consists of a series of derivatives of some function, the $n$-th factorial moment measure plays a role analogous to that of the $n$-th derivative in the Taylor series. In other words, given a general functional $f$ of some simple point process $N$, this Taylor-like theorem for non-Poisson point processes means that an expansion exists for the expectation $E[f(N)]$, provided some mathematical condition is satisfied which ensures convergence of the expansion.


References

    1. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. I. Probability and its Applications (New York). Springer, New York, second edition, 2003.
    2. Baccelli, François (2009). "Stochastic Geometry and Wireless Networks: Volume I Theory" (PDF). Foundations and Trends in Networking. 3 (3–4): 249–449. doi:10.1561/1300000006. ISSN 1554-057X.
    3. D. Stoyan, W. S. Kendall, J. Mecke, and L. Ruschendorf. Stochastic geometry and its applications, volume 2. Wiley, Chichester, 1995.
    4. Hough, J. Ben; Krishnapur, Manjunath; Peres, Yuval; Virág, Bálint (2006). "Determinantal processes and independence". Probability Surveys. 3: 206–229. arXiv:math/0503110. doi:10.1214/154957806000000078. S2CID 9604112.
    5. K. Handa. The two-parameter Poisson–Dirichlet point process. Bernoulli, 15(4):1082–1116, 2009.
    6. A. Baddeley, I. Bárány, and R. Schneider. Spatial point processes and their applications. Stochastic Geometry: Lectures given at the CIME Summer School held in Martina Franca, Italy, September 13–18, 2004, pages 1–75, 2007.
    7. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. II. Probability and its Applications (New York). Springer, New York, second edition, 2008.
    8. Møller, Jesper; Waagepetersen, Rasmus Plenge (2003). Statistical Inference and Simulation for Spatial Point Processes. C&H/CRC Monographs on Statistics & Applied Probability. Vol. 100. CiteSeerX 10.1.1.124.1275. doi:10.1201/9780203496930. ISBN 978-1-58488-265-7.
    9. F. Baccelli and B. Błaszczyszyn. Stochastic Geometry and Wireless Networks, Volume II – Applications, volume 4, No 1–2 of Foundations and Trends in Networking. NoW Publishers, 2009.
    10. J. F. C. Kingman. Poisson processes, volume 3. Oxford University Press, 1992.
    11. B. Błaszczyszyn. Factorial-moment expansion for stochastic systems. Stoch. Proc. Appl., 56:321–335, 1995.
    12. Kroese, Dirk P.; Schmidt, Volker (1996). "Light-traffic analysis for queues with spatially distributed arrivals". Mathematics of Operations Research. 21 (1): 135–157. doi:10.1287/moor.21.1.135.