In mathematics, the Poisson boundary is a probability space associated to a random walk. It is an object designed to encode the asymptotic behaviour of the random walk, i.e. how trajectories diverge when the number of steps goes to infinity. Despite being called a boundary it is in general a purely measure-theoretical object and not a boundary in the topological sense. However, in the case where the random walk is on a topological space the Poisson boundary can be related to the Martin boundary, which is an analytic construction yielding a genuine topological boundary. Both boundaries are related to harmonic functions on the space via generalisations of the Poisson formula.
The Poisson formula states that given a positive harmonic function $f$ on the unit disc $\mathbb{D}$ (that is, $\Delta f = 0$ where $\Delta$ is the Laplace–Beltrami operator associated to the Poincaré metric on $\mathbb{D}$) there exists a unique measure $\mu_f$ on the boundary $\partial\mathbb{D} = \mathbb{S}^1$ such that the equality

$$ f(z) = \int_{\mathbb{S}^1} P(z, \xi) \, d\mu_f(\xi) $$

holds for all $z \in \mathbb{D}$, where $P(z, \xi) = \frac{1 - |z|^2}{|z - \xi|^2}$ is the Poisson kernel. One way to interpret this is that the functions $z \mapsto P(z, \xi)$ for $\xi \in \mathbb{S}^1$ are, up to scaling, all the extreme points in the cone of nonnegative harmonic functions. This analytical interpretation of the set $\mathbb{S}^1$ leads to the more general notion of minimal Martin boundary (which in this case is the full Martin boundary).
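As a numerical illustration (a sketch only: the harmonic function, the evaluation point and the grid size below are arbitrary choices), the Poisson formula can be checked by quadrature. Since the Poincaré metric is conformal to the Euclidean one, harmonic functions on the disc are the same for both metrics, so the classical Poisson kernel applies:

```python
import numpy as np

def poisson_kernel(z, xi):
    """P(z, xi) = (1 - |z|^2) / |xi - z|^2 for z in the disc and |xi| = 1."""
    return (1 - abs(z) ** 2) / abs(xi - z) ** 2

f = lambda z: 1 + z.real                  # a positive harmonic function
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
xi = np.exp(1j * t)                       # grid on the boundary circle S^1
z = 0.3 + 0.4j                            # an interior evaluation point

# Here d(mu_f) = f(xi) dt/(2 pi), and the mean over the uniform grid
# approximates the boundary integral.
print(np.mean(poisson_kernel(z, xi) * f(xi)), f(z))   # both approx 1.3
```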
This fact can also be interpreted in a probabilistic manner. If $(B_t)_{t \ge 0}$ is the Markov process associated to $\Delta$ (i.e. the Brownian motion on the disc with the Poincaré Riemannian metric), then the process $f(B_t)$ is a continuous-time martingale, and as such converges almost surely to a function $\hat f$ on the Wiener space of possible (infinite) trajectories for $B_t$. Thus the Poisson formula identifies this measured space with the Martin boundary constructed above, and ultimately with $\mathbb{S}^1$ endowed with the class of Lebesgue measure (note that this identification can be made directly since a path in Wiener space converges almost surely to a point on $\mathbb{S}^1$). This interpretation of $\mathbb{S}^1$ as the space of trajectories for a Markov process is a special case of the construction of the Poisson boundary.
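The same identity can be sampled probabilistically (again a sketch; the step size and the number of trials are arbitrary, and the exit point is a discretisation of the limiting boundary point). Because hyperbolic Brownian motion on the disc is a time change of the Euclidean one, both hit the boundary circle with the same distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda z: 1 + z.real                  # positive harmonic test function
z0, dt, trials = 0.3 + 0.4j, 1e-3, 5000

total = 0.0
for _ in range(trials):
    z = z0
    while abs(z) < 1:                     # run Brownian motion to the boundary
        z += np.sqrt(dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    total += f(z / abs(z))                # project the exit point onto S^1

print(total / trials, f(z0))              # both approx 1.3
```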
Finally, the constructions above can be discretised, i.e. restricted to random walks on the orbits of a Fuchsian group acting on $\mathbb{D}$. This gives an identification of the extremal positive harmonic functions on the group, and of the space of trajectories of the random walk on the group (both with respect to a given probability measure), with the topological/measured space $\mathbb{S}^1$.
Let $G$ be a discrete group and $\mu$ a probability measure on $G$, which will be used to define a random walk $X = (X_n)_{n \ge 0}$ on $G$ (a discrete-time Markov process whose transition probabilities are $p(x, y) = \mu(x^{-1} y)$); the measure $\mu$ is called the step distribution for the random walk. Let $\hat\mu$ be another measure on $G$, which will be the initial state for the random walk. The space $G^{\mathbb{N}}$ of trajectories for $X$ is endowed with a measure $\mathbb{P}_{\hat\mu}$ whose marginals are $\hat\mu * \mu^{*n}$ (where $*$ denotes convolution of measures; this is the distribution of the random walk after $n$ steps). There is also an equivalence relation $\sim$ on $G^{\mathbb{N}}$, which identifies $(x_n)$ to $(y_n)$ if there exist $k, N \in \mathbb{N}$ such that $x_{n+k} = y_n$ for all $n \ge N$ (the two trajectories have the same "tail"). The Poisson boundary of $(G, \mu)$ is then the measured space $\Gamma$ obtained as the quotient of $(G^{\mathbb{N}}, \mathbb{P}_{\hat\mu})$ by the equivalence relation $\sim$. [1]
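The marginals $\hat\mu * \mu^{*n}$ can be computed explicitly for finitely supported measures. A minimal sketch, taking $G = \mathbb{Z}$ with an arbitrary step distribution and $\hat\mu = \delta_0$:

```python
mu = {-1: 0.2, 0: 0.3, 1: 0.5}             # step distribution on Z
dist = {0: 1.0}                            # initial state: Dirac mass at 0

def convolve(nu, mu):
    """Convolution of two finitely supported measures on Z."""
    out = {}
    for g, p in nu.items():
        for s, q in mu.items():
            out[g + s] = out.get(g + s, 0.0) + p * q
    return out

for _ in range(10):
    dist = convolve(dist, mu)              # dist is now delta_0 * mu^{*10}

print(sum(dist.values()))                  # 1.0: still a probability measure
print(sum(g * p for g, p in dist.items())) # mean = 10 * (0.5 - 0.2) = 3.0
```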
If $\nu$ is the initial distribution of a random walk with step distribution $\mu$, then the measure $\nu_\Gamma$ on $\Gamma$ is obtained as the pushforward of $\mathbb{P}_\nu$ under the quotient map. It is a stationary measure for $\mu$, meaning that for every measurable set $A$ in the Poisson boundary

$$ \nu_\Gamma(A) = \sum_{g \in G} \mu(g) \, \nu_\Gamma(g^{-1} A). $$
It is possible to give an implicit definition of the Poisson boundary as the maximal $G$-space $(B, \nu)$ with a $\mu$-stationary measure $\nu$, satisfying the additional condition that the translates $X_n \nu$ almost surely converge weakly to a Dirac mass. [2]
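Stationarity can be verified exactly in a concrete case. For the simple random walk on the free group $F_2 = \langle a, b \rangle$ (whose Poisson boundary, discussed below, is the space of ends of its Cayley tree), the hitting measure gives a cylinder $[w]$ of ends mass $\frac14 (\frac13)^{|w|-1}$; the sketch below (the choice of test cylinders is arbitrary) checks $\nu = \mu * \nu$ on such sets:

```python
from fractions import Fraction

GENS = ["a", "A", "b", "B"]                # A, B denote the inverses of a, b
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def nu(w):
    """Hitting measure of the cylinder [w] of ends starting with reduced w."""
    return Fraction(1, 4) * Fraction(1, 3) ** (len(w) - 1)

def nu_translate(s, w):
    """nu(s^{-1}[w]): measure of the ends xi such that s.xi lies in [w]."""
    if w[0] == s:                          # cancellation with the first letter
        rest = w[1:]
        return nu(rest) if rest else 1 - nu(INV[s])
    return nu(INV[s] + w)                  # a cylinder one letter longer

for w in ["a", "ab", "aB"]:
    lhs = nu(w)
    rhs = sum(Fraction(1, 4) * nu_translate(s, w) for s in GENS)
    print(w, lhs == rhs)                   # True: nu(A) = sum_g mu(g) nu(g^{-1}A)
```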
Let $h$ be a $\mu$-harmonic function on $G$, meaning that $\sum_{g \in G} h(xg) \mu(g) = h(x)$ for all $x \in G$. Then the random variable $h(X_n)$ is a discrete-time martingale, and so (by the martingale convergence theorem, applicable when $h$ is positive or bounded) it converges almost surely. Denote by $\hat h$ the function on $\Gamma$ obtained by taking the limit of the values of $h$ along a trajectory (this is defined almost everywhere on $G^{\mathbb{N}}$ and is shift-invariant, hence descends to $\Gamma$). Let $x \in G$ and let $\nu_x$ be the measure obtained by the construction above with $\hat\mu = \delta_x$ (the Dirac mass at $x$). If $h$ is either positive or bounded then $\hat h$ is as well and we have the Poisson formula:

$$ h(x) = \int_\Gamma \hat h(\gamma) \, d\nu_x(\gamma). $$
This establishes a bijection between bounded $\mu$-harmonic functions on $G$ and essentially bounded measurable functions on $\Gamma$. In particular the Poisson boundary of $(G, \mu)$ is trivial, that is, reduced to a point, if and only if the only bounded $\mu$-harmonic functions on $G$ are the constants.
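The correspondence can be sampled for the simple random walk on $F_2$: the boundary set of ends beginning with $a$ yields the bounded harmonic function $h(x) = \nu_x([a])$. The Monte Carlo sketch below (the step cutoff and trial counts are heuristic choices) estimates $h$ and checks the mean-value property $h(x) = \sum_g h(xg)\mu(g)$ at the identity:

```python
import random

GENS = ["a", "A", "b", "B"]
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def step(word, g):
    """Multiply a reduced word by a generator, with free cancellation."""
    return word[:-1] if word and word[-1] == INV[g] else word + g

def ends_in_cyl_a(start, n_steps=200):
    """Heuristic: after many steps the first letter of the reduced word
    agrees with that of the limiting end; test whether it is 'a'."""
    w = start
    for _ in range(n_steps):
        w = step(w, random.choice(GENS))
    return bool(w) and w[0] == "a"

def h(x, trials=4000):
    """Monte Carlo estimate of h(x) = nu_x([a])."""
    return sum(ends_in_cyl_a(x) for _ in range(trials)) / trials

vals = [h(g) for g in GENS]
print(h(""), sum(vals) / 4)                # both approx 0.25: h is harmonic
```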
The general setting is that of a Markov operator on a measured space, a notion which generalises the Markov operator associated to a random walk. Much of the theory can be developed in this abstract and very general setting.
Let $X$ be a random walk on a discrete group $G$. Let $p_n(x, y)$ be the probability to get from $x$ to $y$ in $n$ steps, i.e. $p_n(x, y) = \mathbb{P}_{\delta_x}(X_n = y)$. The Green kernel is by definition:

$$ \mathcal{G}(x, y) = \sum_{n=0}^{+\infty} p_n(x, y). $$
If the walk is transient then this series is convergent for all $x, y$. Fix a base point $o \in G$ and define the Martin kernel by $K(x, y) = \frac{\mathcal{G}(x, y)}{\mathcal{G}(o, y)}$. The embedding $y \mapsto K(\cdot, y)$ has a relatively compact image for the topology of pointwise convergence, and the Martin compactification is the closure of this image. A point $\gamma$ of the Martin boundary is usually represented by the kernel $K(\cdot, \gamma)$.
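A sketch where everything is computable: the biased walk on $\mathbb{Z}$ with $\mathbb{P}(\text{step} = +1) = p > 1/2$ is transient, its $n$-step probabilities are binomial, and the two limits of the Martin kernels are the constant function $1$ (as $y \to +\infty$) and the positive harmonic function $x \mapsto (q/p)^x$ (as $y \to -\infty$), so the Martin boundary consists of two points. The bias and the truncation level below are arbitrary choices:

```python
import math

p, q, N = 0.7, 0.3, 400                    # bias and series truncation

def green(x, y):
    """Green kernel G(x, y) = sum_n p_n(x, y), truncated at N steps."""
    total = 0.0
    for n in range(N + 1):
        k2 = n + (y - x)                   # k = (n + y - x) / 2 right-steps
        if k2 < 0 or k2 % 2 or k2 // 2 > n:
            continue
        k = k2 // 2
        total += math.comb(n, k) * p ** k * q ** (n - k)
    return total

for y in (15, -15):                        # Martin kernel with basepoint o = 0
    print([round(green(x, y) / green(0, y), 4) for x in (1, 2, 3)])
# y = +15: values near 1;  y = -15: near (q/p)^x = 0.4286, 0.1837, 0.0787
```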
The Martin kernels are positive harmonic functions and every positive harmonic function can be expressed as an integral of functions on the boundary, that is for every positive harmonic function $h$ there is a measure $\mu_h$ on the Martin boundary $\partial_M G$ such that a Poisson-like formula holds:

$$ h(x) = \int_{\partial_M G} K(x, \gamma) \, d\mu_h(\gamma). $$
The measures $\mu_h$ are supported on the minimal Martin boundary, whose elements correspond to the minimal harmonic functions. A positive harmonic function $u$ is said to be minimal if for any harmonic function $v$ with $0 \le v \le u$ there exists $c \ge 0$ such that $v = c u$. [3]
There is actually a whole family of Martin compactifications. Define the Green generating series as

$$ \mathcal{G}(x, y \mid r) = \sum_{n=0}^{+\infty} r^n p_n(x, y). $$
Denote by $R$ the radius of convergence of this power series and define, for $0 < r \le R$, the $r$-Martin kernel by $K_r(x, y) = \frac{\mathcal{G}(x, y \mid r)}{\mathcal{G}(o, y \mid r)}$. The closure of the embedding $y \mapsto K_r(\cdot, y)$ is called the $r$-Martin compactification.
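Continuing the biased-walk sketch (self-contained, same illustrative parameters): here the return probabilities decay like $(2\sqrt{pq})^n$, so the radius of convergence is $R = 1/(2\sqrt{pq})$, and the truncated series remains finite for $r < R$:

```python
import math

p, q = 0.7, 0.3                            # illustrative bias, as above

def green_r(x, y, r, N=400):
    """Truncated Green generating series G(x, y | r) for the biased walk."""
    total = 0.0
    for n in range(N + 1):
        k2 = n + (y - x)                   # k = (n + y - x) / 2 right-steps
        if k2 < 0 or k2 % 2 or k2 // 2 > n:
            continue
        k = k2 // 2
        total += (r ** n) * math.comb(n, k) * p ** k * q ** (n - k)
    return total

R = 1 / (2 * math.sqrt(p * q))             # radius of convergence, approx 1.091
print(green_r(0, 0, 1.0))                  # r = 1 recovers G(0,0) = 1/(p-q) = 2.5
print(green_r(0, 0, 0.99 * R))             # finite; the series diverges at r = R
```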
For a Riemannian manifold $M$ the Martin boundary is constructed, when it exists, in the same way as above, using the Green function of the Laplace–Beltrami operator $\Delta_M$. In this case there is again a whole family of Martin compactifications associated to the operators $\Delta_M + \lambda$ for $0 \le \lambda \le \lambda_0$, where $\lambda_0$ is the bottom of the spectrum. Examples where this construction can be used to define a compactification are bounded domains in the plane and symmetric spaces of non-compact type. [4]
The measure corresponding to the constant function $1$ in the representation above is called the harmonic measure on the Martin boundary. With this measure the Martin boundary is isomorphic, as a measure space, to the Poisson boundary.
The Poisson and Martin boundaries are trivial for symmetric random walks on nilpotent groups. [5] On the other hand, when the random walk is non-centered, the study of the full Martin boundary, including the minimal functions, is far less conclusive.
For random walks on a semisimple Lie group (with step distribution absolutely continuous with respect to the Haar measure) the Poisson boundary is equal to the Furstenberg boundary. [6] The Poisson boundary of the Brownian motion on the associated symmetric space is also the Furstenberg boundary. [7] The full Martin boundary is also well-studied in these cases and can always be described in a geometric manner. For example, for groups of rank one (for example the isometry groups of hyperbolic spaces) the full Martin boundary is the same as the minimal Martin boundary (the situation in higher-rank groups is more complicated). [8]
The Poisson boundary of a Zariski-dense subgroup of a semisimple Lie group, for example a lattice, is also equal to the Furstenberg boundary of the group. [9]
For random walks on a hyperbolic group, under a finite entropy assumption on the step distribution, which always holds for a simple walk (a more general sufficient condition is that the first moment be finite), the Poisson boundary is always equal to the Gromov boundary when equipped with the hitting probability measure. For example, the Poisson boundary of a free group is the space of ends of its Cayley tree. [10] The identification of the full Martin boundary is more involved; in the case where the random walk has finite range (the step distribution is supported on a finite set) the Martin boundary coincides with the minimal Martin boundary and both coincide with the Gromov boundary.
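A final Monte Carlo sketch for the free group $F_2$ (cutoffs and counts heuristic, as before): the hitting probability measure on the space of ends gives each cylinder $[w]$ mass $\frac14(\frac13)^{|w|-1}$, and the empirical distribution of limiting ends reproduces it:

```python
import random

GENS = ["a", "A", "b", "B"]
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def limit_prefix(n_steps=200, k=2):
    """First k letters of the (approximate) limiting end of a simple random
    walk on F2, read off the reduced word after many steps."""
    w = ""
    for _ in range(n_steps):
        g = random.choice(GENS)
        w = w[:-1] if w and w[-1] == INV[g] else w + g
    return w[:k]

trials = 12000
freq = sum(limit_prefix() == "ab" for _ in range(trials)) / trials
print(freq)                                # approx 1/12 = 0.0833 for cylinder [ab]
```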