Young measure

In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. Young measures quantify the oscillation effect of the sequence in the limit. They have applications in the calculus of variations, especially in models from materials science, and in the study of nonlinear partial differential equations, as well as in various optimization and optimal control problems. They are named after Laurence Chisholm Young, who introduced them in 1937 in one dimension (curves) and in 1942 in higher dimensions. [1]

Young measures provide a solution to Hilbert’s twentieth problem, as a broad class of problems in the calculus of variations have solutions in the form of Young measures. [2]

Definition

Intuition

Young constructed the Young measure in order to complete the space of ordinary curves in the calculus of variations. That is, Young measures are "generalized curves". [2]

Consider the problem of minimizing
$$I(u) = \int_0^1 \left( (u'(x)^2 - 1)^2 + u(x)^2 \right) dx,$$
where $u$ is a function such that $u(0) = u(1) = 0$, and continuously differentiable. It is clear that we should pick $u$ to have value close to zero, and its slope close to $\pm 1$. That is, the curve should be a tight jagged line hugging close to the x-axis. No function can reach the minimum value $I(u) = 0$, but we can construct a sequence of functions $u_1, u_2, \ldots$ that are increasingly jagged, such that $I(u_n) \to 0$.
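A minimal numerical sketch of one such sequence (a sawtooth with $n$ teeth and slope $\pm 1$; the particular construction is an illustrative choice, not the only one):

```python
import numpy as np

def u(n, x):
    """Sawtooth with n teeth: the distance from x to the nearest
    multiple of 1/n.  Slope is +-1 almost everywhere, u(0) = u(1) = 0."""
    return np.abs(x * n - np.round(x * n)) / n

def I(n, m=400_001):
    """Approximate I(u_n) on a uniform grid.  Since u_n' = +-1 almost
    everywhere, the (u'^2 - 1)^2 term vanishes and only the u^2 term
    contributes (exactly 1/(12 n^2))."""
    x = np.linspace(0.0, 1.0, m)
    return np.mean(u(n, x) ** 2)

for n in (1, 4, 16, 64):
    print(n, I(n))   # tends to 0 like 1/(12 n^2)
```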

The pointwise limit $\lim_n u_n$ is identically zero, but the pointwise limit $\lim_n u_n'$ does not exist. Instead, it is a fine mist that has half of its weight on $-1$, and the other half on $+1$.

Suppose that $F$ is a functional defined by $F(u) = \int_0^1 f(x, u(x), u'(x))\,dx$, where $f$ is continuous; then
$$\lim_{n \to \infty} F(u_n) = \int_0^1 \frac{1}{2}\big(f(x, 0, -1) + f(x, 0, +1)\big)\,dx,$$
so in the weak sense, we can define $u_\infty$ to be a "function" whose value is zero and whose derivative is the measure $\frac{1}{2}(\delta_{-1} + \delta_{+1})$. In particular, it would mean that $F(u_\infty) = \int_0^1 \frac{1}{2}\big(f(x, 0, -1) + f(x, 0, +1)\big)\,dx$.
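This limit can be checked numerically; the continuous integrand $f$ below is an arbitrary choice for illustration:

```python
import numpy as np

def f(x, u, p):               # any continuous integrand works; this one is arbitrary
    return np.cos(p) + x * u

x = np.linspace(0.0, 1.0, 400_001)
# Young-measure prediction: int_0^1 (f(x,0,-1) + f(x,0,+1))/2 dx = cos(1)
target = np.mean(0.5 * (f(x, 0.0, -1.0) + f(x, 0.0, 1.0)))

for n in (4, 64, 1024):
    un = np.abs(x * n - np.round(x * n)) / n   # sawtooth u_n as above
    pn = np.sign(x * n - np.round(x * n))      # u_n' = +-1 a.e.
    print(n, np.mean(f(x, un, pn)), target)    # F(u_n) -> cos(1) ~ 0.5403
```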

Motivation

The definition of Young measures is motivated by the following theorem: Let $m$, $n$ be arbitrary positive integers, let $U$ be an open bounded subset of $\mathbb{R}^n$, and let $\{f_k\}_{k=1}^\infty$ be a bounded sequence in $L^p(U, \mathbb{R}^m)$, $1 \le p \le \infty$. Then there exists a subsequence $\{f_{k_j}\}_{j=1}^\infty$ and, for almost every $x \in U$, a Borel probability measure $\nu_x$ on $\mathbb{R}^m$ such that for each $F \in C(\mathbb{R}^m)$ we have
$$F(f_{k_j}) \rightharpoonup \int_{\mathbb{R}^m} F(y)\,d\nu_x(y)$$
weakly in $L^p(U)$ if the limit exists (or weakly* in $L^\infty(U)$ in the case $p = \infty$). The measures $\nu_x$ are called the Young measures generated by the sequence $\{f_k\}$.

A partial converse is also true: If for each $x \in U$ we have a Borel probability measure $\nu_x$ on $\mathbb{R}^m$ such that $\int_U \int_{\mathbb{R}^m} \|y\|^p \, d\nu_x(y)\,dx < \infty$, then there exists a sequence $\{f_k\}_{k=1}^\infty$, bounded in $L^p(U, \mathbb{R}^m)$, that has the same weak convergence property as above.

More generally, for any Carathéodory function $G \colon U \times \mathbb{R}^m \to \mathbb{R}$, the limit
$$\lim_{j \to \infty} \int_U G(x, f_{k_j}(x))\,dx,$$
if it exists, will be given by [3]
$$\int_U \int_{\mathbb{R}^m} G(x, y)\,d\nu_x(y)\,dx.$$
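As a concrete sanity check, take $f_k(x) = \sin(kx)$ on $U = (0, 2\pi)$, whose Young measure is the arcsine distribution computed in the Examples section below; the integrand $G$ here is an arbitrary illustrative choice:

```python
import numpy as np

# Caratheodory integrand G(x, y) = x**2 * y**2 on U = (0, 2*pi), tested
# against f_k(x) = sin(k*x).  The Young measure of this sequence is the
# arcsine law on [-1, 1] (see the Examples section below); its second
# moment is 1/2, so the predicted limit is
#   int_0^{2pi} x**2 * (1/2) dx = 4*pi**3/3.
x = np.linspace(0.0, 2 * np.pi, 2_000_001)
predicted = 4 * np.pi ** 3 / 3
for k in (1, 4, 16, 64):
    approx = np.mean(x ** 2 * np.sin(k * x) ** 2) * 2 * np.pi
    print(k, approx, predicted)   # exact value is 4*pi**3/3 - pi/(2*k**2)
```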

Young's original idea in the case $G \in C_0(U \times \mathbb{R}^m)$ was to consider for each integer $j \geq 1$ the uniform measure
$$\Gamma_j := (\operatorname{id}, f_j)_{\#}\big(\mathcal{L}^n \llcorner U\big),$$
concentrated on the graph of the function $f_j$ (here, $\mathcal{L}^n \llcorner U$ is the restriction of the Lebesgue measure on $U$). By taking the weak* limit of these measures as elements of $C_0(U \times \mathbb{R}^m)^{*}$, we have
$$\langle \Gamma_j, G \rangle = \int_U G(x, f_j(x))\,dx \to \langle \Gamma, G \rangle,$$
where $\Gamma$ is the mentioned weak* limit. After a disintegration of the measure $\Gamma$ on the product space $U \times \mathbb{R}^m$, we get the parameterized measure $\nu_x$.
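An empirical sketch of this construction, with $f_j(x) = \sin(jx)$ and all numerical parameters chosen purely for illustration: disintegrating the (approximate) limit measure amounts to conditioning on a thin vertical strip in $x$, and the conditional histogram of $y$ comes out the same in every strip:

```python
import numpy as np

# Gamma_j is the image of Lebesgue measure on U = (0, 2*pi) under the
# graph map x -> (x, sin(j*x)).  We sample it and condition on thin
# vertical strips; each conditional histogram approximates nu_x.
j = 500
x = np.random.uniform(0.0, 2 * np.pi, 1_000_000)   # sample Lebesgue on U
y = np.sin(j * x)                                   # lift to the graph

for a, b in [(0.0, 0.1), (3.0, 3.1), (6.0, 6.1)]:  # three strips in x
    ys = y[(x >= a) & (x < b)]
    hist, _ = np.histogram(ys, bins=10, range=(-1, 1), density=True)
    print((a, b), np.round(hist, 2))   # rows are roughly identical
```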

General definition

Let $m$, $n$ be arbitrary positive integers, let $U$ be an open and bounded subset of $\mathbb{R}^n$, and let $p \geq 1$. A Young measure (with finite $p$-moments) is a family of Borel probability measures $\{\nu_x : x \in U\}$ on $\mathbb{R}^m$ such that $\int_U \int_{\mathbb{R}^m} \|y\|^p \, d\nu_x(y)\,dx < \infty$.

Examples

Pointwise converging sequence

A trivial example of a Young measure is when the sequence $f_n$ is bounded in $L^\infty(U, \mathbb{R}^m)$ and converges pointwise almost everywhere in $U$ to a function $f$. The Young measure is then the Dirac measure
$$\nu_x = \delta_{f(x)}, \quad x \in U.$$
Indeed, by the dominated convergence theorem, $F(f_n(x))$ converges weakly* in $L^\infty(U)$ to
$$\int_{\mathbb{R}^m} F(y)\,d\nu_x(y) = F(f(x))$$
for any $F \in C(\mathbb{R}^m)$.
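A quick numerical illustration, with an arbitrarily chosen pointwise-convergent sequence and test functions:

```python
import numpy as np

# f_n(x) = x + sin(n*x)/n converges pointwise (indeed uniformly) to
# f(x) = x on U = (0, 1), so the Young measure is delta_{f(x)}: testing
# F(f_n) against any L^1 weight phi reproduces int phi(x) F(f(x)) dx.
F = np.cos                       # any continuous F
phi = lambda x: x ** 2           # an L^1 test function
x = np.linspace(0.0, 1.0, 200_001)
target = np.mean(phi(x) * F(x))  # int_0^1 phi(x) F(f(x)) dx
for n in (1, 10, 100):
    fn = x + np.sin(n * x) / n
    print(n, np.mean(phi(x) * F(fn)), target)
```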

Sequence of sines

A less trivial example is the sequence
$$f_n(x) = \sin(nx), \quad x \in (0, 2\pi).$$
The corresponding Young measure satisfies [4]
$$\nu_x(E) = \frac{1}{\pi} \int_{E \cap [-1,1]} \frac{dy}{\sqrt{1 - y^2}},$$
for any measurable set $E$, independently of $x \in (0, 2\pi)$. In other words, for any $F \in C(\mathbb{R})$:
$$F(f_n) \rightharpoonup^{*} \frac{1}{\pi} \int_{-1}^{1} \frac{F(y)}{\sqrt{1 - y^2}}\,dy$$
in $L^\infty((0, 2\pi))$. Here, the Young measure does not depend on $x$ and so the weak* limit is always a constant.

To see this intuitively, consider that at the limit of large $n$, a small rectangle $[x, x + \delta x] \times [y, y + \delta y]$ would capture a part of the curve of $f_n$. Take that captured part, and project it down to the x-axis. The length of that projection is approximately $\frac{\delta x \, \delta y}{\pi \sqrt{1 - y^2}}$, which means that $\lim_n f_n$ should look like a fine mist that has probability density $\frac{1}{\pi \sqrt{1 - y^2}}$ at every height $y \in (-1, 1)$.
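This can be checked empirically by histogramming the values of $f_n$ (the values of $n$ and the sample sizes below are arbitrary):

```python
import numpy as np

# Histogram of values of f_n(x) = sin(n*x), sampled uniformly on
# (0, 2*pi), compared with the arcsine density 1/(pi*sqrt(1-y^2)).
n = 1000
x = np.random.uniform(0.0, 2 * np.pi, 1_000_000)
vals = np.sin(n * x)

hist, edges = np.histogram(vals, bins=20, range=(-1, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
for y, h in zip(mid, hist):
    print(f"{y:+.2f}  empirical {h:.3f}  arcsine {1/(np.pi*np.sqrt(1-y*y)):.3f}")
```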

Minimizing sequence

For every asymptotically minimizing sequence $u_n$ of
$$I(u) = \int_0^1 \big( (u'(x)^2 - 1)^2 + u(x)^2 \big)\,dx$$
subject to $u(0) = u(1) = 0$ (that is, the sequence satisfies $\lim_n I(u_n) = \inf I = 0$), and perhaps after passing to a subsequence, the sequence of derivatives $u_n'$ generates Young measures of the form
$$\nu_x = \frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_{+1}.$$
This captures the essential features of all minimizing sequences to this problem, namely, their derivatives $u_n'(x)$ will tend to concentrate along the minima $\{-1, +1\}$ of the integrand $(u'(x)^2 - 1)^2 + u(x)^2$.

If we take the limit $u_\infty$ of $u_n$ in the generalized sense of the Intuition section, then $u_\infty$ has value zero and derivative $\nu_x = \frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_{+1}$, which means $I(u_\infty) = 0$.
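For the sawtooth near-minimizers used in the Intuition section above, the even split of the derivative's mass between $-1$ and $+1$ is easy to verify numerically:

```python
import numpy as np

# The sawtooth u_n has slope +-1 on alternating intervals of length
# 1/(2n); the derivatives split their mass evenly between -1 and +1,
# matching nu_x = (delta_{-1} + delta_{+1}) / 2.
x = np.linspace(0.0, 1.0, 1_000_001)
for n in (4, 64, 1024):
    slope = np.sign(x * n - np.round(x * n))     # u_n'(x), +-1 a.e.
    print(n, np.mean(slope == 1.0), np.mean(slope == -1.0))  # ~0.5, ~0.5
```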

References

  1. Young, L. C. (1942). "Generalized Surfaces in the Calculus of Variations". Annals of Mathematics. 43 (1): 84–103. doi:10.2307/1968882. ISSN   0003-486X. JSTOR   1968882.
  2. Balder, Erik J. (1995). "Lectures on Young measures". Cahiers de Mathématiques de la Décision. 9517.
  3. Pedregal, Pablo (1997). Parametrized measures and variational principles. Basel: Birkhäuser Verlag. ISBN   978-3-0348-8886-8. OCLC   812613013.
  4. Dacorogna, Bernard (2006). Weak continuity and weak lower semicontinuity of non-linear functionals. Springer.