In mathematical analysis, epi-convergence is a type of convergence for real-valued and extended real-valued functions.
Epi-convergence is important because it is the appropriate notion of convergence with which to approximate minimization problems in the field of mathematical optimization. The symmetric notion of hypo-convergence is appropriate for maximization problems. Mosco convergence is a generalization of epi-convergence to infinite dimensional spaces.
Let $X$ be a metric space, and let $f_n \colon X \to \mathbb{R}$ be a real-valued function for each natural number $n$. We say that the sequence $(f_n)$ epi-converges to a function $f \colon X \to \mathbb{R}$ if for each $x \in X$,
$$\liminf_{n \to \infty} f_n(x_n) \ge f(x) \quad \text{for every sequence } x_n \to x,$$
and
$$\limsup_{n \to \infty} f_n(x_n) \le f(x) \quad \text{for some sequence } x_n \to x.$$
The following extension allows epi-convergence to be applied to a sequence of functions with non-constant domain.
Denote by $\overline{\mathbb{R}} = [-\infty, +\infty]$ the extended real numbers. Let $f_n \colon X \to \overline{\mathbb{R}}$ be a function for each $n \in \mathbb{N}$. The sequence $(f_n)$ epi-converges to $f \colon X \to \overline{\mathbb{R}}$ if for each $x \in X$,
$$\liminf_{n \to \infty} f_n(x_n) \ge f(x) \quad \text{for every sequence } x_n \to x,$$
and
$$\limsup_{n \to \infty} f_n(x_n) \le f(x) \quad \text{for some sequence } x_n \to x.$$
In fact, epi-convergence coincides with Γ-convergence in first countable spaces.
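As a numerical sanity check of the definition (a sketch with a hypothetical example family, not part of the original text), consider $f_n(x) = (x - 1/n)^2$, which epi-converges to $f(x) = x^2$: the "recovery sequence" $x_n = x + 1/n$ attains the limsup condition exactly, and since the convergence is locally uniform, the liminf condition holds along every sequence $x_n \to x$.

```python
import numpy as np

# Hypothetical example family: f_n(x) = (x - 1/n)^2 epi-converges to f(x) = x^2.
def f_n(n, x):
    return (x - 1.0 / n) ** 2

def f(x):
    return x ** 2

rng = np.random.default_rng(0)
N = 10_000
ns = np.arange(1, N + 1)

for x in [-1.0, 0.0, 0.5, 2.0]:
    # limsup condition: along the recovery sequence x_n = x + 1/n,
    # f_n(x_n) equals f(x) exactly for every n.
    recovery = f_n(ns, x + 1.0 / ns)
    assert np.allclose(recovery, f(x))

    # liminf condition (spot check): along an arbitrary sequence x_n -> x
    # (random noise shrinking like 1/n), the tail of f_n(x_n) cannot drop
    # noticeably below f(x).
    xs = x + rng.standard_normal(N) / ns
    tail = f_n(ns[5000:], xs[5000:])
    assert tail.min() >= f(x) - 1e-2
```

This only samples finitely many sequences, so it illustrates rather than proves the two conditions.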
Epi-convergence is the appropriate topology with which to approximate minimization problems. For maximization problems one uses the symmetric notion of hypo-convergence. The sequence $(f_n)$ hypo-converges to $f$ if
$$\limsup_{n \to \infty} f_n(x_n) \le f(x) \quad \text{for every sequence } x_n \to x,$$
and
$$\liminf_{n \to \infty} f_n(x_n) \ge f(x) \quad \text{for some sequence } x_n \to x.$$
Assume we have a difficult minimization problem
$$\inf_{x \in C} g(x),$$
where $g \colon X \to \mathbb{R}$ and $C \subseteq X$. We can attempt to approximate this problem by a sequence of easier problems
$$\inf_{x \in C_n} g_n(x)$$
for functions $g_n \colon X \to \mathbb{R}$ and sets $C_n \subseteq X$.
Epi-convergence provides an answer to the question: In what sense should the approximations converge to the original problem in order to guarantee that approximate solutions converge to a solution of the original?
We can embed these optimization problems into the epi-convergence framework by defining extended real-valued functions
$$f(x) = \begin{cases} g(x), & x \in C, \\ +\infty, & x \notin C, \end{cases} \qquad f_n(x) = \begin{cases} g_n(x), & x \in C_n, \\ +\infty, & x \notin C_n, \end{cases}$$
so that the problems $\inf_{x \in X} f(x)$ and $\inf_{x \in X} f_n(x)$ are equivalent to the original and approximate problems, respectively.
If $(f_n)$ epi-converges to $f$, then
$$\limsup_{n \to \infty} \left( \inf_{x \in X} f_n(x) \right) \le \inf_{x \in X} f(x).$$
Furthermore, if $x$ is a limit point of minimizers of the $f_n$, then $x$ is a minimizer of $f$. In this sense, epi-convergence is the weakest notion of convergence for which this result holds.
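The convergence of minimizers can be illustrated numerically (a sketch with an example family chosen here, not taken from the original text): the smooth functions $f_n(x) = \sqrt{x^2 + 1/n}$ converge uniformly, hence epi-converge, to $f(x) = |x|$, so their minimizers and optimal values converge to those of $|x|$.

```python
import numpy as np

# Approximate minimization problems whose epi-limit is f(x) = |x|.
# f_n(x) = sqrt(x^2 + 1/n) smooths the kink at 0; these epi-converge to |x|,
# so approximate minimizers and optimal values converge to argmin/min of |x|.
grid = np.linspace(-2.0, 2.0, 4001)

for n in [1, 10, 100, 1000]:
    f_n = np.sqrt(grid ** 2 + 1.0 / n)
    x_n = grid[np.argmin(f_n)]   # approximate minimizer on the grid
    print(n, x_n, f_n.min())
# The minimizers x_n sit at 0 and the optimal values sqrt(1/n) -> 0 = min |x|.
```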
L'Hôpital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
In mathematics, the infimum of a subset $S$ of a partially ordered set $P$ is the greatest element in $P$ that is less than or equal to each element of $S$, if such an element exists. Consequently, the term greatest lower bound is also commonly used. The supremum of a subset $S$ of a partially ordered set $P$ is the least element in $P$ that is greater than or equal to each element of $S$, if such an element exists. Consequently, the supremum is also referred to as the least upper bound.
In mathematics, the limit inferior and limit superior of a sequence can be thought of as limiting bounds on the sequence. They can be thought of in a similar fashion for a function. For a set, they are the infimum and supremum of the set's limit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also called infimum limit, limit infimum, liminf, inferior limit, lower limit, or inner limit; limit superior is also known as supremum limit, limit supremum, limsup, superior limit, upper limit, or outer limit.
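A small numeric illustration may help (a sketch with an example sequence chosen here, not from the original text): for $a_k = (-1)^k (1 + 1/k)$, the tail infima increase to the limit inferior $-1$ and the tail suprema decrease to the limit superior $+1$.

```python
import numpy as np

# a_k = (-1)^k * (1 + 1/k) oscillates between values approaching -1 and +1,
# so liminf a_k = -1 and limsup a_k = +1.
k = np.arange(1, 100_001)
a = (-1.0) ** k * (1.0 + 1.0 / k)

# Suffix (tail) infima and suprema: inf_{k >= m} a_k increases to the liminf,
# sup_{k >= m} a_k decreases to the limsup, as m grows.
tail_inf = np.minimum.accumulate(a[::-1])[::-1]
tail_sup = np.maximum.accumulate(a[::-1])[::-1]

liminf = tail_inf[-1000]   # a late tail approximates the limit of tail infima
limsup = tail_sup[-1000]
print(liminf, limsup)      # close to -1 and +1
```

Using a late finite tail only approximates the true limits, which are defined via $m \to \infty$.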
In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statement to the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability of either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.
In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.
In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou.
In mathematics, the limit of a sequence of sets is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set and (2) by convergence of a sequence of indicator functions which are themselves real-valued. As is the case with sequences of other objects, convergence is not necessary or even usual.
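The indicator-function viewpoint can be sketched in a few lines (a finite-tail approximation with an example family chosen here, not from the original text): the limit inferior collects elements eventually in every set, the limit superior collects elements in infinitely many sets.

```python
# Example family: A_k = {0, k mod 2}. Every A_k contains 0,
# while 1 belongs to A_k only for odd k.
A = [{0, k % 2} for k in range(1, 101)]

universe = {0, 1}
# liminf: elements eventually in all A_k (checked here on a finite tail).
liminf = {x for x in universe if all(x in Ak for Ak in A[10:])}
# limsup: elements in infinitely many A_k (approximated: in some late A_k).
limsup = {x for x in universe if any(x in Ak for Ak in A[50:])}
print(liminf, limsup)   # {0} and {0, 1}
```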
In mathematics, the arguments of the maxima and arguments of the minima are the input points at which a function's output value is maximized and minimized, respectively. While the arguments are defined over the domain of a function, the output is part of its codomain.
In statistics, an empirical distribution function is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.
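The step-function description translates directly into code (a minimal sketch; the helper `ecdf` is a name introduced here for illustration):

```python
import numpy as np

# Empirical distribution function of a sample: a step function that jumps
# by 1/n at each of the n data points.
def ecdf(sample):
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    def F(t):
        # fraction of observations less than or equal to t
        return np.searchsorted(xs, t, side="right") / n
    return F

F = ecdf([3.0, 1.0, 2.0, 2.0])
print(F(0.5), F(1.0), F(2.0), F(10.0))   # 0.0 0.25 0.75 1.0
```

Note the repeated data point 2.0 contributes a jump of 2/n, consistent with counting observations with multiplicity.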
In the field of mathematical analysis for the calculus of variations, Γ-convergence (Gamma-convergence) is a notion of convergence for functionals. It was introduced by Ennio De Giorgi.
In mathematics, the Fatou–Lebesgue theorem establishes a chain of inequalities relating the integrals of the limit inferior and the limit superior of a sequence of functions to the limit inferior and the limit superior of integrals of these functions. The theorem is named after Pierre Fatou and Henri Léon Lebesgue.
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero.
In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence of measures, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.
In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as "weak Γ-liminf and strong Γ-limsup" convergence since it uses both the weak and strong topologies on a topological vector space X. In finite-dimensional spaces, Mosco convergence coincides with epi-convergence, while in infinite-dimensional ones, Mosco convergence is a strictly stronger property.
In mathematics, Kuratowski convergence or Painlevé-Kuratowski convergence is a notion of convergence for subsets of a topological space. First introduced by Paul Painlevé in lectures on mathematical analysis in 1902, the concept was popularized in texts by Felix Hausdorff and Kazimierz Kuratowski. Intuitively, the Kuratowski limit of a sequence of sets is where the sets "accumulate".
In the field of mathematical analysis, a general Dirichlet series is an infinite series that takes the form
$$\sum_{n=1}^{\infty} a_n e^{-\lambda_n s},$$
where $a_n$ and $s$ are complex numbers and $(\lambda_n)$ is a strictly increasing sequence of nonnegative real numbers that tends to infinity.
This article is supplemental for “Convergence of random variables” and provides proofs for selected results.
In mathematics, a càdlàg, RCLL, or corlol function is a function defined on the real numbers that is everywhere right-continuous and has left limits everywhere. Càdlàg functions are important in the study of stochastic processes that admit jumps, unlike Brownian motion, which has continuous sample paths. The collection of càdlàg functions on a given domain is known as Skorokhod space.
In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
In mathematical optimization, the proximal operator is an operator associated with a proper, lower semi-continuous convex function $f$ from a Hilbert space $\mathcal{X}$ to $[-\infty, +\infty]$, and is defined by:
$$\operatorname{prox}_f(v) = \arg\min_{x \in \mathcal{X}} \left( f(x) + \frac{1}{2} \| x - v \|_{\mathcal{X}}^2 \right).$$
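A standard concrete instance (a sketch; the example function and helper name are chosen here for illustration): for $f(x) = \lambda |x|$ on $\mathbb{R}$, the proximal operator has the closed form known as soft-thresholding, which can be cross-checked against brute-force minimization of the prox objective.

```python
import numpy as np

# For f(x) = lam * |x| (proper, lower semicontinuous, convex on R), the
# proximal operator prox_f(v) = argmin_x ( lam*|x| + (1/2)*(x - v)^2 )
# has the closed form "soft-thresholding": sign(v) * max(|v| - lam, 0).
def prox_abs(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Cross-check against a brute-force grid minimization of the prox objective.
lam, v = 0.7, 1.5
xs = np.linspace(-5.0, 5.0, 200_001)
brute = xs[np.argmin(lam * np.abs(xs) + 0.5 * (xs - v) ** 2)]
print(prox_abs(v, lam), brute)   # both near 0.8
```

The closed form follows from the first-order optimality condition for the (strongly convex) prox objective.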