Uniform norm

[Image: Vector norm sup.svg. The perimeter of the square is the set of points in ℝ² where the sup norm equals a fixed positive constant.]

In mathematical analysis, the uniform norm (or sup norm) assigns to real- or complex-valued bounded functions f defined on a set S the non-negative number

\|f\|_\infty = \sup\{\, |f(s)| : s \in S \,\}.

This norm is also called the supremum norm, the Chebyshev norm, the infinity norm, or, when the supremum is in fact the maximum, the max norm. The name "uniform norm" derives from the fact that a sequence of functions (f_n) converges to f under the metric derived from the uniform norm if and only if (f_n) converges to f uniformly.[1]
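For instance, for the bounded function f(s) = s/(1 + s) on S = [0, ∞), one has \|f\|_\infty = 1 even though |f(s)| < 1 for every s in S; the supremum is approached but never attained, which is why the definition uses a supremum rather than a maximum on general domains.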

The metric generated by this norm is called the Chebyshev metric, after Pafnuty Chebyshev, who was first to systematically study it.

If we allow unbounded functions, this formula does not yield a norm or metric in a strict sense, although the resulting so-called extended metric still allows one to define a topology on the function space in question.

If f is a continuous function on a closed interval, or more generally a compact set, then it is bounded and the supremum in the above definition is attained by the Weierstrass extreme value theorem, so we can replace the supremum by the maximum. In this case, the norm is also called the maximum norm. In particular, for the case of a vector x = (x_1, \ldots, x_n) in finite-dimensional coordinate space, it takes the form

\|x\|_\infty = \max\{\, |x_1|, \ldots, |x_n| \,\}.
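As a quick numerical illustration, a minimal Python sketch (assuming NumPy is available; the sample vector is an arbitrary choice):

```python
import numpy as np

x = np.array([-3.0, 4.0, 1.0])

# Max norm of a finite-dimensional vector: the largest absolute entry.
max_norm = np.max(np.abs(x))            # 4.0

# NumPy exposes the same quantity as the vector "infinity norm".
assert max_norm == np.linalg.norm(x, np.inf)
print(max_norm)
```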

The reason for the subscript "∞" is that whenever f is continuous,

\lim_{p \to \infty} \|f\|_p = \|f\|_\infty,

where

\|f\|_p = \left( \int_D |f|^p \, d\mu \right)^{1/p},

where D is the domain of f (and the integral amounts to a sum if D is a discrete set).
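As a rough numerical illustration of this limit, a minimal Python sketch (assuming NumPy; here D is taken to be a finite set of three points, so the integral reduces to a sum):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

for p in (1, 2, 4, 10, 100):
    # For a discrete domain the p-norm integral reduces to a sum.
    p_norm = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(p, p_norm)

# The p-norms approach the sup (max) norm as p grows.
print("inf", np.max(np.abs(x)))
```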

The binary function

d(f, g) = \|f - g\|_\infty

is then a metric on the space of all bounded functions (and, obviously, any of its subsets) on a particular domain. A sequence \{ f_n : n = 1, 2, 3, \ldots \} converges uniformly to a function f if and only if

\lim_{n \to \infty} \|f_n - f\|_\infty = 0.
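For instance, on S = [0, 1) the functions f_n(x) = x^n converge pointwise to the zero function, but \|f_n - 0\|_\infty = \sup_{0 \le x < 1} x^n = 1 for every n, so the convergence is not uniform; restricted to [0, 1/2], however, \|f_n\|_\infty = 2^{-n} \to 0, and the convergence is uniform.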

We can define closed sets and closures of sets with respect to this metric topology; closed sets in the uniform norm are sometimes called uniformly closed and closures uniform closures. The uniform closure of a set of functions A is the space of all functions that can be approximated by uniformly convergent sequences of functions in A. For instance, one restatement of the Stone–Weierstrass theorem is that the set of all continuous functions on [a, b] is the uniform closure of the set of polynomials on [a, b].
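For example, the exponential function on [a, b] lies in this uniform closure, since its Taylor polynomials converge to it uniformly on any bounded interval.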

For complex-valued continuous functions over a compact space, the uniform norm turns the space of such functions into a C*-algebra.

Related Research Articles

In mathematics, a continuous function is a function that does not have any abrupt changes in value, known as discontinuities. More precisely, sufficiently small changes in the input of a continuous function result in arbitrarily small changes in its output. If not continuous, a function is said to be discontinuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity; during that century, attempts such as the epsilon–delta definition were made to formalize it.

In mathematical analysis, a metric space M is called complete if every Cauchy sequence of points in M has a limit that is also in M or, alternatively, if every Cauchy sequence in M converges in M.

In mathematics, a metric space is a set together with a metric on the set. The metric is a function that defines a concept of distance between any two members of the set, which are usually called points. The metric satisfies a few simple properties. Informally:

Real analysis

In mathematics, real analysis is the branch of mathematical analysis that studies the behavior of real numbers, sequences and series of real numbers, and real functions. Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.

In mathematics, a topological space is called separable if it contains a countable, dense subset; that is, there exists a sequence of elements of the space such that every nonempty open subset of the space contains at least one element of the sequence.
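For example, the real line with its usual topology is separable, because the rational numbers form a countable dense subset.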

Limit superior and limit inferior

In mathematics, the limit inferior and limit superior of a sequence can be thought of as limiting bounds on the sequence. They can be thought of in a similar fashion for a function. For a set, they are the infimum and supremum of the set's limit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also called infimum limit, limit infimum, liminf, inferior limit, lower limit, or inner limit; limit superior is also known as supremum limit, limit supremum, limsup, superior limit, upper limit, or outer limit.
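For example, the sequence a_n = (−1)^n + 1/n has limit superior 1 and limit inferior −1, since its limit points are exactly 1 and −1.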

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz. Lp spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, finance, engineering, and other disciplines.

In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions (f_n) converges uniformly to a limiting function f on a set E if, given any arbitrarily small positive number ε, a number N can be found such that each of the functions f_N, f_{N+1}, f_{N+2}, ... differs from f by no more than ε at every point x in E. Described in an informal way, if f_n converges to f uniformly, then the rate at which f_n(x) approaches f(x) is "uniform" throughout its domain in the following sense: in order to determine how large n needs to be to guarantee that f_n(x) falls within a certain distance ε of f(x), we do not need to know the value of x in question; there is a single value of N independent of x, such that choosing n to be larger than N will suffice.

In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.

In mathematical analysis, semi-continuity is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper semi-continuous at a point x0 if, roughly speaking, the function values for arguments near x0 are not much higher than f(x0).
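For example, the floor function x ↦ ⌊x⌋ is upper semi-continuous at every point, while the ceiling function x ↦ ⌈x⌉ is lower semi-continuous.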

Runge's phenomenon

In the mathematical field of numerical analysis, Runge's phenomenon is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. It was discovered by Carl David Tolmé Runge (1901) when exploring the behavior of errors when using polynomial interpolation to approximate certain functions. The discovery was important because it shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the Gibbs phenomenon in Fourier series approximations.

In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.

In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function. It is weaker than uniform convergence, to which it is often compared.

In mathematics, the limit of a sequence of sets A1, A2, ... is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set and (2) by convergence of a sequence of indicator functions which are themselves real-valued. As is the case with sequences of other objects, convergence is not necessary or even usual.

The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, and the Peter–Weyl theorem in harmonic analysis and various results concerning compactness of integral operators.

In mathematics, with special application to complex analysis, a normal family is a pre-compact subset of the space of continuous functions. Informally, this means that the functions in the family are not widely spread out, but rather stick together in a somewhat "clustered" manner. Sometimes, if each function in a normal family F satisfies a particular property , then the property also holds for each limit point of the set F.

In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are non-negative real constants C and α > 0 such that

|f(x) - f(y)| \le C \|x - y\|^{\alpha}

for all x and y in the domain of f.
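For example, f(x) = √|x| on the real line is Hölder continuous with α = 1/2 and C = 1, since |√|x| − √|y|| ≤ √|x − y| for all real x and y.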

Classical Wiener space

In mathematics, classical Wiener space is the collection of all continuous functions on a given domain, taking values in a metric space. Classical Wiener space is useful in the study of stochastic processes whose sample paths are continuous functions. It is named after the American mathematician Norbert Wiener.

In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence in measure, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for nN to ensure the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.

In mathematics, a càdlàg, RCLL, or corlol function is a function defined on the real numbers that is everywhere right-continuous and has left limits everywhere. Càdlàg functions are important in the study of stochastic processes that admit jumps, unlike Brownian motion, which has continuous sample paths. The collection of càdlàg functions on a given domain is known as Skorokhod space.

References

  1. Rudin, Walter (1964). Principles of Mathematical Analysis. New York: McGraw-Hill. p. 151. ISBN 0-07-054235-X.