Vanish at infinity

In mathematics, a function is said to vanish at infinity if its values approach 0 as the input grows without bound. There are two common ways to make this precise: one definition applies to functions defined on normed vector spaces, and the other to functions defined on locally compact spaces. Aside from this difference, both notions correspond to the intuitive idea of adding a point at infinity and requiring the values of the function to get arbitrarily close to zero as one approaches that point. In many cases this can be formalized by adjoining an (actual) point at infinity.

Definitions

A function f on a normed vector space is said to vanish at infinity if f(x) → 0 as the input grows without bound (that is, as ‖x‖ → ∞). Equivalently,

    lim_{x → −∞} f(x) = lim_{x → +∞} f(x) = 0

in the specific case of functions on the real line.

For example, the function

    f(x) = 1 / (x² + 1)

defined on the real line vanishes at infinity.
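As an illustrative sketch (not part of the original text), the definition can be checked numerically for this example: for any ε > 0 one can solve 1/(x² + 1) < ε in closed form to find a bound M beyond which the function stays below ε. The helper names below are of course ad hoc.

```python
import math

def f(x):
    # The example function that vanishes at infinity: f(x) = 1/(x^2 + 1).
    return 1.0 / (x * x + 1.0)

def threshold(eps):
    # Smallest M with |f(x)| < eps whenever |x| > M:
    # 1/(x^2 + 1) < eps  <=>  x^2 > 1/eps - 1  (for eps < 1).
    return math.sqrt(max(1.0 / eps - 1.0, 0.0))

eps = 1e-3
M = threshold(eps)          # about 31.6 for eps = 1e-3
# Every sampled point beyond M gives a value below eps.
assert all(f(x) < eps for x in (M + 1, 10 * M, 100 * M))
```

The closed-form threshold is specific to this f; in general the definition only asserts that some such M exists.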

Alternatively, a function f on a locally compact space X vanishes at infinity if, given any positive number ε, there exists a compact subset K ⊆ X such that

    |f(x)| < ε

whenever the point x lies outside of K.[1][2][3] In other words, for each positive number ε the set {x ∈ X : |f(x)| ≥ ε} is compact. For a given locally compact space X, the set of such functions

    f : X → 𝕂

valued in 𝕂, which is either ℝ or ℂ, forms a 𝕂-vector space with respect to pointwise scalar multiplication and addition, which is often denoted C₀(X).
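A minimal sketch of the vector-space structure, assuming X = ℝ and representing elements of C₀(ℝ) as plain Python callables (these functions and names are illustrative, not from the original text): a scalar multiple or pointwise sum of functions vanishing at infinity again vanishes at infinity.

```python
def scale(c, f):
    # Pointwise scalar multiplication in C0(R).
    return lambda x: c * f(x)

def add(f, g):
    # Pointwise addition in C0(R).
    return lambda x: f(x) + g(x)

f = lambda x: 1.0 / (x * x + 1.0)     # vanishes at infinity
g = lambda x: abs(x) / (x**4 + 1.0)   # also vanishes at infinity

h = add(scale(2.0, f), g)             # still vanishes at infinity

# Outside a large compact interval [-M, M], h is uniformly small.
M, eps = 1e4, 1e-3
assert all(abs(h(x)) < eps for x in (M + 1, -M - 1, 10 * M))
```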

As an example, the function

    h(x, y) = 1 / (x² + y²),

where x and y are reals greater than or equal to 1 and (x, y) is the corresponding point in [1, ∞) × [1, ∞), vanishes at infinity.

A normed space is locally compact if and only if it is finite-dimensional, so in this particular case there are two different definitions of a function "vanishing at infinity". The two definitions can be inconsistent with each other: if f(x) = ‖x‖⁻¹ on an infinite-dimensional Banach space, then f vanishes at infinity by the ‖x‖ → ∞ definition, but not by the compact-set definition, since the set {x : |f(x)| ≥ ε} = {x : ‖x‖ ≤ 1/ε} is a closed ball, which is not compact in infinite dimensions.
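The non-compactness of the closed unit ball in infinite dimensions can be made concrete with a sketch (hypothetical code, not from the original text): representing the standard basis vectors e_n of ℓ² sparsely as dictionaries, each has norm 1, so each lies in the superlevel set {x : |f(x)| ≥ 1/2} for f(x) = ‖x‖⁻¹, yet any two are at distance √2, so the sequence admits no convergent subsequence.

```python
import math

def norm(v):
    # l^2 norm of a sparse vector {index: value}.
    return math.sqrt(sum(c * c for c in v.values()))

def dist(u, v):
    # Distance between two sparse vectors.
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2 for k in keys))

basis = [{n: 1.0} for n in range(100)]   # e_0, e_1, ..., e_99

# Every e_n has norm 1, hence f(e_n) = 1/||e_n|| = 1 >= 1/2.
assert all(abs(norm(e) - 1.0) < 1e-12 for e in basis)

# Distinct basis vectors are sqrt(2) apart: no convergent subsequence,
# so the superlevel set containing them all cannot be compact.
assert all(abs(dist(basis[m], basis[n]) - math.sqrt(2)) < 1e-12
           for m in range(5) for n in range(5) if m != n)
```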

Rapidly decreasing

Refining the concept, one can look more closely at the rate at which functions vanish at infinity. One of the basic intuitions of mathematical analysis is that the Fourier transform interchanges smoothness conditions with rate conditions on vanishing at infinity. The rapidly decreasing test functions of tempered distribution theory are smooth functions that are

    o(|x|^(−N))

for all N, as |x| → ∞, and such that all their partial derivatives satisfy the same condition. This condition is set up so as to be self-dual under the Fourier transform, so that the corresponding distribution theory of tempered distributions will have the same property.
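A numerical sketch of rapid decrease (an illustration, not a proof): the Gaussian g(x) = exp(−x²) is a standard rapidly decreasing function, and multiplying it by any fixed power |x|^N still gives values that shrink to 0 as |x| grows.

```python
import math

def g(x):
    # The Gaussian, a standard rapidly decreasing (Schwartz) function.
    return math.exp(-x * x)

for N in (1, 5, 20):
    # |x|^N * g(x) still decays as x grows, for every fixed power N.
    values = [abs(x) ** N * g(x) for x in (10.0, 20.0, 40.0)]
    assert values[0] > values[1] > values[2]
    assert values[-1] < 1e-100
```

(For x = 40 the factor exp(−1600) underflows to 0.0 in double precision, which is consistent with the decay being checked here.)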

Citations

  1. "Function vanishing at infinity - Encyclopedia of Mathematics". www.encyclopediaofmath.org. Retrieved 2019-12-15.
  2. "vanishing at infinity in nLab". ncatlab.org. Retrieved 2019-12-15.
  3. "vanish at infinity". planetmath.org. Retrieved 2019-12-15.
