Limit (mathematics)

In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. [1] Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.


The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.

In formulas, a limit of a function is usually written as

lim_{x→c} f(x) = L

and is read as "the limit of f of x as x approaches c equals L". The fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→), as in

f(x) → L as x → c,

which reads "f of x tends to L as x tends to c". [2]

Limit of a function

[Figure: Whenever a point x is within a distance δ of c, the value f(x) is within a distance ε of L.]

[Figure: For all x > S, the value f(x) is within a distance ε of L.]

Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression

lim_{x→c} f(x) = L

means that f(x) can be made as close to L as desired by making x sufficiently close to c. [3] In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L".

Augustin-Louis Cauchy in 1821, [4] followed by Karl Weierstrass, formalized the definition of the limit of a function, which became known as the (ε, δ)-definition of limit. The definition uses ε (the lowercase Greek letter epsilon) [2] to represent any small positive number, so that "f(x) becomes arbitrarily close to L" means that f(x) eventually lies in the interval (L − ε, L + ε), which can also be written using the absolute value sign as |f(x) − L| < ε. [4] The phrase "as x approaches c" then indicates that we refer to values of x whose distance from c is less than some positive number δ (the lowercase Greek letter delta), that is, values of x within either (c − δ, c) or (c, c + δ), which can be expressed with 0 < |x − c| < δ. The first inequality means that the distance between x and c is greater than 0 and that x ≠ c, while the second indicates that x is within distance δ of c. [4]
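As a sketch, the definition can be spot-checked numerically. The function f(x) = 2x with c = 1 and L = 2 is an illustrative choice, not taken from the text; for this f, |f(x) − L| = 2|x − c|, so δ = ε/2 always works.

```python
# Numerical spot-check of the (epsilon, delta)-definition for the
# illustrative choice f(x) = 2x, c = 1, L = 2 (an assumption for this
# sketch, not taken from the text).  For this f, delta = epsilon / 2 works.

def f(x):
    return 2 * x

def satisfies_epsilon_delta(c, L, epsilon, delta, samples=10_000):
    """Check |f(x) - L| < epsilon for sampled x with 0 < |x - c| < delta."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # 0 < offset < delta
        for x in (c - offset, c + offset):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

for epsilon in (0.5, 0.1, 0.001):
    assert satisfies_epsilon_delta(1.0, 2.0, epsilon, delta=epsilon / 2)
```

Sampling can only falsify a candidate δ, never prove it; here the inequality |2x − 2| = 2|x − 1| < 2δ = ε is what actually verifies the choice δ = ε/2.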

The above definition of a limit applies even if f(c) ≠ L. Indeed, the function f need not even be defined at c.

For example, if

f(x) = (x² − 1)/(x − 1),

then f(1) is not defined (see indeterminate forms), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2: [5]


Thus, f(x) can be made arbitrarily close to the limit of 2 simply by making x sufficiently close to 1.
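Assuming the example function is f(x) = (x² − 1)/(x − 1), consistent with the simplification to x + 1 used below, a quick numerical sketch shows the values approaching 2:

```python
# Evaluate f(x) = (x**2 - 1) / (x - 1) at points approaching 1 from
# both sides and watch the values approach the limit 2.

def f(x):
    return (x**2 - 1) / (x - 1)

for x in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1):
    print(f"f({x}) = {f(x):.4f}")
```

Note that f(1) itself raises a ZeroDivisionError, mirroring the fact that f is undefined at 1 even though the limit there exists.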

In other words,

lim_{x→1} (x² − 1)/(x − 1) = 2.

This can also be calculated algebraically, since (x² − 1)/(x − 1) = x + 1 for all real numbers x ≠ 1.

Now, since x + 1 is continuous in x at 1, we can plug in 1 for x, leading to the equation

lim_{x→1} (x² − 1)/(x − 1) = lim_{x→1} (x + 1) = 1 + 1 = 2.

In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function

f(x) = (2x − 1)/x.

As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish by making x sufficiently large. So in this case, the limit of f(x) as x approaches infinity is 2, or in mathematical notation,

lim_{x→∞} f(x) = 2.
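As an illustrative choice (an assumption for this sketch; any function with this limit would do), take f(x) = (2x − 1)/x = 2 − 1/x and evaluate it at increasingly large x:

```python
# f(x) = (2x - 1)/x = 2 - 1/x approaches 2 as x grows without bound.

def f(x):
    return (2 * x - 1) / x

for x in (10, 1_000, 1_000_000):
    print(f"f({x}) = {f(x)}")
```

The error |f(x) − 2| = 1/x can be made smaller than any given ε simply by taking x > 1/ε, which is exactly the limit-at-infinity condition.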

Limit of a sequence

Consider the following sequence: 1.79, 1.799, 1.7999, … It can be observed that the numbers are "approaching" 1.8, the limit of the sequence.

Formally, suppose a1, a2, … is a sequence of real numbers. One can state that the real number L is the limit of this sequence, namely:

lim_{n→∞} an = L,
which is read as

"The limit of an as n approaches infinity equals L"

if and only if

For every real number ε > 0, there exists a natural number N such that for all n > N, we have |an − L| < ε. [6]

Intuitively, this means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, then it is called convergent, and if it does not, then it is divergent. One can show that a convergent sequence has only one limit.
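For the sequence 1.79, 1.799, 1.7999, … above, whose nth term can be written an = 1.8 − 10^−(n+1), a short sketch finds an N that works for a given ε:

```python
# The sequence 1.79, 1.799, 1.7999, ... has nth term
# a_n = 1.8 - 10**-(n + 1) and limit L = 1.8.  Given epsilon, find the
# smallest N such that |a_n - L| < epsilon for all n > N.

def a(n):
    return 1.8 - 10 ** -(n + 1)

def find_N(epsilon, L=1.8):
    """Smallest N with |a(n) - L| < epsilon for all n > N.
    Since |a(n) - L| = 10**-(n + 1) decreases monotonically,
    checking n = N + 1 is enough."""
    N = 0
    while abs(a(N + 1) - L) >= epsilon:
        N += 1
    return N

print(find_N(0.005))   # N = 1: every term from a_2 onward is within 0.005 of 1.8
```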

The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n approaches infinity of a sequence {an} is simply the limit at infinity of a function a(n) defined on the natural numbers. On the other hand, if X is the domain of a function f(x) and if the limit as n approaches infinity of f(xn) is L for every sequence of points {xn} in X − {x0} which converges to x0, then the limit of the function f(x) as x approaches x0 is L. [7] One such sequence would be {x0 + 1/n}.

Limit as "standard part"

In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence (an) can be expressed as the standard part of the value aH of the natural extension of the sequence at an infinite hypernatural index n = H. Thus,

lim_{n→∞} an = st(aH).

Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal represented in the ultrapower construction by a Cauchy sequence (un) is simply the limit of that sequence:

st([un]) = lim_{n→∞} un.
In this sense, taking the limit and taking the standard part are equivalent procedures.

Convergence and fixed point

A formal definition of convergence can be stated as follows. Suppose (pn), as n goes from 0 to ∞, is a sequence that converges to p, with pn ≠ p for all n. If positive constants λ and α exist with

lim_{n→∞} |p_{n+1} − p| / |p_n − p|^α = λ,

then (pn) converges to p of order α, with asymptotic error constant λ. [8]

Given a function f with a fixed point p, there is a nice checklist for checking the convergence of the sequence pn produced by the iteration p_{n+1} = f(pn).

  1. First check that p is indeed a fixed point: f(p) = p.
  2. Check for linear convergence. Start by finding |f′(p)|. If…
     |f′(p)| is in (0, 1): then there is linear convergence
     |f′(p)| > 1: the sequence diverges
     |f′(p)| = 0: then there is at least linear convergence and maybe something better; the expression should be checked for quadratic convergence
  3. If it is found that there is something better than linear convergence, the expression should be checked for quadratic convergence. Start by finding |f″(p)|. If…
     |f″(p)| ≠ 0: then there is quadratic convergence, provided that f″ is continuous
     |f″(p)| = 0: then there is something even better than quadratic convergence
     |f″(p)| does not exist: then there is convergence that is better than linear but still not quadratic
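As a sketch of how the order α can be estimated in practice (the two iterations below are illustrative choices, not from the text), successive errors e_n = |p_n − p| give α ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}):

```python
import math

# Estimate the order of convergence alpha from successive errors
# e_n = |p_n - p| via alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).

def estimate_order(errors):
    e0, e1, e2 = errors[-3], errors[-2], errors[-1]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Linear convergence: fixed-point iteration p_{n+1} = cos(p_n); at the
# fixed point p, |f'(p)| = sin(p) ~ 0.674, which lies in (0, 1).
p_true = 0.7390851332151607
p, errors = 1.0, []
for _ in range(25):
    p = math.cos(p)
    errors.append(abs(p - p_true))
print("fixed-point order ~", estimate_order(errors))   # close to 1 (linear)

# Quadratic convergence: Newton's method for x**2 - 2 = 0.
x, errors = 1.0, []
for _ in range(4):
    x = x - (x * x - 2) / (2 * x)
    errors.append(abs(x - math.sqrt(2)))
print("Newton order ~", estimate_order(errors))        # close to 2 (quadratic)
```

Only a few Newton steps are used because the error quickly reaches machine precision, after which the log-ratio estimate becomes meaningless.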


Computability of the limit

Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits. [9]

References


  1. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  2. "List of Calculus and Analysis Symbols". Math Vault. 2020-05-11. Retrieved 2020-08-18.
  3. Weisstein, Eric W. "Epsilon-Delta Definition". Retrieved 2020-08-18.
  4. Larson, Ron; Edwards, Bruce H. (2010). Calculus of a Single Variable (9th ed.). Brooks/Cole, Cengage Learning. ISBN 978-0-547-20998-2.
  5. "Limit | Definition, Example, & Facts". Encyclopedia Britannica. Retrieved 2020-08-18.
  6. Weisstein, Eric W. "Limit". Retrieved 2020-08-18.
  7. Apostol (1974), pp. 75–76.
  8. Burden, Richard L.; Faires, J. Douglas. Numerical Analysis (8th ed.). Section 2.4: Error Analysis for Iterative Methods.
  9. Soare, Robert I. Recursively Enumerable Sets and Degrees.
