# Real analysis

In mathematics, real analysis is the branch of mathematical analysis that studies the behavior of real numbers, sequences and series of real numbers, and real functions. [1] Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.

Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.

## Scope

### Construction of the real numbers

The theorems of real analysis rely on the properties of the real number system, which must be established. The real number system consists of an uncountable set (${\displaystyle \mathbb {R} }$), together with two binary operations denoted + and ⋅, and an order denoted <. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' in the real numbers. This property distinguishes the real numbers from other ordered fields (e.g., the rational numbers ${\displaystyle \mathbb {Q} }$) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).

### Order properties of the real numbers

The real numbers have various lattice-theoretic properties that are absent in the complex numbers. Also, the real numbers form an ordered field, in which sums and products of positive numbers are also positive. Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property:

Every nonempty subset of ${\displaystyle \mathbb {R} }$ that has an upper bound has a least upper bound that is also a real number.

These order-theoretic properties lead to a number of fundamental results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem.

However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Mathematicians also consider real and imaginary parts of complex sequences, and the pointwise evaluation of operator sequences.

### Topological properties of the real numbers

Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers have a standard topology, which is the order topology induced by the order ${\displaystyle <}$. Alternatively, by defining the metric or distance function ${\displaystyle d:\mathbb {R} \times \mathbb {R} \to \mathbb {R} _{\geq 0}}$ using the absolute value function as ${\displaystyle d(x,y)=|x-y|}$, the real numbers become the prototypical example of a metric space. The topology induced by the metric ${\displaystyle d}$ turns out to be identical to the standard topology induced by the order ${\displaystyle <}$. Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in ${\displaystyle \mathbb {R} }$ only. Often, such proofs tend to be shorter or simpler compared to classical proofs that apply direct methods.

### Sequences

A sequence is a function whose domain is a countable, totally ordered set. The domain is usually taken to be the natural numbers, [2] although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices.

Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map ${\displaystyle a:\mathbb {N} \to \mathbb {R} :n\mapsto a_{n}}$. Each ${\displaystyle a(n)=a_{n}}$ is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses: [3]

${\displaystyle (a_{n})=(a_{n})_{n\in \mathbb {N} }=(a_{1},a_{2},a_{3},\dots ).}$

A sequence that tends to a limit (i.e., ${\textstyle \lim _{n\to \infty }a_{n}}$ exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence ${\displaystyle (a_{n})}$ is bounded if there exists ${\displaystyle M\in \mathbb {R} }$ such that ${\displaystyle |a_{n}|\leq M}$ for all ${\displaystyle n\in \mathbb {N} }$. A real-valued sequence ${\displaystyle (a_{n})}$ is monotonically increasing or decreasing if

${\displaystyle a_{1}\leq a_{2}\leq a_{3}\leq \cdots }$

or

${\displaystyle a_{1}\geq a_{2}\geq a_{3}\geq \cdots }$

holds, respectively. If either holds, the sequence is said to be monotonic. The monotonicity is strict if the chained inequalities still hold with ${\displaystyle \leq }$ or ${\displaystyle \geq }$ replaced by < or >.
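These definitions quantify over all ${\displaystyle n}$, so no finite computation can verify them; a finite spot-check can, however, illustrate what they assert. The sketch below (the example sequence ${\displaystyle a_{n}=1-1/n}$ is chosen purely for illustration) tests the first thousand terms:

```python
# Finite spot-check of monotonicity and boundedness for the example
# sequence a_n = 1 - 1/n (illustrative only: checking finitely many
# terms cannot prove a statement that quantifies over all n).
a = [1 - 1 / n for n in range(1, 1001)]

monotone_increasing = all(x <= y for x, y in zip(a, a[1:]))
bounded = all(abs(x) <= 1 for x in a)   # M = 1 works for this sequence

print(monotone_increasing, bounded)
```

In fact ${\displaystyle a_{n}=1-1/n}$ is monotonically increasing and bounded, and it converges (to 1), consistent with the monotone convergence theorem mentioned above.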

Given a sequence ${\displaystyle (a_{n})}$, another sequence ${\displaystyle (b_{k})}$ is a subsequence of ${\displaystyle (a_{n})}$ if ${\displaystyle b_{k}=a_{n_{k}}}$ for all positive integers ${\displaystyle k}$ and ${\displaystyle (n_{k})}$ is a strictly increasing sequence of natural numbers.

### Limits and convergence

Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value. [4] (This value can include the symbols ${\displaystyle \pm \infty }$ when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.)

The concept of limit was informally introduced for functions by Newton and Leibniz, at the end of the 17th century, for building infinitesimal calculus. For sequences, the concept was introduced by Cauchy, and made rigorous, at the end of the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.

Definition. Let ${\displaystyle f}$ be a real-valued function defined on ${\displaystyle E\subset \mathbb {R} }$. We say that ${\displaystyle f(x)}$ tends to ${\displaystyle L}$ as ${\displaystyle x}$ approaches ${\displaystyle x_{0}}$, or that the limit of ${\displaystyle f(x)}$ as ${\displaystyle x}$ approaches ${\displaystyle x_{0}}$ is ${\displaystyle L}$ if, for any ${\displaystyle \varepsilon >0}$, there exists ${\displaystyle \delta >0}$ such that for all ${\displaystyle x\in E}$, ${\displaystyle 0<|x-x_{0}|<\delta }$ implies that ${\displaystyle |f(x)-L|<\varepsilon }$. We write this symbolically as

${\displaystyle f(x)\to L\ \ {\text{as}}\ \ x\to x_{0},}$

or as

${\displaystyle \lim _{x\to x_{0}}f(x)=L.}$

Intuitively, this definition can be thought of in the following way: We say that ${\displaystyle f(x)\to L}$ as ${\displaystyle x\to x_{0}}$ when, given any positive number ${\displaystyle \varepsilon }$, no matter how small, we can always find a ${\displaystyle \delta }$ such that we can guarantee that ${\displaystyle f(x)}$ and ${\displaystyle L}$ are less than ${\displaystyle \varepsilon }$ apart, as long as ${\displaystyle x}$ (in the domain of ${\displaystyle f}$) is a real number that is less than ${\displaystyle \delta }$ away from ${\displaystyle x_{0}}$ but distinct from ${\displaystyle x_{0}}$. The purpose of the last stipulation, which corresponds to the condition ${\displaystyle 0<|x-x_{0}|}$ in the definition, is to ensure that ${\textstyle \lim _{x\to x_{0}}f(x)=L}$ does not imply anything about the value of ${\displaystyle f(x_{0})}$ itself. Actually, ${\displaystyle x_{0}}$ does not even need to be in the domain of ${\displaystyle f}$ in order for ${\textstyle \lim _{x\to x_{0}}f(x)}$ to exist.
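The quantifiers can be made concrete with a numerical spot-check (an illustration, not a proof; the function ${\displaystyle f(x)=x^{2}}$, the point ${\displaystyle x_{0}=2}$ with ${\displaystyle L=4}$, and the choice ${\displaystyle \delta =\min(1,\varepsilon /5)}$ are all assumptions of the example):

```python
# For f(x) = x**2 near x0 = 2: if |x - 2| < 1 then |x + 2| < 5, so
# delta = min(1, eps / 5) guarantees |x**2 - 4| < eps. We sample points
# with 0 < |x - 2| < delta on both sides of x0 and check the implication.
def condition_holds(eps, delta, samples=1000):
    for k in range(1, samples + 1):
        step = delta * k / (samples + 1)      # 0 < step < delta
        for x in (2 + step, 2 - step):
            if abs(x**2 - 4) >= eps:
                return False
    return True

for eps in (1.0, 0.1, 0.001):
    delta = min(1.0, eps / 5)
    print(eps, condition_holds(eps, delta))
```

The chosen ${\displaystyle \delta }$ works because ${\displaystyle |x^{2}-4|=|x-2|\,|x+2|}$ and ${\displaystyle |x+2|<5}$ whenever ${\displaystyle |x-2|<1}$.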

In a slightly different but related context, the concept of a limit applies to the behavior of a sequence ${\displaystyle (a_{n})}$ when ${\displaystyle n}$ becomes large.

Definition. Let ${\displaystyle (a_{n})}$ be a real-valued sequence. We say that ${\displaystyle (a_{n})}$ converges to ${\displaystyle a}$ if, for any ${\displaystyle \varepsilon >0}$, there exists a natural number ${\displaystyle N}$ such that ${\displaystyle n\geq N}$ implies that ${\displaystyle |a-a_{n}|<\varepsilon }$. We write this symbolically as

${\displaystyle a_{n}\to a\ \ {\text{as}}\ \ n\to \infty ,}$

or as

${\displaystyle \lim _{n\to \infty }a_{n}=a;}$

if ${\displaystyle (a_{n})}$ fails to converge, we say that ${\displaystyle (a_{n})}$ diverges.

Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence ${\displaystyle (a_{n})}$ and term ${\displaystyle a_{n}}$ by function ${\displaystyle f}$ and value ${\displaystyle f(x)}$ and natural numbers ${\displaystyle N}$ and ${\displaystyle n}$ by real numbers ${\displaystyle M}$ and ${\displaystyle x}$, respectively) yields the definition of the limit of ${\displaystyle f(x)}$ as ${\displaystyle x}$ increases without bound, notated ${\textstyle \lim _{x\to \infty }f(x)}$. Reversing the inequality ${\displaystyle x\geq M}$ to ${\displaystyle x\leq M}$ gives the corresponding definition of the limit of ${\displaystyle f(x)}$ as ${\displaystyle x}$ decreases without bound, ${\textstyle \lim _{x\to -\infty }f(x)}$.

Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.

Definition. Let ${\displaystyle (a_{n})}$ be a real-valued sequence. We say that ${\displaystyle (a_{n})}$ is a Cauchy sequence if, for any ${\displaystyle \varepsilon >0}$, there exists a natural number ${\displaystyle N}$ such that ${\displaystyle m,n\geq N}$ implies that ${\displaystyle |a_{m}-a_{n}|<\varepsilon }$.

It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, ${\displaystyle (\mathbb {R} ,|\cdot |)}$, is a complete metric space. In a general metric space, however, a Cauchy sequence need not converge.

In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.
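The Cauchy condition can be probed numerically (an illustrative sketch over a sampled index range; the sequence of partial sums of ${\textstyle \sum 2^{-j}}$ is chosen as the example):

```python
import itertools

# a_n = sum of 1/2**j for j = 1..n. For m, n >= N the tail bound gives
# |a_m - a_n| < 2**(-N), so N = 21 should suffice for eps = 1e-6.
# We check all pairs from a sampled range of indices at or beyond N.
def a(n):
    return sum(1 / 2**j for j in range(1, n + 1))

eps, N = 1e-6, 21
pairs = itertools.combinations(range(N, N + 50), 2)
all_close = all(abs(a(m) - a(n)) < eps for m, n in pairs)
print(all_close)
```

Here the sequence is in fact convergent (to 1), in line with the equivalence of the Cauchy property and convergence for real sequences.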

#### Uniform and pointwise convergence for sequences of functions

In addition to sequences of numbers, one may also speak of sequences of functions on ${\displaystyle E\subset \mathbb {R} }$, that is, infinite, ordered families of functions ${\displaystyle f_{n}:E\to \mathbb {R} }$, denoted ${\displaystyle (f_{n})_{n=1}^{\infty }}$, and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.

Roughly speaking, pointwise convergence of functions ${\displaystyle f_{n}}$ to a limiting function ${\displaystyle f:E\to \mathbb {R} }$, denoted ${\displaystyle f_{n}\rightarrow f}$, simply means that given any ${\displaystyle x\in E}$, ${\displaystyle f_{n}(x)\to f(x)}$ as ${\displaystyle n\to \infty }$. In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, ${\displaystyle f_{n}}$, to fall within some error ${\displaystyle \varepsilon >0}$ of ${\displaystyle f}$ for every value of ${\displaystyle x\in E}$, whenever ${\displaystyle n\geq N}$, for some integer ${\displaystyle N}$. For a family of functions to uniformly converge, sometimes denoted ${\displaystyle f_{n}\rightrightarrows f}$, such a value of ${\displaystyle N}$ must exist for any ${\displaystyle \varepsilon >0}$ given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough ${\displaystyle N}$, the functions ${\displaystyle f_{N},f_{N+1},f_{N+2},\ldots }$ are all confined within a 'tube' of width ${\displaystyle 2\varepsilon }$ about ${\displaystyle f}$ (that is, between ${\displaystyle f-\varepsilon }$ and ${\displaystyle f+\varepsilon }$) for every value in their domain ${\displaystyle E}$.

The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence. For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.
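The standard example ${\displaystyle f_{n}(x)=x^{n}}$ illustrates the distinction: on ${\displaystyle [0,1]}$ the pointwise limit is 0 for ${\displaystyle x<1}$ and 1 at ${\displaystyle x=1}$ (discontinuous, so convergence cannot be uniform there), while on ${\displaystyle [0,0.9]}$ the worst-case error ${\displaystyle (0.9)^{n}}$ tends to 0 and convergence is uniform. A numerical sketch (sampling on a grid, which only approximates the supremum):

```python
# Sampled sup-error between f_n(x) = x**n and its pointwise limit f,
# where f(x) = 0 for x < 1 and f(1) = 1.
def sup_error(n, xs):
    return max(abs(x**n - (1.0 if x == 1.0 else 0.0)) for x in xs)

grid_full = [k / 1000 for k in range(1001)]   # grid on [0, 1], includes 1
grid_cut = [k / 1000 for k in range(901)]     # grid on [0, 0.9]

err_full = sup_error(200, grid_full)   # stays bounded away from 0
err_cut = sup_error(200, grid_cut)     # shrinks toward 0
print(err_full, err_cut)
```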

### Compactness

Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In ${\displaystyle \mathbb {R} }$, sets that are closed and bounded, and therefore compact, include the empty set, any finite set of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set ${\displaystyle \{1/n:n\in \mathbb {N} \}\cup \{0\}}$ is a compact set; the Cantor ternary set ${\displaystyle {\mathcal {C}}\subset [0,1]}$ is another example of a compact set. On the other hand, the set ${\displaystyle \{1/n:n\in \mathbb {N} \}}$ is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set ${\displaystyle [0,\infty )}$ is also not compact because it is closed but not bounded.

For subsets of the real numbers, there are several equivalent definitions of compactness.

Definition. A set ${\displaystyle E\subset \mathbb {R} }$ is compact if it is closed and bounded.

This definition also holds for Euclidean space of any finite dimension, ${\displaystyle \mathbb {R} ^{n}}$, but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem.

A more general definition that applies to all metric spaces uses the notion of a subsequence (see above).

Definition. A set ${\displaystyle E}$ in a metric space is compact if every sequence in ${\displaystyle E}$ has a convergent subsequence.

This particular property is known as subsequential compactness. In ${\displaystyle \mathbb {R} }$, a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general.

The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and ${\displaystyle \mathbb {R} }$ as special cases). In brief, a collection of open sets ${\displaystyle U_{\alpha }}$ is said to be an open cover of set ${\displaystyle X}$ if the union of these sets is a superset of ${\displaystyle X}$. This open cover is said to have a finite subcover if a finite subcollection of the ${\displaystyle U_{\alpha }}$ could be found that also covers ${\displaystyle X}$.

Definition. A set ${\displaystyle X}$ in a topological space is compact if every open cover of ${\displaystyle X}$ has a finite subcover.

Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. As another example, the image of a compact metric space under a continuous map is also compact.
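The subsequence characterization can be illustrated algorithmically: the bisection argument behind the Bolzano–Weierstrass theorem (every bounded real sequence has a convergent subsequence) can be mimicked on finitely many terms. This is only a sketch, under the simplifying assumption that "the half-interval with more sampled terms" stands in for "the half-interval with infinitely many terms":

```python
import math

# Bisection sketch: repeatedly halve [lo, hi], keep the half containing
# more of the remaining sampled terms, and record one term index from it.
def convergent_subsequence(a, lo, hi, stages=8):
    pool = list(range(len(a)))        # candidate indices, in increasing order
    chosen = []
    for _ in range(stages):
        mid = (lo + hi) / 2
        left = [i for i in pool if lo <= a[i] <= mid]
        right = [i for i in pool if mid < a[i] <= hi]
        if len(left) >= len(right):
            hi, pool = mid, left
        else:
            lo, pool = mid, right
        if not pool:
            break
        first = pool[0]
        chosen.append(first)          # one term from the kept half
        pool = [i for i in pool if i > first]
    return chosen

a = [math.sin(n) for n in range(10_000)]     # a bounded sequence in [-1, 1]
idx = convergent_subsequence(a, -1.0, 1.0)
sub = [a[i] for i in idx]
print(idx[:4], abs(sub[-1] - sub[-2]))       # late terms cluster together
```

Each stage halves the enclosing interval, so successive extracted terms are forced closer and closer together, mirroring how the proof produces a Cauchy (hence convergent) subsequence.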

### Continuity

A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps".

There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the first definition given below, ${\displaystyle f:I\to \mathbb {R} }$ is a function defined on a non-degenerate interval ${\displaystyle I}$ of the set of real numbers as its domain. Some possibilities include ${\displaystyle I=\mathbb {R} }$, the whole set of real numbers, an open interval ${\displaystyle I=(a,b)=\{x\in \mathbb {R} \mid a<x<b\},}$ or a closed interval ${\displaystyle I=[a,b]=\{x\in \mathbb {R} \mid a\leq x\leq b\}.}$ Here, ${\displaystyle a}$ and ${\displaystyle b}$ are distinct real numbers, and we exclude the case of ${\displaystyle I}$ being empty or consisting of only one point.

Definition. If ${\displaystyle I\subset \mathbb {R} }$ is a non-degenerate interval, we say that ${\displaystyle f:I\to \mathbb {R} }$ is continuous at ${\displaystyle p\in I}$ if ${\textstyle \lim _{x\to p}f(x)=f(p)}$. We say that ${\displaystyle f}$ is a continuous map if ${\displaystyle f}$ is continuous at every ${\displaystyle p\in I}$.

In contrast to the requirements for ${\displaystyle f}$ to have a limit at a point ${\displaystyle p}$, which do not constrain the behavior of ${\displaystyle f}$ at ${\displaystyle p}$ itself, the following two conditions, in addition to the existence of ${\textstyle \lim _{x\to p}f(x)}$, must also hold in order for ${\displaystyle f}$ to be continuous at ${\displaystyle p}$: (i) ${\displaystyle f}$ must be defined at ${\displaystyle p}$, i.e., ${\displaystyle p}$ is in the domain of ${\displaystyle f}$; and (ii) ${\displaystyle f(x)\to f(p)}$ as ${\displaystyle x\to p}$. The definition above actually applies to any domain ${\displaystyle E}$ that does not contain an isolated point, or equivalently, ${\displaystyle E}$ where every ${\displaystyle p\in E}$ is a limit point of ${\displaystyle E}$. A more general definition applying to ${\displaystyle f:X\to \mathbb {R} }$ with a general domain ${\displaystyle X\subset \mathbb {R} }$ is the following:

Definition. If ${\displaystyle X}$ is an arbitrary subset of ${\displaystyle \mathbb {R} }$, we say that ${\displaystyle f:X\to \mathbb {R} }$ is continuous at ${\displaystyle p\in X}$ if, for any ${\displaystyle \varepsilon >0}$, there exists ${\displaystyle \delta >0}$ such that for all ${\displaystyle x\in X}$, ${\displaystyle |x-p|<\delta }$ implies that ${\displaystyle |f(x)-f(p)|<\varepsilon }$. We say that ${\displaystyle f}$ is a continuous map if ${\displaystyle f}$ is continuous at every ${\displaystyle p\in X}$.

A consequence of this definition is that ${\displaystyle f}$ is trivially continuous at any isolated point ${\displaystyle p\in X}$. This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and ${\displaystyle \mathbb {R} }$ in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.

Definition. If ${\displaystyle X}$ and ${\displaystyle Y}$ are topological spaces, we say that ${\displaystyle f:X\to Y}$ is continuous at ${\displaystyle p\in X}$ if ${\displaystyle f^{-1}(V)}$ is a neighborhood of ${\displaystyle p}$ in ${\displaystyle X}$ for every neighborhood ${\displaystyle V}$ of ${\displaystyle f(p)}$ in ${\displaystyle Y}$. We say that ${\displaystyle f}$ is a continuous map if ${\displaystyle f^{-1}(U)}$ is open in ${\displaystyle X}$ for every ${\displaystyle U}$ open in ${\displaystyle Y}$.

(Here, ${\displaystyle f^{-1}(S)}$ refers to the preimage of ${\displaystyle S\subset Y}$ under ${\displaystyle f}$.)

#### Uniform continuity

Definition. If ${\displaystyle X}$ is a subset of the real numbers, we say a function ${\displaystyle f:X\to \mathbb {R} }$ is uniformly continuous on ${\displaystyle X}$ if, for any ${\displaystyle \varepsilon >0}$, there exists a ${\displaystyle \delta >0}$ such that for all ${\displaystyle x,y\in X}$, ${\displaystyle |x-y|<\delta }$ implies that ${\displaystyle |f(x)-f(y)|<\varepsilon }$.

Explicitly, when a function is uniformly continuous on ${\displaystyle X}$, the choice of ${\displaystyle \delta }$ needed to fulfill the definition must work for all of ${\displaystyle X}$ for a given ${\displaystyle \varepsilon }$. In contrast, when a function is continuous at every point ${\displaystyle p\in X}$ (or said to be continuous on ${\displaystyle X}$), the choice of ${\displaystyle \delta }$ may depend on both ${\displaystyle \varepsilon }$ and ${\displaystyle p}$. In contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point ${\displaystyle p}$ is meaningless.

On a compact set, it is easily shown that all continuous functions are uniformly continuous. If ${\displaystyle E}$ is a bounded noncompact subset of ${\displaystyle \mathbb {R} }$, then there exists ${\displaystyle f:E\to \mathbb {R} }$ that is continuous but not uniformly continuous. As a simple example, consider ${\displaystyle f:(0,1)\to \mathbb {R} }$ defined by ${\displaystyle f(x)=1/x}$. By choosing points close to 0, we can always make ${\displaystyle |f(x)-f(y)|>\varepsilon }$ for any single choice of ${\displaystyle \delta >0}$, for a given ${\displaystyle \varepsilon >0}$.
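This failure can be demonstrated numerically (a sketch of the argument just given, with ${\displaystyle \varepsilon =1}$ and witness points ${\displaystyle x}$ and ${\displaystyle y=x/2}$ chosen near 0 for the example):

```python
# For f(x) = 1/x on (0, 1) and eps = 1: whatever delta > 0 is proposed,
# the points x = min(delta, 0.5) and y = x / 2 satisfy |x - y| < delta,
# yet |f(x) - f(y)| = 1/x >= 2 > eps. So no single delta can work.
def f(x):
    return 1 / x

eps = 1.0
for delta in (0.1, 0.01, 0.001):
    x = min(delta, 0.5)          # stay inside (0, 1)
    y = x / 2
    assert 0 < y < x < 1 and abs(x - y) < delta
    assert abs(f(x) - f(y)) > eps
print("no single delta works for eps = 1")
```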

#### Absolute continuity

Definition. Let ${\displaystyle I\subset \mathbb {R} }$ be an interval on the real line. A function ${\displaystyle f:I\to \mathbb {R} }$ is said to be absolutely continuous on ${\displaystyle I}$ if for every positive number ${\displaystyle \varepsilon }$, there is a positive number ${\displaystyle \delta }$ such that whenever a finite sequence of pairwise disjoint sub-intervals ${\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{n},y_{n})}$ of ${\displaystyle I}$ satisfies [5]

${\displaystyle \sum _{k=1}^{n}(y_{k}-x_{k})<\delta }$

then

${\displaystyle \sum _{k=1}^{n}|f(y_{k})-f(x_{k})|<\varepsilon .}$

Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.
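As a numerical illustration (a spot-check on randomly generated families of disjoint sub-intervals; the function ${\displaystyle f(x)=x^{2}}$ on ${\displaystyle [0,1]}$ and the choice ${\displaystyle \delta =\varepsilon /2}$, which works because this ${\displaystyle f}$ is Lipschitz with constant 2 there, are assumptions of the example):

```python
import random

# f(x) = x**2 on [0, 1] satisfies |f(y) - f(x)| <= 2|y - x|, so
# delta = eps / 2 fulfills the absolute-continuity definition.
def f(x):
    return x * x

random.seed(0)
eps = 0.01
delta = eps / 2
for _ in range(100):
    points = sorted(random.random() for _ in range(10))
    pairs = list(zip(points[::2], points[1::2]))          # disjoint sub-intervals
    total = sum(y - x for x, y in pairs)
    scale = min(1.0, 0.9 * delta / total)                 # shrink total length below delta
    pairs = [(x, x + (y - x) * scale) for x, y in pairs]  # shrinking preserves disjointness
    assert sum(y - x for x, y in pairs) < delta
    assert sum(abs(f(y) - f(x)) for x, y in pairs) < eps
print("condition verified on 100 sampled families")
```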

### Differentiation

The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point ${\displaystyle a}$, and the slope of the line is the derivative of the function at ${\displaystyle a}$.

A function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ is differentiable at ${\displaystyle a}$ if the limit

${\displaystyle f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}}$

exists. This limit is known as the derivative of ${\displaystyle f}$ at ${\displaystyle a}$, and the function ${\displaystyle f'}$, possibly defined on only a subset of ${\displaystyle \mathbb {R} }$, is the derivative (or derivative function) of${\displaystyle f}$. If the derivative exists everywhere, the function is said to be differentiable.

As a simple consequence of the definition, ${\displaystyle f}$ is continuous at ${\displaystyle a}$ if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on.
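Numerically, the difference quotient in the definition can be evaluated for shrinking ${\displaystyle h}$ (a sketch; the function ${\displaystyle f(x)=x^{3}}$, the point ${\displaystyle a=2}$, and the step sizes are chosen for the example):

```python
# The difference quotient (f(a + h) - f(a)) / h for f(x) = x**3 at
# a = 2 equals 12 + 6h + h**2 exactly, so it tends to f'(2) = 12.
def f(x):
    return x**3

a = 2.0
for h in (1e-1, 1e-3, 1e-5):
    estimate = (f(a + h) - f(a)) / h
    print(h, estimate)
```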

One can classify functions by their differentiability class. The class ${\displaystyle C^{0}}$ (sometimes ${\displaystyle C^{0}([a,b])}$ to indicate the interval of applicability) consists of all continuous functions. The class ${\displaystyle C^{1}}$ consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a ${\displaystyle C^{1}}$ function is exactly a function whose derivative exists and is of class ${\displaystyle C^{0}}$. In general, the classes ${\displaystyle C^{k}}$ can be defined recursively by declaring ${\displaystyle C^{0}}$ to be the set of all continuous functions and declaring ${\displaystyle C^{k}}$ for any positive integer ${\displaystyle k}$ to be the set of all differentiable functions whose derivative is in ${\displaystyle C^{k-1}}$. In particular, ${\displaystyle C^{k}}$ is contained in ${\displaystyle C^{k-1}}$ for every ${\displaystyle k}$, and there are examples to show that this containment is strict. Class ${\displaystyle C^{\infty }}$ is the intersection of the sets ${\displaystyle C^{k}}$ as ${\displaystyle k}$ varies over the non-negative integers, and the members of this class are known as the smooth functions. Class ${\displaystyle C^{\omega }}$ consists of all analytic functions, and is strictly contained in ${\displaystyle C^{\infty }}$ (see bump function for a smooth function that is not analytic).

### Series

A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first ${\displaystyle n}$ terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as ${\displaystyle n}$ grows without bound. The series is assigned the value of this limit, if it exists.

Given an (infinite) sequence ${\displaystyle (a_{n})}$, we can define an associated series as the formal mathematical object ${\textstyle a_{1}+a_{2}+a_{3}+\cdots =\sum _{n=1}^{\infty }a_{n}}$, sometimes simply written as ${\textstyle \sum a_{n}}$. The partial sums of a series ${\textstyle \sum a_{n}}$ are the numbers ${\textstyle s_{n}=\sum _{j=1}^{n}a_{j}}$. A series ${\textstyle \sum a_{n}}$ is said to be convergent if the sequence consisting of its partial sums, ${\displaystyle (s_{n})}$, is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number ${\textstyle s=\lim _{n\to \infty }s_{n}}$.

The word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion).

An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes:

${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{2^{n}}}={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots =1.}$

In contrast, the harmonic series has been known since the Middle Ages to be a divergent series:

${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots =\infty .}$

(Here, "${\displaystyle =\infty }$" is merely a notational convention to indicate that the partial sums of the series grow without bound.)

A series ${\textstyle \sum a_{n}}$ is said to converge absolutely if ${\textstyle \sum |a_{n}|}$ is convergent. A convergent series ${\textstyle \sum a_{n}}$ for which ${\textstyle \sum |a_{n}|}$ diverges is said to converge non-absolutely (or conditionally). [6] It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a series that converges non-absolutely is

${\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n-1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\cdots =\ln 2.}$
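Both the convergence to ${\displaystyle \ln 2}$ and the rearrangement phenomenon can be sketched numerically (the block pattern below, two positive terms then one negative, is the classical rearrangement whose sum is known to be ${\textstyle {\tfrac {3}{2}}\ln 2}$):

```python
import math

# Alternating harmonic series vs. a rearrangement of the same terms.
def alt_harmonic(n):
    return sum((-1) ** (k - 1) / k for k in range(1, n + 1))

def rearranged(blocks):
    # pattern: 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    s, pos, neg = 0.0, 1, 2
    for _ in range(blocks):
        s += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return s

print(alt_harmonic(10**6), math.log(2))          # converges to ln 2
print(rearranged(10**5), 1.5 * math.log(2))      # same terms, different sum
```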

#### Taylor series

The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series

${\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+{\frac {f^{(3)}(a)}{3!}}(x-a)^{3}+\cdots ,}$

which can be written in the more compact sigma notation as

${\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}\,(x-a)^{n}}$

where ${\displaystyle n!}$ denotes the factorial of ${\displaystyle n}$ and ${\displaystyle f^{(n)}(a)}$ denotes the ${\displaystyle n}$th derivative of ${\displaystyle f}$ evaluated at the point ${\displaystyle a}$. The derivative of order zero is defined to be ${\displaystyle f}$ itself, and ${\displaystyle (x-a)^{0}}$ and ${\displaystyle 0!}$ are both defined to be 1. In the case that ${\displaystyle a=0}$, the series is also called a Maclaurin series.

A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that ${\displaystyle |x-a|<R}$ (the largest such R for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.
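For example, the Maclaurin series of the exponential function has infinite radius of convergence; its partial sums can be computed directly (a sketch, with the sample points and term count chosen for illustration):

```python
import math

# Partial sums of the Maclaurin series sum_{n>=0} x**n / n! for exp(x).
def maclaurin_exp(x, terms):
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= x / (n + 1)   # next term: x**(n+1) / (n+1)!
    return s

for x in (1.0, -2.0, 5.0):
    print(x, maclaurin_exp(x, 40), math.exp(x))   # partial sum vs. exp(x)
```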

#### Fourier series

A Fourier series decomposes a periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or, equivalently, complex exponentials). The study of Fourier series is typically handled within the branch of mathematical analysis known as Fourier analysis.

### Integration

Integration is a formalization of the problem of finding the area bounded by a curve and the related problems of determining the length of a curve or the volume enclosed by a surface. The basic strategy for solving problems of this type was known to the ancient Greeks and Chinese as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bounded by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value.

The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate compared to the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.

#### Riemann integration

The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let ${\displaystyle [a,b]}$ be a closed interval of the real line; then a tagged partition ${\displaystyle {\cal {P}}}$ of ${\displaystyle [a,b]}$ is a finite sequence

${\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.\,\!}$

This partitions the interval ${\displaystyle [a,b]}$ into ${\displaystyle n}$ sub-intervals ${\displaystyle [x_{i-1},x_{i}]}$ indexed by ${\displaystyle i=1,\ldots ,n}$, each of which is "tagged" with a distinguished point ${\displaystyle t_{i}\in [x_{i-1},x_{i}]}$. For a function ${\displaystyle f}$ bounded on ${\displaystyle [a,b]}$, we define the Riemann sum of ${\displaystyle f}$ with respect to tagged partition ${\displaystyle {\cal {P}}}$ as

${\displaystyle \sum _{i=1}^{n}f(t_{i})\Delta _{i},}$

where ${\displaystyle \Delta _{i}=x_{i}-x_{i-1}}$ is the width of sub-interval ${\displaystyle i}$. Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, ${\textstyle \|\Delta _{i}\|=\max _{i=1,\ldots ,n}\Delta _{i}}$. We say that the Riemann integral of ${\displaystyle f}$ on ${\displaystyle [a,b]}$ is ${\displaystyle S}$ if for any ${\displaystyle \varepsilon >0}$ there exists ${\displaystyle \delta >0}$ such that, for any tagged partition ${\displaystyle {\cal {P}}}$ with mesh ${\displaystyle \|\Delta _{i}\|<\delta }$, we have

${\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\Delta _{i}\right|<\varepsilon .}$

This is sometimes denoted ${\textstyle {\mathcal {R}}\int _{a}^{b}f=S}$. When the chosen tags give the maximum (respectively, minimum) value of the function on each sub-interval, the Riemann sum is known as the upper (respectively, lower) Darboux sum. A function is Darboux integrable if the upper and lower Darboux sums can be made arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, they are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, because the definition of the former is slightly easier to apply.
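The convergence of the upper and lower sums can be observed numerically. The sketch below uses a uniform partition of ${\displaystyle [0,1]}$ and the increasing function ${\displaystyle f(x)=x^{2}}$, for which the extremes on each sub-interval occur at its endpoints; both sums approach the exact value ${\textstyle \int _{0}^{1}x^{2}\,dx={\tfrac {1}{3}}}$ as the mesh shrinks (the helper name `darboux_sums` is illustrative):

```python
def darboux_sums(f, a, b, n):
    # Upper and lower Darboux sums over a uniform partition of [a, b] into
    # n sub-intervals. For a monotone increasing f, the minimum on each
    # sub-interval is at its left endpoint and the maximum at its right.
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    lower = sum(f(xs[i]) * dx for i in range(n))      # left endpoints (minima)
    upper = sum(f(xs[i + 1]) * dx for i in range(n))  # right endpoints (maxima)
    return lower, upper

# f(x) = x^2 on [0, 1]: both sums converge to the exact integral 1/3.
lo, hi = darboux_sums(lambda x: x * x, 0.0, 1.0, 100_000)
print(lo, hi)
```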

The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.
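This inverse relationship can be checked numerically: if ${\textstyle F(x)=\int _{0}^{x}\cos t\,dt}$, the theorem asserts ${\displaystyle F'(x)=\cos x}$. A rough sketch using a midpoint-rule integral and a central difference quotient (both helper names are illustrative):

```python
import math

def integral(f, a, b, n=10_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

def F(x):
    # F(x) = integral of cos from 0 to x (approximately sin x).
    return integral(math.cos, 0.0, x)

# The fundamental theorem of calculus predicts F'(x) = cos(x);
# compare a numerical derivative of F against cos directly.
h = 1e-5
x = 1.0
deriv = (F(x + h) - F(x - h)) / (2 * h)
print(deriv, math.cos(x))
```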

#### Lebesgue integration and measure

Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to the Lebesgue integral and to probability theory.

### Distributions

Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
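For example, the absolute value function is not differentiable at ${\displaystyle 0}$ in the classical sense, but pairing it against a compactly supported test function ${\displaystyle \varphi }$ and integrating by parts on each half-line (the boundary terms vanish) shows that its distributional derivative is the sign function:

```latex
\langle |x|',\varphi\rangle
  := -\int_{-\infty}^{\infty} |x|\,\varphi'(x)\,dx
   = -\int_{-\infty}^{0}\varphi(x)\,dx + \int_{0}^{\infty}\varphi(x)\,dx
   = \int_{-\infty}^{\infty}\operatorname{sgn}(x)\,\varphi(x)\,dx .
```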

### Relation to complex analysis

Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers. In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressibility as power series, and satisfying the Cauchy integral formula.

In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers.

Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.

## Important results

Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà–Ascoli theorem, the Stone–Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.

Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally, to functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher-dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.

## Related Research Articles

In mathematics, a continuous function is a function that does not have any abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its output can be assured by restricting to sufficiently small changes in its input. If not continuous, a function is said to be discontinuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity; it was during that century that attempts such as the epsilon–delta definition were made to formalize it.

In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration.

In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, the same element can appear multiple times at different positions in a sequence. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers or the set of the first n natural numbers. Sequences are one type of indexed family, as an indexed family is defined as a function whose domain is called the index set; the elements of the index set are the indices for the elements of the function's image.

In mathematics, a function f is uniformly continuous if, roughly speaking, it is possible to guarantee that f(x) and f(y) be as close to each other as we please by requiring only that x and y be sufficiently close to each other; unlike ordinary continuity, where the maximum distance between f(x) and f(y) may depend on x and y themselves.

In mathematics, the Dirac delta function is a generalized function or distribution introduced by the physicist Paul Dirac. It is called a function, although it is not a function in the usual sense: it is not a function from ${\displaystyle \mathbb {R} }$ to ${\displaystyle \mathbb {C} }$, but a function on the space of test functions. It is used to model the density of an idealized point mass or point charge as a function equal to zero everywhere except at zero and whose integral over the entire real line is equal to one. As there is no classical function that has these properties, the computations made by theoretical physicists appeared to mathematicians as nonsense until the introduction of distributions by Laurent Schwartz to formalize and validate the computations. As a distribution, the Dirac delta function is a linear functional that maps every test function to its value at zero. The Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1, is a discrete analog of the Dirac delta function.

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern general form, this fundamental result in probability theory was precisely stated as late as 1920, thereby serving as a bridge between classical and modern probability theory.

In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions ${\displaystyle (f_{n})}$ converges uniformly to a limiting function ${\displaystyle f}$ on a set ${\displaystyle E}$ if, given any arbitrarily small positive number ${\displaystyle \varepsilon }$, a number ${\displaystyle N}$ can be found such that each of the functions ${\displaystyle f_{N},f_{N+1},f_{N+2},\ldots }$ differs from ${\displaystyle f}$ by no more than ${\displaystyle \varepsilon }$ at every point ${\displaystyle x}$ in ${\displaystyle E}$. Described in an informal way, if ${\displaystyle f_{n}}$ converges to ${\displaystyle f}$ uniformly, then the rate at which ${\displaystyle f_{n}(x)}$ approaches ${\displaystyle f(x)}$ is "uniform" throughout the domain in the following sense: in order to guarantee that ${\displaystyle f_{n}(x)}$ falls within a certain distance ${\displaystyle \varepsilon }$ of ${\displaystyle f(x)}$, we do not need to know the value of ${\displaystyle x}$ in question; there can be found a single value of ${\displaystyle N}$ independent of ${\displaystyle x}$, such that choosing ${\displaystyle n\geq N}$ will ensure that ${\displaystyle f_{n}(x)}$ is within ${\displaystyle \varepsilon }$ of ${\displaystyle f(x)}$ for all ${\displaystyle x}$. In contrast, pointwise convergence of ${\displaystyle f_{n}}$ to ${\displaystyle f}$ merely guarantees that for any ${\displaystyle x}$ given in advance, we can find ${\displaystyle N}$ so that, for that particular ${\displaystyle x}$, ${\displaystyle f_{n}(x)}$ falls within ${\displaystyle \varepsilon }$ of ${\displaystyle f(x)}$ whenever ${\displaystyle n\geq N}$.

In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.

In mathematics, an infinite series of numbers is said to converge absolutely if the sum of the absolute values of the summands is finite. More precisely, a real or complex series ${\textstyle \sum _{n=0}^{\infty }a_{n}}$ is said to converge absolutely if ${\textstyle \sum _{n=0}^{\infty }\left|a_{n}\right|=L}$ for some real number ${\displaystyle L}$. Similarly, an improper integral of a function, ${\textstyle \int _{0}^{\infty }f(x)\,dx}$, is said to converge absolutely if the integral of the absolute value of the integrand is finite; that is, if ${\textstyle \int _{0}^{\infty }|f(x)|\,dx=L}$ for some real number ${\displaystyle L}$.

In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.

In mathematics, the limit of a sequence is the value that the terms of a sequence "tend to", and is often denoted using the ${\displaystyle \lim }$ symbol (e.g., ${\textstyle \lim _{n\to \infty }a_{n}}$). If such a limit exists, the sequence is called convergent. A sequence that does not converge is said to be divergent. The limit of a sequence is said to be the fundamental notion on which the whole of mathematical analysis ultimately rests.

In calculus, the squeeze theorem, also known as the pinching theorem, the sandwich theorem, the sandwich rule, the police theorem, the between theorem and sometimes the squeeze lemma, is a theorem regarding the limit of a function. In Italy, the theorem is also known as the theorem of carabinieri.

In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815–1897).

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, and the Peter–Weyl theorem in harmonic analysis and various results concerning compactness of integral operators.

In mathematics, nonstandard calculus is the modern application of infinitesimals, in the sense of nonstandard analysis, to infinitesimal calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic.

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales. The definition used in measure theory is closely related to, but not identical to, the definition typically used in probability.

In real analysis and measure theory, the Vitali convergence theorem, named after the Italian mathematician Giuseppe Vitali, is a generalization of the better-known dominated convergence theorem of Henri Lebesgue. It is a characterization of the convergence in Lp in terms of convergence in measure and a condition related to uniform integrability.

In probability theory, the continuous mapping theorem states that continuous functions preserve limits even if their arguments are sequences of random variables. A continuous function, in Heine's definition, is a function that maps convergent sequences into convergent sequences: if ${\displaystyle x_{n}\to x}$ then ${\displaystyle g(x_{n})\to g(x)}$. The continuous mapping theorem states that this will also be true if we replace the deterministic sequence ${\displaystyle \{x_{n}\}}$ with a sequence of random variables ${\displaystyle \{X_{n}\}}$, and replace the standard notion of convergence of real numbers "${\displaystyle \to }$" with one of the types of convergence of random variables.

## References

1. Tao, Terence (2003). "Lecture notes for MATH 131AH" (PDF). Course Website for MATH 131AH, Department of Mathematics, UCLA.
2. Gaughan, Edward (2009). "1.1 Sequences and Convergence". Introduction to Analysis. AMS. ISBN 978-0-8218-4787-9.
3. Some authors (e.g., Rudin 1976) use braces instead and write ${\displaystyle \{a_{n}\}}$. However, this notation conflicts with the usual notation for a set, which, in contrast to a sequence, disregards the order and the multiplicity of its elements.
4. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN   978-0-495-01166-8.
5. Royden 1988 , Sect. 5.4, page 108; Nielsen 1997 , Definition 15.6 on page 251; Athreya & Lahiri 2006 , Definitions 4.4.1, 4.4.2 on pages 128,129. The interval I is assumed to be bounded and closed in the former two books but not the latter book.
6. The term unconditional convergence refers to series whose sum does not depend on the order of the terms (i.e., any rearrangement gives the same sum). Convergence is termed conditional otherwise. For series in ${\displaystyle \mathbb {R} ^{n}}$, it can be shown that absolute convergence and unconditional convergence are equivalent. Hence, the term "conditional convergence" is often used to mean non-absolute convergence. However, in the general setting of Banach spaces, the terms do not coincide, and there are unconditionally convergent series that do not converge absolutely.