In mathematical analysis, the final value theorem (FVT) is one of several similar theorems used to relate frequency domain expressions to the time domain behavior as time approaches infinity. [1] [2] [3] [4] Mathematically, if $f(t)$ in continuous time has (unilateral) Laplace transform $F(s)$, then a final value theorem establishes conditions under which $\lim_{t\to\infty} f(t) = \lim_{s \to 0} sF(s)$. Likewise, if $f[k]$ in discrete time has (unilateral) Z-transform $F(z)$, then a final value theorem establishes conditions under which $\lim_{k\to\infty} f[k] = \lim_{z\to 1}(z-1)F(z)$.
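As a quick symbolic sanity check, the continuous-time relation can be sketched with sympy for an illustrative signal $f(t) = 1 - e^{-t}$, which settles at 1 (the signal is an assumption chosen for the demonstration, not part of the theorem):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Illustrative signal: f(t) = 1 - exp(-t) settles at 1 as t -> oo.
f = 1 - sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)  # unilateral Laplace transform

time_limit = sp.limit(f, t, sp.oo)      # lim_{t->oo} f(t)
fvt_limit = sp.limit(s * F, s, 0, '+')  # lim_{s->0+} s*F(s)
print(time_limit, fvt_limit)            # both equal 1
```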
An Abelian final value theorem makes assumptions about the time-domain behavior of $f(t)$ to calculate $\lim_{s\to 0} sF(s)$. Conversely, a Tauberian final value theorem makes assumptions about the frequency-domain behaviour of $F(s)$ to calculate $\lim_{t\to\infty} f(t)$ (see Abelian and Tauberian theorems for integral transforms).
In the following statements, the notation $s \to 0$ means that $s$ approaches 0, whereas $s \downarrow 0$ means that $s$ approaches 0 through the positive numbers.
Suppose that every pole of $F(s)$ is either in the open left half plane or at the origin, and that $F(s)$ has at most a single pole at the origin. Then $sF(s) \to L \in \mathbb{R}$ as $s \to 0$, and $\lim_{t\to\infty} f(t) = L$. [5]
Suppose that $f(t)$ and $f'(t)$ both have Laplace transforms that exist for all $s > 0$. If $\lim_{t\to\infty} f(t)$ exists and $\lim_{s\to 0} sF(s)$ exists then $\lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s)$. [3] : Theorem 2.36 [4] : 20 [6]
Remark
Both limits must exist for the theorem to hold. For example, if $f(t) = \sin(\omega t)$ then $\lim_{t\to\infty} f(t)$ does not exist, but $\lim_{s\to 0} sF(s) = \lim_{s\to 0} \frac{s\omega}{s^2+\omega^2} = 0$. [3] : Example 2.37 [4] : 20
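The failure mode in this remark can be reproduced with sympy: for $f(t) = \sin(\omega t)$ the frequency-domain limit exists even though the time-domain limit does not, so the FVT conclusion cannot be drawn (a sketch; the symbols are illustrative):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# f(t) = sin(omega*t) has no limit as t -> oo, but s*F(s) -> 0 anyway.
f = sp.sin(w * t)
F = sp.laplace_transform(f, t, s, noconds=True)  # omega/(s**2 + omega**2)
fvt_limit = sp.limit(s * F, s, 0, '+')
print(fvt_limit)  # 0, even though lim_{t->oo} sin(omega*t) does not exist
```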
Suppose that $f : (0,\infty) \to \mathbb{C}$ is bounded and differentiable, and that $t f'(t)$ is also bounded on $(0,\infty)$. If $sF(s) \to L$ as $s \to 0$ then $\lim_{t\to\infty} f(t) = L$. [7]
Suppose that every pole of $F(s)$ is either in the open left half-plane or at the origin. Then one of the following occurs:
1. $sF(s) \to L \in \mathbb{R}$ as $s \downarrow 0$, and $\lim_{t\to\infty} f(t) = L$.
2. $sF(s) \to +\infty$ as $s \downarrow 0$, and $f(t) \to +\infty$ as $t \to \infty$.
3. $sF(s) \to -\infty$ as $s \downarrow 0$, and $f(t) \to -\infty$ as $t \to \infty$.
In particular, if $s = 0$ is a multiple pole of $F(s)$ then case 2 or 3 applies. [5]
Suppose that $f(t)$ is Laplace transformable. Let $\lambda > -1$. If $\lim_{t\to\infty} \frac{f(t)}{t^\lambda}$ exists and $\lim_{s \downarrow 0} s^{\lambda+1} F(s)$ exists then
$$\lim_{t\to\infty} \frac{f(t)}{t^\lambda} = \frac{1}{\Gamma(\lambda+1)} \lim_{s \downarrow 0} s^{\lambda+1} F(s),$$
where $\Gamma(x)$ denotes the Gamma function. [5]
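A sketch of the extended theorem with sympy, using the illustrative choice $f(t) = t^2$ and $\lambda = 2$, so both sides should equal 1 (since $F(s) = 2/s^3$ and $\Gamma(3) = 2$):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

lam = 2          # illustrative lambda > -1
f = t**2         # illustrative Laplace-transformable signal
F = sp.laplace_transform(f, t, s, noconds=True)  # 2/s**3

lhs = sp.limit(f / t**lam, t, sp.oo)                             # 1
rhs = sp.limit(s**(lam + 1) * F, s, 0, '+') / sp.gamma(lam + 1)  # 2/Gamma(3) = 1
print(lhs, rhs)
```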
Final value theorems for obtaining $\lim_{t\to\infty} f(t)$ have applications in establishing the long-term stability of a system.
Suppose that $f : (0,\infty) \to \mathbb{C}$ is bounded and measurable and $\lim_{t\to\infty} f(t) = \alpha \in \mathbb{C}$. Then $F(s)$ exists for all $s > 0$ and $\lim_{s \downarrow 0} sF(s) = \alpha$. [7]
Elementary proof [7]
Suppose for convenience that $|f(t)| \le 1$ on $(0,\infty)$, and let $\alpha = \lim_{t\to\infty} f(t)$. Let $\epsilon > 0$, and choose $A$ so that $|f(t) - \alpha| < \epsilon$ for all $t > A$. Since $s \int_0^\infty e^{-st}\,dt = 1$, for every $s > 0$ we have
$$sF(s) - \alpha = s \int_0^\infty \big(f(t) - \alpha\big) e^{-st}\,dt;$$
hence
$$|sF(s) - \alpha| \le s \int_0^A |f(t) - \alpha|\, e^{-st}\,dt + s \int_A^\infty |f(t) - \alpha|\, e^{-st}\,dt.$$
Now for every $s > 0$ we have
$$s \int_A^\infty |f(t) - \alpha|\, e^{-st}\,dt \le \epsilon\, s \int_A^\infty e^{-st}\,dt \le \epsilon\, s \int_0^\infty e^{-st}\,dt = \epsilon.$$
On the other hand, since $A < \infty$ is fixed it is clear that $\lim_{s\to 0} s \int_0^A |f(t) - \alpha|\, e^{-st}\,dt = 0$ (the integrand is bounded by 2 there, and $s \int_0^A e^{-st}\,dt = 1 - e^{-sA} \to 0$), and so $|sF(s) - \alpha| < 2\epsilon$ if $s > 0$ is small enough.
Suppose that all of the following conditions are satisfied:
1. $f : (0,\infty) \to \mathbb{C}$ is continuously differentiable and both $f$ and $f'$ have a Laplace transform;
2. $f'$ is absolutely integrable, that is, $\int_0^\infty |f'(\tau)|\,d\tau$ is finite;
3. $\lim_{t\to\infty} f(t)$ exists and is finite.
Then $\lim_{s \to 0^+} sF(s) = \lim_{t\to\infty} f(t)$. [8]
Remark
The proof uses the dominated convergence theorem. [8]
Let $f : (0,\infty) \to \mathbb{C}$ be a continuous and bounded function such that the following limit exists:
$$\lim_{T\to\infty} \frac{1}{T} \int_0^T f(t)\,dt = \alpha.$$
Then $\lim_{s \downarrow 0} sF(s) = \alpha$. [9]
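For instance (an illustrative check with sympy), $f(t) = \sin^2 t$ is bounded, its running mean tends to $1/2$, and $sF(s)$ tends to the same value:

```python
import sympy as sp

t, s, T = sp.symbols('t s T', positive=True)

# Illustrative bounded signal whose running mean converges to 1/2.
f = sp.sin(t)**2
mean_limit = sp.limit(sp.integrate(f, (t, 0, T)) / T, T, sp.oo)

F = sp.laplace_transform(f, t, s, noconds=True)  # 2/(s*(s**2 + 4))
fvt_limit = sp.limit(s * F, s, 0, '+')
print(mean_limit, fvt_limit)  # both 1/2
```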
Suppose that $f(t)$ is continuous and absolutely integrable in $[0,\infty)$. Suppose further that $f$ is asymptotically equal to a finite sum of periodic functions $f_{\mathrm{as}}(t)$, that is
$$|f(t) - f_{\mathrm{as}}(t)| < \phi(t),$$
where $\phi(t)$ is absolutely integrable in $[0,\infty)$ and vanishes at infinity. Then
$$\lim_{s\to 0} sF(s) = \lim_{t\to\infty} \frac{1}{t} \int_0^t f(x)\,dx.$$
Let $f(t) : [0,\infty) \to \mathbb{R}$ and let $F(s)$ be the Laplace transform of $f(t)$. Suppose that $f(t)$ satisfies all of the following conditions:
1. $f(t)$ is infinitely differentiable at zero;
2. $f^{(k)}(t)$ has a Laplace transform for every non-negative integer $k$;
3. $f(t)$ diverges to infinity as $t \to \infty$.
Then $sF(s)$ diverges to infinity as $s \to 0^+$. [11]
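A minimal sympy illustration with $f(t) = t$, which diverges and whose transform satisfies $sF(s) = 1/s \to \infty$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = t                                            # diverges as t -> oo
F = sp.laplace_transform(f, t, s, noconds=True)  # 1/s**2
divergence = sp.limit(s * F, s, 0, '+')
print(divergence)  # oo
```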
Let $h : [0,\infty) \to \mathbb{R}$ be measurable and such that the (possibly improper) integral $f(x) := \int_0^x h(t)\,dt$ converges for $x \to \infty$. Then
$$\int_0^\infty h(t)\,dt := \lim_{x\to\infty} f(x) = \lim_{s \downarrow 0} \int_0^\infty e^{-st} h(t)\,dt.$$
This is a version of Abel's theorem.
To see this, notice that $f'(t) = h(t)$ and apply the final value theorem to $f$ after an integration by parts: for $s > 0$,
$$s \int_0^\infty e^{-st} f(t)\,dt = \Big[{-e^{-st}} f(t)\Big]_{t=0}^{\infty} + \int_0^\infty e^{-st} f'(t)\,dt = \int_0^\infty e^{-st} h(t)\,dt.$$
By the final value theorem, the left-hand side converges to $\lim_{x\to\infty} f(x)$ for $s \to 0$.
To establish the convergence of the improper integral in practice, Dirichlet's test for improper integrals is often helpful. An example is the Dirichlet integral.
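The Dirichlet integral $\int_0^\infty \frac{\sin t}{t}\,dt = \frac{\pi}{2}$ can be recovered this way with sympy (a sketch; the Laplace transform of $\sin(t)/t$ is $\arctan(1/s)$):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Abel's-theorem route to the Dirichlet integral:
# int_0^oo sin(t)/t dt = lim_{s->0+} L[sin(t)/t](s).
h = sp.sin(t) / t
H = sp.laplace_transform(h, t, s, noconds=True)  # atan(1/s)
abel_value = sp.limit(H, s, 0, '+')
print(abel_value)  # pi/2
```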
Final value theorems for obtaining $\lim_{s\to 0} sF(s)$ have applications in probability and statistics to calculate the moments of a random variable. Let $G(x)$ be the cumulative distribution function of a continuous random variable $X$ and let $g(s)$ be the Laplace–Stieltjes transform of $G(x)$. Then the $n$-th moment of $X$ can be calculated as
$$E[X^n] = (-1)^n \left.\frac{d^n g(s)}{ds^n}\right|_{s=0}.$$
The strategy is to write
$$\frac{d^n g(s)}{ds^n} = \mathcal{F}\big(F_1(s), \dots, F_m(s)\big),$$
where $\mathcal{F}$ is continuous and, for each $k$, $F_k(s) = s f_k^*(s)$ for a function $f_k^*(s)$. For each $k$, put $f_k(t)$ as the inverse Laplace transform of $f_k^*(s)$, obtain $\lim_{t\to\infty} f_k(t)$, and apply a final value theorem to deduce $\lim_{s\to 0} F_k(s) = \lim_{s\to 0} s f_k^*(s) = \lim_{t\to\infty} f_k(t)$. Then
$$\left.\frac{d^n g(s)}{ds^n}\right|_{s=0} = \mathcal{F}\Big(\lim_{s\to 0} F_1(s), \dots, \lim_{s\to 0} F_m(s)\Big),$$
and hence $E[X^n]$ is obtained.
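A worked instance with sympy (illustrative: $X$ exponentially distributed with rate $\lambda$, so its Laplace–Stieltjes transform is $g(s) = \lambda/(\lambda+s)$ and $E[X^n] = n!/\lambda^n$):

```python
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)
n = 2  # which moment to compute

# Laplace-Stieltjes transform of the Exponential(lam) distribution.
g = lam / (lam + s)
moment = (-1)**n * sp.diff(g, s, n).subs(s, 0)
print(sp.simplify(moment))  # 2/lambda**2, i.e. n!/lam**n for n = 2
```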
For example, for a system described by transfer function
$$H(s) = \frac{6}{s+2},$$
the impulse response converges to
$$\lim_{t\to\infty} h(t) = \lim_{s\to 0} \frac{6s}{s+2} = 0.$$
That is, the system returns to zero after being disturbed by a short impulse. However, the Laplace transform of the unit step response is
$$Y(s) = \frac{1}{s} \cdot \frac{6}{s+2},$$
and so the step response converges to
$$\lim_{t\to\infty} y(t) = \lim_{s\to 0} \frac{s}{s} \cdot \frac{6}{s+2} = \frac{6}{2} = 3.$$
So a zero-state system will follow an exponential rise to a final value of 3.
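Both limits can be reproduced with sympy (assuming the first-order transfer function $H(s) = 6/(s+2)$ used in this example):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

H = 6 / (s + 2)  # transfer function assumed from the example
Y = H / s        # Laplace transform of the unit step response

impulse_final = sp.limit(s * H, s, 0, '+')  # 0
step_final = sp.limit(s * Y, s, 0, '+')     # 3
print(impulse_final, step_final)
```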
For a system described by the transfer function
$$H(s) = \frac{9}{s^2+9},$$
the final value theorem appears to predict the final value of the impulse response to be 0 and the final value of the step response to be 1. However, neither time-domain limit exists, and so the final value theorem predictions are not valid. In fact, both the impulse response and step response oscillate, and (in this special case) the final value theorem describes the average values around which the responses oscillate.
There are two checks performed in control theory which confirm valid results for the final value theorem:
1. All non-zero roots of the denominator of $F(s)$ must have negative real parts.
2. $F(s)$ must not have more than one pole at the origin.
Rule 1 was not satisfied in this example, in that the roots of the denominator are $0 + j3$ and $0 - j3$.
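The invalid predictions can be reproduced with sympy for an oscillatory system whose denominator roots are $\pm j3$, i.e. $H(s) = 9/(s^2+9)$ (assumed consistent with this example):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

H = 9 / (s**2 + 9)  # poles at +/- 3j violate rule 1

impulse_pred = sp.limit(s * H, s, 0, '+')     # 0 (not a valid final value)
step_pred = sp.limit(s * (H / s), s, 0, '+')  # 1 (not a valid final value)

# The actual step response oscillates around 1: y(t) = 1 - cos(3t) for t > 0.
y = sp.inverse_laplace_transform(H / s, s, t)
print(impulse_pred, step_pred, y)
```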
If $\lim_{k\to\infty} f[k]$ exists and $\lim_{z\to 1}(z-1)F(z)$ exists then $\lim_{k\to\infty} f[k] = \lim_{z\to 1}(z-1)F(z)$. [4] : 101
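A discrete-time sketch with sympy: for the illustrative sequence $f[k] = 1 - (1/2)^k$, whose Z-transform is $F(z) = \frac{z}{z-1} - \frac{z}{z-1/2}$, the theorem recovers the limit 1:

```python
import sympy as sp

z = sp.symbols('z')

# Z-transform of f[k] = 1 - (1/2)**k (illustrative sequence with limit 1).
F = z / (z - 1) - z / (z - sp.Rational(1, 2))
final = sp.limit((z - 1) * F, z, 1)
print(final)  # 1
```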
Final value of the system
$$\dot{x}(t) = \mathbf{A}x(t) + \mathbf{B}u(t),$$
$$y(t) = \mathbf{C}x(t),$$
in response to a step input $u(t)$ with amplitude $R$ is:
$$\lim_{t\to\infty} y(t) = -\mathbf{C}\mathbf{A}^{-1}\mathbf{B}R.$$
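The step-response final value $-\mathbf{C}\mathbf{A}^{-1}\mathbf{B}R$ can be sketched with numpy on a hypothetical stable system (the matrices below are illustrative assumptions):

```python
import numpy as np

# Hypothetical stable system (eigenvalues of A are -1 and -2).
A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 1.0]])
R = 2.0  # step amplitude

# At steady state, 0 = A x_ss + B R, hence x_ss = -A^{-1} B R.
y_final = (C @ (-np.linalg.solve(A, B)) * R).item()
print(y_final)  # 1.0
```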
The sampled-data system of the above continuous-time LTI system at the aperiodic sampling times $t_k$, $k = 0, 1, \dots$, is the discrete-time system
$$x(t_{k+1}) = \mathbf{A}_d(h_k)\,x(t_k) + \mathbf{B}_d(h_k)\,u(t_k),$$
$$y(t_k) = \mathbf{C}x(t_k),$$
where $h_k = t_{k+1} - t_k$, and
$$\mathbf{A}_d(h_k) = e^{\mathbf{A}h_k}, \qquad \mathbf{B}_d(h_k) = \int_0^{h_k} e^{\mathbf{A}v}\,dv\,\mathbf{B}.$$
The final value of this system in response to a step input with amplitude $R$ is the same as the final value of its original continuous-time system. [12]
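This equality can be illustrated numerically with numpy on a hypothetical diagonal system, where $e^{\mathbf{A}h}$ is easy to form and the discrete steady state matches $-\mathbf{A}^{-1}\mathbf{B}R$ for an arbitrary sampling interval $h$:

```python
import numpy as np

# Hypothetical stable diagonal system, so exp(A*h) is elementwise exp.
a = np.array([-1.0, -2.0])
A = np.diag(a)
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
R = 2.0
h = 0.7  # an arbitrary sampling interval

Phi = np.diag(np.exp(a * h))                     # e^{A h}
Gamma = np.linalg.solve(A, Phi - np.eye(2)) @ B  # int_0^h e^{A v} dv  B

x_disc = np.linalg.solve(np.eye(2) - Phi, Gamma) * R  # discrete steady state
x_cont = -np.linalg.solve(A, B) * R                   # continuous steady state
print(np.allclose(C @ x_disc, C @ x_cont))  # True
```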