Two-sided Laplace transform

In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment-generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral

$$\mathcal{B}\{f\}(s) = F(s) = \int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$$

The integral is most commonly understood as an improper integral, which converges if and only if both integrals

$$\int_0^{\infty} e^{-st} f(t)\,dt, \qquad \int_{-\infty}^{0} e^{-st} f(t)\,dt$$

exist. There seems to be no generally accepted notation for the two-sided transform; the notation $\mathcal{B}$ used here recalls "bilateral". The two-sided transform used by some authors is

$$\mathcal{T}\{f\}(s) = s\,\mathcal{B}\{f\}(s) = s \int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$$
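As a simple illustration of the defining integral, take the two-sided decaying exponential $f(t) = e^{-\lvert t\rvert}$. Splitting the integral at $t = 0$ gives

$$\mathcal{B}\{e^{-\lvert t\rvert}\}(s) = \int_{-\infty}^{0} e^{(1-s)t}\,dt + \int_{0}^{\infty} e^{-(1+s)t}\,dt = \frac{1}{1-s} + \frac{1}{1+s} = \frac{2}{1-s^{2}}, \qquad -1 < \operatorname{Re} s < 1.$$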

In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function.

In science and engineering applications, the argument t often represents time (in seconds), and the function f(t) often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters, which work like a mathematical operator, but with a restriction: they must be causal, which means that the output at a given time t cannot depend on the input at any later time (a higher value of t). In population ecology, the argument t often represents spatial displacement in a dispersal kernel.

When working with functions of time, f(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components.
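Explicitly, if c is any real number lying in the strip of convergence of F(s), the synthesis described above is given, under suitable conditions on f (see the Mellin inversion theorem), by the line integral

$$f(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} e^{st} F(s)\,ds.$$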

Relationship to the Fourier transform

The Fourier transform can be defined in terms of the two-sided Laplace transform:

$$\mathcal{F}\{f\}(\omega) = \mathcal{B}\{f\}(s)\big|_{s = i\omega} = \int_{-\infty}^{\infty} e^{-i\omega t} f(t)\,dt.$$

Note that definitions of the Fourier transform differ, and in particular

$$\mathcal{F}\{f\}(\omega) = \frac{1}{\sqrt{2\pi}}\,\mathcal{B}\{f\}(s)\big|_{s = i\omega} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-i\omega t} f(t)\,dt$$

is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as

$$\mathcal{B}\{f\}(s) = \mathcal{F}\{f\}(-is),$$

where $\mathcal{F}$ denotes the Fourier transform in the first (non-unitary) convention above.

The Fourier transform is normally defined so that it exists for real values of its argument; the two-sided Laplace transform, by contrast, is defined in a strip of the complex plane which may not include the imaginary axis s = iω, on which the Fourier transform is supposed to converge.
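For the two-sided exponential of the earlier example, the strip of convergence $-1 < \operatorname{Re} s < 1$ does contain the imaginary axis, and setting $s = i\omega$ recovers the familiar Fourier transform (in the convention without the $1/\sqrt{2\pi}$ factor):

$$\mathcal{F}\{e^{-\lvert t\rvert}\}(\omega) = \frac{2}{1 - (i\omega)^{2}} = \frac{2}{1 + \omega^{2}}.$$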

This is why Laplace transforms retain their value in control theory and signal processing: the convergence of the Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critically stable. The Laplace transform, on the other hand, converges somewhere for every impulse response that grows at most exponentially, because it involves an extra factor that acts as an exponential regulator. Since there are no superexponentially growing linear feedback networks, the analysis and solution of linear, shift-invariant systems takes its most general form in the context of Laplace, rather than Fourier, transforms.
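For instance, a growing impulse response such as $h(t) = e^{2t} u(t)$ has no Fourier transform in the ordinary sense, since $\int_{-\infty}^{\infty} \lvert h(t)\rvert\,dt$ diverges, yet its two-sided Laplace transform converges wherever the factor $e^{-(\operatorname{Re} s)\,t}$ dominates the growth:

$$\mathcal{B}\{e^{2t} u(t)\}(s) = \int_{0}^{\infty} e^{(2 - s)t}\,dt = \frac{1}{s - 2}, \qquad \operatorname{Re} s > 2.$$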

At the same time, Laplace transform theory nowadays falls within the ambit of more general integral transforms, or even of general harmonic analysis. In that framework and nomenclature, Laplace transforms are simply another form of Fourier analysis, albeit a more general one.

Relationship to other integral transforms

If u is the Heaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the one-sided Laplace transform $\mathcal{L}$ may be defined in terms of the two-sided Laplace transform by

$$\mathcal{L}\{f\} = \mathcal{B}\{f\,u\}.$$

On the other hand, we also have

$$\{\mathcal{B} f\}(s) = \{\mathcal{L} f\}(s) + \{\mathcal{L}(f \circ m)\}(-s),$$

where $m(t) = -t$ is the function that multiplies its argument by minus one, so either version of the Laplace transform can be defined in terms of the other.
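As an illustration with the earlier example, the even function $f(t) = e^{-\lvert t\rvert}$ satisfies $f \circ m = f$, so the decomposition reads

$$\mathcal{B}\{e^{-\lvert t\rvert}\}(s) = \frac{1}{s + 1} + \frac{1}{(-s) + 1} = \frac{2}{1 - s^{2}},$$

consistent with the direct computation above.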

The Mellin transform may be defined in terms of the two-sided Laplace transform by

$$\mathcal{M}\{f\}(s) = \mathcal{B}\{f \circ \exp \circ\, m\}(s) = \mathcal{B}\{f(e^{-t})\}(s),$$

with m as above, and conversely we can get the two-sided transform from the Mellin transform by

$$\mathcal{B}\{f\}(s) = \mathcal{M}\{f \circ m \circ \ln\}(s) = \mathcal{M}\{f(-\ln x)\}(s).$$

The moment-generating function of a continuous probability density function f(x) can be expressed as $M_f(s) = \mathcal{B}\{f\}(-s)$.
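For instance, the standard exponential density $f(x) = e^{-x} u(x)$ has $\mathcal{B}\{f\}(s) = \frac{1}{s + 1}$ for $\operatorname{Re} s > -1$, so its moment-generating function is $M_f(s) = \mathcal{B}\{f\}(-s) = \frac{1}{1 - s}$ for $s < 1$, in agreement with the standard result for the exponential distribution.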

Properties

The following properties can be found in Bracewell (2000) and Oppenheim & Willsky (1997):

Properties of the bilateral Laplace transform
| Property | Time domain | s domain | Strip of convergence | Comment |
|---|---|---|---|---|
| Definition | $f(t)$ | $F(s) = \int_{-\infty}^{\infty} e^{-st} f(t)\,dt$ | $\alpha < \operatorname{Re} s < \beta$ | |
| Time scaling | $f(at)$ | $\frac{1}{\lvert a\rvert} F\!\left(\frac{s}{a}\right)$ | $\alpha < \operatorname{Re}\frac{s}{a} < \beta$ | $a$ real, $a \neq 0$ |
| Reversal | $f(-t)$ | $F(-s)$ | $-\beta < \operatorname{Re} s < -\alpha$ | |
| Frequency-domain derivative | $t\, f(t)$ | $-F'(s)$ | $\alpha < \operatorname{Re} s < \beta$ | |
| Frequency-domain general derivative | $t^{n} f(t)$ | $(-1)^{n} F^{(n)}(s)$ | $\alpha < \operatorname{Re} s < \beta$ | |
| Derivative | $f'(t)$ | $s F(s)$ | $\alpha < \operatorname{Re} s < \beta$ | |
| General derivative | $f^{(n)}(t)$ | $s^{n} F(s)$ | $\alpha < \operatorname{Re} s < \beta$ | |
| Frequency-domain integration | $\frac{1}{t} f(t)$ | $\int_{s}^{\infty} F(\sigma)\,d\sigma$ | | only valid if the integral exists |
| Time-domain integral | $\int_{-\infty}^{t} f(\tau)\,d\tau$ | $\frac{1}{s} F(s)$ | $\max(\alpha, 0) < \operatorname{Re} s < \beta$ | |
| Time-domain integral | $\int_{t}^{\infty} f(\tau)\,d\tau$ | $-\frac{1}{s} F(s)$ | $\alpha < \operatorname{Re} s < \min(\beta, 0)$ | |
| Frequency shifting | $e^{at} f(t)$ | $F(s - a)$ | $\alpha + \operatorname{Re} a < \operatorname{Re} s < \beta + \operatorname{Re} a$ | |
| Time shifting | $f(t - a)$ | $e^{-as} F(s)$ | $\alpha < \operatorname{Re} s < \beta$ | $a$ real |
| Modulation | $\cos(\omega_0 t)\, f(t)$ | $\tfrac{1}{2}\left[ F(s - i\omega_0) + F(s + i\omega_0) \right]$ | $\alpha < \operatorname{Re} s < \beta$ | $\omega_0$ real |
| Finite difference | $f\!\left(t + \tfrac{a}{2}\right) - f\!\left(t - \tfrac{a}{2}\right)$ | $2 \sinh\!\left(\tfrac{as}{2}\right) F(s)$ | $\alpha < \operatorname{Re} s < \beta$ | $a > 0$ |
| Multiplication | $f(t)\, g(t)$ | $\frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(\sigma)\, G(s - \sigma)\,d\sigma$ | | The integration is done along the vertical line Re(σ) = c inside the region of convergence. |
| Complex conjugation | $\overline{f(t)}$ | $\overline{F(\overline{s})}$ | $\alpha < \operatorname{Re} s < \beta$ | |
| Convolution | $(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\,d\tau$ | $F(s)\, G(s)$ | intersection of the strips of $F$ and $G$ | |
| Cross-correlation | $(f \star g)(t) = \int_{-\infty}^{\infty} \overline{f(\tau)}\, g(t + \tau)\,d\tau$ | $\overline{F(-\overline{s})}\, G(s)$ | intersection of the strips | |
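As an illustrative check of the frequency-shifting row with the running example, multiplying $f(t) = e^{-\lvert t\rvert}$ by $e^{at}$ for real $a$ shifts both the transform and its strip of convergence:

$$\mathcal{B}\{e^{at} e^{-\lvert t\rvert}\}(s) = \frac{2}{1 - (s - a)^{2}}, \qquad a - 1 < \operatorname{Re} s < a + 1.$$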

Most properties of the bilateral Laplace transform are very similar to properties of the unilateral Laplace transform, but there are some important differences:

Properties of the unilateral transform vs. properties of the bilateral transform
| | Unilateral time domain | Bilateral time domain | Unilateral s-domain | Bilateral s-domain |
|---|---|---|---|---|
| Differentiation | $f'(t)$ | $f'(t)$ | $s F(s) - f(0)$ | $s F(s)$ |
| Second-order differentiation | $f''(t)$ | $f''(t)$ | $s^{2} F(s) - s f(0) - f'(0)$ | $s^{2} F(s)$ |
| Convolution | $\int_{0}^{t} f(\tau)\, g(t - \tau)\,d\tau$ | $\int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\,d\tau$ | $F(s)\, G(s)$ | $F(s)\, G(s)$ |
| Cross-correlation | $\int_{0}^{\infty} \overline{f(\tau)}\, g(t + \tau)\,d\tau$ | $\int_{-\infty}^{\infty} \overline{f(\tau)}\, g(t + \tau)\,d\tau$ | | $\overline{F(-\overline{s})}\, G(s)$ |
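The differentiation rows illustrate the difference. For the causal signal $f(t) = e^{-t}$ treated with the unilateral transform, so that $F(s) = \frac{1}{s+1}$ and $f(0) = 1$,

$$\mathcal{L}\{f'\}(s) = s F(s) - f(0) = \frac{s}{s+1} - 1 = -\frac{1}{s+1},$$

which is indeed the transform of $f'(t) = -e^{-t}$. Treated bilaterally as $f(t) = e^{-t} u(t)$, the derivative in the distributional sense is $f'(t) = \delta(t) - e^{-t} u(t)$, whose two-sided transform is $1 - \frac{1}{s+1} = \frac{s}{s+1} = s F(s)$; the jump at $t = 0$ that the unilateral rule accounts for through the term $-f(0)$ is instead carried by the Dirac impulse.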

Parseval's theorem and Plancherel's theorem

Let $f_1(t)$ and $f_2(t)$ be functions with bilateral Laplace transforms $F_1(s)$ and $F_2(s)$ in the strips of convergence $\alpha_{1,2} < \operatorname{Re} s < \beta_{1,2}$. Let $c \in \mathbb{R}$ with $\max(-\beta_1, \alpha_2) < c < \min(-\alpha_1, \beta_2)$. Then Parseval's theorem holds: [1]

$$\int_{-\infty}^{\infty} \overline{f_1(t)}\, f_2(t)\,dt = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \overline{F_1(-\overline{s})}\, F_2(s)\,ds.$$

This theorem is proved by applying the inverse Laplace transform to the convolution theorem in the form of the cross-correlation.

Let $f(t)$ be a function with bilateral Laplace transform $F(s)$ in the strip of convergence $\alpha < \operatorname{Re} s < \beta$. Let $c \in \mathbb{R}$ with $\alpha < c < \beta$. Then the Plancherel theorem holds: [2]

$$\int_{-\infty}^{\infty} e^{-2ct}\, \lvert f(t)\rvert^{2}\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} \lvert F(c + ir)\rvert^{2}\,dr.$$
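As a quick check with $f(t) = e^{-t} u(t)$, for which $F(s) = \frac{1}{s+1}$ and $\alpha = -1$: for any $c > -1$ the left-hand side is $\int_{0}^{\infty} e^{-2(c+1)t}\,dt = \frac{1}{2(c+1)}$, while the right-hand side is

$$\frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{dr}{(c+1)^{2} + r^{2}} = \frac{1}{2\pi} \cdot \frac{\pi}{c+1} = \frac{1}{2(c+1)},$$

so the two sides agree.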

Uniqueness

For any two functions $f$ and $g$ for which the two-sided Laplace transforms $F = \mathcal{B}\{f\}$ and $G = \mathcal{B}\{g\}$ exist, if $F = G$, i.e. $F(s) = G(s)$ for every value of $s$ in a common strip of convergence, then $f(t) = g(t)$ almost everywhere.

Region of convergence

The requirements for convergence of the bilateral transform are more restrictive than for unilateral transforms; the region of convergence is normally smaller.

If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

$$\lim_{R \to \infty} \int_{0}^{R} f(t)\, e^{-st}\,dt$$

exists. The Laplace transform converges absolutely if the integral

$$\int_{0}^{\infty} \left| f(t)\, e^{-st} \right| dt$$

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.

The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). [3] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b. [4] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.
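For example, $f(t) = e^{-a\lvert t\rvert}$ with $a > 0$ decays in both directions, and its two-sided transform converges absolutely exactly in the strip $-a < \operatorname{Re}(s) < a$, where it equals $\frac{2a}{a^{2} - s^{2}}$.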

Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral

$$F(s) = (s - s_0) \int_{0}^{\infty} e^{-(s - s_0)t}\, \beta(t)\,dt, \qquad \beta(u) = \int_{0}^{u} e^{-s_0 t} f(t)\,dt.$$

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.

There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.

In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response in the region Re(s) ≥ 0.

Causality

Bilateral transforms do not respect causality. They make sense when applied to generic functions, but when working with functions of time (signals), unilateral transforms are preferred.

Table of selected bilateral Laplace transforms

The following list of interesting examples for the bilateral Laplace transform can be deduced from the corresponding Fourier or unilateral Laplace transforms (see also Bracewell (2000)):

Selected bilateral Laplace transforms
| Function | Time domain | Laplace s-domain | Region of convergence | Comment |
|---|---|---|---|---|
| Rectangular impulse | $f(t) = 1$ for $\lvert t\rvert < \tfrac{1}{2}$, $0$ otherwise | $\dfrac{2}{s} \sinh\dfrac{s}{2}$ | $-\infty < \operatorname{Re} s < \infty$ | |
| Triangular impulse | $f(t) = \max(1 - \lvert t\rvert,\, 0)$ | $\left(\dfrac{\sinh(s/2)}{s/2}\right)^{2}$ | $-\infty < \operatorname{Re} s < \infty$ | |
| Gaussian impulse | $e^{-a t^{2}},\ a > 0$ | $\sqrt{\dfrac{\pi}{a}}\; e^{\frac{s^{2}}{4a}}$ | $-\infty < \operatorname{Re} s < \infty$ | |
| Exponential decay | $e^{-at}\, u(t),\ a > 0$ | $\dfrac{1}{s + a}$ | $-a < \operatorname{Re} s$ | $u(t)$ is the Heaviside step function |
| Exponential growth | $-e^{-at}\, u(-t),\ a > 0$ | $\dfrac{1}{s + a}$ | $\operatorname{Re} s < -a$ | same s-domain expression as the previous entry, but with a different region of convergence |
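For instance, the Gaussian entry follows by completing the square in the exponent:

$$\int_{-\infty}^{\infty} e^{-a t^{2} - st}\,dt = e^{\frac{s^{2}}{4a}} \int_{-\infty}^{\infty} e^{-a\left(t + \frac{s}{2a}\right)^{2}}dt = \sqrt{\frac{\pi}{a}}\; e^{\frac{s^{2}}{4a}}.$$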

See also

- Laplace transform
- Inverse Laplace transform
- Fourier transform
- Mellin transform
- Mellin inversion theorem
- Z-transform
- Moment-generating function
- Laplace–Stieltjes transform
- Linear time-invariant system
- Convolution
Related Research Articles

<span class="mw-page-title-main">Convolution</span> Integral expressing the amount of overlap of one function as it is shifted over another

In mathematics, convolution is a mathematical operation on two functions that produces a third function that expresses how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The choice of which function is reflected and shifted before the integral does not change the integral result. The integral is evaluated for all values of shift, producing the convolution function.

In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace, is an integral transform that converts a function of a real variable to a function of a complex variable . The transform has many applications in science and engineering, mostly as a tool for solving linear differential equations. In particular, it transforms ordinary differential equations into algebraic equations and convolution into multiplication. For suitable functions f, the Laplace transform is defined by the integral

Distributions, also known as Schwartz distributions or generalized functions, are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.

<span class="mw-page-title-main">Fourier transform</span> Mathematical transform that expresses a function of time as a function of frequency

In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that converts a function into a form that describes the frequencies present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made the Fourier transform is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.

<span class="mw-page-title-main">Fourier series</span> Decomposition of periodic functions into sums of simpler sinusoidal forms

A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series, but not all trigonometric series are Fourier series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Common forms of the Fourier series below.

In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation.

In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions.

In fractional calculus, an area of mathematical analysis, the differintegral is a combined differentiation/integration operator. Applied to a function ƒ, the q-differintegral of f, here denoted by

In mathematics, the inverse Laplace transform of a function F(s) is the piecewise-continuous and exponentially-restricted real function f(t) which has the property:

The Laplace–Stieltjes transform, named for Pierre-Simon Laplace and Thomas Joannes Stieltjes, is an integral transform similar to the Laplace transform. For real-valued functions, it is the Laplace transform of a Stieltjes measure, however it is often defined for functions with values in a Banach space. It is useful in a number of areas of mathematics, including functional analysis, and certain areas of theoretical and applied probability.

In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions.

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

In mathematics, a Paley–Wiener theorem is any theorem that relates decay properties of a function or distribution at infinity with analyticity of its Fourier transform. The theorem is named for Raymond Paley (1907–1933) and Norbert Wiener (1894–1964). The original theorems did not use the language of distributions, and instead applied to square-integrable functions. The first such theorem using distributions was due to Laurent Schwartz. These theorems heavily rely on the triangle inequality.

<span class="mw-page-title-main">Dirichlet integral</span> Integral of sin(x)/x from 0 to infinity.

In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet, one of which is the improper integral of the sinc function over the positive real line:

<span class="mw-page-title-main">Linear time-invariant system</span> Mathematical model which is both linear and time-invariant

In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (xh)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.

In mathematics, the Mellin inversion formula tells us conditions under which the inverse Mellin transform, or equivalently the inverse two-sided Laplace transform, are defined and recover the transformed function.

In mathematics, Borel summation is a summation method for divergent series, introduced by Émile Borel (1899). It is particularly useful for summing divergent asymptotic series, and in some sense gives the best possible sum for such series. There are several variations of this method that are also called Borel summation, and a generalization of it called Mittag-Leffler summation.

<span class="mw-page-title-main">Weierstrass transform</span> "Smoothing" integral transform

In mathematics, the Weierstrass transform of a function f : RR, named after Karl Weierstrass, is a "smoothed" version of f(x) obtained by averaging the values of f, weighted with a Gaussian centered at x.

In mathematics, Katugampola fractional operators are integral operators that generalize the Riemann–Liouville and the Hadamard fractional operators into a unique form. The Katugampola fractional integral generalizes both the Riemann–Liouville fractional integral and the Hadamard fractional integral into a single form and It is also closely related to the Erdelyi–Kober operator that generalizes the Riemann–Liouville fractional integral. Katugampola fractional derivative has been defined using the Katugampola fractional integral and as with any other fractional differential operator, it also extends the possibility of taking real number powers or complex number powers of the integral and differential operators.

In analytic number theory, a Dirichlet series, or Dirichlet generating function (DGF), of a sequence is a common way of understanding and summing arithmetic functions in a meaningful way. A little known, or at least often forgotten about, way of expressing formulas for arithmetic functions and their summatory functions is to perform an integral transform that inverts the operation of forming the DGF of a sequence. This inversion is analogous to performing an inverse Z-transform to the generating function of a sequence to express formulas for the series coefficients of a given ordinary generating function.

References

  1. LePage 1980, Chapter 11-3, p. 340
  2. Widder 1941, Chapter VI, §8, p. 246
  3. Widder 1941, Chapter II, §1
  4. Widder 1941, Chapter VI, §2