The following is a list of Laplace transforms for many common functions of a single variable.[1] The Laplace transform is an integral transform that takes a function of a positive real variable t (often time) to a function of a complex variable s (complex angular frequency).
The Laplace transform of a function can be obtained using the formal definition of the Laplace transform. However, some properties of the Laplace transform can be used to obtain the Laplace transform of some functions more easily.
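For reference, the formal definition is sketched below, together with one worked derivation (the exponential-decay entry of the table); the notation assumes the unilateral transform used throughout this article.

```latex
% Unilateral Laplace transform (formal definition)
F(s) = \mathcal{L}\{f(t)\} = \int_{0}^{\infty} f(t)\, e^{-st}\, dt

% Worked example: exponential decay f(t) = e^{-\alpha t} u(t)
\mathcal{L}\{e^{-\alpha t}\}
  = \int_{0}^{\infty} e^{-\alpha t} e^{-st}\, dt
  = \left[-\frac{e^{-(s+\alpha)t}}{s+\alpha}\right]_{0}^{\infty}
  = \frac{1}{s+\alpha},
  \qquad \operatorname{Re}(s) > -\alpha .
```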
For functions f(t) and g(t) and for a scalar a, the Laplace transform satisfies

L{a·f(t) + g(t)} = a·L{f(t)} + L{g(t)}

and is, therefore, regarded as a linear operator. Throughout this article, F(s) denotes the Laplace transform of f(t); that is, F(s) = L{f(t)} is the Laplace transform of f(t), and f(t) is in turn the inverse Laplace transform of F(s).
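As a short worked example of how linearity is combined with individual table entries (here the exponential-decay and sine entries):

```latex
% Linearity applied to a sum of two standard entries
\mathcal{L}\{3e^{-2t} + \sin(5t)\}
  = 3\,\mathcal{L}\{e^{-2t}\} + \mathcal{L}\{\sin(5t)\}
  = \frac{3}{s+2} + \frac{5}{s^{2}+25},
  \qquad \operatorname{Re}(s) > 0 .
```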
The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t).
The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.
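For concreteness, the time-shift rule behind the "delayed" entries, and a standard pair illustrating how the region of convergence separates causal from anticausal behaviour (the anticausal line assumes the bilateral transform):

```latex
% Time-shift (delay) rule for tau > 0
\mathcal{L}\{f(t-\tau)\, u(t-\tau)\} = e^{-\tau s}\, F(s)

% Same algebraic transform, different regions of convergence
e^{-\alpha t}\, u(t)   \;\longleftrightarrow\; \frac{1}{s+\alpha}, \quad \operatorname{Re}(s) > -\alpha \quad \text{(causal)}
-e^{-\alpha t}\, u(-t) \;\longleftrightarrow\; \frac{1}{s+\alpha}, \quad \operatorname{Re}(s) < -\alpha \quad \text{(anticausal)}
```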
The following functions and variables are used in the table below:

- δ(t) represents the Dirac delta function (unit impulse)
- u(t) represents the Heaviside step function (unit step)
- Γ(z) represents the gamma function
- γ is the Euler–Mascheroni constant
- t is a real number, typically time
- s is the complex-frequency variable
- α, ω, and τ are real numbers (with τ > 0)
- n is an integer and q is a complex number
Function | Time domain, f(t) | Laplace s-domain, F(s) | Region of convergence | Reference |
---|---|---|---|---|
unit impulse | δ(t) | 1 | all s | inspection |
delayed impulse | δ(t − τ) | e^(−τs) | Re(s) > 0 | time shift of unit impulse [2] |
unit step | u(t) | 1/s | Re(s) > 0 | integrate unit impulse |
delayed unit step | u(t − τ) | e^(−τs)/s | Re(s) > 0 | time shift of unit step [3] |
ramp | t · u(t) | 1/s^2 | Re(s) > 0 | integrate unit impulse twice |
nth power (for integer n) | t^n · u(t) | n!/s^(n+1) | Re(s) > 0 (n > −1) | Integrate unit step n times |
qth power (for complex q) | t^q · u(t) | Γ(q + 1)/s^(q+1) | Re(s) > 0, Re(q) > −1 | [4] [5] |
nth root | t^(1/n) · u(t) | Γ(1/n + 1)/s^(1/n + 1) | Re(s) > 0 | Set q = 1/n above. |
nth power with frequency shift | t^n e^(−αt) · u(t) | n!/(s + α)^(n+1) | Re(s) > −α | Integrate unit step, apply frequency shift |
delayed nth power with frequency shift | (t − τ)^n e^(−α(t − τ)) · u(t − τ) | n! · e^(−τs)/(s + α)^(n+1) | Re(s) > −α | Integrate unit step, apply frequency shift, apply time shift |
exponential decay | e^(−αt) · u(t) | 1/(s + α) | Re(s) > −α | Frequency shift of unit step |
two-sided exponential decay (only for bilateral transform) | e^(−α|t|) | 2α/(α^2 − s^2) | −α < Re(s) < α | Frequency shift of unit step |
exponential approach | (1 − e^(−αt)) · u(t) | α/(s(s + α)) | Re(s) > 0 | Unit step minus exponential decay |
sine | sin(ωt) · u(t) | ω/(s^2 + ω^2) | Re(s) > 0 | [6] |
cosine | cos(ωt) · u(t) | s/(s^2 + ω^2) | Re(s) > 0 | [6] |
hyperbolic sine | sinh(αt) · u(t) | α/(s^2 − α^2) | Re(s) > |α| | [7] |
hyperbolic cosine | cosh(αt) · u(t) | s/(s^2 − α^2) | Re(s) > |α| | [7] |
exponentially decaying sine wave | e^(−αt) sin(ωt) · u(t) | ω/((s + α)^2 + ω^2) | Re(s) > −α | [6] |
exponentially decaying cosine wave | e^(−αt) cos(ωt) · u(t) | (s + α)/((s + α)^2 + ω^2) | Re(s) > −α | [6] |
natural logarithm | ln(t) · u(t) | −(ln(s) + γ)/s | Re(s) > 0 | [7] |
Bessel function of the first kind, of order n | J_n(ωt) · u(t) | (√(s^2 + ω^2) − s)^n / (ω^n √(s^2 + ω^2)) | Re(s) > 0 (n > −1) | [7] |
Error function | erf(t) · u(t) | (1/s) · e^(s^2/4) · erfc(s/2) | Re(s) > 0 | [7] |
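The entries above can be spot-checked symbolically. The following is a minimal sketch assuming SymPy is available; the unilateral transform implicitly treats each f(t) as multiplied by u(t).

```python
# Symbolic spot-check of a few table entries (sketch, assumes SymPy is installed).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
alpha, omega = sp.symbols('alpha omega', positive=True)

checks = {
    "exponential decay e^(-alpha*t)": sp.exp(-alpha * t),
    "sine sin(omega*t)":              sp.sin(omega * t),
    "cosine cos(omega*t)":            sp.cos(omega * t),
    "ramp t":                         t,
    "nth power t^2 (n = 2)":          t**2,
}

for name, f in checks.items():
    # noconds=True drops the convergence condition and returns only F(s)
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f"{name:35s} -> {sp.simplify(F)}")

# Expected: 1/(alpha + s), omega/(omega**2 + s**2), s/(omega**2 + s**2), s**(-2), 2/s**3
```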
In mathematics, convolution is a mathematical operation on two functions that produces a third function. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result. Graphically, it expresses how the 'shape' of one function is modified by the other.
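Written out, the definition reads as follows; the reflected-and-shifted factor is g(t − τ):

```latex
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau
           = \int_{-\infty}^{\infty} f(t-\tau)\, g(\tau)\, d\tau
```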
In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace, is an integral transform that converts a function of a real variable (usually t, in the time domain) to a function of a complex variable s (in the complex-valued frequency domain, also known as the s-domain).
In engineering, a transfer function of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. It is widely used in electronic engineering tools like circuit simulators and control systems. In simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output. Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.
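For a linear time-invariant system with zero initial conditions, the transfer function is conventionally the ratio of the Laplace transforms of output and input; a minimal statement of that convention (Y, X and H are generic names, not taken from the source):

```latex
H(s) = \frac{Y(s)}{X(s)} = \frac{\mathcal{L}\{y(t)\}}{\mathcal{L}\{x(t)\}}
```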
In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the Fourier transform is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
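Under one common convention, and assuming the integrals converge, the Fourier transform and its relation to the bilateral Laplace transform (written B{f} here, as defined later in this article) evaluated on the imaginary axis read:

```latex
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
\qquad\text{so that}\qquad
\hat{f}(\omega) = \mathcal{B}\{f\}(s)\big|_{s = i\omega}
```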
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions is the pointwise product of their Fourier transforms. More generally, convolution in one domain equals point-wise multiplication in the other domain. Other versions of the convolution theorem are applicable to various Fourier-related transforms.
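In the Laplace setting used in this article (causal f and g), the theorem takes the form:

```latex
\mathcal{L}\{(f * g)(t)\} = F(s)\, G(s),
\qquad (f * g)(t) = \int_{0}^{t} f(\tau)\, g(t-\tau)\, d\tau
```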
The Heaviside step function, or the unit step function, usually denoted by H or θ, is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for nonnegative arguments. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.
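Its piecewise definition (with the H(0) = 1 convention described above) and its Laplace transform, from the unit-step row of the table, are:

```latex
H(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}
\qquad
\mathcal{L}\{H(t)\} = \frac{1}{s}, \quad \operatorname{Re}(s) > 0
```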
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex valued frequency-domain representation.
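A minimal statement of the bilateral definition follows; the unilateral variant sums from n = 0:

```latex
X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}
```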
In signal processing, the power spectrum of a continuous time signal describes the distribution of power into the frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or into a spectrum of frequencies over a continuous range. The statistical average of a signal, analyzed in terms of its frequency content, is called its spectrum.
In fractional calculus, an area of mathematical analysis, the differintegral is a combined differentiation/integration operator. Applied to a function f, the q-differintegral of f, here denoted by D^q f, acts as the qth-order (fractional) derivative of f when q > 0, as the |q|th-order (fractional) integral when q < 0, and as the identity when q = 0.
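One convenient special case, stated as a hedged sketch: for the Riemann–Liouville fractional integral of order α > 0 (the q = −α case) with lower limit 0, applied to a causal function, the Laplace transform acts by multiplication with a power of s:

```latex
\mathcal{L}\{(I^{\alpha} f)(t)\} = s^{-\alpha}\, F(s), \qquad \alpha > 0
```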
In mathematics, the inverse Laplace transform of a function F(s) is the piecewise-continuous and exponentially-restricted real function f(t) which has the property L{f(t)} = F(s), where L denotes the Laplace transform.
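When it exists, the inverse can also be expressed by the Mellin (Bromwich) inversion integral, a contour integral along a vertical line Re(s) = γ lying inside the region of convergence:

```latex
f(t) = \mathcal{L}^{-1}\{F(s)\}(t)
     = \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} e^{st}\, F(s)\, ds
```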
The Laplace–Stieltjes transform, named for Pierre-Simon Laplace and Thomas Joannes Stieltjes, is an integral transform similar to the Laplace transform. For real-valued functions, it is the Laplace transform of a Stieltjes measure; however, it is often defined for functions with values in a Banach space. It is useful in a number of areas of mathematics, including functional analysis, and certain areas of theoretical and applied probability.
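For a function g of bounded variation on [0, ∞), one common form of the definition (written here as L*g) is the Stieltjes integral:

```latex
\{\mathcal{L}^{*}g\}(s) = \int_{0}^{\infty} e^{-st}\, dg(t)
```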
Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means. "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital", which uses a series of discrete quantities to represent a signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in the electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities.
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
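In its simplest form, for a sufficiently well-behaved function f on the real line and with the Fourier convention f̂(ξ) = ∫ f(x) e^(−2πiξx) dx, the formula reads:

```latex
\sum_{n=-\infty}^{\infty} f(n) \;=\; \sum_{k=-\infty}^{\infty} \hat{f}(k)
```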
In mathematics, the Riemann–Liouville integral associates with a real function f another function I^α f of the same kind for each value of the parameter α > 0. The integral generalizes the repeated antiderivative of f in the sense that for positive integer values of α, I^α f is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
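With lower terminal a, the integral is usually written:

```latex
(I^{\alpha} f)(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x-t)^{\alpha-1} f(t)\, dt,
\qquad \alpha > 0
```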
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt). It has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). It was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
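Explicitly, as a Cauchy principal-value integral:

```latex
H(u)(t) = \frac{1}{\pi}\, \mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{u(\tau)}{t-\tau}\, d\tau
```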
In mathematics and signal processing, an analytic signal is a complex-valued function that has no negative frequency components. The real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform.
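For a real-valued signal u(t), the analytic representation mentioned above is:

```latex
u_{a}(t) = u(t) + i\, H(u)(t)
```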
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
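By the convolution theorem, the same input-output relation becomes a product in the s-domain, which is what makes the table above useful for LTI analysis; here H(s), the transfer function, is the Laplace transform of the impulse response h(t):

```latex
y(t) = (x * h)(t)
\quad\Longleftrightarrow\quad
Y(s) = H(s)\, X(s)
```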
In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment-generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral B{f}(s) = ∫ from −∞ to ∞ of e^(−st) f(t) dt.
A resistor–inductor circuit, or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor, either in series driven by a voltage source or in parallel driven by a current source. It is one of the simplest analogue infinite impulse response electronic filters.
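As an illustrative sketch (not taken from the source): for the series RL circuit driven by a voltage source, with the output taken across the resistor, the s-domain voltage divider with impedances R and sL gives a first-order low-pass transfer function with time constant τ = L/R:

```latex
H(s) = \frac{V_{R}(s)}{V_{\mathrm{in}}(s)} = \frac{R}{R + sL} = \frac{1}{1 + s\tau},
\qquad \tau = \frac{L}{R}
```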
In mathematics, an integro-differential equation is an equation that involves both integrals and derivatives of a function.
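A minimal illustrative example (chosen here for illustration, not taken from the source) shows how the Laplace transform turns such an equation into algebra: the derivative becomes a factor of s and the running integral a factor of 1/s.

```latex
y'(t) + \int_{0}^{t} y(\tau)\, d\tau = u(t), \qquad y(0) = 0
\;\;\Longrightarrow\;\;
s\,Y(s) + \frac{Y(s)}{s} = \frac{1}{s}
\;\;\Longrightarrow\;\;
Y(s) = \frac{1}{s^{2}+1}
\;\;\Longrightarrow\;\;
y(t) = \sin(t)\, u(t)
```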