In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation.
It can be considered as a discrete-time equivalent of the Laplace transform. This similarity is explored in the theory of time-scale calculus.
The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz and others as a way to treat sampled-data control systems used with radar. It gives a tractable way to solve linear, constant-coefficient difference equations. It was later dubbed "the z-transform" by Ragazzini and Zadeh in the sampled-data control group at Columbia University in 1952.
The modified or advanced Z-transform was later developed and popularized by E. I. Jury.
The idea contained within the Z-transform is also known in mathematical literature as the method of generating functions, which can be traced back as early as 1730, when it was introduced by de Moivre in conjunction with probability theory. From a mathematical view the Z-transform can also be viewed as a Laurent series where one views the sequence of numbers under consideration as the (Laurent) expansion of an analytic function.
The Z-transform can be defined as either a one-sided or two-sided transform.
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the formal power series X(z) defined as
X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n] z^{-n}
where n is an integer and z is, in general, a complex number:
z = A e^{j\phi} = A(\cos\phi + j\sin\phi)
where A is the magnitude of z, j is the imaginary unit, and φ is the complex argument (also referred to as angle or phase) in radians.
Alternatively, in cases where x[n] is defined only for n ≥ 0, the single-sided or unilateral Z-transform is defined as
X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=0}^{\infty} x[n] z^{-n}
In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
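As a minimal numerical sketch of this definition (the helper name `unilateral_z` is our own, not a standard API), the unilateral Z-transform of a finite-length sequence can be evaluated directly from the defining sum:

```python
# A minimal sketch: evaluating the unilateral Z-transform
# X(z) = sum_{n>=0} x[n] z^{-n}, truncated to the available samples.
def unilateral_z(x, z):
    """Evaluate the (truncated) unilateral Z-transform of sequence x at point z."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

# For the finite sequence x = [1, 2, 3], X(z) = 1 + 2/z + 3/z^2,
# so X(2) = 1 + 1 + 0.75 = 2.75.
print(unilateral_z([1, 2, 3], 2.0))  # 2.75
```

For a finite sequence the sum is exact; for an infinite one it is a truncation whose accuracy depends on z lying in the region of convergence.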
An important example of the unilateral Z-transform is the probability-generating function, where the component x[n] is the probability that a discrete random variable takes the value n, and the function X(z) is usually written as X(s), in terms of s = z^{-1}. The properties of Z-transforms (below) have useful interpretations in the context of probability theory.
The inverse Z-transform is
x[n] = \mathcal{Z}^{-1}\{X(z)\} = \frac{1}{2\pi j} \oint_C X(z) z^{n-1} \, dz
where C is a counterclockwise closed path encircling the origin and lying entirely in the region of convergence (ROC). In the case where the ROC is causal (see Example 2), this means the path C must encircle all of the poles of X(z).
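The contour integral can be checked numerically. Parameterizing z = r e^{jθ} turns the integral into the mean of X(z) z^n over the circle of radius r. The sketch below (our own helper, not a library routine) uses X(z) = 1/(1 − 0.5 z^{-1}) with a contour of radius 1, inside the causal ROC |z| > 0.5, and recovers x[n] = (0.5)^n:

```python
import numpy as np

# Numerical sketch of the inverse Z-transform contour integral:
# (1/(2*pi*j)) * integral of X(z) z^(n-1) dz over a circle of radius r
# reduces, with z = r e^{j theta}, to the mean of X(z) z^n on the circle.
def inverse_z(X, n, r=1.0, N=1024):
    theta = 2 * np.pi * np.arange(N) / N
    z = r * np.exp(1j * theta)
    return np.mean(X(z) * z ** n).real

X = lambda z: 1.0 / (1.0 - 0.5 / z)      # causal transform, ROC |z| > 0.5
vals = [inverse_z(X, n) for n in range(4)]
print([round(float(v), 6) for v in vals])  # [1.0, 0.5, 0.25, 0.125]
```

The trapezoid rule on a circle converges geometrically for a function analytic in an annulus around the contour, so a modest number of samples already gives machine-precision results here.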
A special case of this contour integral occurs when C is the unit circle. This contour can be used when the ROC includes the unit circle, which is always guaranteed when x[n] is stable, that is, when all the poles of X(z) are inside the unit circle. With this contour, the inverse Z-transform simplifies to the inverse discrete-time Fourier transform, or Fourier series, of the periodic values of the Z-transform around the unit circle:
x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n} \, d\omega
The Z-transform with a finite range of n and a finite number of uniformly spaced z values can be computed efficiently via Bluestein's FFT algorithm. The discrete-time Fourier transform (DTFT)—not to be confused with the discrete Fourier transform (DFT)—is a special case of such a Z-transform obtained by restricting z to lie on the unit circle.
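This relationship can be verified directly for a finite sequence: evaluating its Z-transform at N uniformly spaced points on the unit circle reproduces the N-point DFT.

```python
import numpy as np

# Sketch: restricting the Z-transform of a finite sequence to N uniformly
# spaced points on the unit circle reproduces the DFT of the sequence.
x = np.array([1.0, 2.0, 0.0, -1.0])
N = len(x)
z = np.exp(2j * np.pi * np.arange(N) / N)            # N-th roots of unity
X_z = np.array([np.sum(x * zk ** (-np.arange(N))) for zk in z])
X_dft = np.fft.fft(x)
print(np.allclose(X_z, X_dft))  # True
```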
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges.
Example 1 (no ROC): Let x[n] = (0.5)^n. Expanding x[n] on the interval (−∞, ∞) it becomes
x[n] = \{\dots, (0.5)^{-3}, (0.5)^{-2}, (0.5)^{-1}, 1, (0.5)^{1}, (0.5)^{2}, (0.5)^{3}, \dots\}
Looking at the sum
\sum_{n=-\infty}^{\infty} x[n] z^{-n} = \sum_{n=-\infty}^{\infty} (0.5 z^{-1})^{n},
the positive-n tail converges only when |0.5 z^{-1}| < 1, while the negative-n tail converges only when |0.5 z^{-1}| > 1. Therefore, there are no values of z that satisfy both conditions, and the sum converges nowhere: the ROC is empty.
Example 2 (causal ROC): Let x[n] = (0.5)^n u[n] (where u is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes
x[n] = \{\dots, 0, 0, 0, 1, (0.5)^{1}, (0.5)^{2}, (0.5)^{3}, \dots\}
Looking at the sum
X(z) = \sum_{n=0}^{\infty} (0.5)^{n} z^{-n} = \sum_{n=0}^{\infty} (0.5 z^{-1})^{n} = \frac{1}{1 - 0.5 z^{-1}}.
The last equality arises from the infinite geometric series, and the equality only holds if |0.5z^{−1}| < 1, which can be rewritten in terms of z as |z| > 0.5. Thus, the ROC is |z| > 0.5. In this case the ROC is the complex plane with a disc of radius 0.5 at the origin "punched out".
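The convergence condition can be checked numerically: partial sums of the geometric series settle to the closed form when |z| > 0.5 and blow up when |z| < 0.5.

```python
# Numeric check of Example 2: the partial sums of sum_n (0.5 z^{-1})^n
# approach 1/(1 - 0.5 z^{-1}) when |z| > 0.5 and diverge otherwise.
def partial_sum(z, N):
    return sum((0.5 / z) ** n for n in range(N))

z_inside = 2.0                      # |z| > 0.5: inside the ROC
closed_form = 1.0 / (1.0 - 0.5 / z_inside)
print(abs(partial_sum(z_inside, 50) - closed_form) < 1e-12)   # True

z_outside = 0.25                    # |z| < 0.5: outside the ROC
print(partial_sum(z_outside, 50) > 1e12)                      # True: diverging
```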
Example 3 (anticausal ROC): Let x[n] = −(0.5)^n u[−n−1] (where u is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes
x[n] = \{\dots, -(0.5)^{-3}, -(0.5)^{-2}, -(0.5)^{-1}, 0, 0, 0, \dots\}
Looking at the sum
X(z) = -\sum_{n=-\infty}^{-1} (0.5)^{n} z^{-n} = -\sum_{m=1}^{\infty} (0.5^{-1} z)^{m} = \frac{1}{1 - 0.5 z^{-1}}.
Using the infinite geometric series again, the equality only holds if |0.5^{−1}z| < 1, which can be rewritten in terms of z as |z| < 0.5. Thus, the ROC is |z| < 0.5. In this case the ROC is a disc centered at the origin and of radius 0.5.
What differentiates this example from the previous one is only the ROC. This is intentional: it demonstrates that the transform result alone is insufficient to determine x[n].
Examples 2 and 3 clearly show that the Z-transform X(z) of x[n] is unique only when the ROC is also specified. Creating the pole–zero plot for the causal and anticausal cases shows that the ROC for either case does not include the pole that is at 0.5. This extends to cases with multiple poles: the ROC will never contain poles.
In example 2, the causal system yields an ROC that includes |z| = ∞ while the anticausal system in example 3 yields an ROC that includes |z| = 0.
In systems with multiple poles it is possible to have an ROC that includes neither |z| = ∞ nor |z| = 0. The ROC then forms a circular band. For example,
X(z) = \frac{1}{1 - 0.5 z^{-1}} + \frac{1}{1 - 0.75 z^{-1}}
has poles at 0.5 and 0.75. The ROC will be 0.5 < |z| < 0.75, which includes neither the origin nor infinity. Such a system is called a mixed-causality system as it contains a causal term, (0.5)^n u[n], and an anticausal term, −(0.75)^n u[−n−1].
The stability of a system can also be determined by knowing the ROC alone. If the ROC contains the unit circle (i.e., |z| = 1) then the system is stable. In the above systems the causal system (Example 2) is stable because |z| > 0.5 contains the unit circle.
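For a causal system this stability test reduces to checking pole magnitudes, since the ROC |z| > max|pole| contains the unit circle exactly when every pole lies strictly inside it. A tiny sketch (the helper name is ours):

```python
# Sketch: a causal system is stable iff all poles of H(z) lie strictly
# inside the unit circle, so that the ROC |z| > max|pole| contains |z| = 1.
def causal_stable(poles):
    return max(abs(p) for p in poles) < 1.0

print(causal_stable([0.5]))          # True: Example 2
print(causal_stable([0.5, 0.75]))    # True
print(causal_stable([1.2, 0.3]))     # False: one pole outside the unit circle
```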
Let us assume we are provided a Z-transform of a system without a ROC (i.e., an ambiguous x[n]). We can determine a unique x[n] provided we desire the following:
For stability the ROC must contain the unit circle. If we need a causal system then the ROC must contain infinity and the system function will be a right-sided sequence. If we need an anticausal system then the ROC must contain the origin and the system function will be a left-sided sequence. If we need both stability and causality, all the poles of the system function must be inside the unit circle.
The unique x[n] can then be found.
Some important properties of the Z-transform (writing x[n] ↔ X(z) with region of convergence R, and similarly x_1[n] ↔ X_1(z) with R_1, x_2[n] ↔ X_2(z) with R_2):
- Linearity: a_1 x_1[n] + a_2 x_2[n] ↔ a_1 X_1(z) + a_2 X_2(z); the ROC contains R_1 ∩ R_2.
- Time shifting (bilateral Z-transform): x[n − k] ↔ z^{-k} X(z); the ROC is R, except z = 0 if k > 0 and z = ∞ if k < 0.
- Decimation: x[Kn] ↔ \frac{1}{K} \sum_{p=0}^{K-1} X\left(z^{1/K} e^{-j 2\pi p / K}\right).
- First difference backward: x[n] − x[n − 1] ↔ (1 − z^{-1}) X(z), with x[n] = 0 for n < 0; the ROC contains the intersection of R and z ≠ 0.
- First difference forward: x[n + 1] − x[n] ↔ (z − 1) X(z) − z x[0].
- Scaling in the z-domain: a^n x[n] ↔ X(z/a); the ROC is |a| R.
- Differentiation in the z-domain: n x[n] ↔ −z \frac{dX(z)}{dz}; the ROC is R, if X(z) is rational.
- Convolution: x_1[n] * x_2[n] ↔ X_1(z) X_2(z); the ROC contains R_1 ∩ R_2.
- Cross-correlation: r_{x_1 x_2}[n] = x_1^*[-n] * x_2[n] ↔ X_1^*(1/z^*) X_2(z); the ROC contains the intersection of the ROCs of X_1^*(1/z^*) and X_2(z).
Initial value theorem: If x[n] is causal, then
x[0] = \lim_{z \to \infty} X(z).
Final value theorem: If the poles of (z − 1)X(z) are inside the unit circle, then
x[\infty] = \lim_{z \to 1} (z - 1) X(z).
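Both theorems can be checked numerically on simple transforms, evaluating the limits at a large and a near-one value of z respectively:

```python
# Initial value theorem: for causal x[n] = 0.5^n u[n],
# X(z) = 1/(1 - 0.5 z^{-1}) and x[0] = lim_{z -> inf} X(z) = 1.
X = lambda z: 1.0 / (1.0 - 0.5 / z)
print(abs(X(1e9) - 1.0) < 1e-6)              # True

# Final value theorem: for x[n] = u[n], X(z) = z/(z - 1).
# (z - 1) X(z) = z has no poles on or outside the unit circle,
# so x[inf] = lim_{z -> 1} (z - 1) X(z) = 1.
Xu = lambda z: z / (z - 1.0)
z = 1.0 + 1e-9
print(abs((z - 1.0) * Xu(z) - 1.0) < 1e-6)   # True
```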
Here u[n] is the unit (or Heaviside) step function and δ[n] is the discrete-time unit impulse function (cf. the Dirac delta function, its continuous-time counterpart). The two functions are chosen together so that the unit step function is the accumulation (running total) of the unit impulse function:
u[n] = \sum_{k=-\infty}^{n} \delta[k]
For values of z in the region |z| = 1, known as the unit circle, we can express the transform as a function of a single real variable, ω, by defining z = e^{jω}. The bilateral transform then reduces to a Fourier series:
\sum_{n=-\infty}^{\infty} x[n] z^{-n} = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}   (Eq.4)
which is also known as the discrete-time Fourier transform (DTFT) of the sequence. This 2π-periodic function is the periodic summation of a Fourier transform, which makes it a widely used analysis tool. To understand this, let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the DTFT of the x[n] sequence can be written as follows:
\sum_{n=-\infty}^{\infty} x(nT) e^{-j 2\pi f n T} = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\left(f - \frac{k}{T}\right)   (Eq.5)
When T has units of seconds, f has units of hertz. Comparison of the two series reveals that ω = 2πfT is a normalized frequency with units of radians per sample. The value ω = 2π corresponds to f = 1/T Hz. And now, with the substitution f = ω/(2πT), Eq.4 can be expressed in terms of the Fourier transform, X(•):
\sum_{n=-\infty}^{\infty} x(nT) e^{-j\omega n} = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\left(\frac{\omega - 2\pi k}{2\pi T}\right)   (Eq.6)
As parameter T changes, the individual terms of Eq.5 move farther apart or closer together along the f-axis. In Eq.6 however, the centers remain 2π apart, while their widths expand or contract. When sequence x(nT) represents the impulse response of an LTI system, these functions are also known as its frequency response. When the sequence is periodic, its DTFT is divergent at one or more harmonic frequencies, and zero at all other frequencies. This is often represented by the use of amplitude-variant Dirac delta functions at the harmonic frequencies. Due to periodicity, there are only a finite number of unique amplitudes, which are readily computed by the much simpler discrete Fourier transform (DFT). (See DTFT § Periodic data.)
The bilinear transform can be used to convert continuous-time filters (represented in the Laplace domain) into discrete-time filters (represented in the Z-domain), and vice versa. The following substitution is used:
s = \frac{2}{T} \frac{z - 1}{z + 1}
to convert some function H(s) in the Laplace domain to a function H(z) in the Z-domain (Tustin transformation), or
z = \frac{1 + sT/2}{1 - sT/2}
from the Z-domain to the Laplace domain. Through the bilinear transformation, the complex s-plane (of the Laplace transform) is mapped to the complex z-plane (of the Z-transform). While this mapping is (necessarily) nonlinear, it is useful in that it maps the entire jω axis of the s-plane onto the unit circle in the z-plane. As such, the Fourier transform (which is the Laplace transform evaluated on the jω axis) becomes the discrete-time Fourier transform. This assumes that the Fourier transform exists; i.e., that the jω axis is in the region of convergence of the Laplace transform.
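The unit-circle property of the mapping is easy to verify numerically, applying the inverse substitution z = (1 + sT/2)/(1 − sT/2) to points on the imaginary axis:

```python
import numpy as np

# Sketch of the bilinear (Tustin) mapping z = (1 + sT/2) / (1 - sT/2):
# every point s = j*omega on the imaginary axis lands on the unit circle,
# since |1 + j*x| = |1 - j*x| for real x.
T = 0.01
omega = np.linspace(-1000.0, 1000.0, 101)
s = 1j * omega
z = (1 + s * T / 2) / (1 - s * T / 2)
print(np.allclose(np.abs(z), 1.0))   # True: the j-omega axis maps to |z| = 1
```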
Given a one-sided Z-transform, X(z), of a time-sampled function, the corresponding starred transform produces a Laplace transform X^*(s) and restores the dependence on the sampling parameter, T:
X^*(s) = X(z) \big|_{z = e^{sT}}
The inverse Laplace transform is a mathematical abstraction known as an impulse-sampled function.
The linear constant-coefficient difference (LCCD) equation is a representation for a linear system based on the autoregressive moving-average equation:
\sum_{p=0}^{N} y[n-p] \alpha_p = \sum_{q=0}^{M} x[n-q] \beta_q
Both sides of the above equation can be divided by α_0, if it is not zero. By normalizing α_0 = 1, the LCCD equation can be written
y[n] = \sum_{q=0}^{M} x[n-q] \beta_q - \sum_{p=1}^{N} y[n-p] \alpha_p
This form of the LCCD equation makes it explicit that the "current" output y[n] is a function of past outputs y[n−p], the current input x[n], and previous inputs x[n−q].
Taking the Z-transform of the above equation (using linearity and time-shifting laws) yields
Y(z) \sum_{p=0}^{N} z^{-p} \alpha_p = X(z) \sum_{q=0}^{M} z^{-q} \beta_q
and rearranging results in the transfer function
H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{q=0}^{M} z^{-q} \beta_q}{\sum_{p=0}^{N} z^{-p} \alpha_p} = \frac{\beta_0 + z^{-1}\beta_1 + \cdots + z^{-M}\beta_M}{\alpha_0 + z^{-1}\alpha_1 + \cdots + z^{-N}\alpha_N}
From the fundamental theorem of algebra the numerator has M roots (corresponding to zeros of H) and the denominator has N roots (corresponding to poles). Rewriting the transfer function in terms of zeros and poles gives
H(z) = \frac{(1 - q_1 z^{-1})(1 - q_2 z^{-1}) \cdots (1 - q_M z^{-1})}{(1 - p_1 z^{-1})(1 - p_2 z^{-1}) \cdots (1 - p_N z^{-1})}
where qk is the k-th zero and pk is the k-th pole. The zeros and poles are commonly complex and when plotted on the complex plane (z-plane) it is called the pole–zero plot.
In addition, there may also exist zeros and poles at z = 0 and z = ∞. If we take these poles and zeros as well as multiple-order zeros and poles into consideration, the number of zeros and poles is always equal.
By factoring the denominator, partial fraction decomposition can be used, which can then be transformed back to the time domain. Doing so would result in the impulse response and the linear constant coefficient difference equation of the system.
If such a system H(z) is driven by a signal X(z) then the output is Y(z) = H(z)X(z). By performing partial fraction decomposition on Y(z) and then taking the inverse Z-transform, the output y[n] can be found. In practice, it is often useful to fractionally decompose \frac{Y(z)}{z} before multiplying that quantity by z to generate a form of Y(z) which has terms with easily computable inverse Z-transforms.
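As a sketch of this input/output relationship, the direct recursion realizing a simple transfer function can be compared against time-domain convolution with its impulse response. The system below is Example 2's H(z) = 1/(1 − 0.5 z^{-1}), chosen purely for illustration:

```python
import numpy as np

# Sketch: the LCCD recursion y[n] = 0.5 y[n-1] + x[n] realizes
# H(z) = 1/(1 - 0.5 z^{-1}); its output matches convolving the input
# with the impulse response h[n] = 0.5^n u[n] (Y(z) = H(z) X(z)).
x = np.array([1.0, 0.0, 2.0, -1.0, 0.5])

y = np.zeros_like(x)
for n in range(len(x)):                      # direct recursion
    y[n] = (0.5 * y[n - 1] if n > 0 else 0.0) + x[n]

h = 0.5 ** np.arange(len(x))                 # truncated impulse response
y_conv = np.convolve(x, h)[:len(x)]          # time-domain convolution
print(np.allclose(y, y_conv))                # True
```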
The bilinear transform is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
In mathematics, the Laplace transform, named after its inventor Pierre-Simon Laplace, is an integral transform that converts a function of a real variable t (often time) to a function of a complex variable s (complex frequency). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms linear differential equations into algebraic equations and convolution into multiplication.
In engineering, a transfer function of an electronic or control system component is a mathematical function which theoretically models the device's output for each possible input. In its simplest form, this function is a two-dimensional graph of an independent scalar input versus the dependent scalar output, called a transfer curve or characteristic curve. Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.
In mathematics, the Dirac delta function is a generalized function or distribution: a linear functional on a space of test functions. It was introduced by physicist Paul Dirac. It is called a function, although it is not a function R → C.
In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes functions depending on space or time into functions depending on spatial or temporal frequency, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of space or time.
In mathematics, a Fourier series is a periodic function composed of harmonically related sinusoids, combined by a weighted summation. With appropriate weights, one cycle of the summation can be made to approximate an arbitrary function in that interval. As such, the summation is a synthesis of another function. The discrete-time Fourier transform is an example of Fourier series. The process of deriving weights that describe a given function is a form of Fourier analysis. For functions on unbounded intervals, the analysis and synthesis analogies are Fourier transform and inverse transform.
The Short-time Fourier transform (STFT), is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, such as commonly used in Software Defined Radio based spectrum displays. Full bandwidth displays covering the whole range of an SDR commonly use FFTs with 2^24 points on desktop computers.
In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable.
Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response which does not become exactly zero past a certain point, but continues indefinitely. This is in contrast to a finite impulse response (FIR) system, in which the impulse response does become exactly zero at times n > N for some finite N, thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as IIR systems or IIR filters.
In mathematics and in signal processing, the Hilbert transform is a specific linear operator that takes a function, u(t) of a real variable and produces another function of a real variable H(u)(t). This linear operator is given by convolution with the function 1/(πt). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
In mathematics, Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum of the square of a function is equal to the sum of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after John William Strutt, Lord Rayleigh.
In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for linear signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded.
In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of values.
In system analysis, among other fields of study, a linear time-invariant system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = x(t) ∗ h(t) where h(t) is called the system's impulse response and ∗ represents convolution. What's more, there are systematic methods for solving any such system, whereas systems not meeting both properties are generally more difficult to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson.
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system, such as its stability and causality.
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions.
Phase stretch transform (PST) is a computational approach to signal and image processing. One of its utilities is for feature detection and classification. PST is related to time stretch dispersive Fourier transform. It transforms the image by emulating propagation through a diffractive medium with engineered 3D dispersive property. The operation relies on symmetry of the dispersion profile and can be understood in terms of dispersive eigenfunctions or stretch modes. PST performs similar functionality as phase-contrast microscopy, but on digital images. PST can be applied to digital images and temporal data.