Linear system

In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than those of the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

Definition

Block diagram illustrating the additivity property for a deterministic continuous-time SISO system. The system satisfies the additivity property, or is additive, if and only if y_3(t) = y_1(t) + y_2(t) for all time t and for all inputs x_1(t) and x_2(t).
Block diagram illustrating the homogeneity property for a deterministic continuous-time SISO system. The system satisfies the homogeneity property, or is homogeneous, if and only if y_2(t) = a y_1(t) for all time t, for all real constants a, and for all inputs x_1(t).
Block diagram illustrating the superposition principle for a deterministic continuous-time SISO system. The system satisfies the superposition principle, and is thus linear, if and only if y_3(t) = a_1 y_1(t) + a_2 y_2(t) for all time t, for all real constants a_1 and a_2, and for all inputs x_1(t) and x_2(t).

A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description.

A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants, and all time). [1] [2] [3] [4]

The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs. [5] [6]

In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. [6] In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs. [6]

Mathematically, for a continuous-time system, given two arbitrary inputs

x_1(t) and x_2(t),

as well as their respective zero-state outputs

y_1(t) = H{x_1(t)} and y_2(t) = H{x_2(t)},

then a linear system must satisfy

α y_1(t) + β y_2(t) = H{α x_1(t) + β x_2(t)}

for any scalar values α and β, for any input signals x1(t) and x2(t), and for all time t.
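As a sketch, the superposition property can be checked numerically for a simple discrete-time linear system; the two-tap system y[n] = 0.5 x[n] + 0.3 x[n−1] below is a hypothetical example chosen for illustration, not one taken from the text.

```python
import numpy as np

# Hypothetical two-tap system (an assumption for illustration, not from the text):
# y[n] = 0.5*x[n] + 0.3*x[n-1], with zero initial conditions (zero state).
def system(x):
    x = np.asarray(x, dtype=float)
    x_prev = np.concatenate(([0.0], x[:-1]))  # x delayed by one sample
    return 0.5 * x + 0.3 * x_prev

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
alpha, beta = 2.0, -3.0

lhs = system(alpha * x1 + beta * x2)          # response to the combined input
rhs = alpha * system(x1) + beta * system(x2)  # combination of individual responses
assert np.allclose(lhs, rhs)                  # superposition holds
```

Any fixed linear combination of current and past input samples passes this check; a system such as y[n] = x[n]² would fail it.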

The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.

Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).

Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.

A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.

The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors x_1(t), x_2(t), y_1(t), y_2(t) are considered in place of the scalar input and output signals x1(t), x2(t), y1(t), y2(t). [2] [4]

This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.

Examples

A simple harmonic oscillator obeys the differential equation:

m \frac{d^2 x(t)}{dt^2} = -k x(t)

If

H(x(t)) = m \frac{d^2 x(t)}{dt^2} + k x(t),

then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.
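A minimal numerical sketch of this linearity, using arbitrary constants m and k and np.gradient as a finite-difference approximation of the time derivative:

```python
import numpy as np

# Sketch, assuming arbitrary constants m and k; np.gradient gives a
# finite-difference approximation of d/dt on the sampled grid.
t = np.linspace(0.0, 10.0, 2001)
m, k = 2.0, 5.0

def H(x):
    # H(x) = m x'' + k x, approximated numerically
    return m * np.gradient(np.gradient(x, t), t) + k * x

x1, x2 = np.sin(t), np.exp(-0.1 * t)
a, b = 3.0, -1.5

# Linearity: H(a x1 + b x2) == a H(x1) + b H(x2); this holds exactly even
# for the finite-difference approximation, since differentiation is linear.
assert np.allclose(H(a * x1 + b * x2), a * H(x1) + b * H(x2))
```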

Other examples of linear systems include those described by y(t) = k x(t), y(t) = k \frac{dx(t)}{dt}, y(t) = k \int_{-\infty}^{t} x(\tau) d\tau, and any system described by ordinary linear differential equations. [4] Systems described by y(t) = k, y(t) = k x(t) + k_0, y(t) = \sin[x(t)], y(t) = \cos[x(t)], y(t) = x^2(t), y(t) = \sqrt{x(t)}, y(t) = |x(t)|, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they don't always satisfy the superposition principle. [7] [8] [9] [10]

The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by y(t) = k \frac{dx(t)}{dt} (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, shifted in phase by 90°, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.

Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by y(t) = \cos(t)\, x(t). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form x(t) = \cos(3t), using product-to-sum trigonometric identities it can be easily shown that the output is y(t) = \frac{1}{2}\cos(2t) + \frac{1}{2}\cos(4t); that is, the output doesn't consist only of sinusoids of the same frequency as the input (3 rad/s), but also of sinusoids of frequencies 2 rad/s and 4 rad/s. Furthermore, taking the least common multiple of the fundamental periods of the output sinusoids, it can be shown that the fundamental angular frequency of the output is 1 rad/s, which is different from that of the input.
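The product-to-sum step in the example above can be verified numerically: driving a linear, time-varying system of the form y(t) = cos(t)·x(t) with x(t) = cos(3t) yields exactly the two-harmonic output.

```python
import numpy as np

# Sketch of the harmonics example: the linear, time-varying system
# y(t) = cos(t) * x(t), driven by the sinusoid x(t) = cos(3 t).
t = np.linspace(0.0, 20.0, 5001)
y = np.cos(t) * np.cos(3 * t)

# Product-to-sum identity: cos(t) cos(3t) = (1/2) cos(2t) + (1/2) cos(4t)
expected = 0.5 * np.cos(2 * t) + 0.5 * np.cos(4 * t)
assert np.allclose(y, expected)
```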

Time-varying impulse response

The time-varying impulse response h(t_2, t_1) of a linear system is defined as the response of the system at time t = t_2 to a single impulse applied at time t = t_1. In other words, if the input x(t) to a linear system is

x(t) = \delta(t - t_1),

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

y(t)\big|_{t = t_2} = h(t_2, t_1),

then the function h(t_2, t_1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

h(t_2, t_1) = 0 \quad \text{for} \quad t_2 < t_1

The convolution integral

The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

y(t) = \int_{-\infty}^{t} h(t, t')\, x(t')\, dt' = \int_{-\infty}^{\infty} h(t, t')\, x(t')\, dt'

If the properties of the system do not depend on the time at which it is operated, then it is said to be time-invariant and h is a function only of the time difference τ = t − t′, which is zero for τ < 0 (namely t < t′). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways,

y(t) = \int_{-\infty}^{t} h(t - t')\, x(t')\, dt' = \int_{-\infty}^{\infty} h(t - t')\, x(t')\, dt' = \int_{-\infty}^{\infty} h(\tau)\, x(t - \tau)\, d\tau = \int_{0}^{\infty} h(\tau)\, x(t - \tau)\, d\tau

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function, called the transfer function, which is:

H(s) = \int_{0}^{\infty} h(t)\, e^{-st}\, dt

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and putting s = iω follows the formula for the frequency response function:

H(i\omega) = \int_{-\infty}^{\infty} h(t)\, e^{-i\omega t}\, dt
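As a hedged numerical sketch, the frequency response integral can be checked for the causal impulse response h(t) = e^(−t) (a standard first-order example chosen here for illustration, not taken from the text), whose transfer function is H(s) = 1/(s + 1):

```python
import numpy as np

# Sketch: check H(i*omega) = integral of h(t) e^{-i omega t} dt numerically
# for h(t) = exp(-t), t >= 0, whose transfer function is H(s) = 1/(s + 1).
t = np.linspace(0.0, 50.0, 500_001)   # truncate the infinite range at t = 50
dt = t[1] - t[0]
h = np.exp(-t)

omega = 2.0
H_num = np.sum(h * np.exp(-1j * omega * t)) * dt   # Riemann-sum approximation
H_exact = 1.0 / (1.0 + 1j * omega)
assert abs(H_num - H_exact) < 1e-3
```

The truncation at t = 50 is safe because e^(−50) is negligible; the remaining error comes from the Riemann sum and shrinks with the step size.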

Discrete-time systems

The output of any discrete-time linear system is related to the input by the time-varying convolution sum:

y[n] = \sum_{m=-\infty}^{n} h[n, m]\, x[m] = \sum_{m=-\infty}^{\infty} h[n, m]\, x[m]

or equivalently for a time-invariant system on redefining h,

y[n] = \sum_{k=0}^{\infty} h[k]\, x[n - k] = \sum_{m=-\infty}^{\infty} h[n - m]\, x[m]

where

k = n - m

represents the lag time between the stimulus at time m and the response at time n.
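The time-invariant convolution sum can be sketched directly in code; the impulse response and input below are arbitrary illustrative values (assumptions, not from the text), and the explicit double loop is compared against NumPy's built-in convolution:

```python
import numpy as np

# Sketch of the time-invariant convolution sum y[n] = sum_k h[k] x[n-k];
# h and x are arbitrary illustrative values.
h = np.array([1.0, 0.5, 0.25])
x = np.array([2.0, -1.0, 3.0, 0.5])

def conv_sum(h, x):
    y = np.zeros(len(h) + len(x) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# Agrees with NumPy's built-in full convolution
assert np.allclose(conv_sum(h, x), np.convolve(h, x))
```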


References

  1. Phillips, Charles L.; Parr, John M.; Riskin, Eve A. (2008). Signals, Systems, and Transforms (4th ed.). Pearson. p. 74. ISBN 978-0-13-198923-8.
  2. Bessai, Horst J. (2005). MIMO Signals and Systems. Springer. pp. 27–28. ISBN 0-387-23488-8.
  3. Alkin, Oktay (2014). Signals and Systems: A MATLAB Integrated Approach. CRC Press. p. 99. ISBN 978-1-4665-9854-6.
  4. Nahvi, Mahmood (2014). Signals and Systems. McGraw-Hill. pp. 162–164, 166, 183. ISBN 978-0-07-338070-4.
  5. Sundararajan, D. (2008). A Practical Approach to Signals and Systems. Wiley. p. 80. ISBN 978-0-470-82353-8.
  6. Roberts, Michael J. (2018). Signals and Systems: Analysis Using Transform Methods and MATLAB® (3rd ed.). McGraw-Hill. pp. 131, 133–134. ISBN 978-0-07-802812-0.
  7. Deergha Rao, K. (2018). Signals and Systems. Springer. pp. 43–44. ISBN 978-3-319-68674-5.
  8. Chen, Chi-Tsong (2004). Signals and Systems (3rd ed.). Oxford University Press. pp. 55–57. ISBN 0-19-515661-7.
  9. ElAli, Taan S.; Karim, Mohammad A. (2008). Continuous Signals and Systems with MATLAB (2nd ed.). CRC Press. p. 53. ISBN 978-1-4200-5475-0.
  10. Apte, Shaila Dinkar (2016). Signals and Systems: Principles and Applications. Cambridge University Press. p. 187. ISBN 978-1-107-14624-2.