Distributed parameter system

In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.

Linear time-invariant distributed-parameter systems

Abstract evolution equations

Discrete-time

With U, X and Y Hilbert spaces and A ∈ L(X), B ∈ L(U, X), C ∈ L(X, Y) and D ∈ L(U, Y), the following difference equations determine a discrete-time linear time-invariant system:

x(k + 1) = Ax(k) + Bu(k),
y(k) = Cx(k) + Du(k),

with x (the state) a sequence with values in X, u (the input or control) a sequence with values in U and y (the output) a sequence with values in Y.
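The difference equations above can be run directly once A, B, C and D are concrete operators. The following sketch uses small matrices as a finite-dimensional stand-in (the matrices and input are illustrative, not from the article); in the distributed-parameter case the same recursion holds with operators between Hilbert spaces.

```python
import numpy as np

def simulate(A, B, C, D, u, x0):
    """Run x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k) for the input sequence u."""
    x = x0
    y = []
    for uk in u:
        y.append(C @ x + D @ uk)   # output at step k
        x = A @ x + B @ uk         # state update
    return y

# Illustrative finite-dimensional data (an assumption, not the article's system).
A = np.array([[0.5, 1.0], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

u = [np.array([1.0]) for _ in range(3)]   # constant input sequence
y = simulate(A, B, C, D, u, np.zeros(2))  # zero initial condition
```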

Continuous-time

The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t).

An added complication is that, to include interesting physical examples such as partial differential equations and delay differential equations in this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators already allows for the inclusion of many interesting physical examples, [1] but many other interesting physical examples force B and C to be unbounded as well.
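When A is a matrix it generates the semigroup T(t) = exp(At), and the mild solution of ẋ = Ax + Bu with x(0) = 0 and constant input u has the closed form x(t) = A⁻¹(exp(At) − I)Bu. The sketch below checks the semigroup property and this formula on an illustrative stable matrix (the specific A, B are assumptions for the demonstration, not from the article).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
u = 1.0                                    # constant scalar input

def state(t):
    # Variation-of-constants solution for constant u and x(0) = 0.
    return np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B * u

# Semigroup property T(t + s) = T(t) T(s) holds for the matrix exponential.
assert np.allclose(expm(A * 0.7), expm(A * 0.3) @ expm(A * 0.4))

# For a Hurwitz A the state approaches the equilibrium -A^{-1} B u.
x_inf = -np.linalg.inv(A) @ B * u
```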

Example: a partial differential equation

The partial differential equation with t ≥ 0 and ξ ∈ [0, 1] given by

∂w/∂t(ξ, t) = −∂w/∂ξ(ξ, t) + u(t),
w(ξ, 0) = 0,
w(0, t) = 0,
y(t) = ∫₀¹ w(ξ, t) dξ

fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L²(0, 1). The operator A is defined as

Aw = −dw/dξ, with domain D(A) = {w ∈ X : w absolutely continuous, dw/dξ ∈ L²(0, 1) and w(0) = 0}.

It can be shown [2] that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as

Bu = u·1 (the constant function with value u), Cw = ∫₀¹ w(ξ) dξ, D = 0.

Example: a delay differential equation

The delay differential equation

ẇ(t) = w(t) + w(t − τ) + u(t),
y(t) = w(t)

fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be the product of the complex numbers with L²(−τ, 0). The operator A is defined as

A(r, f) = (r + f(−τ), df/dξ), with domain D(A) = {(r, f) ∈ X : f absolutely continuous, df/dξ ∈ L²(−τ, 0) and f(0) = r}.

It can be shown [3] that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as

Bu = (u, 0), C(r, f) = r, D = 0.
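The choice of the product state space can be made concrete numerically: to advance a delay equation one must remember the solution on the whole interval [t − τ, t], which is exactly the pair (current value, history segment). The forward-Euler sketch below assumes the scalar equation ẇ(t) = w(t) + w(t − τ) + u(t) with zero history and a unit step input; the stored buffer plays the role of the L²(−τ, 0) component.

```python
import numpy as np

# Assumed illustrative delay equation: w'(t) = w(t) + w(t - tau) + u(t),
# zero history on [-tau, 0], unit step input.
tau, dt, T = 1.0, 1e-3, 2.0
n_hist = int(round(tau / dt))

w = 0.0                        # current value w(t)
hist = np.zeros(n_hist)        # samples of w on [t - tau, t): the "history" state
u = 1.0                        # step input

for _ in range(int(round(T / dt))):
    w_delayed = hist[0]        # oldest stored sample: w(t - tau)
    hist = np.roll(hist, -1)
    hist[-1] = w               # push the current value into the history
    w = w + dt * (w + w_delayed + u)
```

For this assumed equation one can check by hand that w(t) = e^t − 1 on [0, 1] and w(2) = e², which the Euler iterate approaches as dt → 0.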

Transfer functions

As in the finite-dimensional case the transfer function is defined through the Laplace transform (continuous-time) or Z-transform (discrete-time). Whereas in the finite-dimensional case the transfer function is a proper rational function, the infinite-dimensionality of the state space leads to irrational functions (which are however still holomorphic).

Discrete-time

In discrete-time the transfer function is given in terms of the state-space parameters by the power series G(z) = D + CBz + CABz² + CA²Bz³ + ⋯ and it is holomorphic in a disc centered at the origin. [4] In case 1/z belongs to the resolvent set of A (which is the case on a possibly smaller disc centered at the origin) the transfer function equals G(z) = D + zC(I − zA)^(−1)B. An interesting fact is that any function that is holomorphic in zero is the transfer function of some discrete-time system.
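For matrices the agreement between the power series and the resolvent form can be verified directly: when |z| times the spectral radius of A is below 1, the truncated series converges to D + zC(I − zA)⁻¹B. The matrices and the point z below are illustrative assumptions.

```python
import numpy as np

# Illustrative system; eigenvalues of A have modulus sqrt(0.5) < 1/|z|.
A = np.array([[0.0, 1.0], [-0.5, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])
z = 0.3

# Truncated power series D + sum_{k>=1} C A^{k-1} B z^k.
series = D.copy()
Ak = np.eye(2)                 # holds A^{k-1}, starting at k = 1
for k in range(1, 200):
    series = series + (C @ Ak @ B) * z**k
    Ak = Ak @ A

# Closed resolvent form, valid since 1/z is in the resolvent set of A.
closed = D + z * C @ np.linalg.inv(np.eye(2) - z * A) @ B
```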

Continuous-time

If A generates a strongly continuous semigroup and B, C and D are bounded operators, then [5] the transfer function is given in terms of the state space parameters by G(s) = D + C(sI − A)^(−1)B for s with real part larger than the exponential growth bound of the semigroup generated by A. In more general situations this formula as it stands may not even make sense, but an appropriate generalization of it still holds. [6] To obtain an easy expression for the transfer function it is often better to take the Laplace transform in the given differential equation than to use the state space formulas, as illustrated below on the examples given above.

Transfer function for the partial differential equation example

Setting the initial condition w(·, 0) equal to zero and denoting Laplace transforms with respect to t by capital letters, we obtain from the partial differential equation given above

sW(ξ, s) = −∂W/∂ξ(ξ, s) + U(s), W(0, s) = 0,
Y(s) = ∫₀¹ W(ξ, s) dξ.

This is an inhomogeneous linear differential equation with ξ as the variable, s as a parameter and initial condition zero. The solution is W(ξ, s) = (1 − e^(−sξ))U(s)/s. Substituting this in the equation for Y and integrating gives Y(s) = (e^(−s) + s − 1)U(s)/s², so that the transfer function is

G(s) = (e^(−s) + s − 1)/s².
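The Laplace-domain computation can be reproduced symbolically. Under the transport-equation reading of the example (ξ as the variable, W(0, s) = 0, output the integral of W over [0, 1]), solving the ODE and integrating recovers the transfer function:

```python
import sympy as sp

xi, s, U = sp.symbols('xi s U', positive=True)
W = sp.Function('W')

# Solve W' = -s W + U with W(0) = 0 (the ODE obtained after Laplace transform).
sol = sp.dsolve(sp.Eq(W(xi).diff(xi), -s * W(xi) + U), W(xi), ics={W(0): 0})
Wxi = sol.rhs                              # U*(1 - exp(-s*xi))/s

# Output Y(s) = integral of W over [0, 1]; transfer function G = Y/U.
Y = sp.integrate(Wxi, (xi, 0, 1))
G = sp.simplify(sp.cancel(Y / U))
```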

Transfer function for the delay differential equation example

Proceeding similarly as for the partial differential equation example, the transfer function for the delay equation example is [7]

G(s) = 1/(s − 1 − e^(−sτ)).

Controllability

In the infinite-dimensional case there are several non-equivalent definitions of controllability which in the finite-dimensional case collapse to the one usual notion of controllability. The three most important controllability concepts are exact controllability, approximate controllability and null controllability.

Controllability in discrete-time

An important role is played by the maps Φₙ which map the set of all U-valued sequences into X and are given by

Φₙu = A^(n−1)Bu(0) + A^(n−2)Bu(1) + ⋯ + Bu(n − 1).

The interpretation is that Φₙu is the state that is reached at time n by applying the input sequence u when the initial condition is zero. The system is called

  • exactly controllable in time n if the range of Φₙ equals X,
  • approximately controllable in time n if the range of Φₙ is dense in X,
  • null controllable in time n if the range of Φₙ includes the range of Aⁿ.
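In finite dimensions Φₙ is the block matrix [A^(n−1)B, …, AB, B], and the three notions above all reduce to rank conditions on it. A minimal sketch with an illustrative pair (A, B):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # illustrative matrices, an assumption
B = np.array([[0.0], [1.0]])
n = 2

# Phi_n as the block matrix [A^{n-1} B, ..., A B, B].
blocks = [np.linalg.matrix_power(A, n - 1 - k) @ B for k in range(n)]
Phi_n = np.hstack(blocks)

# Exact controllability in finite dimensions: the range of Phi_n is all of X.
exactly_controllable = np.linalg.matrix_rank(Phi_n) == A.shape[0]
```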

Controllability in continuous-time

In controllability of continuous-time systems the map Φₜ given by

Φₜu = ∫₀ᵗ T(t − s)Bu(s) ds,

where T is the strongly continuous semigroup generated by A, plays the role that Φₙ plays in discrete-time. However, the space of control functions on which this operator acts now influences the definition. The usual choice is L²(0, ∞; U), the space of (equivalence classes of) U-valued square integrable functions on the interval (0, ∞), but other choices such as L¹(0, ∞; U) are possible. The different controllability notions can be defined once the domain of Φₜ is chosen. The system is called [8]

  • exactly controllable in time t if the range of Φₜ equals X,
  • approximately controllable in time t if the range of Φₜ is dense in X,
  • null controllable in time t if the range of Φₜ includes the range of T(t).
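In finite dimensions, with T(s) = exp(As), the range of Φₜ coincides with the range of the controllability Gramian Wₜ = ∫₀ᵗ exp(As)BBᵀexp(Aᵀs) ds, so exact controllability amounts to Wₜ having full rank. The sketch below approximates the Gramian by a Riemann sum for an illustrative double integrator (the system and discretization are assumptions):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
t, m = 1.0, 2000                          # horizon and number of quadrature nodes

# Riemann-sum approximation of the controllability Gramian on [0, t].
W = np.zeros((2, 2))
for s in np.linspace(0.0, t, m):
    E = expm(A * s) @ B                   # here exp(As) B = [s, 1]^T
    W += (E @ E.T) * (t / m)

controllable = np.linalg.matrix_rank(W) == A.shape[0]
```

For this example the exact Gramian is [[1/3, 1/2], [1/2, 1]], which the sum approaches as m grows.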

Observability

As in the finite-dimensional case, observability is the dual notion of controllability. In the infinite-dimensional case there are several different notions of observability which in the finite-dimensional case coincide. The three most important ones are exact observability, approximate observability and final state observability.

Observability in discrete-time

An important role is played by the maps Ψₙ which map X into the space of all Y-valued sequences and are given by

(Ψₙx)(k) = CAᵏx if k ≤ n and (Ψₙx)(k) = 0 if k > n.

The interpretation is that Ψₙx is the truncated output with initial condition x and control zero. The system is called

  • exactly observable in time n if there exists a kₙ > 0 such that ‖Ψₙx‖ ≥ kₙ‖x‖ for all x ∈ X,
  • approximately observable in time n if Ψₙ is injective,
  • final state observable in time n if there exists a kₙ > 0 such that ‖Ψₙx‖ ≥ kₙ‖Aⁿx‖ for all x ∈ X.
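In finite dimensions Ψₙ stacks the rows C, CA, …, CAⁿ, and approximate observability (injectivity of Ψₙ) becomes the usual rank condition on the observability matrix. A minimal sketch with illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # illustrative matrices, an assumption
C = np.array([[1.0, 0.0]])
n = 1

# Psi_n as the stacked matrix [C; CA; ...; CA^n].
Psi_n = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n + 1)])

# Injectivity of Psi_n in finite dimensions: full column rank.
observable = np.linalg.matrix_rank(Psi_n) == A.shape[1]
```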

Observability in continuous-time

In observability of continuous-time systems the map Ψₜ given by

(Ψₜx)(s) = CT(s)x for s ∈ [0, t] and (Ψₜx)(s) = 0 for s > t,

where T is the strongly continuous semigroup generated by A, plays the role that Ψₙ plays in discrete-time. However, the space of functions to which this operator maps now influences the definition. The usual choice is L²(0, ∞; Y), the space of (equivalence classes of) Y-valued square integrable functions on the interval (0, ∞), but other choices such as L¹(0, ∞; Y) are possible. The different observability notions can be defined once the co-domain of Ψₜ is chosen. The system is called [9]

  • exactly observable in time t if there exists a kₜ > 0 such that ‖Ψₜx‖ ≥ kₜ‖x‖ for all x ∈ X,
  • approximately observable in time t if Ψₜ is injective,
  • final state observable in time t if there exists a kₜ > 0 such that ‖Ψₜx‖ ≥ kₜ‖T(t)x‖ for all x ∈ X.

Duality between controllability and observability

As in the finite-dimensional case, controllability and observability are dual concepts (at least when for the domain of Φₜ and the co-domain of Ψₜ the usual L² choice is made). The correspondence under duality of the different concepts is: [10]

  • exact controllability ↔ exact observability,
  • approximate controllability ↔ approximate observability,
  • null controllability ↔ final state observability.
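In finite dimensions the duality is transparent: (A, B) is controllable precisely when the dual pair (Aᵀ, Bᵀ) is observable, with Bᵀ playing the role of C. The sketch below checks this on illustrative random matrices (an assumption for the demonstration): the observability matrix of the dual is the transpose of the controllability matrix, so the ranks agree.

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed for reproducibility
A = rng.standard_normal((3, 3))          # illustrative system
B = rng.standard_normal((3, 1))
n = 3

# Controllability matrix of (A, B) and observability matrix of (A^T, B^T).
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([B.T @ np.linalg.matrix_power(A.T, k) for k in range(n)])

rank_ctrb = np.linalg.matrix_rank(ctrb)
rank_obsv = np.linalg.matrix_rank(obsv)
```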

Notes

  1. Curtain and Zwart
  2. Curtain and Zwart Example 2.2.4
  3. Curtain and Zwart Theorem 2.4.6
  4. This is the mathematical convention; engineers seem to prefer transfer functions to be holomorphic at infinity, which is achieved by replacing z by 1/z
  5. Curtain and Zwart Lemma 4.3.6
  6. Staffans Theorem 4.6.7
  7. Curtain and Zwart Example 4.3.13
  8. Tucsnak Definition 11.1.1
  9. Tucsnak Definition 6.1.1
  10. Tucsnak Theorem 11.2.1


References

  • Curtain, Ruth F.; Zwart, Hans (1995). An Introduction to Infinite-Dimensional Linear Systems Theory. Springer.
  • Staffans, Olof (2005). Well-Posed Linear Systems. Cambridge University Press.
  • Tucsnak, Marius; Weiss, George (2009). Observation and Control for Operator Semigroups. Birkhäuser.