Additive state decomposition is a technique that decomposes a system into two or more subsystems, each with the same dimension as the original system. [1] [2] A commonly used decomposition in the control field is to decompose a system into two or more lower-order subsystems, called lower-order subsystem decomposition here. In contrast, additive state decomposition keeps each subsystem at the full dimension of the original system. [3]
Taking a system P with dim(P) = n for example, it is decomposed into two subsystems: Pp and Ps, where dim(Pp) = np and dim(Ps) = ns, respectively. The lower-order subsystem decomposition satisfies

$$P = P_p \oplus P_s, \quad n = n_p + n_s.$$

By contrast, the additive state decomposition satisfies

$$P = P_p + P_s, \quad n = n_p = n_s.$$
Consider an 'original' system as follows:

$$\dot{x}(t) = f(t, x(t)), \quad x(0) = x_0, \tag{1}$$

where $x \in \mathbb{R}^n$.

First, a 'primary' system is brought in, having the same dimension as the original system:

$$\dot{x}_p(t) = f_p(t, x_p(t)), \quad x_p(0) = x_{p,0}, \tag{2}$$

where $x_p \in \mathbb{R}^n$.

From the original system and the primary system, the following 'secondary' system is derived:

$$\dot{x}(t) - \dot{x}_p(t) = f(t, x(t)) - f_p(t, x_p(t)), \quad x(0) - x_p(0) = x_0 - x_{p,0}.$$

New variables are defined as follows:

$$x_s(t) := x(t) - x_p(t). \tag{3}$$

Then the secondary system can be further written as follows:

$$\dot{x}_s(t) = f(t, x_p(t) + x_s(t)) - f_p(t, x_p(t)), \quad x_s(0) = x_0 - x_{p,0}. \tag{4}$$

From the definition (3), it follows that

$$x(t) = x_p(t) + x_s(t).$$
The process is shown in the accompanying figure.
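The construction in (1)–(4) can be checked numerically. The sketch below uses illustrative scalar dynamics and numbers (none of them from the source): it integrates an original system, a freely chosen primary system, and the secondary system given by rule (4) with forward Euler, and the identity $x(t) = x_p(t) + x_s(t)$ holds at every step.

```python
import math

# Illustrative scalar dynamics (not from the source): an 'original' system
# x' = f(t, x) and a freely chosen 'primary' system x_p' = f_p(t, x_p).
def f(t, x):
    return -x + math.sin(t) * x + 1.0   # original dynamics

def f_p(t, xp):
    return -xp + 1.0                    # primary dynamics

dt, steps = 1e-3, 5000
x, xp, xs = 2.0, 1.5, 0.5               # x_s(0) = x(0) - x_p(0)

for k in range(steps):
    t = k * dt
    # Rule (4): x_s' = f(t, x_p + x_s) - f_p(t, x_p)
    dxs = f(t, xp + xs) - f_p(t, xp)
    x  += dt * f(t, x)
    xp += dt * f_p(t, xp)
    xs += dt * dxs

# x equals x_p + x_s up to floating-point roundoff
```

Note that the identity is exact for the discretized iterates as well, by induction on the Euler steps, so the residual is only floating-point roundoff.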
In fact, the idea of additive state decomposition has been implicitly mentioned in the existing literature. An example is tracking controller design, which often requires a reference system to derive the error dynamics. The reference system (primary system) is assumed to be given as follows:

$$\dot{x}_r(t) = f(t, x_r(t)), \quad x_r(0) = x_{r,0}.$$

Based on the reference system, the error dynamics (secondary system) are derived as follows:

$$\dot{e}(t) = f(t, e(t) + x_r(t)) - f(t, x_r(t)), \quad e(0) = x_0 - x_{r,0},$$

where $e(t) = x(t) - x_r(t)$.
This is a commonly used step to transform a tracking problem to a stabilization problem when adaptive control is used.
Consider a class of systems as follows:

$$\dot{e}(t) = \big(A + \Delta A(t)\big)\,e(t) + d(t), \quad e(0) = e_0, \tag{5}$$

where $e(t)$ is the tracking error, $\Delta A(t)$ is a time-varying perturbation, and $d(t)$ is an external signal. Choose (5) as the original system and design the primary system as follows:

$$\dot{e}_p(t) = Ae_p(t) + d(t), \quad e_p(0) = e_0. \tag{6}$$

Then the secondary system is determined by the rule (4):

$$\dot{e}_s(t) = Ae_s(t) + \Delta A(t)\big(e_p(t) + e_s(t)\big), \quad e_s(0) = 0. \tag{7}$$
By additive state decomposition,

$$e(t) = e_p(t) + e_s(t).$$

Since $\|e(t)\| \le \|e_p(t)\| + \|e_s(t)\|$, the tracking error $e(t)$ can be analyzed through $e_p(t)$ and $e_s(t)$ separately. If $e_p(t)$ and $e_s(t)$ are bounded and small, then so is $e(t)$. Note that (6) is a linear time-invariant system independent of the secondary system (7), so many tools, such as the transfer function, are available for its analysis. By contrast, the transfer function cannot be directly applied to the original system (5), which is time-varying.
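This analysis can be sanity-checked numerically. The sketch below assumes, for illustration only, that the original system (5) is a time-varying perturbation of an LTI system; the matrices, signals, and numbers are all made up. It confirms that the full solution equals the sum of the LTI primary response and the secondary response.

```python
import math

# Illustrative data (not from the source): original error dynamics
#   e' = (a + da(t)) * e + d(t)                    (time-varying)
# primary (LTI):          e_p' = a * e_p + d(t)
# secondary via rule (4): e_s' = a * e_s + da(t) * (e_p + e_s)
a  = -2.0
da = lambda t: 0.3 * math.cos(t)    # time-varying perturbation
d  = lambda t: math.sin(2.0 * t)    # external signal

dt, steps = 1e-3, 5000
e, ep, es = 1.0, 1.0, 0.0           # e_s(0) = e(0) - e_p(0) = 0

for k in range(steps):
    t = k * dt
    de  = (a + da(t)) * e + d(t)
    dep = a * ep + d(t)
    des = a * es + da(t) * (ep + es)
    e, ep, es = e + dt * de, ep + dt * dep, es + dt * des

# e equals e_p + e_s up to roundoff; e_p alone obeys an LTI equation,
# so transfer-function tools apply to it even though e does not admit them.
```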
Consider a class of nonlinear systems as follows:

$$\dot{x}(t) = Ax(t) + Bu(t) + \phi(y(t)), \quad y(t) = Cx(t), \quad x(0) = x_0, \tag{8}$$

where x, y, u represent the state, output and input, respectively, and the function φ(·) is nonlinear. The objective is to design u such that y − r → 0 as t → ∞. Choose (8) as the original system and design the primary system as follows:

$$\dot{x}_p(t) = Ax_p(t) + Bu_p(t), \quad y_p(t) = Cx_p(t), \quad x_p(0) = x_0. \tag{9}$$

Then the secondary system is determined by the rule (4):

$$\dot{x}_s(t) = Ax_s(t) + \phi(y(t)) + Bu_s(t), \quad y_s(t) = Cx_s(t), \quad x_s(0) = 0, \tag{10}$$

where $u_s = u - u_p$. Then $x = x_p + x_s$ and $y = y_p + y_s$. Here, the tracking task $y_p - r \to 0$ is assigned to the linear time-invariant system (9) (a linear time-invariant system being simpler than a nonlinear one), while the stabilization task $x_s \to 0$ is assigned to the nonlinear system (10) (a stabilizing control problem being simpler than a tracking problem). If both tasks are accomplished, then $y - r = (y_p - r) + y_s \to 0$. The basic idea is to decompose an original system into two subsystems in charge of simpler subtasks, design controllers for the two subtasks, and finally combine them to achieve the original control task. The process is shown in the accompanying figure.
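The splitting for this class of systems can be exercised numerically. The sketch below is an illustrative scalar instance (all coefficients, the nonlinearity, and the inputs are assumptions, and no control design is performed): the nonlinearity is kept in the secondary system, the primary system is LTI, and the identity $y = y_p + y_s$ holds along the trajectories.

```python
import math

# Illustrative scalar instance (not from the source):
#   original:  x'  = a*x  + b*u  + phi(y),   y  = c*x
#   primary:   xp' = a*xp + b*up,            yp = c*xp   (LTI)
#   secondary: xs' = a*xs + phi(y) + b*us,   ys = c*xs,  us = u - up
a, b, c = -1.0, 1.0, 2.0
phi = lambda y: 0.5 * math.sin(y)   # nonlinearity, kept in the secondary system

up = lambda t: math.sin(t)          # arbitrary test inputs, no control design
us = lambda t: -0.3
u  = lambda t: up(t) + us(t)

dt, steps = 1e-3, 5000
x, xp, xs = 1.0, 1.0, 0.0           # x_s(0) = x(0) - x_p(0) = 0

for k in range(steps):
    t = k * dt
    y   = c * x                     # measured output drives phi in both systems
    dx  = a * x  + b * u(t)  + phi(y)
    dxp = a * xp + b * up(t)
    dxs = a * xs + phi(y) + b * us(t)
    x, xp, xs = x + dt * dx, xp + dt * dxp, xs + dt * dxs

# c*x equals c*xp + c*xs, i.e. y = y_p + y_s, up to roundoff
```

In a full design, $u_p$ would be chosen so that the LTI primary output tracks the reference, and $u_s$ would stabilize the secondary state; here the inputs are fixed signals only to verify the decomposition identity.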
A well-known example implicitly using additive state decomposition is the superposition principle, widely used in physics and engineering.
The superposition principle states: For all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually. For a simple linear system

$$\dot{x}(t) = Ax(t) + B\big(u_1(t) + u_2(t)\big), \quad x(0) = 0,$$

the statement of the superposition principle means $x = x_p + x_s$, where

$$\dot{x}_p(t) = Ax_p(t) + Bu_1(t), \quad x_p(0) = 0,$$
$$\dot{x}_s(t) = Ax_s(t) + Bu_2(t), \quad x_s(0) = 0.$$
Obviously, this result can also be derived from the additive state decomposition. Moreover, the superposition principle and additive state decomposition have the relationship summarized in Table 1: additive state decomposition can be applied not only to linear systems but also to nonlinear systems.
Table 1:

|  | Suitable systems | Emphasis |
| --- | --- | --- |
| Superposition principle | Linear | Superposition |
| Additive state decomposition | Linear/nonlinear | Decomposition |
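The linear special case in the table can be checked directly: for a linear system started at zero, the responses to each stimulus alone sum to the response to both stimuli together. The sketch below uses made-up numbers and inputs.

```python
import math

# Illustrative linear system (numbers made up): x' = a*x + u1(t) + u2(t), x(0) = 0.
a  = -1.5
u1 = lambda t: math.sin(t)
u2 = lambda t: math.exp(-t)

dt, steps = 1e-3, 4000
x, x1, x2 = 0.0, 0.0, 0.0
for k in range(steps):
    t = k * dt
    x  += dt * (a * x  + u1(t) + u2(t))   # both stimuli together
    x1 += dt * (a * x1 + u1(t))           # stimulus 1 alone
    x2 += dt * (a * x2 + u2(t))           # stimulus 2 alone

# superposition: x equals x1 + x2 up to roundoff
```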
Additive state decomposition is used in stabilizing control, [4] and can be extended to additive output decomposition. [5]