In applied mathematics, the Rosenbrock system matrix or Rosenbrock's system matrix of a linear time-invariant system is a useful representation bridging state-space representation and transfer function matrix form. It was proposed in 1967 by Howard H. Rosenbrock. [1]
Consider the dynamic system

$$\dot{x}(t) = A x(t) + B u(t),$$
$$y(t) = C x(t) + D u(t).$$
The Rosenbrock system matrix is given by

$$P(s) = \begin{pmatrix} sI - A & -B \\ C & D \end{pmatrix}.$$
In the original work by Rosenbrock, the constant matrix $D$ is allowed to be a polynomial in $s$.
The transfer function between the input $j$ and the output $i$ is given by

$$g_{ij}(s) = \frac{\begin{vmatrix} sI - A & -b_j \\ c_i & d_{ij} \end{vmatrix}}{\left| sI - A \right|},$$

where $b_j$ is the $j$-th column of $B$ and $c_i$ is the $i$-th row of $C$.
Based on this representation, Rosenbrock developed his version of the PBH test.
For computational purposes, a short form of the Rosenbrock system matrix is more appropriate [2] and is given by

$$P = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$
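For illustration, the short form can be assembled directly from the state-space matrices. The following NumPy sketch uses placeholder example matrices (not taken from the source) and simply packs A, B, C, D into one block matrix:

```python
import numpy as np

# Placeholder state-space data: 2 states, 1 input, 1 output.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Short ("packed") form of the Rosenbrock system matrix: [[A, B], [C, D]].
P_short = np.block([[A, B],
                    [C, D]])
print(P_short)
```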
The short form of the Rosenbrock system matrix has been widely used in H-infinity methods in control theory, where it is also referred to as the packed form; see the pck command in MATLAB. [3] An interpretation of the Rosenbrock system matrix as a linear fractional transformation can be found in [4].
One of the first applications of the Rosenbrock form was the development of an efficient computational method for Kalman decomposition, which is based on the pivot element method. A variant of Rosenbrock's method is implemented in the minreal command of MATLAB [5] and GNU Octave.
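A minimal sketch of the same workflow outside MATLAB, assuming the third-party python-control package (whose control.ss and control.minreal functions play the role of the minreal command mentioned above):

```python
import numpy as np
import control  # third-party python-control package

# A deliberately non-minimal realization: the second state is decoupled
# from both the input and the output.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = control.ss(A, B, C, D)    # build the state-space model
sys_min = control.minreal(sys)  # remove unreachable/unobservable states
print(sys_min)
```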
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|.
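A small NumPy sketch illustrating the two properties just mentioned, a nonzero determinant for an invertible matrix and multiplicativity under matrix products:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])

# Nonzero determinant <=> the matrix is invertible.
print(np.linalg.det(A))  # 5.0, so A is invertible

# det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
```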
In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of elements on the main diagonal of A. The trace is only defined for a square matrix.
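For example, in a short NumPy sketch the trace is just the sum of the main-diagonal entries:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.trace(A))                      # 15
print(sum(A[i, i] for i in range(3)))   # 15, the same sum written out explicitly
```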
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
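A brief NumPy sketch of the dimension rule: a 2-by-3 matrix multiplied by a 3-by-2 matrix yields a 2-by-2 product.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])        # 3 x 2

C = A @ B                       # columns of A (3) match rows of B (3)
print(C.shape)                  # (2, 2): rows of A by columns of B
print(C)
```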
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
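A minimal NumPy sketch of the factorization for a real symmetric positive-definite matrix, where the conjugate transpose reduces to the ordinary transpose:

```python
import numpy as np

# A symmetric positive-definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)        # lower triangular factor
print(L)
print(np.allclose(L @ L.T, A))   # True: A = L L^T
```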
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748.
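A small NumPy sketch of Cramer's rule for a 2-by-2 system, computing each unknown as a ratio of determinants:

```python
import numpy as np

# Solve A x = b with Cramer's rule: x_i = det(A_i) / det(A),
# where A_i is A with its i-th column replaced by b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b               # replace column i by the right-hand side
    x[i] = np.linalg.det(A_i) / det_A

print(x)                        # matches np.linalg.solve(A, b)
```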
In mathematics, the name symplectic group can refer to two different, but closely related, collections of mathematical groups, denoted Sp(2n, F) and Sp(n) for positive integer n and field F (usually C or R). The latter is called the compact symplectic group and is also denoted by $\mathrm{USp}(n)$. Many authors prefer slightly different notations, usually differing by factors of 2. The notation used here is consistent with the size of the most common matrices which represent the groups. In Cartan's classification of the simple Lie algebras, the Lie algebra of the complex group Sp(2n, C) is denoted Cn, and Sp(n) is the compact real form of Sp(2n, C). Note that when we refer to the (compact) symplectic group it is implied that we are talking about the collection of (compact) symplectic groups, indexed by their dimension n.
In linear algebra, the adjugate or classical adjoint of a square matrix A is the transpose of its cofactor matrix and is denoted by adj(A). It is also occasionally known as adjunct matrix, or "adjoint", though the latter term today normally refers to a different concept, the adjoint operator which for a matrix is the conjugate transpose.
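The following NumPy sketch builds the adjugate from cofactors for a small matrix (the helper function adjugate is illustrative, not a library routine) and checks the standard identity A adj(A) = det(A) I:

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix (fine for small matrices)."""
    n = A.shape[0]
    cof = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(adjugate(A))                                # [[ 4, -2], [-3,  1]]
print(np.allclose(A @ adjugate(A),
                  np.linalg.det(A) * np.eye(2)))  # True
```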
In linear algebra, an n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that

$$AB = BA = I_n,$$

where $I_n$ denotes the n-by-n identity matrix. In this case, the matrix B is called the inverse of A and is denoted $A^{-1}$.
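A quick NumPy check of this definition for a concrete 2-by-2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])       # det = 1, so A is invertible

B = np.linalg.inv(A)             # candidate inverse
I = np.eye(2)
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))  # True: AB = BA = I
```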
In special relativity, a four-vector is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the $(\tfrac{1}{2},\tfrac{1}{2})$ representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts.
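Concretely, with the metric signature (+, −, −, −), the invariant magnitude of a four-vector $A^\mu = (A^0, A^1, A^2, A^3)$ is given by the quadratic form (a standard formula, stated here as an illustration)

$$A \cdot A = (A^0)^2 - (A^1)^2 - (A^2)^2 - (A^3)^2,$$

and it is this quantity, rather than the Euclidean sum of squares, that Lorentz transformations preserve.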
In physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group. Without the translations in space and time the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations; conversely, the group contraction in the classical limit c → ∞ of Poincaré transformations yields Galilean transformations.
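For example, for two frames in standard configuration with constant relative velocity $v$ along the $x$-axis, the Galilean transformation takes the familiar form

$$x' = x - vt, \qquad y' = y, \qquad z' = z, \qquad t' = t.$$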
In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
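A short NumPy sketch: the pseudoinverse of a non-square matrix, together with a check of one of the four defining Penrose conditions, $A A^{+} A = A$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # 3 x 2, not square, so no ordinary inverse

A_pinv = np.linalg.pinv(A)       # Moore-Penrose pseudoinverse, 2 x 3
print(np.allclose(A @ A_pinv @ A, A))  # True: one of the four Penrose conditions
```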
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal, and the supradiagonal/upper diagonal. For example, the following matrix is tridiagonal:

$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$
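Such a matrix can be assembled from its three diagonals, for instance with NumPy (an illustrative sketch reproducing the example above):

```python
import numpy as np

main  = np.array([1.0, 4.0, 3.0, 3.0])   # main diagonal
lower = np.array([3.0, 2.0, 1.0])        # subdiagonal
upper = np.array([4.0, 1.0, 4.0])        # superdiagonal

T = np.diag(main) + np.diag(lower, k=-1) + np.diag(upper, k=1)
print(T)
```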
In theoretical physics, the Bogoliubov transformation, also known as the Bogoliubov–Valatin transformation, was independently developed in 1958 by Nikolay Bogolyubov and John George Valatin for finding solutions of BCS theory in a homogeneous system. The Bogoliubov transformation is an isomorphism of either the canonical commutation relation algebra or canonical anticommutation relation algebra. This induces an autoequivalence on the respective representations. The Bogoliubov transformation is often used to diagonalize Hamiltonians, which yields the stationary solutions of the corresponding Schrödinger equation. The Bogoliubov transformation is also important for understanding the Unruh effect, Hawking radiation, pairing effects in nuclear physics, and many other topics.
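For a single bosonic mode, for instance, a Bogoliubov transformation mixes the annihilation operator $a$ and the creation operator $a^\dagger$ into new operators (a standard textbook form, shown here only as an illustration):

$$b = u\,a + v\,a^{\dagger}, \qquad b^{\dagger} = u^{*}a^{\dagger} + v^{*}a, \qquad |u|^2 - |v|^2 = 1,$$

where the constraint guarantees that $b$ and $b^{\dagger}$ again satisfy the canonical commutation relation $[b, b^{\dagger}] = 1$.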
In quantum mechanics, a two-state system is a quantum system that can exist in any quantum superposition of two independent quantum states. The Hilbert space describing such a system is two-dimensional. Therefore, a complete basis spanning the space will consist of two independent states. Any two-state system can also be seen as a qubit.
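Explicitly, with basis states $|0\rangle$ and $|1\rangle$, a general normalized state of such a system is

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

with complex amplitudes $\alpha$ and $\beta$.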
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by $\lambda$, is the multiplying factor.
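A short NumPy sketch of the defining relation $A v = \lambda v$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]             # an eigenvalue of A
v = eigenvectors[:, 0]           # the corresponding eigenvector (a column)
print(np.allclose(A @ v, lam * v))  # True: applying A only rescales v by lam
```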
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i,

$$\det(B) = \sum_{j=1}^{n} (-1)^{i+j} b_{ij} M_{ij},$$

where $b_{ij}$ is the entry in the $i$-th row and $j$-th column of $B$, and the minor $M_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and the $j$-th column of $B$.
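A direct (inefficient, but illustrative) Python implementation of this cofactor expansion along the first row:

```python
import numpy as np

def det_laplace(B):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = B.shape[0]
    if n == 1:
        return B[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(B, 0, axis=0), j, axis=1)  # drop row 0, column j
        total += (-1) ** j * B[0, j] * det_laplace(minor)
    return total

B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det_laplace(B))            # -3.0, matching np.linalg.det(B)
```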
In mathematical physics and gauge theory, the ADHM construction or monad construction is the construction of all instantons using methods of linear algebra by Michael Atiyah, Vladimir Drinfeld, Nigel Hitchin, and Yuri I. Manin in their paper "Construction of Instantons."
In mathematics, Capelli's identity, named after Alfredo Capelli (1887), is an analogue of the formula det(AB) = det(A) det(B), for certain matrices with noncommuting entries, related to the representation theory of the Lie algebra $\mathfrak{gl}_n$. It can be used to relate an invariant ƒ to the invariant Ωƒ, where Ω is Cayley's Ω process.
In mathematics, particularly in linear algebra and applications, matrix analysis is the study of matrices and their algebraic properties. Some particular topics out of many include: operations defined on matrices, functions of matrices, and the eigenvalues of matrices.
In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection with solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959, and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer.
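The following NumPy sketch implements the star product of two 2-by-2 block matrices under one common block convention (block ordering and sign conventions differ between references, so the index placement here should be read as an assumption rather than a canonical definition):

```python
import numpy as np

def redheffer_star(A_blocks, B_blocks):
    """Redheffer star product of two 2x2 block matrices, under one common convention.

    Each argument is a tuple (M11, M12, M21, M22) of equally sized square blocks.
    """
    A11, A12, A21, A22 = A_blocks
    B11, B12, B21, B22 = B_blocks
    I = np.eye(A11.shape[0])
    inv1 = np.linalg.inv(I - A12 @ B21)
    inv2 = np.linalg.inv(I - B21 @ A12)
    C11 = B11 @ inv1 @ A11
    C12 = B12 + B11 @ A12 @ inv2 @ B22
    C21 = A21 + A22 @ B21 @ inv1 @ A11
    C22 = A22 @ inv2 @ B22
    return (C11, C12, C21, C22)

# Starring with an "identity" scatterer (perfect transmission, no reflection)
# returns the other scattering matrix unchanged.
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
S = (0.9 * I2, 0.1 * I2, 0.1 * I2, 0.9 * I2)
print(all(np.allclose(a, b)
          for a, b in zip(redheffer_star((I2, Z2, Z2, I2), S), S)))  # True
```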