Redheffer star product

In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection with solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959, [1] and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer.

Definition

Suppose $A, B$ are the block matrices $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ and $B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$, whose blocks $A_{ij}$ and $B_{kl}$ have the same shape when $ij = kl$. The Redheffer star product is then defined by: [1]

$A \star B = \begin{bmatrix} B_{11} (I - A_{12} B_{21})^{-1} A_{11} & B_{12} + B_{11} (I - A_{12} B_{21})^{-1} A_{12} B_{22} \\ A_{21} + A_{22} (I - B_{21} A_{12})^{-1} B_{21} A_{11} & A_{22} (I - B_{21} A_{12})^{-1} B_{22} \end{bmatrix},$

assuming that $(I - A_{12} B_{21})$ and $(I - B_{21} A_{12})$ are invertible, where $I$ is an identity matrix conformable to $A_{12} B_{21}$ or $B_{21} A_{12}$, respectively. This can be rewritten several ways making use of the so-called push-through identity $A (I - BA)^{-1} = (I - AB)^{-1} A$.
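The definition translates directly into code. Below is a minimal NumPy sketch; the function name redheffer_star and the nested block-tuple layout are illustrative choices, not notation from the references.

```python
import numpy as np

def redheffer_star(A, B):
    """Redheffer star product of two block matrices.

    Each argument is a nested tuple ((M11, M12), (M21, M22)) of
    conformable NumPy arrays; returns the blocks of A * B.
    """
    (A11, A12), (A21, A22) = A
    (B11, B12), (B21, B22) = B
    # Identities conformable to A12 @ B21 and B21 @ A12, respectively.
    F = np.linalg.inv(np.eye(A12.shape[0]) - A12 @ B21)
    G = np.linalg.inv(np.eye(B21.shape[0]) - B21 @ A12)
    return ((B11 @ F @ A11, B12 + B11 @ F @ A12 @ B22),
            (A21 + A22 @ G @ B21 @ A11, A22 @ G @ B22))
```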

Redheffer's definition extends beyond matrices to linear operators on a Hilbert space $H$. [2] By definition, $A_{ij}$ and $B_{ij}$ are linear endomorphisms of $H$, making $A$ and $B$ linear endomorphisms of $H \oplus H$, where $\oplus$ is the direct sum. However, the star product still makes sense as long as the transformations are compatible, which is possible when $A \in \mathcal{L}(H_1 \oplus H_2,\ H_3 \oplus H_4)$ and $B \in \mathcal{L}(H_3 \oplus H_5,\ H_6 \oplus H_2)$ so that $A \star B \in \mathcal{L}(H_1 \oplus H_5,\ H_6 \oplus H_4)$.

Properties

Existence

$(I - A_{12} B_{21})^{-1}$ exists if and only if $(I - B_{21} A_{12})^{-1}$ exists. [3] Thus when either exists, so does the Redheffer star product $A \star B$.

Identity

The star identity is the identity on $H \oplus H$, or $I_\star = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$. [2]

Associativity

The star product is associative, provided all of the relevant matrices are defined. [3] Thus $(A \star B) \star C = A \star (B \star C)$.
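Both the identity and associativity properties can be spot-checked numerically on random blocks, reusing the redheffer_star sketch above (the helper blk is an illustrative random-block generator):

```python
import numpy as np

rng = np.random.default_rng(0)
blk = lambda: tuple(tuple(rng.normal(size=(2, 2)) for _ in range(2)) for _ in range(2))
A, B, C = blk(), blk(), blk()

lhs = redheffer_star(redheffer_star(A, B), C)   # (A * B) * C
rhs = redheffer_star(A, redheffer_star(B, C))   # A * (B * C)
assert all(np.allclose(l, r)
           for lrow, rrow in zip(lhs, rhs)
           for l, r in zip(lrow, rrow))

# The star identity acts trivially on both sides.
I_star = ((np.eye(2), np.zeros((2, 2))), (np.zeros((2, 2)), np.eye(2)))
out = redheffer_star(A, I_star)                 # A * I_star = A
assert all(np.allclose(a, o)
           for arow, orow in zip(A, out)
           for a, o in zip(arow, orow))
```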

Adjoint

Provided either side exists, the adjoint of a Redheffer star product is $(A \star B)^* = B^* \star A^*$. [2]

Inverse

If $B$ is the left matrix inverse of $A$ such that $BA = I$, $A_{22}$ has a right inverse, and $A \star B$ exists, then $A \star B = I_\star$. [2] Similarly, if $B$ is the left matrix inverse of $A$ such that $BA = I$, $A_{11}$ has a right inverse, and $B \star A$ exists, then $B \star A = I_\star$.

Also, if $A \star B = I_\star$ and $A_{22}$ has a left inverse, then $BA = I$.

The star inverse equals the matrix inverse, and both can be computed with block inversion as [2]

$A^{-1} = \begin{bmatrix} (A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1} & -A_{11}^{-1} A_{12} (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} \\ -A_{22}^{-1} A_{21} (A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1} & (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} \end{bmatrix}.$
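A quick numerical spot-check that the matrix inverse is also the star inverse, reusing redheffer_star (to_blocks is an illustrative helper that partitions a 4 × 4 matrix into 2 × 2 blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
to_blocks = lambda M: ((M[:2, :2], M[:2, 2:]), (M[2:, :2], M[2:, 2:]))

# The ordinary matrix inverse is also the star inverse: A * A^{-1} = I_star.
C = redheffer_star(to_blocks(A), to_blocks(np.linalg.inv(A)))
assert np.allclose(np.block([list(row) for row in C]), np.eye(4))
```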

Derivation from a linear system

Figure: The coupled system of equations, with arrows labeling the inputs and outputs to each matrix.

The star product arises from solving multiple linear systems of equations that share variables in common. Often, each linear system models the behavior of one subsystem in a physical process, and by connecting the multiple subsystems into a whole, one can eliminate the variables shared across subsystems in order to obtain the overall linear system. For instance, let $x_1, x_2, x_3, x_4$ and $y_1, y_2, y_3, y_4$ be elements of a Hilbert space $H$ such that [4]

$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = A \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$

and

$\begin{bmatrix} y_3 \\ y_4 \end{bmatrix} = B \begin{bmatrix} x_3 \\ x_4 \end{bmatrix},$

The "plumbing" of one of Redheffer's systems of equations. Redheffer system detail.svg
The "plumbing" of one of Redheffer's systems of equations.

together with the connections $x_3 = y_1$ and $x_2 = y_4$, giving the following six equations in eight variables:

$y_1 = A_{11} x_1 + A_{12} x_2$
$y_2 = A_{21} x_1 + A_{22} x_2$
$y_3 = B_{11} x_3 + B_{12} x_4$
$y_4 = B_{21} x_3 + B_{22} x_4$
$x_3 = y_1$
$x_2 = y_4.$

By substituting the first equation into the last, we find:

$y_4 = (I - B_{21} A_{12})^{-1} (B_{21} A_{11} x_1 + B_{22} x_4).$

By substituting the last equation into the first, we find:

$y_1 = (I - A_{12} B_{21})^{-1} (A_{11} x_1 + A_{12} B_{22} x_4).$

Eliminating $x_2$, $x_3$, $y_1$, and $y_4$ by substituting the two preceding equations into the equations for $y_2$ and $y_3$ results in the Redheffer star product being the matrix such that: [1]

Figure: The star product eliminates the shared variables in this coupled system of equations.

$\begin{bmatrix} y_3 \\ y_2 \end{bmatrix} = (A \star B) \begin{bmatrix} x_1 \\ x_4 \end{bmatrix}.$
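The elimination can be verified numerically for scalar blocks by solving the six equations directly and comparing with the star product; a sketch reusing redheffer_star from above (the variable layout is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
x1, x4 = rng.normal(size=2)

# Unknowns u = (x2, x3, y1, y2, y3, y4); one row per equation in the text.
M = np.array([
    [-A[0, 1], 0, 1, 0, 0, 0],   # y1 = A11 x1 + A12 x2
    [-A[1, 1], 0, 0, 1, 0, 0],   # y2 = A21 x1 + A22 x2
    [0, -B[0, 0], 0, 0, 1, 0],   # y3 = B11 x3 + B12 x4
    [0, -B[1, 0], 0, 0, 0, 1],   # y4 = B21 x3 + B22 x4
    [0, 1, -1, 0, 0, 0],         # x3 = y1
    [1, 0, 0, 0, 0, -1],         # x2 = y4
])
v = np.array([A[0, 0] * x1, A[1, 0] * x1, B[0, 1] * x4, B[1, 1] * x4, 0, 0])
x2, x3, y1, y2, y3, y4 = np.linalg.solve(M, v)

# The star product maps (x1, x4) directly to (y3, y2).
to_blocks = lambda S: ((S[:1, :1], S[:1, 1:]), (S[1:, :1], S[1:, 1:]))
(C11, C12), (C21, C22) = redheffer_star(to_blocks(A), to_blocks(B))
assert np.isclose(C11 * x1 + C12 * x4, y3)
assert np.isclose(C21 * x1 + C22 * x4, y2)
```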

Connection to scattering matrices

The "plumbing" of the scattering matrix has a different convention than Redheffer that amounts to swapping and relabeling several quantities. The advantage is that now the S-matrix's subscripts label the input and output ports as well as the block indices. Scatter matrix detail.svg
The "plumbing" of the scattering matrix has a different convention than Redheffer that amounts to swapping and relabeling several quantities. The advantage is that now the S-matrix's subscripts label the input and output ports as well as the block indices.

Many scattering processes take on a form that motivates a different convention for the block structure of the linear system of a scattering matrix. Typically a physical device that performs a linear transformation on inputs, such as linear dielectric media acting on electromagnetic waves or a potential in quantum mechanical scattering, can be encapsulated as a system which interacts with the environment through various ports, each of which accepts inputs and returns outputs. It is conventional to use a different notation for the Hilbert space, $H_i$, whose subscript labels a port on the device. Additionally, any element, $c_i^{\pm} \in H_i$, has an additional superscript labeling the direction of travel (where + indicates moving from port $i$ to $i+1$ and − indicates the reverse).

The equivalent notation for a Redheffer transformation, $A \in \mathcal{L}(H_1 \oplus H_2)$, used in the previous section is

$\begin{bmatrix} c_2^+ \\ c_1^- \end{bmatrix} = A \begin{bmatrix} c_1^+ \\ c_2^- \end{bmatrix}.$

The action of the S-matrix, $S$, is defined with an additional flip compared to Redheffer's definition: [5]

$\begin{bmatrix} c_1^- \\ c_2^+ \end{bmatrix} = S \begin{bmatrix} c_1^+ \\ c_2^- \end{bmatrix},$

so $S = J A$ where $J = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}$. Note that in order for the off-diagonal identity matrices in $J$ to be defined, we require $H_1$ and $H_2$ to be the same underlying Hilbert space. (The subscript does not imply any difference, but is just a label for bookkeeping.)
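In code, the flip is a single multiplication by $J$; a minimal sketch (helper names are illustrative):

```python
import numpy as np

def exchange(n):
    """The operator J for two ports of dimension n."""
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [I, Z]])

def redheffer_to_s(A):
    """Convert Redheffer's convention to the S-matrix convention: S = J A."""
    return exchange(A.shape[0] // 2) @ A
```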

The star product, $\star_S$, for two S-matrices, $S^{(1)}$ and $S^{(2)}$, is given by [5]

The "plumbing" of a coupled pair of scattering matrices in a star product. Scatter connection.svg
The "plumbing" of a coupled pair of scattering matrices in a star product.

$S^{(1)} \star_S S^{(2)} = \begin{bmatrix} S_{11}^{(1)} + S_{12}^{(1)} (I - S_{11}^{(2)} S_{22}^{(1)})^{-1} S_{11}^{(2)} S_{21}^{(1)} & S_{12}^{(1)} (I - S_{11}^{(2)} S_{22}^{(1)})^{-1} S_{12}^{(2)} \\ S_{21}^{(2)} (I - S_{22}^{(1)} S_{11}^{(2)})^{-1} S_{21}^{(1)} & S_{22}^{(2)} + S_{21}^{(2)} (I - S_{22}^{(1)} S_{11}^{(2)})^{-1} S_{22}^{(1)} S_{12}^{(2)} \end{bmatrix},$

where $S^{(1)} \in \mathcal{L}(H_1 \oplus H_2)$ and $S^{(2)} \in \mathcal{L}(H_2 \oplus H_3)$, so $S^{(1)} \star_S S^{(2)} \in \mathcal{L}(H_1 \oplus H_3)$.
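A NumPy sketch of the S-matrix star product above (the function name star_s and the block layout are illustrative):

```python
import numpy as np

def star_s(S1, S2):
    """S-matrix Redheffer star product; arguments are ((S11, S12), (S21, S22))."""
    (A11, A12), (A21, A22) = S1   # blocks of S^(1)
    (B11, B12), (B21, B22) = S2   # blocks of S^(2)
    F = np.linalg.inv(np.eye(B11.shape[0]) - B11 @ A22)   # (I - S11(2) S22(1))^-1
    G = np.linalg.inv(np.eye(A22.shape[0]) - A22 @ B11)   # (I - S22(1) S11(2))^-1
    return ((A11 + A12 @ F @ B11 @ A21, A12 @ F @ B12),
            (B21 @ G @ A21, B22 + B21 @ G @ A22 @ B12))
```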

Properties

These are analogues of the properties of $\star$ for $\star_S$. Most of them follow from the correspondence $S = J A$. $J$, the exchange operator, is also the S-matrix star identity defined below. For the rest of this section, $S^{(1)}$ and $S^{(2)}$ are S-matrices.

Existence

$S^{(1)} \star_S S^{(2)}$ exists when either $(I - S_{11}^{(2)} S_{22}^{(1)})^{-1}$ or $(I - S_{22}^{(1)} S_{11}^{(2)})^{-1}$ exists.

Identity

The S-matrix star identity, $J$, is $\begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}$. This means

$S \star_S J = J \star_S S = S.$

Associativity

Associativity of $\star_S$ follows from the associativity of $\star$ and of matrix multiplication.

Adjoint

From the correspondence between $\star$ and $\star_S$, and the adjoint of $\star$, we have that

$(S^{(1)} \star_S S^{(2)})^* = S^{(1)*} \star_S S^{(2)*}.$

Inverse

The matrix $\Sigma$ that is the S-matrix star product inverse of $S$, in the sense that $\Sigma \star_S S = S \star_S \Sigma = J$, is $\Sigma = J S^{-1} J$, where $S^{-1}$ is the ordinary matrix inverse and $J$ is as defined above.
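A numerical spot-check of the S-matrix star inverse, reusing star_s and the exchange operator $J$ (toy random data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
S = rng.normal(size=(2 * n, 2 * n))
J = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]])
Sigma = J @ np.linalg.inv(S) @ J

blocks = lambda M: ((M[:n, :n], M[:n, n:]), (M[n:, :n], M[n:, n:]))
C = star_s(blocks(S), blocks(Sigma))
assert np.allclose(np.block([list(r) for r in C]), J)   # S *_S Sigma = J
```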

Connection to transfer matrices

Figure: Transfer matrices have a different "plumbing" than scattering matrices. They connect one port to another instead of the inputs at all ports to the outputs at all ports.

Observe that a scattering matrix can be rewritten as a transfer matrix, $T$, with action $\begin{bmatrix} c_2^+ \\ c_2^- \end{bmatrix} = T \begin{bmatrix} c_1^+ \\ c_1^- \end{bmatrix}$, where [6]

$T = \begin{bmatrix} T_{++} & T_{+-} \\ T_{-+} & T_{--} \end{bmatrix} = \begin{bmatrix} S_{21} - S_{22} S_{12}^{-1} S_{11} & S_{22} S_{12}^{-1} \\ -S_{12}^{-1} S_{11} & S_{12}^{-1} \end{bmatrix}.$

Here the subscripts relate the different directions of propagation at each port. As a result, the star product of scattering matrices

$S^{(12)} = S^{(1)} \star_S S^{(2)},$

is analogous to the following matrix multiplication of transfer matrices [7]

$T^{(12)} = T^{(2)} T^{(1)},$

where $T^{(1)} \in \mathcal{L}(H_1 \oplus H_1,\ H_2 \oplus H_2)$ and $T^{(2)} \in \mathcal{L}(H_2 \oplus H_2,\ H_3 \oplus H_3)$, so $T^{(12)} \in \mathcal{L}(H_1 \oplus H_1,\ H_3 \oplus H_3)$.
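The correspondence can be checked numerically: convert S-matrices to transfer matrices, multiply them in reverse order, and compare with the star product. A sketch reusing star_s (s_to_t is an illustrative helper that assumes the S12 blocks are invertible):

```python
import numpy as np

def s_to_t(S, n):
    """Transfer-matrix form of a two-port S-matrix with n modes per port."""
    S11, S12, S21, S22 = S[:n, :n], S[:n, n:], S[n:, :n], S[n:, n:]
    S12_inv = np.linalg.inv(S12)
    return np.block([[S21 - S22 @ S12_inv @ S11, S22 @ S12_inv],
                     [-S12_inv @ S11, S12_inv]])

rng = np.random.default_rng(3)
n = 2
S1, S2 = rng.normal(size=(2 * n, 2 * n)), rng.normal(size=(2 * n, 2 * n))
blocks = lambda M: ((M[:n, :n], M[:n, n:]), (M[n:, :n], M[n:, n:]))

C = star_s(blocks(S1), blocks(S2))          # S(12) = S(1) *_S S(2)
S12_total = np.block([list(r) for r in C])
# T(12) = T(2) T(1)
assert np.allclose(s_to_t(S12_total, n), s_to_t(S2, n) @ s_to_t(S1, n))
```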

Generalizations

Redheffer generalized the star product in several ways:

Arbitrary bijections

If there is a bijection $f$ between two sets of matrices $M$ and $L$ given by $L = f(M)$, then an associative star product can be defined by: [7]

$M_1 \star M_2 = f^{-1}\left( f(M_1)\, f(M_2) \right).$

The particular star product defined by Redheffer above is obtained from:

$f(A) = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{bmatrix},$

where $A_{11}$ is invertible.
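The sketch below checks numerically that this bijection (one valid choice, written here with the illustrative names f and f_inv; it is the port-reversed transfer-matrix form, assuming $A_{11}$ invertible) turns the star product into ordinary matrix multiplication, reusing redheffer_star:

```python
import numpy as np

def f(A):
    """A bijection sending the star product to matrix multiplication."""
    (A11, A12), (A21, A22) = A
    A11_inv = np.linalg.inv(A11)
    return np.block([[A11_inv, -A11_inv @ A12],
                     [A21 @ A11_inv, A22 - A21 @ A11_inv @ A12]])

def f_inv(M):
    """Recover the blocks of A from f(A)."""
    n = M.shape[0] // 2
    M11, M12, M21, M22 = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
    A11 = np.linalg.inv(M11)
    return ((A11, -A11 @ M12), (M21 @ A11, M22 - M21 @ A11 @ M12))

rng = np.random.default_rng(4)
blk = lambda: tuple(tuple(rng.normal(size=(2, 2)) for _ in range(2)) for _ in range(2))
A, B = blk(), blk()

lhs = redheffer_star(A, B)
rhs = f_inv(f(A) @ f(B))   # M1 * M2 = f^-1(f(M1) f(M2))
assert all(np.allclose(l, r)
           for lrow, rrow in zip(lhs, rhs)
           for l, r in zip(lrow, rrow))
```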

3x3 star product

A star product can also be defined for 3x3 matrices. [8]

Applications to scattering matrices

In physics, the Redheffer star product appears when constructing a total scattering matrix from two or more subsystems. If system $A$ has a scattering matrix $S^{(A)}$ and system $B$ has scattering matrix $S^{(B)}$, then the combined system has scattering matrix $S^{(AB)} = S^{(A)} \star_S S^{(B)}$. [5]

Transmission line theory

Many physical processes, including radiative transfer, neutron diffusion, and circuit theory, are described by scattering processes whose formulation depends on the dimension of the process and the representation of the operators. [6] For probabilistic problems, the scattering equation may appear in a Kolmogorov-type equation.

Electromagnetism

The Redheffer star product can be used to solve for the propagation of electromagnetic fields in stratified, multilayered media. [9] Each layer in the structure has its own scattering matrix and the total structure's scattering matrix can be described as the star product between all of the layers. [10] A free software program that simulates electromagnetism in layered media is the Stanford Stratified Structure Solver.
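As a schematic illustration (toy random data, not a physical simulation), the total scattering matrix of a stack of layers can be obtained by folding the star product over the per-layer S-matrices, reusing star_s:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(5)
n = 2
blocks = lambda M: ((M[:n, :n], M[:n, n:]), (M[n:, :n], M[n:, n:]))

# One toy S-matrix per layer, scaled so the Redheffer inverses exist.
layers = [blocks(0.4 * rng.normal(size=(2 * n, 2 * n))) for _ in range(5)]
S_total = reduce(star_s, layers)   # S(1) *_S S(2) *_S ... *_S S(5)
```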

Semiconductor interfaces

Kinetic models of consecutive semiconductor interfaces can use a scattering matrix formulation to model the motion of electrons between the semiconductors. [11]

Factorization on graphs

In the analysis of Schrödinger operators on graphs, the scattering matrix of a graph can be obtained as a generalized star product of the scattering matrices corresponding to its subgraphs. [12]


References

  1. Redheffer, Raymond (1959). "Inequalities for a Matrix Riccati Equation". Journal of Mathematics and Mechanics. 8 (3): 349–367. ISSN 0095-9057. JSTOR 24900576.
  2. Redheffer, R. M. (1960). "On a Certain Linear Fractional Transformation". Journal of Mathematics and Physics. 39 (1–4): 269–286. doi:10.1002/sapm1960391269. ISSN 1467-9590.
  3. Mistiri, F. (1986). "The Star-product and its Algebraic Properties". Journal of the Franklin Institute. 321 (1): 21–38. doi:10.1016/0016-0032(86)90053-0. ISSN 0016-0032.
  4. Liu, Victor. "On scattering matrices and the Redheffer star product" (PDF). Retrieved 26 June 2021.
  5. Rumpf, Raymond C. (2011). "Improved Formulation of Scattering Matrices for Semi-Analytical Methods that is Consistent with Convention". Progress in Electromagnetics Research B. 35: 241–261. doi:10.2528/PIERB11083107. ISSN 1937-6472.
  6. Redheffer, Raymond (1962). "On the Relation of Transmission-Line Theory to Scattering and Transfer". Journal of Mathematics and Physics. 41 (1–4): 1–41. doi:10.1002/sapm19624111. ISSN 1467-9590.
  7. Redheffer, Raymond (1960). "Supplementary Note on Matrix Riccati Equations". Journal of Mathematics and Mechanics. 9 (5): 745–748. ISSN 0095-9057. JSTOR 24900784.
  8. Redheffer, Raymond M. (1960). "The Mycielski-Paszkowski Diffusion Problem". Journal of Mathematics and Mechanics. 9 (4): 607–621. ISSN 0095-9057. JSTOR 24900958.
  9. Ko, D. Y. K.; Sambles, J. R. (1988). "Scattering matrix method for propagation of radiation in stratified media: attenuated total reflection studies of liquid crystals". JOSA A. 5 (11): 1863–1866. Bibcode:1988JOSAA...5.1863K. doi:10.1364/JOSAA.5.001863. ISSN 1520-8532.
  10. Whittaker, D. M.; Culshaw, I. S. (1999). "Scattering-matrix treatment of patterned multilayer photonic structures". Physical Review B. 60 (4): 2610–2618. Bibcode:1999PhRvB..60.2610W. doi:10.1103/PhysRevB.60.2610.
  11. Gosse, Laurent (2014). "Redheffer Products and Numerical Approximation of Currents in One-Dimensional Semiconductor Kinetic Models". Multiscale Modeling & Simulation. 12 (4): 1533–1560. doi:10.1137/130939584. ISSN 1540-3459.
  12. Kostrykin, V.; Schrader, R. (2001). "The generalized star product and the factorization of scattering matrices on graphs". Journal of Mathematical Physics. 42 (4): 1563–1598. arXiv:math-ph/0008022. Bibcode:2001JMP....42.1563K. doi:10.1063/1.1354641. ISSN 0022-2488. S2CID 6791638.