In power engineering, Kron reduction is a method used to eliminate unwanted nodes from an admittance matrix without the need to repeat the elimination steps required by Gaussian elimination. [1]
It is named after American electrical engineer Gabriel Kron.
Kron reduction is a useful tool for eliminating unused nodes in a Y-parameter matrix. [2] [3] For example, three linear elements linked in series with a port at each end are easily modeled as a 4×4 nodal admittance matrix of Y-parameters, but only the two port nodes normally need to be considered for modeling and simulation. Kron reduction may be used to eliminate the internal nodes and thereby reduce the 4th-order Y-parameter matrix to a 2nd-order Y-parameter matrix. The 2nd-order Y-parameter matrix is then more easily converted to a Z-parameter or S-parameter matrix when needed.
Matrix operations
Consider a general Y-parameter matrix that may be created from a combination of linear elements constructed such that two internal nodes exist. Taking, for this example, nodes 3 and 4 to be the internal nodes:

$$Y = \begin{bmatrix} Y_{11} & Y_{12} & Y_{13} & Y_{14} \\ Y_{21} & Y_{22} & Y_{23} & Y_{24} \\ Y_{31} & Y_{32} & Y_{33} & Y_{34} \\ Y_{41} & Y_{42} & Y_{43} & Y_{44} \end{bmatrix}$$
While it is possible to use the 4×4 matrix in simulations or to construct a 4×4 S-parameter matrix, it may be simpler to reduce the Y-parameter matrix to a 2×2 by eliminating the two internal nodes through Kron reduction, and then simulate with the 2×2 matrix and/or convert it to a 2×2 S-parameter or Z-parameter matrix.
The process for executing a Kron reduction is as follows: [4]
Select the Kth row/column, which models the undesired internal node to be eliminated. Apply the formula below to all matrix entries that do not reside on the Kth row or column. Then simply remove the Kth row and column of the matrix, which reduces the order of the matrix by one.
Kron reduction for the Kth row/column of an N×N matrix:

$$Y'_{ij} = Y_{ij} - \frac{Y_{iK}\,Y_{Kj}}{Y_{KK}} \qquad \text{for all } i \neq K,\ j \neq K$$
Linear elements that are also passive always form a symmetric Y-parameter matrix, that is, $Y_{ij} = Y_{ji}$ in all cases. The number of computations in a Kron reduction may be reduced by taking advantage of this symmetry, as shown in the equation below.
Kron reduction for symmetric N×N matrices:

$$Y'_{ij} = Y'_{ji} = Y_{ij} - \frac{Y_{iK}\,Y_{jK}}{Y_{KK}} \qquad \text{for all } i \neq K,\ j \neq K,\ j \geq i$$
Once all the matrix entries have been modified by the Kron reduction equation, the Kth row/column may be eliminated, and the matrix order is reduced by one. Repeat for all internal nodes that are to be eliminated.
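The procedure above fits in a few lines of code. The following is a minimal sketch, not from the source, using NumPy; the function name kron_reduce and its 0-based index convention are illustrative assumptions.

```python
import numpy as np

def kron_reduce(Y, k):
    """Eliminate node k (0-based row/column index) from admittance matrix Y.

    Applies Y'_ij = Y_ij - Y_ik * Y_kj / Y_kk to the matrix, then deletes
    row k and column k, reducing the matrix order by one.
    """
    Y = np.asarray(Y, dtype=complex)
    # Rank-1 update: np.outer forms the Y_ik * Y_kj products for all i, j.
    Y_new = Y - np.outer(Y[:, k], Y[k, :]) / Y[k, k]
    # Entries on row/column k are discarded next, so updating them is harmless.
    return np.delete(np.delete(Y_new, k, axis=0), k, axis=1)
```

To eliminate several internal nodes, call the function once per node, working from the highest index down so that the indices of the remaining nodes stay stable.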
The concept behind Kron reduction is quite simple. Y-parameters are measured with all other nodes shorted to ground, but unused nodes, that is, nodes without ports, are not necessarily grounded, and their state is not directly known to the outside. Therefore, the Y-parameter matrix of the full network does not adequately describe the network being modeled and contains extraneous entries when some nodes lack ports.
Consider the case of two lumped elements of equal value in series, for example two resistors of equal resistance. Both resistors have an admittance of $Y$, and the series network has an admittance of $Y/2$. The full admittance matrix that accounts for all three nodes in the network, with node 2 as the internal node between the two resistors, would look like below, using standard Y-parameter matrix construction techniques:

$$Y_N = \begin{bmatrix} Y & -Y & 0 \\ -Y & 2Y & -Y \\ 0 & -Y & Y \end{bmatrix}$$
However, it is easily observed that the two resistors in series, each with an assigned admittance of $Y$, have a net admittance of $Y/2$, and since resistors do not leak current to ground, the off-diagonal entry of the reduced network is equal and opposite to the diagonal entry, that is, $Y_{R12} = -Y_{R11}$. The two-port network without the middle node can therefore be created by inspection and is shown below:

$$Y_R = \begin{bmatrix} Y/2 & -Y/2 \\ -Y/2 & Y/2 \end{bmatrix}$$
Since row and column 2 of the matrix are to be eliminated, we can rewrite $Y_N$ without row 2 and column 2. We will call this rewritten matrix $Y_{N'}$:

$$Y_{N'} = \begin{bmatrix} Y & 0 \\ 0 & Y \end{bmatrix}$$
Now we have a basis to create the translation equation by finding an expression that translates each entry in $Y_{N'}$ to the corresponding entry in $Y_R$:

$Y \rightarrow Y/2$ (entry 1,1)
$0 \rightarrow -Y/2$ (entry 1,2)
$0 \rightarrow -Y/2$ (entry 2,1)
$Y \rightarrow Y/2$ (entry 2,2)
For each of the four entries, it can be observed that subtracting $Y_{i2}Y_{2j}/Y_{22} = (-Y)(-Y)/(2Y) = Y/2$ from the left-of-arrow value successfully makes the translation. Since $Y_{i2}$ is identical to $Y_{2j}$ in this symmetric network, each case of $Y_{i2}Y_{2j}/Y_{22}$ meets the condition shown in the general translation equations.
The same process may be used for elements of arbitrary admittance ($Y_1$, $Y_2$, etc.) and for networks of arbitrary size, but the algebra becomes more complex. The trick is to deduce and/or calculate an expression that translates the original matrix entries to the reduced matrix entries.
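As a numerical check on the derivation above, the following sketch (not part of the source) applies the general Kron reduction formula to the two-resistor example with an assumed value of $Y = 1$ S; the result matches the matrix obtained by inspection.

```python
import numpy as np

# Two series resistors, each with admittance Y = 1 S; node order (1, 2, 3),
# where node 2 is the ungrounded internal node between the resistors.
Y = 1.0
Y_N = np.array([[  Y,  -Y, 0.0],
                [ -Y, 2*Y,  -Y],
                [0.0,  -Y,   Y]])

k = 1  # eliminate node 2 (0-based index 1)
Y_red = Y_N - np.outer(Y_N[:, k], Y_N[k, :]) / Y_N[k, k]
Y_red = np.delete(np.delete(Y_red, k, axis=0), k, axis=1)

print(Y_red)
# [[ 0.5 -0.5]
#  [-0.5  0.5]]   -> matches Y_R = [[Y/2, -Y/2], [-Y/2, Y/2]]
```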