Kron reduction

In power engineering, Kron reduction is a method used to eliminate undesired nodes from a nodal admittance matrix without repeating the full sequence of steps required by Gaussian elimination. [1] It is named after the American electrical engineer Gabriel Kron.

Description

Kron reduction is a useful tool for eliminating unused nodes in a Y-parameter matrix. [2] [3] For example, three linear elements linked in series with a port at each end are easily modeled as a 4×4 nodal admittance matrix of Y-parameters, but only the two port nodes normally need to be considered for modeling and simulation. Kron reduction may be used to eliminate the internal nodes, thereby reducing the 4th-order Y-parameter matrix to a 2nd-order Y-parameter matrix. The 2nd-order Y-parameter matrix is then more easily converted to a Z-parameter or S-parameter matrix when needed.

Matrix operations

Consider a general Y-parameter matrix that may be created from a combination of linear elements, constructed such that two internal nodes exist:

$$Y = \begin{bmatrix} Y_{11} & Y_{12} & Y_{13} & Y_{14} \\ Y_{21} & Y_{22} & Y_{23} & Y_{24} \\ Y_{31} & Y_{32} & Y_{33} & Y_{34} \\ Y_{41} & Y_{42} & Y_{43} & Y_{44} \end{bmatrix}$$

While it is possible to use the 4×4 matrix in simulations or to construct a 4×4 S-parameter matrix, it may be simpler to reduce the Y-parameter matrix to a 2×2 by eliminating the two internal nodes through Kron reduction, and then to simulate with the 2×2 matrix and/or convert it to a 2×2 S-parameter or Z-parameter matrix.

The process for executing a Kron reduction is as follows: [4]

Select the Kth row/column, the one that models the undesired internal node to be eliminated. Apply the formula below to all matrix entries that do not reside on the Kth row or column. Then remove the Kth row and column of the matrix, which reduces the order of the matrix by one.

Kron reduction for the Kth row/column of an N×N matrix:

$$Y'_{ij} = Y_{ij} - \frac{Y_{iK}\,Y_{Kj}}{Y_{KK}}, \qquad i \ne K,\; j \ne K$$
Linear elements that are also passive always form a symmetric Y-parameter matrix; that is, $Y_{ij} = Y_{ji}$ in all cases. The number of computations in a Kron reduction may be reduced by taking advantage of this symmetry, as shown in the equation below.

Kron reduction for symmetric N×N matrices:

$$Y'_{ij} = Y_{ij} - \frac{Y_{iK}\,Y_{jK}}{Y_{KK}}, \qquad i \ne K,\; j \ne K,\; j \ge i$$

The entries with $j < i$ follow from the symmetry $Y'_{ji} = Y'_{ij}$, so only the upper (or lower) triangle needs to be computed.
Once all the matrix entries have been modified by the Kron reduction equation, the Kth row/column may be eliminated, and the matrix order is reduced by one. Repeat for all internal nodes to be eliminated.
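The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the cited sources; the helper name `kron_reduce` and the 0-based node indexing are our own conventions.

```python
import numpy as np

def kron_reduce(Y, k):
    """Eliminate node k (0-based row/column index) from admittance matrix Y:
    apply Y'_ij = Y_ij - Y_ik * Y_kj / Y_kk, then delete row/column k."""
    Y = np.asarray(Y, dtype=float)
    keep = [i for i in range(Y.shape[0]) if i != k]
    # Apply the Kron reduction formula to every retained entry at once.
    return Y[np.ix_(keep, keep)] - np.outer(Y[keep, k], Y[k, keep]) / Y[k, k]

# Example: three equal elements of admittance 1 S in series (4 nodes),
# eliminating the two internal nodes (indices 1 and 2) one at a time.
Y4 = np.array([[ 1.0, -1.0,  0.0,  0.0],
               [-1.0,  2.0, -1.0,  0.0],
               [ 0.0, -1.0,  2.0, -1.0],
               [ 0.0,  0.0, -1.0,  1.0]])
Y3 = kron_reduce(Y4, 1)  # eliminate first internal node; 3x3 remains
Y2 = kron_reduce(Y3, 1)  # the surviving internal node is now index 1; 2x2 remains
```

As expected for three 1 S elements in series, the resulting 2×2 matrix has diagonal entries of 1/3 S, the series admittance seen between the two ports.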

Simplified theory and derivation

The concept behind Kron reduction is quite simple. Y-parameters are measured with the other nodes shorted to ground, but unused nodes, that is, nodes without ports, are not necessarily grounded, and their state is not directly known from the outside. Therefore, the Y-parameter matrix of the full network does not adequately describe the Y-parameters of the network being modeled, and contains extraneous entries if some nodes do not have ports.

Consider the case of two lumped elements of equal value in series, for example two resistors of equal resistance. If both resistors have an admittance of $Y$, then the series network has an admittance of $Y/2$. The full admittance matrix that accounts for all three nodes in the network, constructed with standard Y-parameter techniques, is:

$$Y_\text{full} = \begin{bmatrix} Y & -Y & 0 \\ -Y & 2Y & -Y \\ 0 & -Y & Y \end{bmatrix}$$

However, it is easily observed that the two resistors in series, each with admittance $Y$, have a net admittance of $Y/2$, and since resistors do not leak current to ground, the off-diagonal entry of the reduced network is equal and opposite to the diagonal entry, that is, $Y_{R12} = -Y_{R11}$. The two-port network without the middle node can therefore be written by inspection:

$$Y_R = \begin{bmatrix} Y/2 & -Y/2 \\ -Y/2 & Y/2 \end{bmatrix}$$

Since row and column 2 of the full matrix are to be eliminated, we can rewrite it without row 2 and column 2. We will call this rewritten matrix $Y_P$:

$$Y_P = \begin{bmatrix} Y & 0 \\ 0 & Y \end{bmatrix}$$

Now we have a basis for finding the translation equation: an expression that translates each entry in $Y_P$ to the corresponding entry in the reduced two-port matrix:

$$Y \rightarrow \tfrac{Y}{2}, \qquad 0 \rightarrow -\tfrac{Y}{2}, \qquad 0 \rightarrow -\tfrac{Y}{2}, \qquad Y \rightarrow \tfrac{Y}{2}$$

For each of the four entries, it can be observed that subtracting $Y/2$ from the left-of-arrow value makes the translation. Since $Y/2$ is identical to $Y_{i2}Y_{2j}/Y_{22} = (-Y)(-Y)/(2Y)$ in every case, each entry meets the condition given by the general Kron reduction equation.
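The derivation above can be sanity-checked numerically. Here is a minimal sketch in plain Python, assuming each resistor has admittance Y = 1 S (the variable names are ours):

```python
# Two equal resistors in series, each with admittance Y = 1 S.
Y = 1.0
# Full 3-node admittance matrix; node index 1 (the middle node) is internal.
Yf = [[  Y,  -Y, 0.0],
      [ -Y, 2*Y,  -Y],
      [0.0,  -Y,   Y]]
k = 1  # internal node to eliminate
# Translate each retained entry: subtract Y_ik * Y_kj / Y_kk.
Yr = [[Yf[i][j] - Yf[i][k] * Yf[k][j] / Yf[k][k] for j in (0, 2)]
      for i in (0, 2)]
# Yr is [[Y/2, -Y/2], [-Y/2, Y/2]], matching the matrix found by inspection.
```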

The same process may be used for elements of arbitrary admittance ($Y_1$, $Y_2$, etc.) and for networks of arbitrary size, but the algebra becomes more complex. The trick is to deduce or calculate an expression that translates the original matrix entries into the reduced matrix entries.
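As a quick illustration of the unequal-element case, two admittances $Y_1$ and $Y_2$ in series should reduce to the familiar series combination $Y_1 Y_2/(Y_1+Y_2)$. The sketch below checks this numerically; the test values 2 S and 3 S are arbitrary choices of ours:

```python
# Two unequal elements in series: Y1 = 2 S, Y2 = 3 S (arbitrary test values).
Y1, Y2 = 2.0, 3.0
# Full 3-node matrix; the middle node (index 1) joins the two elements.
Yf = [[ Y1,     -Y1,  0.0],
      [-Y1, Y1 + Y2,  -Y2],
      [0.0,     -Y2,   Y2]]
k = 1  # eliminate the middle node
Yr = [[Yf[i][j] - Yf[i][k] * Yf[k][j] / Yf[k][k] for j in (0, 2)]
      for i in (0, 2)]
# Yr[0][0] equals the series admittance Y1*Y2/(Y1 + Y2) = 1.2 S.
```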


References

  1. Caliskan, Sina Yamac; Tabuada, Paulo (2014). "Towards Kron reduction of generalized electrical networks". Automatica. 50 (10): 2586–2590. doi:10.1016/j.automatica.2014.08.017.
  2. "Elements of Power Systems Analysis" (PDF).
  3. Grainger, John J.; Stevenson, William D. Jr. (1994). Power System Analysis (Tata ed.). Singapore: McGraw-Hill. pp. 271–274. ISBN 0-07-113338-0.
  4. "Node Elimination by Kron Reduction".