In electrical engineering, the method of symmetrical components simplifies the analysis of unbalanced three-phase power systems under both normal and abnormal conditions. The basic idea is that an asymmetrical set of N phasors can be expressed as a linear combination of N symmetrical sets of phasors by means of a complex linear transformation. [1] Fortescue's theorem (symmetrical components) is based on the superposition principle, [2] so it is applicable to linear power systems only, or to linear approximations of non-linear power systems.
In the most common case of three-phase systems, the resulting "symmetrical" components are referred to as direct (or positive), inverse (or negative) and zero (or homopolar). The analysis of a power system is much simpler in the domain of symmetrical components, because the resulting equations are mutually linearly independent if the circuit itself is balanced. [3]
In 1918 Charles Legeyt Fortescue presented a paper [4] which demonstrated that any set of N unbalanced phasors (that is, any such polyphase signal) could be expressed as the sum of N symmetrical sets of balanced phasors, for values of N that are prime. Only a single frequency component is represented by the phasors.
In 1943 Edith Clarke published a textbook giving a method of using symmetrical components for three-phase systems that greatly simplified calculations relative to the original Fortescue paper. [5] In a three-phase system, one set of phasors has the same phase sequence as the system under study (positive sequence; say ABC), the second set has the reverse phase sequence (negative sequence; ACB), and in the third set the phasors A, B and C are in phase with each other (zero sequence, the common-mode signal). Essentially, this method converts three unbalanced phases into three independent sources, which makes asymmetric fault analysis more tractable.
By expanding a one-line diagram to show the positive sequence, negative sequence, and zero sequence impedances of generators, transformers and other devices including overhead lines and cables, analysis of such unbalanced conditions as a single line to ground short-circuit fault is greatly simplified. The technique can also be extended to higher order phase systems.
Physically, in a three phase system, a positive sequence set of currents produces a normal rotating field, a negative sequence set produces a field with the opposite rotation, and the zero sequence set produces a field that oscillates but does not rotate between phase windings. Since these effects can be detected physically with sequence filters, the mathematical tool became the basis for the design of protective relays, which used negative-sequence voltages and currents as a reliable indicator of fault conditions. Such relays may be used to trip circuit breakers or take other steps to protect electrical systems.
The analytical technique was adopted and advanced by engineers at General Electric and Westinghouse, and after World War II it became an accepted method for asymmetric fault analysis.
As shown in the figure to the above right, the three sets of symmetrical components (positive, negative, and zero sequence) add up to create the system of three unbalanced phases as pictured in the bottom of the diagram. The imbalance between phases arises because of the difference in magnitude and phase shift between the sets of vectors. Notice that the colors (red, blue, and yellow) of the separate sequence vectors correspond to three different phases (A, B, and C, for example). To arrive at the final plot, the sum of vectors of each phase is calculated. This resulting vector is the effective phasor representation of that particular phase. This process, repeated, produces the phasor for each of the three phases.
Symmetrical components are most commonly used for analysis of three-phase electrical power systems. The voltage or current of a three-phase system at some point can be indicated by three phasors, called the three components of the voltage or the current.
This article discusses voltage; however, the same considerations also apply to current. In a perfectly balanced three-phase power system, the voltage phasor components have equal magnitudes and are 120 degrees apart. In an unbalanced system, the magnitudes and phases of the voltage phasor components differ.
Decomposing the voltage phasor components into a set of symmetrical components helps analyze the system as well as visualize any imbalances. If the three voltage components are expressed as phasors (which are complex numbers), a complex vector can be formed in which the three phase components are the components of the vector. A vector for three phase voltage components can be written as

$$V_{abc} = \begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix},$$
and decomposing the vector into three symmetrical components gives

$$V_{abc} = V_{abc,0} + V_{abc,1} + V_{abc,2} = \begin{bmatrix} V_{a,0} \\ V_{b,0} \\ V_{c,0} \end{bmatrix} + \begin{bmatrix} V_{a,1} \\ V_{b,1} \\ V_{c,1} \end{bmatrix} + \begin{bmatrix} V_{a,2} \\ V_{b,2} \\ V_{c,2} \end{bmatrix},$$
where the subscripts 0, 1, and 2 refer respectively to the zero, positive, and negative sequence components. The sequence components differ only by their phase angles, which are symmetrical and so are $\tfrac{2\pi}{3}$ radians or 120° apart.
Define a phasor rotation operator $\alpha$, which rotates a phasor counterclockwise by 120 degrees when multiplied by it:

$$\alpha = e^{j\frac{2\pi}{3}} = -\frac{1}{2} + j\frac{\sqrt{3}}{2}.$$
Note that $\alpha^3 = 1$, so that $\alpha^{-1} = \alpha^2$.
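These identities can be checked directly with complex arithmetic; a minimal sketch in Python (the variable name is illustrative):

```python
import cmath

# The rotation operator alpha = e^(j*2*pi/3) advances a phasor by 120 degrees.
alpha = cmath.exp(2j * cmath.pi / 3)

assert abs(alpha ** 3 - 1) < 1e-12            # three rotations return to the start
assert abs(alpha ** -1 - alpha ** 2) < 1e-12  # hence alpha^-1 = alpha^2
assert abs(1 + alpha + alpha ** 2) < 1e-12    # a balanced set of unit phasors sums to zero
```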
The zero sequence components have equal magnitude and are in phase with each other, therefore:

$$V_{a,0} = V_{b,0} = V_{c,0} = V_0,$$
and the other sequence components have the same magnitude, but their phase angles differ by 120°. If the original unbalanced set of voltage phasors have positive or abc phase sequence, then:

$$V_{b,1} = \alpha^2 V_{a,1}, \quad V_{c,1} = \alpha V_{a,1}, \qquad V_{b,2} = \alpha V_{a,2}, \quad V_{c,2} = \alpha^2 V_{a,2},$$
meaning that

$$V_{abc,1} = \begin{bmatrix} 1 \\ \alpha^2 \\ \alpha \end{bmatrix} V_{a,1}, \qquad V_{abc,2} = \begin{bmatrix} 1 \\ \alpha \\ \alpha^2 \end{bmatrix} V_{a,2}.$$
Thus,

$$V_{abc} = A \, V_{012},$$
where

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & \alpha^2 & \alpha \\ 1 & \alpha & \alpha^2 \end{bmatrix}, \qquad V_{012} = \begin{bmatrix} V_0 \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} V_{a,0} \\ V_{a,1} \\ V_{a,2} \end{bmatrix}.$$
If instead the original unbalanced set of voltage phasors have negative or acb phase sequence, the following matrix can be similarly derived:

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \alpha^2 & \alpha \end{bmatrix}.$$
The sequence components are derived from the analysis equation

$$V_{012} = A^{-1} V_{abc},$$
where

$$A^{-1} = \frac{1}{3} \begin{bmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \alpha^2 & \alpha \end{bmatrix}.$$
The above two equations tell how to derive symmetrical components corresponding to an asymmetrical set of three phasors:

$$V_0 = \frac{1}{3}\left(V_a + V_b + V_c\right),$$
$$V_1 = \frac{1}{3}\left(V_a + \alpha V_b + \alpha^2 V_c\right),$$
$$V_2 = \frac{1}{3}\left(V_a + \alpha^2 V_b + \alpha V_c\right).$$
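The analysis and synthesis equations are easy to verify numerically. The sketch below (function names are illustrative, not from the source) decomposes an arbitrary unbalanced set of phasors into its sequence components and then reconstructs the original phases:

```python
import cmath

# Rotation operator: alpha = 1 at an angle of 120 degrees.
ALPHA = cmath.exp(2j * cmath.pi / 3)

def sequence_components(va, vb, vc):
    """Analysis: return (V0, V1, V2), the zero, positive and negative
    sequence phasors referenced to phase a."""
    v0 = (va + vb + vc) / 3
    v1 = (va + ALPHA * vb + ALPHA ** 2 * vc) / 3
    v2 = (va + ALPHA ** 2 * vb + ALPHA * vc) / 3
    return v0, v1, v2

def reconstruct(v0, v1, v2):
    """Synthesis: apply the matrix A to recover the phase phasors."""
    va = v0 + v1 + v2
    vb = v0 + ALPHA ** 2 * v1 + ALPHA * v2
    vc = v0 + ALPHA * v1 + ALPHA ** 2 * v2
    return va, vb, vc

# An unbalanced set: unequal magnitudes and a non-120-degree shift on phase c.
va = cmath.rect(1.0, 0.0)
vb = cmath.rect(1.1, -2 * cmath.pi / 3)
vc = cmath.rect(0.9, 2 * cmath.pi / 3 + 0.1)

v0, v1, v2 = sequence_components(va, vb, vc)
ra, rb, rc = reconstruct(v0, v1, v2)
assert abs(ra - va) < 1e-12 and abs(rb - vb) < 1e-12 and abs(rc - vc) < 1e-12
```

A perfectly balanced abc set maps to a pure positive sequence component, as the sequence definitions require.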
Visually, if the original components are symmetrical, sequences 0 and 2 will each form a triangle, summing to zero, and sequence 1 components will sum to a straight line.
The phasors form a closed triangle (e.g., outer voltages or line to line voltages). To find the synchronous and inverse components of the phases, take any side of the outer triangle and draw the two possible equilateral triangles sharing the selected side as base. These two equilateral triangles represent a synchronous and an inverse system.
If the phasors V were a perfectly synchronous system, the vertex of the outer triangle not on the base line would be at the same position as the corresponding vertex of the equilateral triangle representing the synchronous system. Any amount of inverse component would mean a deviation from this position. The deviation is exactly 3 times the inverse phase component.
The synchronous component is in the same manner 3 times the deviation from the "inverse equilateral triangle". The directions of these components are correct for the relevant phase. It seems counterintuitive that this works for all three phases regardless of the side chosen, but that is the beauty of this illustration. The construction is an instance of Napoleon's theorem, which matches a graphical calculation technique that sometimes appears in older reference books. [6]
It can be seen that the transformation matrix A above is a DFT matrix, and as such, symmetrical components can be calculated for any poly-phase system.
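Since the analysis matrix $A^{-1}$ has entries $\frac{1}{N}\alpha^{jk}$ with $\alpha = e^{j 2\pi/N}$, it coincides with the inverse DFT matrix, so the sequence components of an N-phase set can be computed with a standard FFT routine. A sketch using NumPy (the use of NumPy is an assumption of this example; the text names no software):

```python
import numpy as np

# For an N-phase system the analysis matrix (1/N) * alpha^(j*k),
# alpha = exp(2j*pi/N), is exactly the inverse DFT matrix, so the
# sequence components are the inverse FFT of the phase phasors.
def sequence_components(phases):
    return np.fft.ifft(np.asarray(phases, dtype=complex))

alpha = np.exp(2j * np.pi / 3)
# A balanced abc set contains only the positive sequence component.
v0, v1, v2 = sequence_components([1.0, alpha ** 2, alpha])
assert abs(v0) < 1e-12 and abs(v1 - 1) < 1e-12 and abs(v2) < 1e-12
```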
Harmonics often occur in power systems as a consequence of non-linear loads. Each order of harmonics contributes to different sequence components. The fundamental and harmonics of order 3n + 1 (4th, 7th, 10th, ...) will contribute to the positive sequence component. Harmonics of order 3n − 1 (2nd, 5th, 8th, ...) will contribute to the negative sequence. Harmonics of order 3n (3rd, 6th, 9th, ...) contribute to the zero sequence.
Note that the rules above are applicable only if the harmonic content (or distortion) of each phase is exactly the same. Note also that even harmonics are not common in power systems.
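This mapping from harmonic order to sequence can be verified numerically: for a balanced h-th harmonic, the phase displacements scale with the harmonic order, so phase b lags phase a by h times 120°. A sketch (the function name is illustrative):

```python
import cmath

ALPHA = cmath.exp(2j * cmath.pi / 3)

def harmonic_sequence(h):
    """Return 0, 1, or 2 (zero, positive, negative sequence) for a balanced
    h-th harmonic, whose phase displacements scale with the harmonic order."""
    va = 1 + 0j
    vb = cmath.exp(-2j * cmath.pi * h / 3)  # phase b lags a by h * 120 degrees
    vc = cmath.exp(-4j * cmath.pi * h / 3)  # phase c lags a by h * 240 degrees
    v0 = abs(va + vb + vc) / 3
    v1 = abs(va + ALPHA * vb + ALPHA ** 2 * vc) / 3
    v2 = abs(va + ALPHA ** 2 * vb + ALPHA * vc) / 3
    return max(range(3), key=lambda k: (v0, v1, v2)[k])

# Orders 1, 4, 7, ... are positive; 2, 5, 8, ... negative; 3, 6, 9, ... zero.
assert [harmonic_sequence(h) for h in (1, 2, 3, 4, 5, 6, 7)] == [1, 2, 0, 1, 2, 0, 1]
```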
The zero sequence represents the component of the unbalanced phasors that is equal in magnitude and phase. Because they are in phase, zero sequence currents flowing through an n-phase network will sum to n times the magnitude of the individual zero sequence current components. Under normal operating conditions this sum is small enough to be negligible. However, during large zero sequence events such as lightning strikes, this nonzero sum of currents can lead to a larger current flowing through the neutral conductor than through the individual phase conductors. Because neutral conductors are typically no larger, and often smaller, than the phase conductors, a large zero sequence component can lead to overheating of neutral conductors and to fires.
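In a three-phase system the neutral return current is the phasor sum of the phase currents, which by the analysis equation equals three times the zero sequence component. A quick numerical illustration (the current values are arbitrary):

```python
import cmath

# Arbitrary unbalanced phase currents, expressed as complex phasors (amperes).
ia = cmath.rect(10.0, 0.0)
ib = cmath.rect(12.0, -2.2)
ic = cmath.rect(9.0, 2.0)

i0 = (ia + ib + ic) / 3   # zero sequence component of the current
i_neutral = ia + ib + ic  # current returning through the neutral conductor
assert abs(i_neutral - 3 * i0) < 1e-12
```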
One way to prevent large zero sequence currents is to use a delta connection, which appears as an open circuit to zero sequence currents. For this reason, most transmission and much sub-transmission is implemented using delta. Much distribution is also implemented using delta, although "old work" distribution systems have occasionally been "wyed-up" (converted from delta to wye) to increase a line's capacity at low conversion cost, at the expense of a higher central-station protective relay cost.
[…] the results of Fortescue […] are proven by the superposition theorem, and for this reason, a direct generalization to nonlinear networks is impossible.