Flow graph (mathematics)

A flow graph is a form of digraph associated with a set of linear algebraic or differential equations: [1] [2]

"A signal flow graph is a network of nodes (or points) interconnected by directed branches, representing a set of linear algebraic equations. The nodes in a flow graph are used to represent the variables, or parameters, and the connecting branches represent the coefficients relating these variables to one another. The flow graph is associated with a number of simple rules which enable every possible solution [related to the equations] to be obtained." [1]

Although this definition uses the terms "signal-flow graph" and "flow graph" interchangeably, the term "signal-flow graph" is most often used to designate the Mason signal-flow graph, Mason being the originator of this terminology in his work on electrical networks. [3] [4] Likewise, some authors use the term "flow graph" to refer strictly to the Coates flow graph. [5] [6] According to Henley & Williams: [2]

"The nomenclature is far from standardized, and...no standardization can be expected in the foreseeable future."

A designation "flow graph" that includes both the Mason graph and the Coates graph, as well as a variety of other forms of such graphs, [7] appears useful; it agrees with the approach of Abrahams and Coverley and of Henley and Williams. [1] [2]

A directed network – also known as a flow network – is a particular type of flow graph. A network is a graph with real numbers associated with each of its edges, and if the graph is a digraph, the result is a directed network. [8] A flow graph is more general than a directed network in that its edges may be associated with gains (also called branch gains or transmittances), or even with functions of the Laplace operator s, in which case they are called transfer functions. [2]

There is a close relationship between graphs and matrices and between digraphs and matrices. [9] "The algebraic theory of matrices can be brought to bear on graph theory to obtain results elegantly", and conversely, graph-theoretic approaches based upon flow graphs are used for the solution of linear algebraic equations. [10]
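This graph–matrix correspondence can be sketched in a few lines: a digraph's weighted edges determine a (weighted) adjacency matrix, and conversely. The node names and weights below are illustrative, not taken from the article.

```python
# Sketch: the weighted adjacency matrix of a small digraph.
# Node names and edge weights are illustrative placeholders.

def adjacency_matrix(nodes, edges):
    """Return matrix A with A[i][j] = weight of the edge nodes[i] -> nodes[j]."""
    index = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = [[0.0] * n for _ in range(n)]
    for (u, v, w) in edges:
        A[index[u]][index[v]] = w
    return A

nodes = ["x1", "x2", "x3"]
edges = [("x1", "x2", 2.0), ("x2", "x3", -1.0), ("x3", "x1", 0.5)]
print(adjacency_matrix(nodes, edges))
```

Reading the matrix rows recovers the digraph, which is the correspondence the graph-theoretic solution methods exploit.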

Deriving a flow graph from equations

[Figure: An example of a signal-flow graph.]
[Figure: Flow graph for three simultaneous equations. The edges incident on each node are colored differently just for emphasis.]

This section presents an example of a flow graph derived from a set of starting equations.

The set of equations should be consistent and linearly independent. An example of such a set is: [2]

Consistency and independence of the equations in the set is established because the determinant of coefficients is non-zero, so a solution can be found using Cramer's rule.
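As a concrete illustration of this check, here is a minimal sketch of Cramer's rule for a 3×3 system; the coefficient matrix and right-hand side are made-up stand-ins, since the article's example set is not reproduced here.

```python
# Cramer's rule for a 3x3 system C x = y. The numbers are illustrative
# stand-ins, not the article's example set.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(C, y):
    D = det3(C)
    if D == 0:
        raise ValueError("zero determinant: equations not independent and consistent")
    sols = []
    for k in range(3):
        Ck = [row[:] for row in C]
        for j in range(3):
            Ck[j][k] = y[j]          # replace column k by the right-hand side
        sols.append(det3(Ck) / D)
    return sols

C = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
y = [3.0, 5.0, 3.0]
print(cramer3(C, y))  # [1.0, 1.0, 1.0]
```

The nonzero determinant test in `cramer3` is exactly the consistency-and-independence condition stated above.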

Using the examples from the subsection Elements of signal-flow graphs, we construct the graph shown in the figure, a signal-flow graph in this case. To check that the graph does represent the equations given, go to node x1. Look at the arrows incoming to this node (colored green for emphasis) and the weights attached to them. The equation for x1 is satisfied by equating x1 to the sum of the nodes attached to the incoming arrows, each multiplied by the weight attached to its arrow. Likewise, the red arrows and their weights provide the equation for x2, and the blue arrows the equation for x3.
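The node-by-node check just described can be sketched in code. The incoming-branch weights and node values below are hypothetical, chosen only so that the equations come out satisfied; they do not match the figure.

```python
# Sketch of the node-by-node check: each node's value must equal the
# weighted sum over its incoming branches. Weights and values are
# hypothetical, chosen only so that the equations are satisfied.

incoming = {
    "x1": [("y", 1.0), ("x2", 0.5)],    # arrows into x1 (e.g. the green ones)
    "x2": [("x1", 0.5)],                # arrows into x2 (e.g. the red ones)
    "x3": [("x1", 1.0), ("x2", -1.0)],  # arrows into x3 (e.g. the blue ones)
}
values = {"y": 3.0, "x1": 4.0, "x2": 2.0, "x3": 2.0}

def satisfies(values, incoming):
    """True if every node equals the weighted sum of its incoming branches."""
    for node, branches in incoming.items():
        total = sum(w * values[src] for src, w in branches)
        if abs(values[node] - total) > 1e-9:
            return False
    return True

print(satisfies(values, incoming))  # True
```

Each entry of `incoming` plays the role of one colored bundle of arrows in the figure: one equation per node.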

Another example is the general case of three simultaneous equations with unspecified coefficients: [11]

To set up the flow graph, the equations are recast so each identifies a single variable by adding it to each side. For example:

Using the diagram and summing the branches incident on x1, this equation is seen to be satisfied.
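A small sketch of this verification, assuming the starting equations have the form Σk cjk xk = yj and are recast as xj = xj + (Σk cjk xk − yj): at a solution the added residual vanishes, so summing the incident branches reproduces xj unchanged. The coefficients below are illustrative, not the article's.

```python
# Assumed recast form: x[j] = x[j] + (sum_k C[j][k]*x[k] - y[j]).
# At a solution of C x = y the residual in parentheses is zero, so one
# application of the recast step leaves the solution unchanged.
# All numbers here are illustrative.

def recast_step(C, y, x):
    """Apply x[j] <- x[j] + (sum_k C[j][k]*x[k] - y[j]) once, for all j."""
    n = len(x)
    return [x[j] + sum(C[j][k] * x[k] for k in range(n)) - y[j]
            for j in range(n)]

C = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
y = [3.0, 5.0, 3.0]
x = [1.0, 1.0, 1.0]          # a solution of C x = y
print(recast_step(C, y, x))  # unchanged: [1.0, 1.0, 1.0]
```

This is the algebraic content of the graph check: the branches incident on each node encode one recast equation, and the solution is a fixed point of the recast system.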

As all three variables enter these recast equations in a symmetrical fashion, the symmetry is retained in the graph by placing each variable at the corner of an equilateral triangle. Rotating the figure 120° simply permutes the indices. This construction can be extended to more variables by placing the node for each variable at the vertex of a regular polygon with as many vertices as there are variables.

Of course, to be meaningful the coefficients are restricted to values such that the equations are independent and consistent.



References

  1. J. R. Abrahams; G. P. Coverley (1965). "Chapter 1: Elements of a flow graph". Signal Flow Analysis. Elsevier. p. 1. ISBN 9781483180700.
  2. Ernest J. Henley; R. A. Williams (1973). "Basic concepts". Graph Theory in Modern Engineering: Computer Aided Design, Control, Optimization, Reliability Analysis. Academic Press. p. 2. ISBN 9780080956077.
  3. Mason, Samuel J. (September 1953). "Feedback Theory – Some Properties of Signal Flow Graphs" (PDF). Proceedings of the IRE. 41 (9): 1144–1156. doi:10.1109/jrproc.1953.274449. S2CID 17565263. Archived from the original (PDF) on 2018-02-19. Retrieved 2015-01-09.
  4. Mason, S. J. (July 1956). "Feedback Theory – Further Properties of Signal Flow Graphs" (PDF). Proceedings of the IRE. 44 (7): 920–926. doi:10.1109/JRPROC.1956.275147. hdl:1721.1/4778. S2CID 18184015. On-line version found at MIT Research Laboratory of Electronics.
  5. Wai-Kai Chen (May 1964). "Some applications of linear graphs" (PDF). Coordinated Science Laboratory, University of Illinois, Urbana.
  6. R. F. Hoskins (2014). "Flow-graph and signal flow-graph analysis of linear systems". In S. R. Deards (ed.). Recent Developments in Network Theory: Proceedings of the Symposium Held at the College of Aeronautics, Cranfield, September 1961. Elsevier. ISBN 9781483223568.
  7. Kazuo Murota (2009). Matrices and Matroids for Systems Analysis. Springer Science & Business Media. p. 47. ISBN 9783642039942.
  8. Gary Chartrand (2012). Introductory Graph Theory (republication of Graphs as Mathematical Models, 1977 ed.). Courier Corporation. p. 19. ISBN 9780486134949.
  9. Frank Harary (January 1967). "Graphs and Matrices" (PDF). SIAM Review. 9 (2). Archived from the original (PDF) on 2016-03-04. Retrieved 2015-01-09.
  10. K. Thulasiraman; M. N. S. Swamy (2011). Graphs: Theory and Algorithms. John Wiley & Sons. pp. 163 ff. ISBN 9781118030257.
  11. Narsingh Deo (2004). Graph Theory with Applications to Engineering and Computer Science (reprint of 1974 ed.). Prentice-Hall of India. p. 417. ISBN 9788120301450.