Phase plane

In applied mathematics, in particular in the context of nonlinear system analysis, a phase plane is a visual representation of certain characteristics of certain kinds of differential equations: a coordinate plane whose axes are the values of the two state variables, say (x, y) or (q, p) (any pair of variables). It is a two-dimensional case of the general n-dimensional phase space.

The phase plane method refers to graphically determining the existence of limit cycles in the solutions of the differential equation.

The solutions to the differential equation are a family of functions. Graphically, this can be plotted in the phase plane like a two-dimensional vector field. Vectors representing the derivatives with respect to a parameter (say time t), that is (dx/dt, dy/dt), are drawn at representative points. With enough of these arrows in place, the behaviour of the system over the region of the plane under analysis can be visualized and limit cycles can be easily identified.
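
As a minimal sketch of this construction (assuming Python with NumPy and Matplotlib, and a hypothetical example system dx/dt = y, dy/dt = −x − y, not taken from the text), the derivative vectors can be sampled on a grid and drawn as arrows:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample the (x, y) plane on a regular grid.
x, y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))

# Derivatives (dx/dt, dy/dt) of the example system: a damped oscillator.
dxdt = y
dydt = -x - y

# One arrow per grid point; together the arrows reveal the flow, and any
# limit cycle shows up as a closed loop that the arrows circulate around.
plt.quiver(x, y, dxdt, dydt, angles='xy')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Phase plane direction field')
plt.show()
```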

The entire field is the phase portrait; a particular path taken along a flow line (i.e. a path always tangent to the vectors) is a phase path. The flows in the vector field indicate the time evolution of the system that the differential equation describes.

In this way, phase planes are useful in visualizing the behaviour of physical systems; in particular, of oscillatory systems such as predator-prey models (see Lotka–Volterra equations). In these models the phase paths can "spiral in" towards zero, "spiral out" towards infinity, or reach neutrally stable situations called centres, where the path traced out can be circular, elliptical, or ovoid, or some variant thereof. This is useful in determining whether the dynamics are stable. [1]
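
As an illustration, a phase path of the Lotka–Volterra system can be traced by numerical integration; the sketch below assumes SciPy and illustrative unit coefficients, under which the paths are closed orbits around a neutrally stable centre at (1, 1):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey model with illustrative unit coefficients:
# dx/dt = x - x*y (prey), dy/dt = x*y - y (predator).
def lotka_volterra(t, v):
    x, y = v
    return [x - x * y, x * y - y]

# Integrate from one initial condition and plot y against x: the phase
# path is a closed curve around the neutrally stable centre at (1, 1).
sol = solve_ivp(lotka_volterra, (0.0, 20.0), [2.0, 1.0], dense_output=True)
t = np.linspace(0.0, 20.0, 2000)
x, y = sol.sol(t)
plt.plot(x, y)
plt.xlabel('prey x')
plt.ylabel('predator y')
plt.show()
```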

Other examples of oscillatory systems are certain chemical reactions with multiple steps, some of which involve dynamic equilibria rather than reactions that go to completion. In such cases one can model the rise and fall of reactant and product concentration (or mass, or amount of substance) with the correct differential equations and a good understanding of chemical kinetics. [2]

Example of a linear system

A two-dimensional system of linear differential equations can be written in the form: [1]

$$\frac{dx}{dt} = ax + by, \qquad \frac{dy}{dt} = cx + dy,$$

which can be organized into a matrix equation:

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}, \qquad \text{i.e.} \qquad \frac{d\mathbf{v}}{dt} = A\mathbf{v},$$

where A is the 2 × 2 coefficient matrix above, and v = (x, y) is a coordinate vector of two independent variables.

Such systems may be solved analytically, in this case by integrating: [3]

$$\frac{dy}{dx} = \frac{cx + dy}{ax + by},$$

although the solutions are implicit functions in x and y, and are difficult to interpret. [1]

Solving using eigenvalues

More commonly they are solved with the coefficients of the right-hand side written in matrix form using eigenvalues λ, given by the determinant:

$$\det(A - \lambda I) = \begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix} = 0,$$

and eigenvectors:

$$\mathbf{k} = \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} \qquad \text{satisfying} \qquad (A - \lambda I)\mathbf{k} = \mathbf{0}.$$

The eigenvalues appear in the exponents of the exponential components of the solution, and the eigenvectors are their coefficients. If the solutions are written in algebraic form, the eigenvalues express the fundamental multiplicative factor of each exponential term. Due to the nonuniqueness of eigenvectors, every solution arrived at in this way has undetermined constants c1, c2, …, cn.

The general solution is:

$$\mathbf{v}(t) = c_1 e^{\lambda_1 t} \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} + c_2 e^{\lambda_2 t} \begin{pmatrix} k_3 \\ k_4 \end{pmatrix},$$

where λ1 and λ2 are the eigenvalues, and (k1, k2), (k3, k4) are the basic eigenvectors. The constants c1 and c2 account for the nonuniqueness of eigenvectors and are not solvable unless an initial condition is given for the system.
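
A hedged numerical sketch of this recipe (NumPy, with an arbitrary example matrix not taken from the text) computes the eigenpairs and fixes c1, c2 from an initial condition:

```python
import numpy as np

# Illustrative coefficient matrix (example values only).
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])

# lam holds the eigenvalues; the columns of K are the eigenvectors.
lam, K = np.linalg.eig(A)

# An initial condition v(0) = K @ c determines the constants c1, c2.
v0 = np.array([1.0, 0.0])
c = np.linalg.solve(K, v0)

# General solution v(t) = c1*exp(lam1*t)*k1 + c2*exp(lam2*t)*k2.
def v(t):
    return K @ (c * np.exp(lam * t))

print(v(0.0))  # reproduces v0
print(v(1.0))  # state after one time unit
```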

The determinant det(A − λI) = 0 leads to the characteristic polynomial:

$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0,$$

which is just a quadratic equation of the form:

$$\lambda^2 - p\lambda + q = 0,$$

where

$$p = a + d = \operatorname{tr}(A)$$

("tr" denotes trace) and

$$q = ad - bc = \det(A).$$

The explicit solution for the eigenvalues is then given by the quadratic formula:

$$\lambda = \frac{p \pm \sqrt{\Delta}}{2},$$

where

$$\Delta = p^2 - 4q = \operatorname{tr}(A)^2 - 4\det(A).$$
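
As a small consistency check (NumPy, arbitrary example matrix), the quadratic-formula roots can be compared with a direct eigenvalue computation:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # arbitrary example matrix

p = np.trace(A)        # p = tr(A)
q = np.linalg.det(A)   # q = det(A)
delta = p**2 - 4*q     # discriminant

# Roots of lambda^2 - p*lambda + q = 0 by the quadratic formula...
roots = (p + np.array([1.0, -1.0]) * np.sqrt(complex(delta))) / 2
# ...agree with the eigenvalues computed directly from A.
print(np.sort_complex(roots))
print(np.sort_complex(np.linalg.eigvals(A).astype(complex)))
```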

Eigenvectors and nodes

The eigenvectors and nodes determine the profile of the phase paths, providing a pictorial interpretation of the solution to the dynamical system, as shown next.

[Figure: Classification of equilibrium points of a linear autonomous system. These profiles also arise for non-linear autonomous systems in linearized approximations.]

The phase plane is then first set up by drawing straight lines representing the two eigenvectors (which represent stable situations where the system either converges towards those lines or diverges away from them). Then the phase plane is plotted using full lines instead of direction-field dashes. The signs of the eigenvalues indicate the phase plane's behaviour:

- If the signs are opposite, the intersection of the eigenvectors is a saddle point.
- If both signs are positive, trajectories diverge from the intersection, which is an unstable node.
- If both signs are negative, trajectories converge towards the intersection, which is a stable node.

The above can be visualized by recalling the behaviour of exponential terms in differential equation solutions.
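
A minimal classifier along these lines (Python with NumPy; the labels follow the standard sign-based classification and cover only the generic cases) might read:

```python
import numpy as np

def classify(A):
    """Classify the equilibrium at the origin of dv/dt = A v
    from its eigenvalues (generic 2x2 cases only)."""
    lam = np.linalg.eigvals(A).astype(complex)
    if np.any(lam.imag != 0):                  # complex pair: rotation
        if np.allclose(lam.real, 0.0):
            return "centre"
        return "stable spiral" if np.all(lam.real < 0) else "unstable spiral"
    lam = lam.real
    if lam[0] * lam[1] < 0:
        return "saddle point"
    return "stable node" if np.all(lam < 0) else "unstable node"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # centre
print(classify(np.array([[1.0, 0.0], [0.0, -2.0]])))   # saddle point
```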

Repeated eigenvalues

This example covers only the case of real, distinct eigenvalues. Real, repeated eigenvalues require solving (A − λI)w = k, where k is the first eigenvector and w an unknown generalized eigenvector, to generate the second solution of a two-by-two system. However, if the matrix is symmetric, it is possible to use the orthogonal eigenvector to generate the second solution.
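
A hedged sketch of that procedure (NumPy, with an illustrative defective matrix; w denotes the generalized eigenvector solving (A − λI)w = k):

```python
import numpy as np

# Illustrative defective matrix with repeated eigenvalue lambda = 2
# and only one independent eigenvector k = (1, 0).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
k = np.array([1.0, 0.0])

# Generalized eigenvector: solve (A - lam*I) w = k. Least squares is used
# because A - lam*I is singular; w is determined only up to a multiple of k.
w, *_ = np.linalg.lstsq(A - lam * np.eye(2), k, rcond=None)

# Second independent solution: v2(t) = exp(lam*t) * (t*k + w).
def v2(t):
    return np.exp(lam * t) * (t * k + w)

print(w)        # here (0, 1), up to a multiple of k
print(v2(1.0))
```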

Complex eigenvalues

Complex eigenvalues and eigenvectors generate solutions in the form of sines and cosines as well as exponentials. One simplification in this situation is that only one of the eigenvalues and one of the eigenvectors is needed to generate the full solution set for the system.
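
As an illustrative sketch (NumPy, example matrix with eigenvalues −0.5 ± i), the real and imaginary parts of one complex solution e^{λt}k supply two independent real solutions built from exponentials times sines and cosines:

```python
import numpy as np

# Illustrative matrix with complex eigenvalues -0.5 +/- 1j (a stable spiral).
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])

lam, K = np.linalg.eig(A)
lam0, k = lam[0], K[:, 0]   # a single eigenpair suffices

# Real and imaginary parts of exp(lam0*t)*k are each real solutions
# of dv/dt = A v, since A is real.
def real_solutions(t):
    z = np.exp(lam0 * t) * k
    return z.real, z.imag

# Check one of them against the equation with a central difference.
eps = 1e-6
u = real_solutions(1.0)[0]
du = (real_solutions(1.0 + eps)[0] - real_solutions(1.0 - eps)[0]) / (2 * eps)
print(np.allclose(du, A @ u))   # True
```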


References

1. D.W. Jordan; P. Smith (2007). Nonlinear Ordinary Differential Equations: An Introduction for Scientists and Engineers (4th ed.). Oxford University Press. ISBN 978-0-19-920825-8.
2. K.T. Alligood; T.D. Sauer; J.A. Yorke (1996). Chaos: An Introduction to Dynamical Systems. Springer. ISBN 978-0-387-94677-1.
3. W.E. Boyce; R.C. DiPrima (1986). Elementary Differential Equations and Boundary Value Problems (4th ed.). John Wiley & Sons. ISBN 0-471-83824-1.