Autonomous system (mathematics)

Figure: Stability diagram classifying Poincaré maps of the linear autonomous system x' = Ax as stable or unstable according to their features. Stability generally increases to the left of the diagram. Sinks, sources, and nodes are equilibrium points.

Figure: The two-dimensional case is treated in the article Phase plane.

In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems.


Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.

Definition

An autonomous system is a system of ordinary differential equations of the form

dx/dt = f(x(t)),

where x takes values in n-dimensional Euclidean space and t is often interpreted as time.

It is distinguished from systems of differential equations of the form

dx/dt = g(x(t), t),

in which the law governing the evolution of the system does not depend solely on the system's current state but also on the parameter t, again often interpreted as time; such systems are by definition not autonomous.

Properties

Solutions are invariant under horizontal translations:

Let x1(t) be the unique solution of the initial value problem for an autonomous system,

dx/dt = f(x(t)),   x(0) = x0.

Then x2(t) = x1(t − t0) solves

dx/dt = f(x(t)),   x(t0) = x0.

Indeed, denoting s = t − t0 gives x1(s) = x2(t) and ds = dt, thus

d/dt x2(t) = d/dt x1(t − t0) = d/ds x1(s) = f(x1(s)) = f(x2(t)).

For the initial condition, the verification is trivial:

x2(t0) = x1(t0 − t0) = x1(0) = x0.
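This translation property can be illustrated numerically. A minimal Python sketch (the choice f(x) = −x and the step count are illustrative) integrates the system with forward Euler from two different starting times and confirms that the trajectories coincide:

```python
# Numerically illustrate translation invariance of an autonomous ODE:
# if x1 solves dx/dt = f(x) with x(0) = x0, then x2(t) = x1(t - t0)
# solves the same equation with x(t0) = x0.  Here f(x) = -x (illustrative).

def euler(f, x0, t_start, t_end, n=100000):
    """Integrate dx/dt = f(x) from t_start to t_end with forward Euler."""
    h = (t_end - t_start) / n
    x = x0
    for _ in range(n):
        x += h * f(x)
    return x

f = lambda x: -x
x0 = 1.0

# Solution started at t = 0, evaluated one time unit later ...
a = euler(f, x0, 0.0, 1.0)
# ... coincides with the solution started at t = 5, one time unit later.
b = euler(f, x0, 5.0, 6.0)
assert abs(a - b) < 1e-12
```

Because f has no explicit t-dependence, the two integrations perform identical arithmetic; for a non-autonomous right-hand side g(x, t) the two values would generally differ.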

Example

The equation dy/dx = (2 − y)y is autonomous, since the independent variable (here x) does not explicitly appear in the equation. To plot the slope field and isoclines for this equation, one can use the following code in GNU Octave/MATLAB:

Ffun = @(X, Y) (2 - Y).*Y;           % function f(x,y) = (2-y)y
[X, Y] = meshgrid(0:.2:6, -1:.2:3);  % choose the plot sizes
DY = Ffun(X, Y);
DX = ones(size(DY));                 % generate the plot values
quiver(X, Y, DX, DY, 'k');           % plot the direction field in black
hold on;
contour(X, Y, DY, [0 1 2], 'g');     % add the isoclines (0 1 2) in green
title('Slope field and isoclines for f(x,y) = (2-y)y')

One can observe from the plot that the function (2 − y)y is x-invariant, and so is the shape of a solution: shifting any solution horizontally by x0 yields another solution, y(x − x0).

Solving the equation symbolically in MATLAB, by running

syms y(x);
equation = (diff(y) == (2 - y)*y);
% solve the equation for a general solution symbolically
y_general = dsolve(equation);

one obtains two equilibrium solutions, y = 0 and y = 2, and a third solution involving an unknown constant C3: y = -2/(exp(C3 - 2*x) - 1).
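Independently of MATLAB, these results can be sanity-checked numerically; a minimal Python sketch (the value of C3 is an arbitrary illustrative choice) confirms that the equilibria annihilate the right-hand side and that the general solution satisfies the differential equation:

```python
# Check that the symbolic results satisfy y' = (2 - y)*y:
# the equilibria y = 0 and y = 2, and the general solution
# y(x) = -2/(exp(C3 - 2x) - 1) for an arbitrary constant C3.
import math

f = lambda y: (2 - y) * y

# Equilibria: the right-hand side vanishes.
assert f(0) == 0 and f(2) == 0

# General solution: compare a central finite difference of y with f(y).
C3 = 0.7          # arbitrary constant (illustrative value)
y = lambda x: -2 / (math.exp(C3 - 2 * x) - 1)
h = 1e-6
for x in [1.0, 2.0, 3.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - f(y(x))) < 1e-4
```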

Picking some specific values for the initial condition, one can add the plots of several solutions.

Figure: Slope field with isoclines and solutions.
% solve the initial value problem symbolically
% for different initial conditions
y1 = dsolve(equation, y(1) == 1);
y2 = dsolve(equation, y(2) == 1);
y3 = dsolve(equation, y(3) == 1);
y4 = dsolve(equation, y(1) == 3);
y5 = dsolve(equation, y(2) == 3);
y6 = dsolve(equation, y(3) == 3);
% plot the solutions
ezplot(y1, [0 6]); ezplot(y2, [0 6]); ezplot(y3, [0 6]);
ezplot(y4, [0 6]); ezplot(y5, [0 6]); ezplot(y6, [0 6]);
title('Slope field, isoclines and solutions for f(x,y) = (2-y)y')
legend('Slope field', 'Isoclines', 'Solutions y_{1..6}');
text([1 2 3], [1 1 1], strcat('\leftarrow', {'y_1', 'y_2', 'y_3'}));
text([1 2 3], [3 3 3], strcat('\leftarrow', {'y_4', 'y_5', 'y_6'}));
grid on;

Qualitative analysis

Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.
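On the phase line, an equilibrium of y' = f(y) is classified by the sign of f on either side of it. A small Python sketch of this test (the helper classify and the example f(y) = (2 − y)y are illustrative):

```python
# Phase-line sketch: classify equilibria of y' = f(y) by the sign of f
# on either side (illustrative, using f(y) = (2 - y)*y from the example).

def classify(f, y_eq, eps=1e-3):
    """Return 'stable' if the flow points toward y_eq from both sides,
    'unstable' if it points away on both sides, else 'semi-stable'."""
    left, right = f(y_eq - eps), f(y_eq + eps)
    if left > 0 and right < 0:
        return "stable"
    if left < 0 and right > 0:
        return "unstable"
    return "semi-stable"

f = lambda y: (2 - y) * y
assert classify(f, 0) == "unstable"   # solutions move away from y = 0
assert classify(f, 2) == "stable"     # solutions converge to y = 2
```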

Solution techniques

The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order n is equivalent to an n-dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.

First order

The first-order autonomous equation

dx/dt = f(x)

is separable, so it can be solved by rearranging it into the integral form

t + C = ∫ dx / f(x).
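As a quick numerical illustration of the integral form (the choice f(x) = x, whose solution is x(t) = x0·e^t, is illustrative): integrating 1/f from x0 to x(t) should reproduce the elapsed time t.

```python
# The integral form t + C = ∫ dx/f(x) in action for f(x) = x:
# numerically integrating 1/f from x0 to x(t) recovers the elapsed
# time t for the known solution x(t) = x0 * e^t.
import math

def quad(g, a, b, n=10000):
    """Composite midpoint rule for the integral of g from a to b."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x
x0, t = 1.0, 0.8
x_t = x0 * math.exp(t)                # known solution of dx/dt = x at time t
elapsed = quad(lambda x: 1 / f(x), x0, x_t)
assert abs(elapsed - t) < 1e-6
```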

Second order

The second-order autonomous equation

d²x/dt² = f(x, x')

is more difficult, but it can be solved [2] by introducing the new variable

v = dx/dt

and expressing the second derivative of x via the chain rule as

d²x/dt² = dv/dt = (dv/dx)(dx/dt) = v dv/dx,

so that the original equation becomes

v dv/dx = f(x, v),

which is a first-order equation containing no reference to the independent variable t. Solving it provides v as a function of x. Then, recalling the definition of v:

dx/dt = v(x)   ⇒   t + C = ∫ dx / v(x),

which is an implicit solution.
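The chain-rule step can be checked on a concrete trajectory; a short Python sketch, assuming the illustrative trajectory x(t) = sin t, for which v(x) = √(1 − x²) and d²x/dt² = −x on 0 < t < π/2:

```python
# Check the chain-rule step d^2x/dt^2 = v * dv/dx numerically for the
# trajectory x(t) = sin t, where v(x) = sqrt(1 - x^2) on 0 < t < pi/2
# and the second derivative is -sin t = -x.
import math

v = lambda x: math.sqrt(1 - x * x)    # velocity as a function of position
h = 1e-6
for t in [0.3, 0.7, 1.1]:
    x = math.sin(t)
    dv_dx = (v(x + h) - v(x - h)) / (2 * h)
    assert abs(v(x) * dv_dx - (-x)) < 1e-4   # v * dv/dx equals x'' = -x
```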

Special case: x'' = f(x)

The special case where f is independent of x',

d²x/dt² = f(x),

benefits from separate treatment. [3] These types of equations are very common in classical mechanics because they are always Hamiltonian systems.

The idea is to make use of the identity

dx/dt = (dt/dx)⁻¹,

which follows from the chain rule, barring any issues due to division by zero.

By inverting both sides of a first-order autonomous system, one can immediately integrate with respect to x:

dx/dt = f(x)   ⇒   dt/dx = 1/f(x)   ⇒   t + C = ∫ dx / f(x),

which is another way to view the separation of variables technique. The second derivative must be expressed as a derivative with respect to x instead of t:

d²x/dt² = d/dt (dx/dt) = (dx/dt) d/dx (dx/dt) = (1/2) d/dx [(dx/dt)²].

To reemphasize: what has been accomplished is that the second derivative with respect to t has been expressed as a derivative with respect to x. The original second-order equation can now be integrated:

(1/2) d/dx [(dx/dt)²] = f(x)
(dx/dt)² = 2 ∫ f(x) dx + C₁
dx/dt = ±√(2 ∫ f(x) dx + C₁)
t + C₂ = ∫ dx / ±√(2 ∫ f(x) dx + C₁).

This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants.
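For a concrete check of the once-integrated form, consider f(x) = −x (the harmonic oscillator, an illustrative choice): then (dx/dt)² = 2∫f(x)dx + C₁ = C₁ − x², which the known solution x = sin t, x' = cos t satisfies with C₁ = 1:

```python
# The special case x'' = f(x) with f(x) = -x (harmonic oscillator):
# integrating once gives (dx/dt)^2 = 2*Int(f) + C1 = C1 - x^2, which the
# known solution x = sin t, x' = cos t satisfies with C1 = 1.
import math

C1 = 1.0
for t in [0.0, 0.5, 1.0, 2.0]:
    x, xdot = math.sin(t), math.cos(t)
    assert abs(xdot**2 - (C1 - x**2)) < 1e-12
```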

Special case: x'' = (x')^n f(x)

Using the above approach, the technique can be extended to the more general equation

d²x/dt² = (dx/dt)ⁿ f(x),

where n is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of x'. Writing v = dx/dt, rewriting the second derivative, rearranging, and expressing the left side as a derivative:

v dv/dx = vⁿ f(x)
v^(1−n) dv = f(x) dx
v^(2−n) / (2 − n) = ∫ f(x) dx + C₁
dx/dt = [(2 − n)(∫ f(x) dx + C₁)]^(1/(2−n))
t + C₂ = ∫ dx / [(2 − n)(∫ f(x) dx + C₁)]^(1/(2−n)).

The right side will carry ± if n is even. The treatment must be different if n = 2:

v dv/dx = v² f(x)
ln|dx/dt| = ∫ f(x) dx + C₁
dx/dt = ±e^(C₁) e^(∫ f(x) dx)
t + C₂ = ±e^(−C₁) ∫ e^(−∫ f(x) dx) dx.
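A quick check of the reduction for a specific exponent (n = 3 and f(x) = 1, both illustrative choices): the formula gives v⁻¹/(−1) = x + C₁, so with C₁ = 0 the trajectory x(t) = √(C − 2t) should satisfy x' = −1/x and x'' = (x')³:

```python
# Verify the reduction v^(2-n)/(2-n) = Int(f) + C1 for n = 3, f(x) = 1:
# then -1/v = x + C1, and with C1 = 0 the trajectory x(t) = sqrt(C - 2t)
# solves x'' = (x')^3 * f(x).  Derivatives are taken by finite differences.
import math

C = 10.0
x = lambda t: math.sqrt(C - 2 * t)
h = 1e-5
for t in [0.0, 1.0, 2.0]:
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(xdot - (-1 / x(t))) < 1e-6    # v = -1/x  (C1 = 0)
    assert abs(xddot - xdot**3) < 1e-3       # x'' = (x')^3 * f(x), f = 1
```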

Higher orders

There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable only [4] [5] (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor.

Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum. [6]

Multivariate case

In x' = Ax, where x(t) is an n-dimensional column vector dependent on t and A is a constant n × n matrix, the solution is

x(t) = e^(At) c,

where c is an n × 1 constant vector. [7]
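The closed-form solution can be sanity-checked numerically; a minimal Python sketch (pure standard library; the 2 × 2 matrix A is an illustrative choice) builds e^(At) from its truncated power series and confirms x' = Ax by finite differences:

```python
# The multivariate solution x(t) = exp(At) * c for a 2x2 example:
# exp(At) is computed from its truncated power series, and the result
# is checked against x'(t) = A * x(t) using a central finite difference.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    """Truncated series for exp(At): sum of (At)^k / k!."""
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[v / k for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]           # rotation generator (illustrative)
c = [1.0, 0.0]

def x(t):
    E = expm(A, t)
    return [E[0][0] * c[0] + E[0][1] * c[1],
            E[1][0] * c[0] + E[1][1] * c[1]]

h, t = 1e-5, 0.6
xdot = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
Ax = [A[0][0] * x(t)[0] + A[0][1] * x(t)[1],
      A[1][0] * x(t)[0] + A[1][1] * x(t)[1]]
assert all(abs(xdot[i] - Ax[i]) < 1e-6 for i in range(2))
```

For this A, e^(At) is the rotation matrix [[cos t, sin t], [−sin t, cos t]], so the check also confirms x(t) = (cos t, −sin t) for this initial vector.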

Finite durations

For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, [8] meaning that, by its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz at the ending time, they are not covered by the uniqueness theorem for Lipschitz differential equations.

As an example, the equation

y' = −sgn(y) √|y|,   y(0) = 1,

admits the finite-duration solution

y(x) = (1/4)(1 − x/2 + |1 − x/2|)²,

which is identically zero for x ≥ 2.
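A minimal Python check of such a finite-duration solution, using the standard example y' = −sgn(y)√|y| with y(0) = 1 and its solution y(x) = (1/4)(1 − x/2 + |1 − x/2|)²:

```python
# Verify a finite-duration solution: y(x) = (1/4)(1 - x/2 + |1 - x/2|)^2
# satisfies y' = -sgn(y) * sqrt(|y|), starts at y(0) = 1, and vanishes
# identically for x >= 2 (the ending time).
import math

y = lambda x: 0.25 * (1 - x / 2 + abs(1 - x / 2)) ** 2
sgn = lambda v: (v > 0) - (v < 0)
f = lambda v: -sgn(v) * math.sqrt(abs(v))

assert y(0) == 1.0
assert y(2) == 0.0 and y(5) == 0.0      # zero forever after the ending time

h = 1e-7
for x0 in [0.5, 1.0, 1.5, 3.0]:
    dydx = (y(x0 + h) - y(x0 - h)) / (2 * h)
    assert abs(dydx - f(y(x0))) < 1e-4
```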


References

  1. Egwald Mathematics - Linear Algebra: Systems of Linear Differential Equations: Linear Stability Analysis. Accessed 10 October 2019.
  2. Boyce, William E.; DiPrima, Richard C. (2005). Elementary Differential Equations and Boundary Value Problems (8th ed.). John Wiley & Sons. p. 133. ISBN 0-471-43338-1.
  3. "Second order autonomous equation" (PDF). EqWorld. Retrieved 28 February 2021.
  4. "Third order autonomous equation" at EqWorld.
  5. "Fourth order autonomous equation" at EqWorld.
  6. Blanchard; Devaney; Hall (2005). Differential Equations. Brooks/Cole Publishing Co. pp. 540–543. ISBN 0-495-01265-3.
  7. "Method of Matrix Exponential". Math24. Retrieved 28 February 2021.
  8. Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832. S2CID 45426376.