In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems.
Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.
An autonomous system is a system of ordinary differential equations of the form
$$\frac{d}{dt}x(t) = f(x(t)),$$
where $x$ takes values in $n$-dimensional Euclidean space; $t$ is often interpreted as time.
It is distinguished from systems of differential equations of the form
$$\frac{d}{dt}x(t) = g(x(t), t),$$
in which the law governing the evolution of the system does not depend solely on the system's current state but also on the parameter $t$, again often interpreted as time; such systems are by definition not autonomous.
Solutions are invariant under horizontal translations:
Let $x_1(t)$ be a unique solution of the initial value problem for an autonomous system
$$\frac{d}{dt}x(t) = f(x(t)), \qquad x(0) = x_0.$$
Then $x_2(t) = x_1(t - t_0)$ solves
$$\frac{d}{dt}x(t) = f(x(t)), \qquad x(t_0) = x_0.$$
Writing $s = t - t_0$, one gets $x_1(s) = x_2(t)$ and $ds = dt$, thus
$$\frac{d}{dt}x_2(t) = \frac{d}{dt}x_1(t - t_0) = \frac{d}{ds}x_1(s) = f(x_1(s)) = f(x_2(t)).$$
For the initial condition, the verification is trivial:
$$x_2(t_0) = x_1(t_0 - t_0) = x_1(0) = x_0.$$
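For instance, the scalar autonomous equation $\frac{dx}{dt} = x$ with $x(0) = x_0$ has the solution $x_1(t) = x_0 e^{t}$, and the shifted function
$$x_2(t) = x_1(t - t_0) = x_0 e^{t - t_0}$$
satisfies the same equation with the shifted initial condition $x_2(t_0) = x_0$, illustrating the translation invariance.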
The equation
$$\frac{dy}{dx} = (2 - y)\,y$$
is autonomous, since the independent variable ($x$) does not explicitly appear in the equation. To plot the slope field and isoclines for this equation, one can use the following code in GNU Octave/MATLAB:
Ffun = @(X, Y) (2 - Y) .* Y;              % function f(x,y) = (2-y)y
[X, Y] = meshgrid(0:.2:6, -1:.2:3);       % choose the plot sizes
DY = Ffun(X, Y); DX = ones(size(DY));     % generate the plot values
quiver(X, Y, DX, DY, 'k');                % plot the direction field in black
hold on;
contour(X, Y, DY, [0 1 2], 'g');          % add the isoclines (0 1 2) in green
title('Slope field and isoclines for f(x,y)=(2-y)y')
One can observe from the plot that the function $(2 - y)\,y$ is $x$-invariant, and so is the shape of the solution, i.e. $y(x) = y(x - x_0)$ for any shift $x_0$.
Solving the equation symbolically in MATLAB by running
syms y(x);
equation = (diff(y) == (2 - y) * y);
% solve the equation for a general solution symbolically
y_general = dsolve(equation);
one obtains two equilibrium solutions, $y = 0$ and $y = 2$, and a third solution involving an unknown constant $C_3$: -2/(exp(C3 - 2*x) - 1).
Picking some specific values for the initial condition, one can add the plot of several solutions:
% solve the initial value problem symbolically
% for different initial conditions
y1 = dsolve(equation, y(1) == 1);
y2 = dsolve(equation, y(2) == 1);
y3 = dsolve(equation, y(3) == 1);
y4 = dsolve(equation, y(1) == 3);
y5 = dsolve(equation, y(2) == 3);
y6 = dsolve(equation, y(3) == 3);

% plot the solutions
ezplot(y1, [0 6]); ezplot(y2, [0 6]); ezplot(y3, [0 6]);
ezplot(y4, [0 6]); ezplot(y5, [0 6]); ezplot(y6, [0 6]);
title('Slope field, isoclines and solutions for f(x,y)=(2-y)y')
legend('Slope field', 'Isoclines', 'Solutions y_{1..6}');
text([1 2 3], [1 1 1], strcat('\leftarrow', {'y_1', 'y_2', 'y_3'}));
text([1 2 3], [3 3 3], strcat('\leftarrow', {'y_4', 'y_5', 'y_6'}));
grid on;
Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.
The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order $n$ is equivalent to an $n$-dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.
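For example, the second-order equation $x'' = f(x, x')$ is equivalent to the two-dimensional first-order system obtained by setting $x_1 = x$ and $x_2 = x'$:
$$x_1' = x_2, \qquad x_2' = f(x_1, x_2).$$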
The first-order autonomous equation
$$\frac{dx}{dt} = f(x)$$
is separable, so it can be solved by rearranging it into the integral form
$$t + C = \int \frac{dx}{f(x)}.$$
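For instance, with $f(x) = x$ this gives
$$t + C = \int \frac{dx}{x} = \ln|x|,$$
so $x(t) = C_1 e^{t}$ after exponentiating and absorbing signs into the constant $C_1$.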
The second-order autonomous equation
$$\frac{d^2x}{dt^2} = f(x, x')$$
is more difficult, but it can be solved [2] by introducing the new variable
$$v = \frac{dx}{dt}$$
and expressing the second derivative of $x$ via the chain rule as
$$\frac{d^2x}{dt^2} = \frac{dv}{dt} = \frac{dx}{dt}\frac{dv}{dx} = v\frac{dv}{dx},$$
so that the original equation becomes
$$v\frac{dv}{dx} = f(x, v),$$
which is a first order equation containing no reference to the independent variable $t$. Solving provides $v$ as a function of $x$. Then, recalling the definition of $v$:
$$\frac{dx}{dt} = v(x) \quad \Longrightarrow \quad t + C = \int \frac{dx}{v(x)},$$
which is an implicit solution.
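As a short worked example, consider
$$\frac{d^2x}{dt^2} = 2x\frac{dx}{dt}.$$
The substitution gives $v\frac{dv}{dx} = 2xv$, so for $v \neq 0$ one has $\frac{dv}{dx} = 2x$ and $v = x^2 + C$. Taking $C = 0$ for simplicity, $\frac{dx}{dt} = x^2$ yields
$$t + C_2 = \int \frac{dx}{x^2} = -\frac{1}{x}, \qquad\text{i.e.}\qquad x(t) = -\frac{1}{t + C_2}.$$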
The special case where $f$ is independent of $x'$,
$$\frac{d^2x}{dt^2} = f(x),$$
benefits from separate treatment. [3] These types of equations are very common in classical mechanics because they are always Hamiltonian systems.
The idea is to make use of the identity
$$\frac{dx}{dt} = \left(\frac{dt}{dx}\right)^{-1},$$
which follows from the chain rule, barring any issues due to division by zero.
By inverting both sides of a first order autonomous system, one can immediately integrate with respect to $x$:
$$\frac{dx}{dt} = f(x) \quad \Longrightarrow \quad \frac{dt}{dx} = \frac{1}{f(x)} \quad \Longrightarrow \quad t + C = \int \frac{dx}{f(x)},$$
which is another way to view the separation of variables technique. The second derivative must be expressed as a derivative with respect to $x$ instead of $t$:
$$\frac{d^2x}{dt^2} = \frac{d}{dt}\left(\frac{dx}{dt}\right) = \frac{d}{dx}\left(\frac{dx}{dt}\right)\frac{dx}{dt} = \frac{d}{dx}\left(\left(\frac{dt}{dx}\right)^{-1}\right)\left(\frac{dt}{dx}\right)^{-1} = -\left(\frac{dt}{dx}\right)^{-3}\frac{d^2t}{dx^2} = \frac{d}{dx}\left(\frac{1}{2}\left(\frac{dt}{dx}\right)^{-2}\right).$$
To reemphasize: what's been accomplished is that the second derivative with respect to $t$ has been expressed as a derivative with respect to $x$. The original second order equation can now be integrated:
$$f(x) = \frac{d}{dx}\left(\frac{1}{2}\left(\frac{dt}{dx}\right)^{-2}\right)$$
$$\int f(x)\,dx + C_1 = \frac{1}{2}\left(\frac{dt}{dx}\right)^{-2}$$
$$\frac{dt}{dx} = \pm\frac{1}{\sqrt{2\int f(x)\,dx + C_1}}$$
$$t + C_2 = \pm\int \frac{dx}{\sqrt{2\int f(x)\,dx + C_1}}.$$
This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants.
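As a check, take $f(x) = -x$ (the harmonic oscillator $x'' = -x$). Then $2\int f(x)\,dx = -x^2$, and the formula gives
$$t + C_2 = \pm\int \frac{dx}{\sqrt{C_1 - x^2}} = \pm\arcsin\left(\frac{x}{\sqrt{C_1}}\right),$$
so $x(t) = \sqrt{C_1}\,\sin(\pm(t + C_2))$, the expected sinusoidal solutions.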
Using the above approach, the technique can extend to the more general equation
$$\frac{d^2x}{dt^2} = \left(\frac{dx}{dt}\right)^n f(x),$$
where $n$ is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of $\frac{dt}{dx}$. Rewriting the second derivative, rearranging, and expressing the left side as a derivative:
$$-\left(\frac{dt}{dx}\right)^{-3}\frac{d^2t}{dx^2} = \left(\frac{dt}{dx}\right)^{-n} f(x)$$
$$-\left(\frac{dt}{dx}\right)^{n-3}\frac{d^2t}{dx^2} = f(x)$$
$$\frac{d}{dx}\left(\frac{1}{2-n}\left(\frac{dt}{dx}\right)^{n-2}\right) = f(x)$$
$$\left(\frac{dt}{dx}\right)^{n-2} = (2-n)\int f(x)\,dx + C_1$$
$$t + C_2 = \int\left((2-n)\int f(x)\,dx + C_1\right)^{\frac{1}{n-2}} dx.$$
The right side will carry $\pm$ if $n$ is even. The treatment must be different if $n = 2$:
$$-\left(\frac{dt}{dx}\right)^{-1}\frac{d^2t}{dx^2} = f(x)$$
$$-\frac{d}{dx}\left(\ln\left(\frac{dt}{dx}\right)\right) = f(x)$$
$$\frac{dt}{dx} = C_1 e^{-\int f(x)\,dx}$$
$$t + C_2 = C_1 \int e^{-\int f(x)\,dx}\,dx.$$
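For instance, taking $n = 2$ and $f(x) = 1$, i.e. $x'' = (x')^2$, the formula gives
$$t + C_2 = C_1\int e^{-x}\,dx = -C_1 e^{-x},$$
so, absorbing constants, $x(t) = -\ln(C - t)$; indeed $x' = (C - t)^{-1}$ and $x'' = (C - t)^{-2} = (x')^2$.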
There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable only [4] [5] (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor.
Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum. [6]
In
$$\mathbf{x}'(t) = A\mathbf{x}(t),$$
where $\mathbf{x}(t)$ is an $n$-dimensional column vector dependent on $t$ and $A$ is an $n \times n$ constant matrix, the solution is
$$\mathbf{x}(t) = e^{At}\mathbf{c},$$
where $\mathbf{c}$ is an $n \times 1$ constant vector. [7]
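As a minimal numerical sketch (the matrix $A$ and initial vector $\mathbf{c}$ below are illustrative choices, not taken from the text), the solution $e^{At}\mathbf{c}$ can be evaluated in GNU Octave/MATLAB with the built-in matrix exponential expm:

% evaluate x(t) = e^(At)c for an illustrative 2x2 system
A = [0 1; -1 0];                  % example system matrix (harmonic oscillator as a system)
c = [1; 0];                       % example initial condition x(0) = c
t = linspace(0, 2*pi, 200);
X = zeros(2, numel(t));
for k = 1:numel(t)
    X(:, k) = expm(A * t(k)) * c; % matrix exponential solution at time t(k)
end
plot(t, X(1, :), 'b', t, X(2, :), 'r');
legend('x_1(t)', 'x_2(t)');
title('Solution of x'' = Ax via the matrix exponential')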
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, [8] meaning here that, by its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at the ending time, they do not satisfy the hypotheses of the uniqueness theorem for Lipschitz differential equations.
As an example, the equation
$$y' = -\operatorname{sgn}(y)\sqrt{|y|}, \qquad y(0) = 1,$$
admits the finite-duration solution
$$y(x) = \frac{1}{4}\left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^2,$$
which is identically zero for $x \ge 2$.
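A short GNU Octave/MATLAB sketch can plot this closed-form solution to visualize it reaching zero at the ending time $x = 2$ and staying there afterwards:

% plot the closed-form finite-duration solution
x = linspace(0, 4, 401);
y = (1/4) * (1 - x/2 + abs(1 - x/2)).^2;
plot(x, y, 'b');
xlabel('x'); ylabel('y(x)');
title('Finite-duration solution of y'' = -sgn(y)|y|^{1/2}, y(0) = 1')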