Nonlinear system

In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. [1] [2] Nonlinear problems are of interest to engineers, biologists, [3] [4] [5] physicists, [6] [7] mathematicians, and many other scientists since most systems are inherently nonlinear in nature. [8] Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.

As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, [9] and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.

Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:

Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
— Stanisław Ulam [10]

Definition

In mathematics, a linear map (or linear function) f(x) is one which satisfies both of the following properties:

  - Additivity or superposition principle: f(x + y) = f(x) + f(y);
  - Homogeneity: f(αx) = αf(x).

Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle

  f(αx + βy) = αf(x) + βf(y).
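
As a concrete illustration of these two conditions, the following minimal Python sketch (purely illustrative; the sample points and the maps f(x) = 3x and g(x) = x² are chosen only for the example) checks additivity and homogeneity numerically:

    # Check additivity f(x + y) == f(x) + f(y) and homogeneity f(a*x) == a*f(x)
    # on a few sample points (illustrative only; not a proof of linearity).
    def is_linear(f, samples, tol=1e-9):
        for x, y, a in samples:
            if abs(f(x + y) - (f(x) + f(y))) > tol:   # additivity
                return False
            if abs(f(a * x) - a * f(x)) > tol:        # homogeneity
                return False
        return True

    samples = [(1.0, 2.0, 3.0), (-0.5, 4.0, -2.0), (0.25, 0.75, 10.0)]
    print(is_linear(lambda x: 3 * x, samples))    # True:  f(x) = 3x satisfies both conditions
    print(is_linear(lambda x: x ** 2, samples))   # False: g(x) = x^2 violates both conditions

Passing such spot checks does not prove a map is linear, but a single failing sample point is enough to show that it is not.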

An equation written as

  f(x) = C

is called linear if f is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f is a homogeneous function.

The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.

Nonlinear systems of equations

A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.

For a single equation of the form f(x) = 0, many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as x² + x − 1 = 0. The general root-finding algorithms apply to polynomial roots, but, generally, they do not find all the roots, and when they fail to find a root, this does not imply that there are no roots. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
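
For instance, a bare-bones Newton iteration (an illustrative Python sketch, not a robust root finder) applied to x² + x − 1 = 0 converges rapidly to the positive root (√5 − 1)/2 ≈ 0.618:

    # Newton's method for a single equation f(x) = 0 (illustrative sketch).
    def newton(f, df, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    root = newton(lambda x: x * x + x - 1, lambda x: 2 * x + 1, x0=1.0)
    print(root)  # about 0.6180339887, i.e. (sqrt(5) - 1) / 2

Starting the same iteration from x0 = −2.0 converges to the other root, (−1 − √5)/2; each run finds only one root, which is exactly the limitation mentioned above.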

Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms. [11]
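
As a small worked example (assuming SymPy is available; the two polynomials are made up for illustration), a lexicographic Gröbner basis puts a bivariate system into triangular form, from which the common zeros can be extracted one variable at a time:

    # Gröbner basis of a small polynomial system (illustrative sketch using SymPy).
    from sympy import symbols, groebner

    x, y = symbols('x y')
    system = [x**2 + y**2 - 4, x*y - 1]        # a circle and a hyperbola
    gb = groebner(system, x, y, order='lex')   # lexicographic order eliminates x
    print(gb)  # the basis contains a polynomial in y alone, so back-substitution finds all zeros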

For the general case of a system of equations formed by equating several differentiable functions to zero, the main method is Newton's method and its variants. Generally, they may provide a solution, but they do not provide any information on the number of solutions.
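
A minimal multivariate Newton sketch (assuming NumPy; it reuses the illustrative circle-and-hyperbola system from above) repeatedly solves the linearized system J(v)·Δ = −F(v):

    # Newton's method for a system of two equations (illustrative sketch using NumPy).
    import numpy as np

    def F(v):
        x, y = v
        return np.array([x**2 + y**2 - 4, x*y - 1])

    def J(v):                               # Jacobian matrix of F
        x, y = v
        return np.array([[2*x, 2*y],
                         [y,   x]])

    v = np.array([2.0, 1.0])                # initial guess
    for _ in range(50):
        delta = np.linalg.solve(J(v), -F(v))
        v = v + delta
        if np.linalg.norm(delta) < 1e-12:
            break
    print(v)  # one solution near (1.93, 0.52); other starting points converge to other solutions

The run reports a single solution and says nothing about how many others exist, which is the limitation described above.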

Nonlinear recurrence relations

A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. [12] These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
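
For example, the logistic map x_{n+1} = r·x_n·(1 − x_n) can be iterated in a few lines (an illustrative Python sketch); for r = 2.5 the orbit settles to a fixed point, while for r = 3.9 it is chaotic:

    # Iterating the logistic map x_{n+1} = r * x_n * (1 - x_n) (illustrative sketch).
    def logistic_orbit(r, x0, n):
        orbit = [x0]
        for _ in range(n - 1):
            orbit.append(r * orbit[-1] * (1 - orbit[-1]))
        return orbit

    print(logistic_orbit(2.5, 0.2, 12))  # approaches the fixed point 1 - 1/r = 0.6
    print(logistic_orbit(3.9, 0.2, 12))  # no settling; nearby starting values diverge rapidly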

Nonlinear differential equations

A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.

One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions.
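
A quick pointwise check makes the failure of superposition concrete (an illustrative Python sketch built around the nonlinear equation du/dx = −u², which also appears in the next subsection, and its linear counterpart du/dx = −u):

    import math

    # Residuals of du/dx + u^2 = 0 (nonlinear) and du/dx + u = 0 (linear) at a point.
    def residual_nonlinear(u, du):
        return du + u**2

    def residual_linear(u, du):
        return du + u

    x = 1.0
    # Two exact solutions of du/dx = -u^2, namely u = 1/(x + C):
    u1, du1 = 1/(x + 1), -1/(x + 1)**2
    u2, du2 = 1/(x + 2), -1/(x + 2)**2
    print(residual_nonlinear(u1, du1), residual_nonlinear(u2, du2))  # both 0.0
    print(residual_nonlinear(u1 + u2, du1 + du2))                    # nonzero: the sum is not a solution

    # Two exact solutions of du/dx = -u, namely u = A * exp(-x):
    v1, dv1 = math.exp(-x), -math.exp(-x)
    v2, dv2 = 2 * math.exp(-x), -2 * math.exp(-x)
    print(residual_linear(v1 + v2, dv1 + dv2))                       # 0.0: the sum is again a solution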

Ordinary differential equations

First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation

  du/dx = −u²

has u = 1/(x + C) as a general solution (and also the special solution u = 0, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as

  du/dx + u² = 0

and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u² term were replaced with u, the problem would be linear (the exponential decay problem).
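
The same solution can be reproduced symbolically (a sketch assuming SymPy is available; C1 below is SymPy's integration constant):

    # Solving du/dx = -u^2 with SymPy's ODE solver (illustrative sketch).
    from sympy import Function, Eq, dsolve, symbols

    x = symbols('x')
    u = Function('u')
    ode = Eq(u(x).diff(x), -u(x)**2)
    print(dsolve(ode, u(x)))  # an expression equivalent to u(x) = 1/(x + C1)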

Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.

Common methods for the qualitative analysis of nonlinear ordinary differential equations include:

  - Examination of any conserved quantities, especially in Hamiltonian systems
  - Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
  - Linearization via Taylor expansion
  - Change of variables into something easier to study
  - Bifurcation theory
  - Perturbation methods (can also be applied to algebraic equations)
  - Existence of solutions of finite duration, [13] which can happen under specific conditions for some nonlinear ordinary differential equations

Partial differential equations

The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
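
As a standard illustration of how separation of variables reduces a partial differential equation to ordinary ones (sketched here on the linear heat equation, the textbook case rather than a nonlinear one): substituting the product form u(x, t) = X(x)T(t) into u_t = α u_xx gives

  T′(t) / (α T(t)) = X″(x) / X(x) = −λ,

where λ is a separation constant, so the single partial differential equation splits into two ordinary differential equations, T′ = −αλT and X″ = −λX, which are solved separately and then recombined.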

Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier–Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one-dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one-dimensional and also yields the simplified equation.

Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.

Pendula

[Figure: Illustration of a pendulum]
[Figure: Linearizations of a pendulum]

A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown [14] that the motion of a pendulum can be described by the dimensionless nonlinear equation

  d²θ/dt² + sin(θ) = 0

where gravity points "downwards" and θ is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use dθ/dt as an integrating factor, which would eventually yield

  ∫ dθ / √(C₀ + 2 cos(θ)) = t + C₁

which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C₀ = 2).
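
The intermediate step behind that implicit solution is worth sketching: multiplying the equation of motion by dθ/dt turns the left-hand side into an exact derivative,

  (dθ/dt)·(d²θ/dt²) + sin(θ)·(dθ/dt) = d/dt [ ½(dθ/dt)² − cos(θ) ] = 0,

so ½(dθ/dt)² − cos(θ) is constant along the motion. Writing that constant as C₀/2 gives (dθ/dt)² = C₀ + 2 cos(θ), and separating variables produces the integral above.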

Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0, called the small angle approximation, is

  d²θ/dt² + θ = 0

since sin(θ) ≈ θ for θ ≈ 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π, corresponding to the pendulum being straight up:

  d²θ/dt² + π − θ = 0

since sin(θ) ≈ π − θ for θ ≈ π. The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this approximation is unstable, meaning that |θ| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.

One more interesting linearization is possible around θ = π/2, around which sin(θ) ≈ 1:

  d²θ/dt² + 1 = 0

This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
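
A short numerical experiment (an illustrative sketch assuming NumPy and SciPy) shows the same picture quantitatively: for a small initial angle the full equation and the small angle approximation stay close, while for a large initial angle they drift far apart because the true period lengthens:

    # Full pendulum theta'' + sin(theta) = 0 versus small-angle model theta'' + theta = 0
    # (illustrative sketch using SciPy's initial value problem solver).
    import numpy as np
    from scipy.integrate import solve_ivp

    def full(t, y):            # y = [theta, omega]
        return [y[1], -np.sin(y[0])]

    def linearized(t, y):
        return [y[1], -y[0]]

    t_eval = np.linspace(0, 20, 2001)
    for theta0 in (0.1, 2.5):  # small and large initial angles, in radians
        a = solve_ivp(full, (0, 20), [theta0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
        b = solve_ivp(linearized, (0, 20), [theta0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
        print(theta0, np.max(np.abs(a.y[0] - b.y[0])))
    # The difference stays tiny for theta0 = 0.1 but grows to order 1 for theta0 = 2.5.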

Types of nonlinear dynamic behaviors

Examples of nonlinear equations

See also

References

  1. "Explained: Linear and nonlinear systems". MIT News. Retrieved 2018-06-30.
  2. "Nonlinear systems, Applied Mathematics - University of Birmingham". www.birmingham.ac.uk. Retrieved 2018-06-30.
  3. "Nonlinear Biology", The Nonlinear Universe, The Frontiers Collection, Springer Berlin Heidelberg, 2007, pp. 181–276, doi:10.1007/978-3-540-34153-6_7, ISBN   9783540341529
  4. Korenberg, Michael J.; Hunter, Ian W. (March 1996). "The identification of nonlinear biological systems: Volterra kernel approaches". Annals of Biomedical Engineering. 24 (2): 250–268. doi:10.1007/bf02667354. ISSN   0090-6964. PMID   8678357. S2CID   20643206.
  5. Mosconi, Francesco; Julou, Thomas; Desprat, Nicolas; Sinha, Deepak Kumar; Allemand, Jean-François; Vincent Croquette; Bensimon, David (2008). "Some nonlinear challenges in biology". Nonlinearity. 21 (8): T131. Bibcode:2008Nonli..21..131M. doi:10.1088/0951-7715/21/8/T03. ISSN   0951-7715. S2CID   119808230.
  6. Gintautas, V. (2008). "Resonant forcing of nonlinear systems of differential equations". Chaos. 18 (3): 033118. arXiv: 0803.2252 . Bibcode:2008Chaos..18c3118G. doi:10.1063/1.2964200. PMID   19045456. S2CID   18345817.
  7. Stephenson, C.; et al. (2017). "Topological properties of a self-assembled electrical network via ab initio calculation". Sci. Rep. 7: 41621. Bibcode:2017NatSR...741621S. doi:10.1038/srep41621. PMC 5290745. PMID 28155863.
  8. de Canete, Javier; Galindo, Cipriano; Garcia-Moral, Inmaculada (2011). System Engineering and Automation: An Interactive Educational Approach. Berlin: Springer. p. 46. ISBN 978-3642202292. Retrieved 20 January 2018.
  9. Nonlinear Dynamics I: Chaos Archived 2008-02-12 at the Wayback Machine at MIT's OpenCourseWare
  10. Campbell, David K. (25 November 2004). "Nonlinear physics: Fresh breather". Nature. 432 (7016): 455–456. Bibcode:2004Natur.432..455C. doi:10.1038/432455a. ISSN   0028-0836. PMID   15565139. S2CID   4403332.
  11. Lazard, D. (2009). "Thirty years of Polynomial System Solving, and now?". Journal of Symbolic Computation. 44 (3): 222–231. doi: 10.1016/j.jsc.2008.03.004 .
  12. Billings S.A. "Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains". Wiley, 2013
  13. Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832. S2CID   45426376.
  14. David Tong: Lectures on Classical Dynamics

Further reading