Characteristic equation (calculus)

In mathematics, the characteristic equation (or auxiliary equation[1]) is an algebraic equation of degree n upon which depends the solution of a given nth-order differential equation[2] or difference equation.[3][4] The characteristic equation can be formed only when the differential or difference equation is linear and homogeneous with constant coefficients.[1] Such a differential equation, with y as the dependent variable, superscript (n) denoting the nth derivative, and a_n, a_{n-1}, ..., a_1, a_0 as constants,

a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0,

will have a characteristic equation of the form

a_n r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0,

whose solutions r_1, r_2, ..., r_n are the roots from which the general solution can be formed.[1][5][6] Analogously, a linear difference equation of the form

y_{t+n} = b_1 y_{t+n-1} + \cdots + b_n y_t

has characteristic equation

r^n - b_1 r^{n-1} - \cdots - b_n = 0,

discussed in more detail at Linear recurrence with constant coefficients.
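For instance (an illustrative example added here, not taken from the cited sources), the Fibonacci recurrence F_{t+2} = F_{t+1} + F_t has characteristic equation

r^2 - r - 1 = 0,

whose roots are r = (1 ± √5)/2, so every solution has the form F_t = c_1 ((1 + √5)/2)^t + c_2 ((1 - √5)/2)^t.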

The characteristic roots (roots of the characteristic equation) also provide qualitative information about the behavior of the variable whose evolution is described by the dynamic equation. For a differential equation parameterized on time, the variable's evolution is stable if and only if the real part of each root is negative. For difference equations, there is stability if and only if the modulus of each root is less than 1. For both types of equation, persistent fluctuations occur if there is at least one pair of complex roots.
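A minimal sketch of this stability test in Python (assuming NumPy is available; the helper name classify_roots is illustrative, not from any source):

    import numpy as np

    def classify_roots(coeffs, kind="differential"):
        """Classify stability from the characteristic roots.

        coeffs: polynomial coefficients, highest degree first, e.g.
                [a_n, ..., a_1, a_0] for a differential equation or
                [1, -b_1, ..., -b_n] for a difference equation.
        kind:   "differential" -> stable iff every root has negative real part;
                "difference"   -> stable iff every root has modulus < 1.
        """
        roots = np.roots(coeffs)
        if kind == "differential":
            stable = bool(np.all(roots.real < 0))
        else:
            stable = bool(np.all(np.abs(roots) < 1))
        # A complex-conjugate pair of roots signals persistent fluctuations.
        oscillatory = bool(np.any(np.abs(roots.imag) > 1e-12))
        return roots, stable, oscillatory

    # y'' + 3y' + 2y = 0 has roots -1 and -2: stable, no oscillation.
    print(classify_roots([1, 3, 2]))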

The method of integrating linear ordinary differential equations with constant coefficients was discovered by Leonhard Euler, who found that the solutions depended on an algebraic 'characteristic' equation.[2] The qualities of Euler's characteristic equation were later considered in greater detail by the French mathematicians Augustin-Louis Cauchy and Gaspard Monge.[2][6]

Derivation

Starting with a linear homogeneous differential equation with constant coefficients a_n, a_{n-1}, ..., a_1, a_0,

a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0,

it can be seen that if y(x) = e^{rx}, each term would be a constant multiple of e^{rx}. This results from the fact that the derivative of the exponential function e^{rx} is a multiple of itself. Therefore, y′ = r e^{rx}, y″ = r^2 e^{rx}, and y^{(n)} = r^n e^{rx} are all multiples. This suggests that certain values of r will allow multiples of e^{rx} to sum to zero, thus solving the homogeneous differential equation.[5] In order to solve for r, one can substitute y = e^{rx} and its derivatives into the differential equation to get

a_n r^n e^{rx} + a_{n-1} r^{n-1} e^{rx} + \cdots + a_1 r e^{rx} + a_0 e^{rx} = 0

Since e^{rx} can never equal zero, it can be divided out, giving the characteristic equation

a_n r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0.

By solving for the roots, r, in this characteristic equation, one can find the general solution to the differential equation.[1][6] For example, if r has roots equal to 3, 11, and 40, then the general solution will be y(x) = c_1 e^{3x} + c_2 e^{11x} + c_3 e^{40x}, where c_1, c_2, and c_3 are arbitrary constants which need to be determined by the boundary and/or initial conditions.
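A short SymPy sketch of this substitution (an added illustration; the third-order equation below is constructed so that its characteristic roots are exactly 3, 11, and 40):

    import sympy as sp

    x, r = sp.symbols("x r")
    y = sp.exp(r * x)

    # y''' - 54 y'' + 593 y' - 1320 y = 0, built to have roots 3, 11, 40
    lhs = y.diff(x, 3) - 54 * y.diff(x, 2) + 593 * y.diff(x) - 1320 * y

    # Dividing out e^{rx} leaves the characteristic polynomial in r.
    char_poly = sp.simplify(lhs / sp.exp(r * x))
    print(sp.factor(char_poly))    # (r - 3)*(r - 11)*(r - 40)
    print(sp.solve(char_poly, r))  # roots: 3, 11, 40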

Formation of the general solution

Solving the characteristic equation for its roots, r_1, ..., r_n, allows one to find the general solution of the differential equation. The roots may be real or complex, as well as distinct or repeated. If a characteristic equation has parts with distinct real roots, h repeated roots, or k complex roots corresponding to general solutions of y_D(x), y_{R_1}(x), ..., y_{R_h}(x), and y_{C_1}(x), ..., y_{C_k}(x), respectively, then the general solution to the differential equation is

y(x) = y_D(x) + y_{R_1}(x) + \cdots + y_{R_h}(x) + y_{C_1}(x) + \cdots + y_{C_k}(x).

Example

The linear homogeneous differential equation with constant coefficients

y^{(5)} + y^{(4)} - 4y^{(3)} - 16y'' - 20y' - 12y = 0

has the characteristic equation

r^5 + r^4 - 4r^3 - 16r^2 - 20r - 12 = 0.

By factoring the characteristic equation into

(r - 3)\left(r^2 + 2r + 2\right)^2 = 0,

one can see that the solutions for r are the distinct single root r_1 = 3 and the double complex roots r_{2,3,4,5} = -1 ± i. This corresponds to the real-valued general solution

y(x) = c_1 e^{3x} + e^{-x}\left((c_2 + c_3 x)\cos x + (c_4 + c_5 x)\sin x\right),

with constants c_1, ..., c_5.
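The factorization and the multiplicities of the roots can be checked with a few lines of SymPy (an added sketch, not part of the original article):

    import sympy as sp

    r = sp.symbols("r")
    p = r**5 + r**4 - 4*r**3 - 16*r**2 - 20*r - 12

    print(sp.factor(p))  # (r - 3)*(r**2 + 2*r + 2)**2
    print(sp.roots(p))   # {3: 1, -1 - I: 2, -1 + I: 2}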

Distinct real roots

The superposition principle for linear homogeneous differential equations says that if u_1, ..., u_n are n linearly independent solutions to a particular differential equation, then c_1 u_1 + ⋯ + c_n u_n is also a solution for all values c_1, ..., c_n.[1][7] Therefore, if the characteristic equation has distinct real roots r_1, ..., r_n, then a general solution will be of the form

y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x} + \cdots + c_n e^{r_n x}.
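For example (an illustration added here, not from the cited sources), y'' - 3y' + 2y = 0 has characteristic equation

r^2 - 3r + 2 = (r - 1)(r - 2) = 0,

with distinct real roots 1 and 2, so the general solution is y(x) = c_1 e^{x} + c_2 e^{2x}.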

Repeated real roots

If the characteristic equation has a root r_1 that is repeated k times, then it is clear that y_p(x) = c_1 e^{r_1 x} is at least one solution.[1] However, this solution lacks linearly independent solutions from the other k − 1 roots. Since r_1 has multiplicity k, the differential equation can be factored into[1]

\left(\frac{d}{dx} - r_1\right)^k y = 0.

The fact that y_p(x) = c_1 e^{r_1 x} is one solution allows one to presume that the general solution may be of the form y(x) = u(x) e^{r_1 x}, where u(x) is a function to be determined. Substituting u e^{r_1 x} gives

\left(\frac{d}{dx} - r_1\right) u e^{r_1 x} = \frac{du}{dx} e^{r_1 x}

when k = 1. By applying this fact k times, it follows that

\left(\frac{d}{dx} - r_1\right)^k u e^{r_1 x} = \frac{d^k u}{dx^k} e^{r_1 x} = 0.

By dividing out e^{r_1 x}, it can be seen that

\frac{d^k u}{dx^k} = u^{(k)} = 0.

Therefore, the general case for u(x) is a polynomial of degree k − 1, so that u(x) = c_1 + c_2 x + c_3 x^2 + ⋯ + c_k x^{k-1}.[6] Since y(x) = u e^{r_1 x}, the part of the general solution corresponding to r_1 is

e^{r_1 x}\left(c_1 + c_2 x + c_3 x^2 + \cdots + c_k x^{k-1}\right).
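For example (an added illustration, not from the cited sources), y'' - 4y' + 4y = 0 has characteristic equation

(r - 2)^2 = 0,

a double root r = 2, so the general solution is y(x) = (c_1 + c_2 x) e^{2x}.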

Complex roots

If a second-order differential equation has a characteristic equation with complex conjugate roots of the form r_1 = a + bi and r_2 = a − bi, then the general solution is accordingly y(x) = c_1 e^{(a + bi)x} + c_2 e^{(a − bi)x}. By Euler's formula, which states that e^{iθ} = cos θ + i sin θ, this solution can be rewritten as follows:

y(x) = c_1 e^{(a + bi)x} + c_2 e^{(a − bi)x} = e^{ax}\left(c_1 e^{ibx} + c_2 e^{-ibx}\right) = e^{ax}\left[(c_1 + c_2)\cos(bx) + i(c_1 − c_2)\sin(bx)\right],

where c_1 and c_2 are constants that can be non-real and which depend on the initial conditions.[6] (Indeed, since y(x) is real, c_1 − c_2 must be imaginary or zero and c_1 + c_2 must be real, in order for both terms after the last equals sign to be real.)

For example, if c_1 = c_2 = 1/2, then the particular solution y_1(x) = e^{ax} cos(bx) is formed. Similarly, if c_1 = 1/(2i) and c_2 = −1/(2i), then the independent solution formed is y_2(x) = e^{ax} sin(bx). Thus by the superposition principle for linear homogeneous differential equations, a second-order differential equation having complex roots r = a ± bi will result in the following general solution:

y(x) = e^{ax}\left(c_1 \cos(bx) + c_2 \sin(bx)\right).

This analysis also applies to the parts of the solutions of a higher-order differential equation whose characteristic equation involves non-real complex conjugate roots.
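For example (an added illustration, not from the cited sources), y'' + 2y' + 5y = 0 has characteristic equation r^2 + 2r + 5 = 0 with roots r = −1 ± 2i, so the general solution is

y(x) = e^{-x}\left(c_1 \cos(2x) + c_2 \sin(2x)\right).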

References

  1. Edwards, C. Henry; Penney, David E. (2008). "Chapter 3". Differential Equations: Computing and Modeling. David Calvis. Upper Saddle River, New Jersey: Pearson Education. pp. 156–170. ISBN 978-0-13-600438-7.
  2. Smith, David Eugene. "History of Modern Mathematics: Differential Equations". University of South Florida.
  3. Baumol, William J. (1970). Economic Dynamics (3rd ed.). p. 172.
  4. Chiang, Alpha (1984). Fundamental Methods of Mathematical Economics (3rd ed.). McGraw-Hill. pp. 578, 600. ISBN 9780070107809.
  5. Chu, Herman; Shah, Gaurav; Macall, Tom. "Linear Homogeneous Ordinary Differential Equations with Constant Coefficients". eFunda. Retrieved 1 March 2011.
  6. Cohen, Abraham (1906). An Elementary Treatise on Differential Equations. D. C. Heath and Company.
  7. Dawkins, Paul. "Differential Equation Terminology". Paul's Online Math Notes. Retrieved 2 March 2011.