Abel's identity


In mathematics, Abel's identity (also called Abel's formula [1] or Abel's differential equation identity) is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation. The relation can be generalised to nth-order linear ordinary differential equations. The identity is named after the Norwegian mathematician Niels Henrik Abel.


Since Abel's identity relates the different linearly independent solutions of the differential equation, it can be used to find one solution from the other. It provides useful identities relating the solutions, and is also useful as a part of other techniques such as the method of variation of parameters. It is especially useful for equations such as Bessel's equation where the solutions do not have a simple analytical form, because in such cases the Wronskian is difficult to compute directly.

A generalisation to first-order systems of homogeneous linear differential equations is given by Liouville's formula.

Statement

Consider a homogeneous linear second-order ordinary differential equation

$$y'' + p(x)\,y' + q(x)\,y = 0$$

on an interval $I$ of the real line with real- or complex-valued continuous functions $p$ and $q$. Abel's identity states that the Wronskian of two real- or complex-valued solutions $y_1$ and $y_2$ of this differential equation, that is the function defined by the determinant

$$W(y_1, y_2)(x) = \begin{vmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{vmatrix} = y_1(x)\,y_2'(x) - y_1'(x)\,y_2(x), \qquad x \in I,$$

satisfies the relation

$$W(y_1, y_2)(x) = W(y_1, y_2)(x_0)\,\exp\!\left(-\int_{x_0}^{x} p(t)\,dt\right)$$

for each point $x_0 \in I$.
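
As a concrete check of the statement, the following minimal sketch verifies the identity numerically for Bessel's equation mentioned above, $y'' + \tfrac{1}{x}\,y' + \bigl(1 - \tfrac{\nu^2}{x^2}\bigr)\,y = 0$. Here $p(x) = 1/x$, so the identity predicts $W(x) = W(x_0)\,x_0/x$, consistent with the known Wronskian $W(J_\nu, Y_\nu)(x) = 2/(\pi x)$. The order `nu` and the evaluation points `x0`, `x` are arbitrary illustrative choices.

```python
# Numerical check of Abel's identity for Bessel's equation of order nu:
#   y'' + (1/x) y' + (1 - nu^2/x^2) y = 0,  so p(x) = 1/x.
# Abel's identity predicts W(x) = W(x0) * exp(-int_{x0}^{x} dt/t) = W(x0) * x0/x.
import numpy as np
from scipy.special import jv, yv, jvp, yvp

nu = 1.5          # arbitrary order (illustrative choice)
x0, x = 2.0, 7.0  # two points in the interval (0, infinity)

def wronskian(v, t):
    """W(J_v, Y_v)(t) = J_v(t) Y_v'(t) - J_v'(t) Y_v(t)."""
    return jv(v, t) * yvp(v, t) - jvp(v, t) * yv(v, t)

lhs = wronskian(nu, x)
rhs = wronskian(nu, x0) * np.exp(-np.log(x / x0))  # exp(-int p) = x0/x
print(lhs, rhs)                    # both equal 2/(pi*x) ~ 0.0909
assert np.isclose(lhs, rhs)
```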


Proof

Differentiating the Wronskian using the product rule gives (writing $W$ for $W(y_1, y_2)$ and omitting the argument $x$ for brevity)

$$W' = y_1'\,y_2' + y_1\,y_2'' - y_1''\,y_2 - y_1'\,y_2' = y_1\,y_2'' - y_1''\,y_2.$$

Solving for $y''$ in the original differential equation yields

$$y'' = -(p\,y' + q\,y).$$

Substituting this result into the derivative of the Wronskian function to replace the second derivatives of $y_1$ and $y_2$ gives

$$W' = -y_1\,(p\,y_2' + q\,y_2) + (p\,y_1' + q\,y_1)\,y_2 = -p\,(y_1\,y_2' - y_1'\,y_2) = -p\,W.$$

This is a first-order linear differential equation, and it remains to show that Abel's identity gives the unique solution which attains the value $W(x_0)$ at $x_0$. Since the function $p$ is continuous on $I$, it is bounded on every closed and bounded subinterval of $I$ and therefore integrable, hence

$$V(x) = W(x)\,\exp\!\left(\int_{x_0}^{x} p(\xi)\,d\xi\right), \qquad x \in I,$$

is a well-defined function.

is a well-defined function. Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, one obtains

due to the differential equation for $W$. Therefore, $V$ has to be constant on $I$, because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since $V(x_0) = W(x_0)$, Abel's identity follows by solving the definition of $V$ for $W(x)$.
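
The key computational step of this proof, $W' = -p\,W$ after substituting the differential equation, can be checked symbolically. The following sketch does so with SymPy, treating $p$, $q$ and the two solutions as generic functions; all names are illustrative.

```python
# Symbolic sketch of the proof's key step: after substituting the ODE,
# the derivative of the Wronskian collapses to W' = -p W.
import sympy as sp

x = sp.symbols('x')
p, q = sp.Function('p')(x), sp.Function('q')(x)
y1, y2 = sp.Function('y1')(x), sp.Function('y2')(x)

W = y1 * y2.diff(x) - y1.diff(x) * y2        # Wronskian of y1, y2
dW = W.diff(x)                               # = y1*y2'' - y1''*y2

# Replace second derivatives using y'' = -(p*y' + q*y) from the ODE.
subs = {y1.diff(x, 2): -(p*y1.diff(x) + q*y1),
        y2.diff(x, 2): -(p*y2.diff(x) + q*y2)}

print(sp.simplify(dW.subs(subs) + p*W))      # 0, i.e. W' = -p W
```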

Proof that the Wronskian never changes sign

For all $x \in I$, the Wronskian $W(y_1, y_2)(x)$ is either identically zero, always positive, or always negative, given that $y_1$, $y_2$, and $p$ are real-valued. This is demonstrated as follows.

Abel's identity states that

$$W(y_1, y_2)(x) = W(y_1, y_2)(x_0)\,e^{-\int_{x_0}^{x} p(t)\,dt}.$$

Let $W_0 = W(y_1, y_2)(x_0)$. Then $W_0$ must be a real-valued constant because $y_1$ and $y_2$ are real-valued.

Let $E(x) = e^{-\int_{x_0}^{x} p(t)\,dt}$. As $p$ is real-valued, so is the integral in the exponent, so $E(x)$ is strictly positive.

Thus, $W(y_1, y_2)(x) = W_0\,E(x)$ is identically zero when $W_0 = 0$, always positive when $W_0$ is positive, and always negative when $W_0$ is negative.

Furthermore, when $p$, $y_1$, and $y_2$ are complex-valued, one can similarly show that $W(y_1, y_2)(x)$ is either identically zero or non-zero for all values of $x$.
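
For example, the equation $y'' + y = 0$ has $p \equiv 0$, so Abel's identity gives $W(x) = W(x_0)$ for all $x$; indeed, for the solutions $\cos$ and $\sin$,

$$W(\cos, \sin)(x) = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1,$$

which is positive at every point, as the argument above predicts.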

Generalization

The Wronskian $W(y_1, \ldots, y_n)$ of $n$ functions $y_1, \ldots, y_n$ on an interval $I$ is the function defined by the determinant

$$W(y_1, \ldots, y_n)(x) = \begin{vmatrix} y_1(x) & y_2(x) & \cdots & y_n(x) \\ y_1'(x) & y_2'(x) & \cdots & y_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \cdots & y_n^{(n-1)}(x) \end{vmatrix}, \qquad x \in I.$$

Consider a homogeneous linear ordinary differential equation of order $n \ge 1$:

$$y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_1(x)\,y' + p_0(x)\,y = 0$$

on an interval $I$ of the real line with real- or complex-valued continuous functions $p_0, \ldots, p_{n-1}$. Let $y_1, \ldots, y_n$ be solutions of this $n$th-order differential equation. Then the generalisation of Abel's identity states that this Wronskian satisfies the relation

$$W(y_1, \ldots, y_n)(x) = W(y_1, \ldots, y_n)(x_0)\,\exp\!\left(-\int_{x_0}^{x} p_{n-1}(\xi)\,d\xi\right)$$

for each point $x_0 \in I$.
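
As an illustration of the $n$th-order identity, the following sketch checks it symbolically for the third-order equation $y''' + y'' = 0$, whose solutions $1$, $x$, $e^{-x}$ are easy to differentiate; here $p_2 \equiv 1$ and $p_1 = p_0 = 0$, so the identity predicts $W(x) = W(x_0)\,e^{-(x - x_0)}$. The example equation is an illustrative choice, not taken from the text.

```python
# Symbolic check of the nth-order identity for y''' + y'' = 0
# (p2 = 1, p1 = p0 = 0) with the fundamental system 1, x, e^{-x}.
import sympy as sp

x, x0 = sp.symbols('x x0')
sols = [sp.Integer(1), x, sp.exp(-x)]

# 3x3 Wronskian: row k holds the k-th derivatives of the solutions.
W = sp.Matrix([[sp.diff(y, x, k) for y in sols] for k in range(3)]).det()

# Abel's identity with p_{n-1} = p2 = 1: exp(-int_{x0}^{x} 1 dt) = e^{-(x - x0)}.
predicted = W.subs(x, x0) * sp.exp(-(x - x0))
print(sp.simplify(W - predicted))   # 0, i.e. W(x) = W(x0) * e^{-(x - x0)}
```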

Direct proof

For brevity, we write $W$ for $W(y_1, \ldots, y_n)$ and omit the argument $x$. It suffices to show that the Wronskian solves the first-order linear differential equation

$$W' = -p_{n-1}\,W,$$

because the remaining part of the proof then coincides with the one for the case $n = 2$.

In the case $n = 1$ we have $W = y_1$ and the differential equation for $W$ coincides with the one for $y_1$. Therefore, assume $n \ge 2$ in the following.

The derivative of the Wronskian $W$ is the derivative of the defining determinant. It follows from the Leibniz formula for determinants that this derivative can be calculated by differentiating every row separately, hence

$$W' = \begin{vmatrix} y_1' & y_2' & \cdots & y_n' \\ y_1' & y_2' & \cdots & y_n' \\ y_1'' & y_2'' & \cdots & y_n'' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} + \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1'' & y_2'' & \cdots & y_n'' \\ y_1'' & y_2'' & \cdots & y_n'' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} + \cdots + \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}.$$

However, note that every determinant from the expansion contains a pair of identical rows, except the last one. Since determinants with linearly dependent rows are equal to 0, one is only left with the last one:

$$W' = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}.$$

Since every $y_i$ solves the ordinary differential equation, we have

$$y_i^{(n)} + p_{n-1}\,y_i^{(n-1)} + \cdots + p_1\,y_i' + p_0\,y_i = 0$$

for every $i \in \{1, \ldots, n\}$. Hence, adding to the last row of the above determinant $p_0$ times its first row, $p_1$ times its second row, and so on until $p_{n-2}$ times its next-to-last row, the value of the determinant for the derivative of $W$ is unchanged and we get

$$W' = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ -p_{n-1}\,y_1^{(n-1)} & -p_{n-1}\,y_2^{(n-1)} & \cdots & -p_{n-1}\,y_n^{(n-1)} \end{vmatrix} = -p_{n-1}\,W.$$
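
The conclusion of this computation, $W' = -p_{n-1}\,W$, can likewise be verified symbolically for a small $n$. The sketch below repeats the substitution argument for $n = 3$ with generic coefficient functions and generic solutions; the names are illustrative.

```python
# Symbolic check of W' = -p2 * W for a generic third-order equation
#   y''' + p2 y'' + p1 y' + p0 y = 0.
import sympy as sp

x = sp.symbols('x')
p0, p1, p2 = [sp.Function(f'p{i}')(x) for i in range(3)]
ys = [sp.Function(f'y{i}')(x) for i in (1, 2, 3)]

# Wronskian of the three generic solutions.
W = sp.Matrix([[y.diff(x, k) for y in ys] for k in range(3)]).det()

# Replace each third derivative using the differential equation.
subs = {y.diff(x, 3): -(p2*y.diff(x, 2) + p1*y.diff(x) + p0*y) for y in ys}
print(sp.simplify(W.diff(x).subs(subs) + p2*W))   # 0, i.e. W' = -p2 W
```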

Proof using Liouville's formula

The solutions $y_1, \ldots, y_n$ form the square-matrix valued solution

$$\Phi(x) = \begin{pmatrix} y_1(x) & y_2(x) & \cdots & y_n(x) \\ y_1'(x) & y_2'(x) & \cdots & y_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \cdots & y_n^{(n-1)}(x) \end{pmatrix}, \qquad x \in I,$$

of the $n$-dimensional first-order system of homogeneous linear differential equations

$$\begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix}' = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -p_0(x) & -p_1(x) & -p_2(x) & \cdots & -p_{n-1}(x) \end{pmatrix} \begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix}.$$

The trace of this matrix is $-p_{n-1}(x)$, hence Abel's identity follows directly from Liouville's formula.
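
Reusing the third-order example from above, the sketch below builds the matrix solution $\Phi$ and the companion matrix $A$ for $y''' + y'' = 0$ and confirms both the system $\Phi' = A\Phi$ and that the trace equals $-p_2 = -1$; the example equation is again an illustrative choice.

```python
# Sketch for y''' + y'' = 0 (p2 = 1, p1 = p0 = 0): the solution matrix Phi
# solves Phi' = A(x) Phi, and tr A = -p2 gives Abel's identity via Liouville.
import sympy as sp

x = sp.symbols('x')
sols = [sp.Integer(1), x, sp.exp(-x)]
Phi = sp.Matrix([[sp.diff(y, x, k) for y in sols] for k in range(3)])

A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, -1]])   # last row is (-p0, -p1, -p2)

assert sp.simplify(Phi.diff(x) - A * Phi) == sp.zeros(3, 3)
print(A.trace())              # -1, hence det Phi(x) = det Phi(x0) * e^{-(x - x0)}
```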


References

1. Rainville, Earl David; Bedient, Phillip Edward (1969). Elementary Differential Equations. Collier-Macmillan International Editions.