Singular solution


A singular solution y_s(x) of an ordinary differential equation is a solution that is singular, or one for which the initial value problem (also called the Cauchy problem by some authors) fails to have a unique solution at some point on the solution. The set on which a solution is singular may be as small as a single point or as large as the full real line. Solutions which are singular in the sense that the initial value problem fails to have a unique solution need not be singular functions.


In some cases, the term singular solution is used to mean a solution at which there is a failure of uniqueness of the initial value problem at every point on the curve. A singular solution in this stronger sense is often given as tangent to every solution from a family of solutions. By tangent we mean that there is a point x where y_s(x) = y_c(x) and y_s'(x) = y_c'(x), where y_c is a solution in a family of solutions parameterized by c. This means that the singular solution is the envelope of the family of solutions.

Usually, singular solutions appear in differential equations when there is a need to divide by a term that might be equal to zero. Therefore, when solving a differential equation and using division, one must check what happens if the term is equal to zero, and whether it leads to a singular solution. The Picard–Lindelöf theorem, which gives sufficient conditions for unique solutions to exist, can be used to rule out the existence of singular solutions. Other theorems, such as the Peano existence theorem, give sufficient conditions for solutions to exist without necessarily being unique, which can allow for the existence of singular solutions.

A divergent solution

Consider the homogeneous linear ordinary differential equation

x y'(x) + 2 y(x) = 0,

where primes denote derivatives with respect to x. The general solution to this equation is

y(x) = C/x².

For a given constant C, this solution is smooth except at x = 0, where it diverges. Furthermore, for a given x ≠ 0, this is the unique solution going through (x, y(x)).
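
As a quick check, the general solution can be verified symbolically. The following sketch uses sympy (an illustrative assumption, not part of the original article) to confirm that y = C/x² satisfies the equation away from x = 0.

    # Minimal sympy sketch (illustration): verify that y = C/x**2 solves x*y' + 2*y = 0 for x != 0.
    import sympy as sp

    x, C = sp.symbols('x C')
    y = C / x**2
    residual = x * sp.diff(y, x) + 2 * y
    print(sp.simplify(residual))  # 0, so the family solves the ODE wherever x != 0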

Failure of uniqueness

Consider the differential equation

y'(x)² = 4 y(x).

A one-parameter family of solutions to this equation is given by

y_c(x) = (x − c)².

Another solution is given by

y_s(x) = 0.

Since the equation being studied is a first-order equation, the initial conditions are the initial x and y values. By considering the two sets of solutions above, one can see that the solution fails to be unique when y = 0. (It can be shown that for y > 0, if a single branch of the square root is chosen, then there is a local solution which is unique, by the Picard–Lindelöf theorem.) Thus, the solutions above are all singular solutions, in the sense that each fails to be unique in a neighbourhood of one or more points. (Commonly, we say "uniqueness fails" at these points.) For the first family of solutions, uniqueness fails at one point, x = c, and for the second solution, uniqueness fails at every value of x. Thus, the solution y_s(x) = 0 is a singular solution in the stronger sense that uniqueness fails at every value of x. However, it is not a singular function since it and all its derivatives are continuous.

In this example, the solution y_s(x) = 0 is the envelope of the family of solutions y_c(x) = (x − c)². The solution y_s is tangent to every curve y_c at the point (c, 0).
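
These claims can be checked symbolically. The sketch below uses sympy (an illustrative assumption, not part of the original article) to confirm that both y_c and y_s solve y'² = 4y, that they are tangent at (c, 0), and that the right-hand side 2√y fails to be Lipschitz at y = 0, which is why the Picard–Lindelöf theorem does not apply there.

    # Minimal sympy sketch (illustration) for y'^2 = 4*y.
    import sympy as sp

    x, c = sp.symbols('x c', real=True)
    y_c = (x - c)**2          # one-parameter family of solutions
    y_s = sp.Integer(0)       # singular solution

    # Both satisfy y'^2 - 4*y = 0 identically.
    print(sp.simplify(sp.diff(y_c, x)**2 - 4*y_c))  # 0
    print(sp.simplify(sp.diff(y_s, x)**2 - 4*y_s))  # 0

    # Tangency at (c, 0): equal values and equal first derivatives.
    print(y_c.subs(x, c), sp.diff(y_c, x).subs(x, c))  # 0 0

    # f(y) = 2*sqrt(y) has unbounded derivative as y -> 0+, so the Lipschitz
    # hypothesis of Picard-Lindelof fails at y = 0.
    y = sp.symbols('y', positive=True)
    print(sp.limit(sp.diff(2*sp.sqrt(y), y), y, 0, '+'))  # oo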

The failure of uniqueness can be used to construct more solutions. These can be found by taking two constants c_1 < c_2 and defining a solution y(x) to be (x − c_1)² when x < c_1, to be 0 when c_1 ≤ x ≤ c_2, and to be (x − c_2)² when x > c_2. Direct calculation shows that this is a solution of the differential equation at every point, including x = c_1 and x = c_2. Uniqueness fails for these solutions on the interval c_1 ≤ x ≤ c_2, and the solutions are singular, in the sense that the second derivative fails to exist, at x = c_1 and x = c_2.
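
As a concrete check, the sketch below (sympy, illustrative; the choice c_1 = 0 and c_2 = 1 is an assumption made only for the example) verifies that each piece solves the equation, that value and first derivative match at the joins, and that the second derivative jumps there.

    # Minimal sympy sketch (illustration): glued solution with c_1 = 0 and c_2 = 1.
    import sympy as sp

    x = sp.symbols('x', real=True)
    left, middle, right = x**2, sp.Integer(0), (x - 1)**2   # pieces for x < 0, 0 <= x <= 1, x > 1

    # Each piece satisfies y'^2 - 4*y = 0.
    for f in (left, middle, right):
        print(sp.simplify(sp.diff(f, x)**2 - 4*f))           # 0

    # C^1 matching at the joins x = 0 and x = 1 (values and slopes are both 0).
    print(left.subs(x, 0), sp.diff(left, x).subs(x, 0))      # 0 0
    print(right.subs(x, 1), sp.diff(right, x).subs(x, 1))    # 0 0

    # The second derivative jumps from 2 to 0 at x = 0 (and from 0 to 2 at x = 1).
    print(sp.diff(left, x, 2), sp.diff(middle, x, 2))         # 2 0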

Further example of failure of uniqueness

The previous example might give the erroneous impression that failure of uniqueness is directly related to y(x) = 0. Failure of uniqueness can also be seen in the following example of a Clairaut equation:

y(x) = x y' + (y')²

We write y' = p and then

y(x) = x p + p².

Now we differentiate with respect to x:

p = y' = p + x p' + 2p p',

which by simple algebra yields

0 = (2p + x) p'.

This condition is satisfied if 2p + x = 0 or if p' = 0.

If p' = 0, it means that y' = p = c = constant, and the general solution of this new equation is

y_c(x) = c x + c²,

where c is determined by the initial value.

If x + 2p = 0, then we get that p = −½x, and substituting into the ODE gives

y_s(x) = (−½x)x + (−½x)² = −¼x².
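
Both branches can be checked directly; the sketch below (sympy, an illustrative assumption rather than part of the original article) substitutes each candidate into y = x y' + (y')² and confirms that the residual vanishes.

    # Minimal sympy sketch (illustration): both branches solve y = x*y' + (y')**2.
    import sympy as sp

    x, c = sp.symbols('x c', real=True)

    def clairaut_residual(expr):
        # residual y - (x*y' + (y')**2) for a candidate solution y(x)
        d = sp.diff(expr, x)
        return sp.simplify(expr - (x*d + d**2))

    y_c = c*x + c**2    # general solution, from the p' = 0 branch
    y_s = -x**2/4       # singular solution, from the 2p + x = 0 branch
    print(clairaut_residual(y_c), clairaut_residual(y_s))  # 0 0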

Now we shall check when these solutions are singular solutions. If two solutions intersect each other, that is, they both go through the same point (x,y), then there is a failure of uniqueness for a first-order ordinary differential equation. Thus, there will be a failure of uniqueness if a solution of the first form intersects the second solution.

The condition of intersection is y_s(x) = y_c(x). We solve

c x + c² = y_c(x) = y_s(x) = −¼x²

to find the intersection point, which is (−2c, −c²).

We can verify that the curves are tangent at this point, that is, that y_s'(x) = y_c'(x) there. We calculate the derivatives:

y_c'(−2c) = c,
y_s'(−2c) = −½(−2c) = c.

Hence,

y_s(x) = −¼x²

is tangent to every member of the one-parameter family of solutions

y_c(x) = c x + c²

of this Clairaut equation:

y(x) = x y' + (y')².
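
The tangency can also be verified symbolically; the sketch below (sympy, illustrative) finds the contact point x = −2c and checks that values and slopes agree there.

    # Minimal sympy sketch (illustration): y_s touches each y_c at (-2c, -c**2).
    import sympy as sp

    x, c = sp.symbols('x c', real=True)
    y_c = c*x + c**2
    y_s = -x**2/4

    contact = sp.solve(sp.Eq(y_c, y_s), x)   # [-2*c], a double root
    x0 = contact[0]
    print(x0, y_c.subs(x, x0))               # -2*c  -c**2
    print(sp.diff(y_c, x).subs(x, x0), sp.diff(y_s, x).subs(x, x0))  # c  c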
