Frobenius method

Figure: Some solutions of a differential equation having a regular singular point with indicial roots $r = \tfrac{1}{2}$ and $-1$.

In mathematics, the method of Frobenius, named after Ferdinand Georg Frobenius, is a way to find an infinite series solution for a linear second-order ordinary differential equation of the form

$$z^{2}u'' + p(z)\,z\,u' + q(z)\,u = 0$$

with $u' \equiv \frac{du}{dz}$ and $u'' \equiv \frac{d^{2}u}{dz^{2}}$,

in the vicinity of the regular singular point $z = 0$.

One can divide by $z^{2}$ to obtain a differential equation of the form

$$u'' + \frac{p(z)}{z}\,u' + \frac{q(z)}{z^{2}}\,u = 0,$$

which will not be solvable with regular power series methods if either $p(z)/z$ or $q(z)/z^{2}$ is not analytic at $z = 0$. The Frobenius method enables one to create a power series solution to such a differential equation, provided that p(z) and q(z) are themselves analytic at 0 or, being analytic elsewhere, both their limits at 0 exist (and are finite).
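For instance, Bessel's equation

$$z^{2}u'' + z\,u' + (z^{2}-\alpha^{2})\,u = 0$$

is of this form with $p(z) = 1$ and $q(z) = z^{2}-\alpha^{2}$, both analytic at 0, so $z = 0$ is a regular singular point and the method applies there.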

History: Frobenius' Actual Contributions

Frobenius' contribution [1] was not so much in all the possible forms of the series solutions involved (see below): these forms had all been established earlier by Fuchs. [2] [3] [4] The indicial polynomial (see below) and its role had also been established by Fuchs. [2]

A first contribution by Frobenius to the theory was to show that - as regards a first, linearly independent solution, which then has the form of an analytical power series multiplied by an arbitrary power r of the independent variable (see below) - the coefficients of the generalized power series obey a recurrence relation, so that they can always be straightforwardly calculated.

A second contribution by Frobenius was to show that, in cases in which the roots of the indicial equation differ by an integer, the general form of the second linearly independent solution (see below) can be obtained by a procedure which is based on differentiation [5] with respect to the parameter r, mentioned above.

A large part of Frobenius' 1873 publication [1] was devoted to proofs of convergence of all the series involved in the solutions, as well as establishing the radii of convergence of these series.

Explanation of the Frobenius method: first linearly independent solution

The method of Frobenius is to seek a power series solution of the form

$$u(z) = \sum_{k=0}^{\infty} A_{k} z^{k+r}, \qquad A_{0} \neq 0.$$

Differentiating:

$$u'(z) = \sum_{k=0}^{\infty} (k+r) A_{k} z^{k+r-1}$$

$$u''(z) = \sum_{k=0}^{\infty} (k+r)(k+r-1) A_{k} z^{k+r-2}$$

Substituting the above differentiation into our original ODE:

$$z^{2}\sum_{k=0}^{\infty}(k+r)(k+r-1)A_{k}z^{k+r-2} + z\,p(z)\sum_{k=0}^{\infty}(k+r)A_{k}z^{k+r-1} + q(z)\sum_{k=0}^{\infty}A_{k}z^{k+r}$$

$$= \sum_{k=0}^{\infty}\left[(k+r)(k+r-1) + (k+r)\,p(z) + q(z)\right]A_{k}z^{k+r}$$

$$= \left[r(r-1) + r\,p(z) + q(z)\right]A_{0}z^{r} + \sum_{k=1}^{\infty}\left[(k+r)(k+r-1) + (k+r)\,p(z) + q(z)\right]A_{k}z^{k+r}$$

The expression

$$I(r) = r(r-1) + p(0)\,r + q(0)$$

is known as the indicial polynomial, which is quadratic in r. The general definition of the indicial polynomial is the coefficient of the lowest power of z in the infinite series. In this case it happens to be the rth coefficient but, depending on the given differential equation, the lowest possible exponent may be r − 2, r − 1, or something else. This detail is important to keep in mind: in the process of synchronizing all the series of the differential equation to start at the same index value (which in the above expression is k = 1), one can end up with complicated expressions. However, in solving for the indicial roots, attention is focused only on the coefficient of the lowest power of z.
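For instance, for the equation treated in the example below, $z^{2}f'' - zf' + (1-z)f = 0$, one has $p(z) = -1$ and $q(z) = 1-z$, so the indicial polynomial is $I(r) = r(r-1) - r + 1 = (r-1)^{2}$, with the double root $r = 1$.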

Using this, the general expression of the coefficient of $z^{k+r}$ is

$$I(k+r)A_{k} + \sum_{j=0}^{k-1}\frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}A_{j}.$$

These coefficients must be zero, since they should be solutions of the differential equation, so

$$I(k+r)A_{k} + \sum_{j=0}^{k-1}\frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}A_{j} = 0,$$

$$A_{k} = -\frac{1}{I(k+r)}\sum_{j=0}^{k-1}\frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}A_{j}.$$

The series solution with the $A_{k}$ above,

$$U_{r}(z) = \sum_{k=0}^{\infty}A_{k}z^{k+r},$$

satisfies

$$z^{2}U_{r}(z)'' + p(z)\,z\,U_{r}(z)' + q(z)\,U_{r}(z) = I(r)\,z^{r}.$$

If we choose one of the roots of the indicial polynomial for r in $U_{r}(z)$, we gain a solution to the differential equation. If the difference between the roots is not an integer, we get another, linearly independent solution from the other root.
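This recurrence lends itself to direct computation. The following short Python sketch (the function name and calling convention are illustrative, not taken from any standard library) builds the coefficients $A_{k}$ from the Taylor coefficients of p and q, assuming $I(k+r) \neq 0$ for every $k \geq 1$, as is the case when r is the larger indicial root.

```python
from fractions import Fraction

def frobenius_coefficients(p_taylor, q_taylor, r, n_terms, A0=Fraction(1)):
    """Coefficients A_k of a Frobenius series u(z) = sum_k A_k z^(k+r) for
    z^2 u'' + p(z) z u' + q(z) u = 0, where p_taylor[n] and q_taylor[n] are
    the Taylor coefficients p^(n)(0)/n! and q^(n)(0)/n! of p and q about 0.
    Assumes I(k + r) != 0 for k = 1, ..., n_terms."""
    def p(n): return p_taylor[n] if n < len(p_taylor) else Fraction(0)
    def q(n): return q_taylor[n] if n < len(q_taylor) else Fraction(0)
    def I(s): return s * (s - 1) + p(0) * s + q(0)   # indicial polynomial

    A = [A0]
    for k in range(1, n_terms + 1):
        # A_k = -(1 / I(k+r)) * sum_{j<k} [(j+r) p_{k-j} + q_{k-j}] A_j
        s = sum(((j + r) * p(k - j) + q(k - j)) * A[j] for j in range(k))
        A.append(-s / I(k + r))
    return A

# Example below: z^2 f'' - z f' + (1 - z) f = 0, i.e. p(z) = -1, q(z) = 1 - z,
# with the (double) indicial root r = 1.
print(frobenius_coefficients([Fraction(-1)], [Fraction(1), Fraction(-1)], Fraction(1), 5))
# prints fractions equal to 1, 1, 1/4, 1/36, 1/576, 1/14400, i.e. A_k = 1/(k!)^2
```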

Example

Let us solve

$$z^{2}f'' - zf' + (1-z)f = 0.$$

Divide throughout by $z^{2}$ to give

$$f'' - \frac{1}{z}f' + \frac{1-z}{z^{2}}f = f'' - \frac{1}{z}f' + \left(\frac{1}{z^{2}} - \frac{1}{z}\right)f = 0,$$

which has the requisite singularity at z = 0.

Use the series solution

$$f = \sum_{k=0}^{\infty}A_{k}z^{k+r}$$

$$f' = \sum_{k=0}^{\infty}(k+r)A_{k}z^{k+r-1}$$

$$f'' = \sum_{k=0}^{\infty}(k+r)(k+r-1)A_{k}z^{k+r-2}$$

Now, substituting

$$\sum_{k=0}^{\infty}\left[(k+r)(k+r-1) - (k+r) + 1\right]A_{k}z^{k+r-2} - \sum_{k=0}^{\infty}A_{k}z^{k+r-1}$$

$$= (r-1)^{2}A_{0}z^{r-2} + \sum_{k=1}^{\infty}\left[(k+r-1)^{2}A_{k} - A_{k-1}\right]z^{k+r-2} = 0.$$

From (r − 1)² = 0 we get a double root of 1. Using this root, we set the coefficient of $z^{k+r-2}$ to be zero (for it to be a solution), which gives us:

$$(k+r-1)^{2}A_{k} - A_{k-1} = 0,$$

hence we have the recurrence relation:

$$A_{k} = \frac{A_{k-1}}{(k+r-1)^{2}}.$$

Given some initial conditions, we can either solve the recurrence entirely or obtain a solution in power series form; with r = 1 and $A_{0} = 1$ the recurrence gives $A_{k} = 1/(k!)^{2}$, so one solution is

$$f(z) = \sum_{k=0}^{\infty}\frac{z^{k+1}}{(k!)^{2}}.$$

Since the ratio of coefficients $A_{k}/A_{k-1}$ is a rational function of k, the power series can be written as a generalized hypergeometric series.
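As a sanity check (a minimal sketch assuming the sympy library is available), one can substitute a truncation of this series back into the equation and confirm that only the truncation-order term survives:

```python
import sympy as sp

z = sp.symbols('z')

# Truncated Frobenius series with r = 1, A_0 = 1 and A_k = A_{k-1} / k^2.
N = 8
A = [sp.Integer(1)]
for k in range(1, N + 1):
    A.append(A[-1] / k**2)
f = sum(A[k] * z**(k + 1) for k in range(N + 1))

# Apply the original operator z^2 f'' - z f' + (1 - z) f to the truncation.
residual = sp.expand(z**2 * sp.diff(f, z, 2) - z * sp.diff(f, z) + (1 - z) * f)
print(residual)  # only the leftover term -z**(N+2) / (N!)**2 remains
```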

"The exceptional cases": roots separated by an integer

The previous example involved an indicial polynomial with a repeated root, which gives only one solution to the given differential equation. In general, the Frobenius method gives two independent solutions provided that the indicial equation's roots are not separated by an integer (including zero).

If the root is repeated or the roots differ by an integer, then the second solution can be found using:

$$y_{2} = C\,y_{1}\ln z + \sum_{k=0}^{\infty}B_{k}z^{k+r_{2}},$$

where $y_{1}$ is the first solution (based on the larger root in the case of unequal roots), $r_{2}$ is the smaller root, and the constant C and the coefficients $B_{k}$ are to be determined. Once $B_{0}$ is chosen (for example by setting it to 1), then C and the $B_{k}$ are determined up to but not including $B_{r_{1}-r_{2}}$, which can be set arbitrarily. This then determines the rest of the $B_{k}$. In some cases the constant C must be zero.
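In the repeated-root case, for example, this second solution is the one obtained by Frobenius' differentiation with respect to r mentioned above: writing $U_{r}(z) = \sum_{k} A_{k}(r)\,z^{k+r}$ for the series built from the recurrence,

$$y_{2}(z) = \left.\frac{\partial U_{r}(z)}{\partial r}\right|_{r=r_{1}} = U_{r_{1}}(z)\ln z + \sum_{k=0}^{\infty}A_{k}'(r_{1})\,z^{k+r_{1}},$$

which has the form above with $C = 1$ and $B_{k} = A_{k}'(r_{1})$.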

Example: consider the following differential equation (Kummer's equation with a = 1 and b = 2):

$$zu'' + (2-z)u' - u = 0.$$

The roots of the indicial equation are −1 and 0. Two independent solutions are $1/z$ and $e^{z}/z$, so we see that the logarithm does not appear in any solution. The solution $(e^{z}-1)/z$ has a power series starting with the power zero. In a power series starting with $z^{-1}$, the recurrence relation places no restriction on the coefficient of the $z^{0}$ term, which can be set arbitrarily. If it is set to zero, then with this differential equation all the other coefficients will be zero and we obtain the solution 1/z.
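Both claimed solutions are easy to verify directly; a minimal check (again assuming sympy) is:

```python
import sympy as sp

z = sp.symbols('z')

# Kummer's equation with a = 1, b = 2:  z u'' + (2 - z) u' - u = 0
for u in (1/z, sp.exp(z)/z):
    lhs = z * sp.diff(u, z, 2) + (2 - z) * sp.diff(u, z) - u
    print(u, sp.simplify(lhs))  # both residuals simplify to 0
```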

Tandem Recurrence Relations for Series Coefficients in the Exceptional Cases

In cases in which roots of the indicial polynomial differ by an integer (including zero), the coefficients of all series involved in second linearly independent solutions can be calculated straightforwardly from tandem recurrence relations. [5] These tandem relations can be constructed by further developing Frobenius' original invention of differentiating with respect to the parameter r, and using this approach to actually calculate the series coefficients in all cases. [5]

See also

Related Research Articles

Bessel function: Families of solutions to related differential equations

Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions y(x) of Bessel's differential equation $x^{2}y'' + xy' + (x^{2}-\alpha^{2})y = 0$ for an arbitrary complex number α, the order of the Bessel function.

Laplace's equation: Second-order partial differential equation

In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as $\nabla^{2}f = 0$ or $\Delta f = 0$.

Legendre polynomials: System of complete and orthogonal polynomials

In mathematics, Legendre polynomials, named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a vast number of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications.

In mathematics, a recurrence relation is an equation according to which the $n$th term of a sequence of numbers is equal to some combination of the previous terms. Often, only the $k$ previous terms of the sequence appear in the equation, for a parameter $k$ that is independent of $n$; this number $k$ is called the order of the relation. If the values of the first $k$ numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.

In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers.

Heat equation: Partial differential equation describing the evolution of temperature in a region

In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region.

Chebyshev polynomials: Polynomial sequence

The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_{n}(x)$ and $U_{n}(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions: $T_{n}(\cos\theta) = \cos(n\theta)$ and $U_{n}(\cos\theta)\sin\theta = \sin((n+1)\theta)$.

In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction is an operation that consists of expressing the fraction as a sum of a polynomial and one or several fractions with a simpler denominator.

Generalized hypergeometric function: Family of power series in mathematics

In mathematics, a generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. The series, if convergent, defines a generalized hypergeometric function, which may then be defined over a wider domain of the argument by analytic continuation. The generalized hypergeometric series is sometimes just called the hypergeometric series, though this term also sometimes just refers to the Gaussian hypergeometric series. Generalized hypergeometric functions include the (Gaussian) hypergeometric function and the confluent hypergeometric function as special cases, which in turn have many particular special functions as special cases, such as elementary functions, Bessel functions, and the classical orthogonal polynomials.

In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L.

The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem.

In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form

$$\frac{d}{dx}\!\left[p(x)\frac{dy}{dx}\right] + q(x)\,y = -\lambda\,w(x)\,y$$

for given coefficient functions p(x), q(x), and w(x).

In mathematics, the power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.

Confluent hypergeometric function: Solution of a confluent hypergeometric equation

In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions, such as Kummer's function M(a, b, z) and Tricomi's function U(a, b, z).

Padé approximant: Best approximation of a function by a rational function of given order

In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series.

Bring radical: Real root of the polynomial x^5 + x + a

In algebra, the Bring radical or ultraradical of a real number a is the unique real root of the polynomial $x^{5} + x + a$.

In mathematics, the Fuchs relation is a relation between the starting exponents of formal series solutions of certain linear differential equations, so called Fuchsian equations. It is named after Lazarus Immanuel Fuchs.

The Fuchsian theory of linear differential equations, which is named after Lazarus Immanuel Fuchs, provides a characterization of various types of singularities and the relations among them.

In mathematics a P-recursive equation is a linear equation of sequences where the coefficient sequences can be represented as polynomials. P-recursive equations are linear recurrence equations with polynomial coefficients. These equations play an important role in different areas of mathematics, specifically in combinatorics. The sequences which are solutions of these equations are called holonomic, P-recursive or D-finite.

Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form.

References

  1. Frobenius, Ferdinand Georg (1968) [Originally in Journal für die reine und angewandte Mathematik 76, 214–235 (1873)]. "Über die Integration der linearen Differentialgleichungen durch Reihen". Gesammelte Abhandlungen (in German). Berlin: Springer-Verlag. pp. 84–105.
  2. Gray, Jeremy (1986). Linear Differential Equations and Group Theory from Riemann to Poincaré. Boston: Birkhäuser. ISBN 0-8176-3318-9.
  3. Fuchs, Lazarus Immanuel (1865). "Zur Theorie der linearen Differentialgleichungen mit veränderlichen Coefficienten". Gesammelte Mathematische Werke von L. Fuchs (in German). University of Michigan Library.
  4. Fuchs, Lazarus Immanuel (1866). "Zur Theorie der linearen Differentialgleichungen mit veränderlichen Coefficienten". Journal für die reine und angewandte Mathematik. 66: 159–204.
  5. van der Toorn, Ramses (27 December 2022). "Tandem Recurrence Relations for Coefficients of Logarithmic Frobenius Series Solutions about Regular Singular Points". Axioms. 12 (1): 32. doi:10.3390/axioms12010032. ISSN 2075-1680.