In mathematics, the method of Frobenius, named after Ferdinand Georg Frobenius, is a way to find an infinite series solution for a linear second-order ordinary differential equation of the form

$$z^2 u'' + p(z)\,z\,u' + q(z)\,u = 0$$

with $u' \equiv \frac{du}{dz}$ and $u'' \equiv \frac{d^2 u}{dz^2}$, in the vicinity of the regular singular point $z = 0$.
One can divide by $z^2$ to obtain a differential equation of the form

$$u'' + \frac{p(z)}{z}\,u' + \frac{q(z)}{z^2}\,u = 0,$$

which will not be solvable with regular power series methods if either $p(z)/z$ or $q(z)/z^2$ is not analytic at $z = 0$. The Frobenius method enables one to create a power series solution to such a differential equation, provided that $p(z)$ and $q(z)$ are themselves analytic at 0 or, being analytic elsewhere, both their limits at 0 exist (and are finite).
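As an added illustration (not part of the original passage), Bessel's equation is a standard instance of this form:

$$z^2 u'' + z\,u' + (z^2 - \alpha^2)\,u = 0,$$

where $p(z) = 1$ and $q(z) = z^2 - \alpha^2$ are both analytic at $z = 0$, so $z = 0$ is a regular singular point and the method applies.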
Frobenius' contribution [1] was not so much in all the possible forms of the series solutions involved (see below); these forms had all been established earlier [2] by Fuchs. [3] [4] The indicial polynomial (see below) and its role had also been established by Fuchs. [2]
A first contribution by Frobenius to the theory was to show that, as regards a first, linearly independent solution, which then has the form of an analytic power series multiplied by an arbitrary power $r$ of the independent variable (see below), the coefficients of the generalized power series obey a recurrence relation, so that they can always be straightforwardly calculated.
A second contribution by Frobenius was to show that, in cases in which the roots of the indicial equation differ by an integer, the general form of the second linearly independent solution (see below) can be obtained by a procedure which is based on differentiation [5] with respect to the parameter r, mentioned above.
A large part of Frobenius' 1873 publication [1] was devoted to proofs of convergence of all the series involved in the solutions, as well as establishing the radii of convergence of these series.
The method of Frobenius is to seek a power series solution of the form

$$u(z) = \sum_{k=0}^{\infty} A_k z^{k+r} \qquad (A_0 \neq 0).$$
Differentiating:

$$u'(z) = \sum_{k=0}^{\infty} (k+r) A_k z^{k+r-1},$$
$$u''(z) = \sum_{k=0}^{\infty} (k+r)(k+r-1) A_k z^{k+r-2}.$$
Substituting the above differentiation into our original ODE:

$$z^2 \sum_{k=0}^{\infty} (k+r)(k+r-1) A_k z^{k+r-2} + z\,p(z) \sum_{k=0}^{\infty} (k+r) A_k z^{k+r-1} + q(z) \sum_{k=0}^{\infty} A_k z^{k+r}$$
$$= \sum_{k=0}^{\infty} \left[(k+r)(k+r-1) + (k+r)\,p(z) + q(z)\right] A_k z^{k+r}$$
$$= \left[r(r-1) + r\,p(z) + q(z)\right] A_0 z^{r} + \sum_{k=1}^{\infty} \left[(k+r)(k+r-1) + (k+r)\,p(z) + q(z)\right] A_k z^{k+r}.$$
The expression

$$I(r) = r(r-1) + p(0)\,r + q(0)$$

is known as the indicial polynomial, which is quadratic in $r$. The general definition of the indicial polynomial is the coefficient of the lowest power of $z$ in the infinite series. In this case it happens to be the $r$th coefficient, but it is possible for the lowest exponent to be $r-2$, $r-1$, or something else, depending on the given differential equation. This detail is important to keep in mind: in the process of synchronizing all the series of the differential equation to start at the same index value (which in the above expression is $k = 1$), one can end up with complicated expressions. However, in solving for the indicial roots, attention is focused only on the coefficient of the lowest power of $z$.
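Continuing the added Bessel illustration: there $p(0) = 1$ and $q(0) = -\alpha^2$, so

$$I(r) = r(r-1) + r - \alpha^2 = r^2 - \alpha^2,$$

with indicial roots $r = \pm\alpha$; the roots differ by an integer exactly when $2\alpha$ is an integer.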
Using this, the general expression of the coefficient of $z^{k+r}$ is

$$I(k+r)\,A_k + \sum_{j=0}^{k-1} \frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}\,A_j.$$
These coefficients must be zero, since they should be solutions of the differential equation, so

$$I(k+r)\,A_k + \sum_{j=0}^{k-1} \frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}\,A_j = 0,$$

which gives the recurrence

$$A_k = -\frac{1}{I(k+r)} \sum_{j=0}^{k-1} \frac{(j+r)\,p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}\,A_j.$$
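To make the recurrence concrete, here is a minimal Python sketch (our own illustration: the function name frobenius_coeffs and the convention of passing Taylor coefficients are assumptions, and exact arithmetic presumes a rational indicial root $r$):

```python
from fractions import Fraction

def frobenius_coeffs(p, q, r, n):
    """Coefficients A_0..A_n of the Frobenius series u = sum_k A_k z^(k+r)
    for z^2 u'' + z p(z) u' + q(z) u = 0, where p[m] = p^(m)(0)/m! and
    q[m] = q^(m)(0)/m! are Taylor coefficients (supply at least n+1 each)."""
    I = lambda s: s * (s - 1) + p[0] * s + q[0]      # indicial polynomial I(s)
    A = [Fraction(1)]                                # normalization A_0 = 1
    for k in range(1, n + 1):
        acc = sum(((j + r) * p[k - j] + q[k - j]) * A[j] for j in range(k))
        A.append(-acc / I(k + r))                    # valid while I(k+r) != 0
    return A

# The worked example below has p(z) = -1, q(z) = 1 - z and indicial root r = 1;
# the computed coefficients reproduce A_k = 1/(k!)^2.
print(frobenius_coeffs([-1, 0, 0, 0, 0], [1, -1, 0, 0, 0], 1, 4))
# [Fraction(1, 1), Fraction(1, 1), Fraction(1, 4), Fraction(1, 36), Fraction(1, 576)]
```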
The series solution with $A_k$ above,

$$U_r(z) = \sum_{k=0}^{\infty} A_k z^{k+r},$$

satisfies

$$z^2 U_r''(z) + p(z)\,z\,U_r'(z) + q(z)\,U_r(z) = I(r)\,z^r.$$
If we choose one of the roots of the indicial polynomial for $r$ in $U_r(z)$, we gain a solution to the differential equation. If the difference between the roots is not an integer, we get another, linearly independent solution from the other root.
Let us solve

$$z^2 f'' - z f' + (1-z) f = 0.$$
Divide throughout by $z^2$ to give

$$f'' - \frac{1}{z} f' + \frac{1-z}{z^2} f = f'' - \frac{1}{z} f' + \left(\frac{1}{z^2} - \frac{1}{z}\right) f = 0,$$

which has the requisite singularity at $z = 0$.
Use the series solution

$$f = \sum_{k=0}^{\infty} A_k z^{k+r},$$
$$f' = \sum_{k=0}^{\infty} (k+r) A_k z^{k+r-1},$$
$$f'' = \sum_{k=0}^{\infty} (k+r)(k+r-1) A_k z^{k+r-2}.$$
Now, substituting:

$$\sum_{k=0}^{\infty} (k+r)(k+r-1) A_k z^{k+r-2} - \frac{1}{z} \sum_{k=0}^{\infty} (k+r) A_k z^{k+r-1} + \left(\frac{1}{z^2} - \frac{1}{z}\right) \sum_{k=0}^{\infty} A_k z^{k+r}$$
$$= \sum_{k=0}^{\infty} \left[(k+r)(k+r-1) - (k+r) + 1\right] A_k z^{k+r-2} - \sum_{k=0}^{\infty} A_k z^{k+r-1}$$
$$= \sum_{k=0}^{\infty} (k+r-1)^2 A_k z^{k+r-2} - \sum_{k=1}^{\infty} A_{k-1} z^{k+r-2}$$
$$= (r-1)^2 A_0 z^{r-2} + \sum_{k=1}^{\infty} \left[(k+r-1)^2 A_k - A_{k-1}\right] z^{k+r-2}.$$
From $(r-1)^2 = 0$ we get a double root of 1. Using this root, we set the coefficient of $z^{k+r-2}$ to be zero (for it to be a solution), which gives us

$$k^2 A_k - A_{k-1} = 0,$$

hence we have the recurrence relation

$$A_k = \frac{A_{k-1}}{k^2}.$$
Given some initial conditions, we can either solve the recurrence entirely or obtain a solution in power series form.
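As an added worked step: with the normalization $A_0 = 1$, the recurrence telescopes to a closed form,

$$A_k = \frac{A_{k-1}}{k^2} = \frac{A_{k-2}}{k^2 (k-1)^2} = \cdots = \frac{1}{(k!)^2},$$

so one solution is

$$f(z) = \sum_{k=0}^{\infty} \frac{z^{k+1}}{(k!)^2}.$$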
Since the ratio of successive coefficients $A_k / A_{k-1}$ is a rational function of $k$, the power series can be written as a generalized hypergeometric series.
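Concretely, here $A_k / A_{k-1} = 1/k^2$, and since $(1)_k = k!$ the solution found above can be written as

$$f(z) = z \sum_{k=0}^{\infty} \frac{z^k}{(k!)^2} = z\,{}_{0}F_{1}(;1;z).$$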
The previous example involved an indicial polynomial with a repeated root, which gives only one solution to the given differential equation. In general, the Frobenius method gives two independent solutions provided that the indicial equation's roots are not separated by an integer (including zero).
If the root is repeated or the roots differ by an integer, then the second solution can be found using

$$y_2 = C y_1 \ln z + \sum_{k=0}^{\infty} B_k z^{k+r_2},$$

where $y_1(z)$ is the first solution (based on the larger root in the case of unequal roots), $r_2$ is the smaller root, and the constant $C$ and the coefficients $B_k$ are to be determined. Once $B_0$ is chosen (for example by setting it to 1), then $C$ and the $B_k$ are determined up to but not including $B_{r_1 - r_2}$, which can be set arbitrarily. This then determines the rest of the $B_k$. In some cases the constant $C$ must be zero.
Example: consider the following differential equation (Kummer's equation with $a = 1$ and $b = 2$):

$$z u'' + (2 - z) u' - u = 0.$$
The roots of the indicial equation are $-1$ and $0$. Two independent solutions are $1/z$ and $e^z/z$, so we see that the logarithm does not appear in any solution. The solution $(e^z - 1)/z$ has a power series starting with the power zero. In a power series starting with $z^{-1}$, the recurrence relation places no restriction on the coefficient of the term $z^0$, which can be set arbitrarily. If it is set to zero, then with this differential equation all the other coefficients will be zero and we obtain the solution $1/z$.
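As a quick symbolic check (an added sketch using SymPy, not part of the original text):

```python
import sympy as sp

z = sp.symbols('z')

# Kummer's equation with a = 1, b = 2: z u'' + (2 - z) u' - u
L = lambda u: z * sp.diff(u, z, 2) + (2 - z) * sp.diff(u, z) - u

print(sp.simplify(L(1 / z)))                 # 0: the 1/z solution
print(sp.simplify(L(sp.exp(z) / z)))         # 0: the e^z/z solution
print(sp.simplify(L((sp.exp(z) - 1) / z)))   # 0: the power-series solution
```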
In cases in which roots of the indicial polynomial differ by an integer (including zero), the coefficients of all series involved in second linearly independent solutions can be calculated straightforwardly from tandem recurrence relations. [5] These tandem relations can be constructed by further developing Frobenius' original invention of differentiating with respect to the parameter r, and using this approach to actually calculate the series coefficients in all cases. [5]