Hermite interpolation

In numerical analysis, Hermite interpolation, named after Charles Hermite, is a method of polynomial interpolation, which generalizes Lagrange interpolation. Lagrange interpolation allows computing a polynomial of degree less than n that takes the same value at n given points as a given function. Instead, Hermite interpolation computes a polynomial of degree less than mn such that the polynomial and its first m − 1 derivatives have the same values at n given points as a given function and its first m − 1 derivatives.

Hermite's method of interpolation is closely related to Newton's interpolation method, in that both are derived from the calculation of divided differences. However, there are other methods for computing a Hermite interpolating polynomial. One can use linear algebra, by taking the coefficients of the interpolating polynomial as unknowns and writing the constraints that the interpolating polynomial must satisfy as linear equations. For another method, see Chinese remainder theorem § Hermite interpolation.

Statement of the problem

Hermite interpolation consists of computing a polynomial of degree as low as possible that matches an unknown function both in observed value and in the observed values of its first m derivatives. This means that the n(m + 1) values

$$f(x_i),\ f'(x_i),\ \ldots,\ f^{(m)}(x_i), \qquad i = 0, 1, \ldots, n - 1,$$
must be known. The resulting polynomial has a degree less than n(m + 1). (In a more general case, there is no need for m to be a fixed value; that is, some points may have more known derivatives than others. In this case the resulting polynomial has a degree less than the number of data points.)

Let us consider a polynomial P(x) of degree less than n(m + 1) with indeterminate coefficients; that is, the coefficients of P(x) are n(m + 1) new variables. Then, by writing the constraints that the interpolating polynomial must satisfy, one gets a system of n(m + 1) linear equations in n(m + 1) unknowns.

In general, such a square system has exactly one solution. Charles Hermite proved that this is indeed the case here, as soon as the x_i are pairwise distinct,[citation needed] and provided a method for computing it, which is described below; a minimal sketch of the linear-algebra approach follows.
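To make the linear-algebra approach concrete, here is a minimal sketch in Python, assuming NumPy is available; the function name hermite_by_linear_system and the data layout are illustrative, not from the article. Each constraint P^(r)(x_i) = f^(r)(x_i) becomes one row of a square linear system in the coefficients of P.

import numpy as np
from math import factorial

def hermite_by_linear_system(points):
    """points: list of (x_i, [f(x_i), f'(x_i), ..., f^(m_i)(x_i)]).
    Returns the coefficients c_0, ..., c_{N-1} of P(x) = sum_k c_k x^k."""
    N = sum(len(derivs) for _, derivs in points)   # constraints = unknowns
    A = np.zeros((N, N))
    b = np.zeros(N)
    row = 0
    for x, derivs in points:
        for r, value in enumerate(derivs):         # constraint P^(r)(x) = value
            for k in range(r, N):
                # r-th derivative of x^k is k!/(k-r)! * x^(k-r)
                A[row, k] = factorial(k) // factorial(k - r) * x ** (k - r)
            b[row] = value
            row += 1
    return np.linalg.solve(A, b)                   # unique solution for pairwise distinct x_i

# Example: value and first derivative of f(x) = x^3 at x = 0 and x = 1
print(hermite_by_linear_system([(0.0, [0.0, 0.0]), (1.0, [1.0, 3.0])]))
# -> approximately [0, 0, 0, 1], i.e. P(x) = x^3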

Method

Simple case

When using divided differences to calculate the Hermite polynomial of a function f, the first step is to copy each point once for each value known there (the value of f and each known derivative). (Here we will consider the simplest case, in which the value and the first derivative are known at every point.) Therefore, given data points $x_0, x_1, \ldots, x_{n-1}$ and values $f(x_j)$ and $f'(x_j)$ for a function $f$ that we want to interpolate, we create a new dataset

$$z_0, z_1, \ldots, z_{2n-1}$$

such that

$$z_{2i} = z_{2i+1} = x_i.$$

Now, we create a divided differences table for the points $z_0, z_1, \ldots, z_{2n-1}$. However, whenever $z_i = z_{i+1}$, the divided difference

$$f[z_i, z_{i+1}] = \frac{f(z_{i+1}) - f(z_i)}{z_{i+1} - z_i} = \frac{0}{0}$$

is undefined. In this case, the divided difference is replaced by $f'(z_i)$. All others are calculated normally.
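As a minimal sketch of this first step (not from the article; the sample data below are for f(x) = x^3 with value and first derivative at x = 0 and x = 1), the duplicated dataset and the substitution rule for the undefined first-order differences look like this:

xs  = [0.0, 1.0]      # data points x_0, ..., x_{n-1}
ys  = [0.0, 1.0]      # f(x_i)
dys = [0.0, 3.0]      # f'(x_i)

z  = [x for x in xs for _ in range(2)]   # z_{2i} = z_{2i+1} = x_i
fz = [y for y in ys for _ in range(2)]   # zeroth divided differences f[z_i]

# First divided differences f[z_i, z_{i+1}]; the 0/0 cases (z_i = z_{i+1})
# are replaced by the known derivative f'(z_i).
first = []
for i in range(len(z) - 1):
    if z[i] == z[i + 1]:
        first.append(dys[i // 2])
    else:
        first.append((fz[i + 1] - fz[i]) / (z[i + 1] - z[i]))
print(first)    # [0.0, 1.0, 3.0]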

General case

In the general case, suppose the value and the first $k - 1$ derivatives of $f$ are known at a given point $x_i$. Then the dataset contains $k$ identical copies of $x_i$. When creating the table, a divided difference of $j + 1$ identical values is calculated as

$$f[\underbrace{x_i, \ldots, x_i}_{j+1\text{ times}}] = \frac{f^{(j)}(x_i)}{j!}.$$

For example,

$$f[x_i, x_i, x_i] = \frac{f''(x_i)}{2}, \qquad f[x_i, x_i, x_i, x_i] = \frac{f'''(x_i)}{6},$$

etc.
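Putting the two cases together, here is a minimal Python sketch (the names hermite_coefficients and newton_eval are illustrative, not from the article): build the repeated dataset, fill the divided-difference table using the rule above for identical values, and read the Newton coefficients off the diagonal.

from math import factorial

def hermite_coefficients(points):
    """points: list of (x_i, [f(x_i), f'(x_i), ..., f^(k_i - 1)(x_i)]).
    Returns the repeated nodes z and the Newton coefficients
    f[z_0], f[z_0, z_1], ..., f[z_0, ..., z_{N-1}]."""
    z, derivs = [], []
    for x, d in points:
        for _ in d:
            z.append(x)
            derivs.append(d)        # every copy keeps its point's derivative list

    N = len(z)
    # table[j][i] = f[z_i, ..., z_{i+j}]
    table = [[0.0] * N for _ in range(N)]
    table[0] = [d[0] for d in derivs]
    for j in range(1, N):
        for i in range(N - j):
            if z[i] == z[i + j]:    # j + 1 identical values
                table[j][i] = derivs[i][j] / factorial(j)
            else:
                table[j][i] = (table[j - 1][i + 1] - table[j - 1][i]) / (z[i + j] - z[i])
    return z, [table[j][0] for j in range(N)]

def newton_eval(z, coeffs, x):
    """Evaluate the Newton-form polynomial sum_k coeffs[k] * prod_{i<k} (x - z[i])."""
    result, basis = 0.0, 1.0
    for c, zi in zip(coeffs, z):
        result += c * basis
        basis *= x - zi
    return result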

Example

Consider the function $f(x) = x^8 + 1$. Evaluating the function and its first two derivatives at $x \in \{-1, 0, 1\}$, we obtain the following data:

  x    f(x)   f′(x)   f″(x)
 −1      2     −8      56
  0      1      0       0
  1      2      8      56

Since we have two derivatives to work with, we construct the set $\{z_i\} = \{-1, -1, -1, 0, 0, 0, 1, 1, 1\}$. Our divided difference table is then as follows (the entry in row $i$, column $j$ is $f[z_{i-j}, \ldots, z_i]$, so the diagonal entries are $f[z_0, \ldots, z_j]$):

 z_i   0th    1st    2nd    3rd    4th    5th    6th    7th    8th
 −1     2
 −1     2     −8
 −1     2     −8     28
  0     1     −1      7    −21
  0     1      0      1     −6     15
  0     1      0      0     −1      5    −10
  1     2      1      1      1      1     −2      4
  1     2      8      7      6      5      2      2     −1
  1     2      8     28     21     15     10      4      1      1

and the generated polynomial is

$$P(x) = 2 - 8(x + 1) + 28(x + 1)^2 - 21(x + 1)^3 + 15x(x + 1)^3 - 10x^2(x + 1)^3 + 4x^3(x + 1)^3 - x^3(x + 1)^3(x - 1) + x^3(x + 1)^3(x - 1)^2 = x^8 + 1,$$

obtained by taking the coefficients from the diagonal of the divided difference table and multiplying the $k$th coefficient by $\prod_{i=0}^{k-1}(x - z_i)$, as we would when generating a Newton polynomial.
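As a check, the hermite_coefficients sketch from the previous section reproduces this example (assuming that sketch has been run; expected output is shown in the comments):

# f(x) = x^8 + 1 with f, f' and f'' given at x = -1, 0, 1
data = [(-1.0, [2.0, -8.0, 56.0]),
        ( 0.0, [1.0,  0.0,  0.0]),
        ( 1.0, [2.0,  8.0, 56.0])]
z, coeffs = hermite_coefficients(data)
print(coeffs)                          # [2.0, -8.0, 28.0, -21.0, 15.0, -10.0, 4.0, -1.0, 1.0]
print(newton_eval(z, coeffs, 0.5))     # 1.00390625 = 0.5**8 + 1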

Quintic Hermite interpolation

Quintic Hermite interpolation is based on the values of the function and of its first and second derivatives at two different points $x_0$ and $x_1$. It can be used, for example, to interpolate the position of an object from its position, velocity and acceleration at the two endpoints. The interpolant is the unique polynomial of degree at most five matching these six values; one standard way to construct it is sketched below.
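A sketch of this construction in Python, written in terms of the standard quintic Hermite basis functions in the normalized variable t = (x − x0)/(x1 − x0); this is one common way to express the interpolant, not necessarily the exact closed form given in the article.

def quintic_hermite(x, x0, x1, p0, v0, a0, p1, v1, a1):
    """Quintic Hermite interpolant on [x0, x1] matching position p, velocity v
    and acceleration a at both endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    t2, t3, t4, t5 = t * t, t**3, t**4, t**5
    H0 = 1 - 10*t3 + 15*t4 - 6*t5            # value at x0
    H1 = t - 6*t3 + 8*t4 - 3*t5              # first derivative at x0 (scaled by h)
    H2 = 0.5*t2 - 1.5*t3 + 1.5*t4 - 0.5*t5   # second derivative at x0 (scaled by h^2)
    H3 = 0.5*t3 - t4 + 0.5*t5                # second derivative at x1 (scaled by h^2)
    H4 = -4*t3 + 7*t4 - 3*t5                 # first derivative at x1 (scaled by h)
    H5 = 10*t3 - 15*t4 + 6*t5                # value at x1
    return (p0*H0 + v0*h*H1 + a0*h*h*H2
            + a1*h*h*H3 + v1*h*H4 + p1*H5)

# For f(x) = x^5 on [0, 1] the interpolant reproduces f exactly:
print(quintic_hermite(0.5, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 5.0, 20.0))   # 0.03125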

Error

Call the calculated polynomial H and the original function f. Evaluating at a point $x \in [x_0, x_{n-1}]$, the error function is

$$f(x) - H(x) = \frac{f^{(K)}(c)}{K!} \prod_{i=0}^{n-1} (x - x_i)^{k_i},$$

where $c$ is an unknown point within the interval $[x_0, x_{n-1}]$, $K$ is the total number of data points (so that $K = \sum_i k_i$), and $k_i$ is the number of derivatives known at $x_i$ plus one.
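A numerical illustration of the bound, as a sketch assuming SciPy is available: for the cubic Hermite interpolant of sin on [0, 1] (value and first derivative at two points, so K = 4), the error is bounded pointwise by max|f''''|/4! · (x − x0)^2 (x − x1)^2, and |f''''| = |sin| ≤ sin 1 on that interval.

import numpy as np
from math import factorial, sin, cos
from scipy.interpolate import CubicHermiteSpline

x0, x1 = 0.0, 1.0
H = CubicHermiteSpline([x0, x1], [sin(x0), sin(x1)], [cos(x0), cos(x1)])

xs = np.linspace(x0, x1, 201)
actual = np.abs(np.sin(xs) - H(xs))
# |f''''(c)| = |sin(c)| <= sin(1) for c in [0, 1]
bound = sin(1.0) / factorial(4) * (xs - x0) ** 2 * (xs - x1) ** 2
print(np.all(actual <= bound + 1e-12))    # True: the error respects the bound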

See also

Birkhoff interpolation
Cubic Hermite spline
Lagrange polynomial
Newton polynomial
Polynomial interpolation
Spline interpolation
