In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data.
Given a data set of coordinate pairs $(x_j, y_j)$ with $0 \le j \le k$, the $x_j$ are called nodes and the $y_j$ are called values. The Lagrange polynomial $L(x)$ has degree $\le k$ and assumes each value at the corresponding node, $L(x_j) = y_j$.
Although named after Joseph-Louis Lagrange, who published it in 1795, [1] the method was first discovered in 1779 by Edward Waring. [2] It is also an easy consequence of a formula published in 1783 by Leonhard Euler. [3]
Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory.
For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.
Given a set of $k+1$ nodes $\{x_0, x_1, \ldots, x_k\}$, which must all be distinct, $x_j \neq x_m$ for indices $j \neq m$, the Lagrange basis for polynomials of degree $\le k$ for those nodes is the set of polynomials $\{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\}$, each of degree $k$, which take values $\ell_j(x_m) = 0$ if $m \neq j$ and $\ell_j(x_j) = 1$. Using the Kronecker delta this can be written $\ell_j(x_m) = \delta_{jm}$. Each basis polynomial can be explicitly described by the product:

$$\ell_j(x) = \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k} = \prod_{\substack{0 \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m}.$$
Notice that the numerator $\prod_{m \neq j} (x - x_m)$ has $k$ roots at the nodes $\{x_m\}_{m \neq j}$, while the denominator $\prod_{m \neq j} (x_j - x_m)$ scales the resulting polynomial so that $\ell_j(x_j) = 1.$
The Lagrange interpolating polynomial $L(x)$ for those nodes through the corresponding values $\{y_0, y_1, \ldots, y_k\}$ is the linear combination:

$$L(x) = \sum_{j=0}^{k} y_j \ell_j(x).$$
Each basis polynomial has degree $k$, so the sum $L(x)$ has degree $\le k$, and it interpolates the data because

$$L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{jm} = y_m.$$
The interpolating polynomial is unique. Proof: assume the polynomial $M(x)$ of degree $\le k$ interpolates the data. Then the difference $M(x) - L(x)$ is zero at $k+1$ distinct nodes $\{x_0, x_1, \ldots, x_k\}$. But the only polynomial of degree $\le k$ with more than $k$ roots is the constant zero function, so $M(x) - L(x) = 0$, or $M(x) = L(x)$.
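To make the definition concrete, here is a minimal Python sketch of the formulas above; the function names are illustrative rather than taken from any library.

```python
def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j(x) as the
    product of (x - x_m) / (x_j - x_m) over all m != j."""
    result = 1.0
    for m, x_m in enumerate(nodes):
        if m != j:
            result *= (x - x_m) / (nodes[j] - x_m)
    return result

def lagrange_interpolate(nodes, values, x):
    """Evaluate L(x) = sum_j y_j * l_j(x)."""
    return sum(y_j * lagrange_basis(nodes, j, x)
               for j, y_j in enumerate(values))

# L assumes each value at its node: for data sampled from y = x^2,
print(lagrange_interpolate([1.0, 2.0, 3.0], [1.0, 4.0, 9.0], 2.0))  # 4.0
```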
Each Lagrange basis polynomial can be rewritten as the product of three parts: a function $\ell(x) = \prod_{m=0}^{k} (x - x_m)$ common to every basis polynomial, a node-specific constant $w_j = \prod_{m \neq j} (x_j - x_m)^{-1}$ (called the barycentric weight), and a part representing the displacement from $x_j$ to $x$: [4]

$$\ell_j(x) = \ell(x) \frac{w_j}{x - x_j}.$$
By factoring $\ell(x)$ out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form:

$$L(x) = \ell(x) \sum_{j=0}^{k} \frac{w_j}{x - x_j} y_j.$$
If the weights $w_j$ have been pre-computed, this requires only $\mathcal{O}(k)$ operations, compared to $\mathcal{O}(k^2)$ for evaluating each Lagrange basis polynomial $\ell_j(x)$ individually.
The barycentric interpolation formula can also easily be updated to incorporate a new node $x_{k+1}$ by dividing each of the $w_j$, $j = 0, \ldots, k$, by $(x_j - x_{k+1})$ and constructing the new $w_{k+1}$ as above.
For any $x$, $\sum_{j=0}^{k} \ell_j(x) = 1$, because the constant function $g(x) = 1$ is the unique polynomial of degree $\le k$ interpolating the data $\{(x_0, 1), (x_1, 1), \ldots, (x_k, 1)\}$. We can thus further simplify the barycentric formula by dividing $L(x) = L(x)/g(x)$:

$$L(x) = \ell(x) \sum_{j=0}^{k} \frac{w_j}{x - x_j} y_j \Bigg/ \ell(x) \sum_{j=0}^{k} \frac{w_j}{x - x_j} = \sum_{j=0}^{k} \frac{w_j}{x - x_j} y_j \Bigg/ \sum_{j=0}^{k} \frac{w_j}{x - x_j}.$$
This is called the second form or true form of the barycentric interpolation formula.
This second form has advantages in computation cost and accuracy: it avoids evaluation of $\ell(x)$; the work to compute each term $w_j/(x - x_j)$ in the denominator has already been done in computing $\bigl(w_j/(x - x_j)\bigr) y_j$, so computing the sum in the denominator costs only $k$ addition operations; and for evaluation points $x$ close to one of the nodes $x_j$, catastrophic cancellation would ordinarily be a problem for the value $x - x_j$, but this quantity appears in both numerator and denominator and the two cancel, leaving good relative accuracy in the final result.
Using this formula to evaluate $L(x)$ at one of the nodes $x_j$ will result in the indeterminate $\infty y_j / \infty$; computer implementations must replace such results by $L(x_j) = y_j$.
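As a sketch of how these pieces combine in practice (the names here are illustrative, not from any particular library), the second barycentric form with the required node check might look like:

```python
import math

def barycentric_weights(nodes):
    """Precompute w_j = 1 / prod_{m != j} (x_j - x_m); O(k^2), done once."""
    return [1.0 / math.prod(x_j - x_m for m, x_m in enumerate(nodes) if m != j)
            for j, x_j in enumerate(nodes)]

def barycentric_eval(nodes, values, weights, x):
    """Evaluate the second (true) barycentric form in O(k) per point."""
    num = den = 0.0
    for x_j, y_j, w_j in zip(nodes, values, weights):
        if x == x_j:        # the formula is indeterminate at a node,
            return y_j      # so return the known value y_j directly
        t = w_j / (x - x_j)
        num += t * y_j
        den += t
    return num / den

nodes = [1.0, 2.0, 3.0]
w = barycentric_weights(nodes)
print(barycentric_eval(nodes, [1.0, 4.0, 9.0], w, 2.5))  # 6.25 for y = x^2
```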
Each Lagrange basis polynomial can also be written in barycentric form:

$$\ell_j(x) = \frac{w_j}{x - x_j} \Bigg/ \sum_{m=0}^{k} \frac{w_m}{x - x_m}.$$
Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial $L(x) = \sum_{j=0}^{k} m_j x^j$, we must invert the Vandermonde matrix $(x_i)^j$ to solve $L(x_i) = y_i$ for the coefficients $m_j$ of $L(x)$. By choosing a better basis, the Lagrange basis $L(x) = \sum_{j=0}^{k} \ell_j(x) y_j$, we merely get the identity matrix $\delta_{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear factors.
Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.
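The contrast can be seen numerically. The brief sketch below (using NumPy; the data are illustrative) solves the Vandermonde system for the monomial coefficients, the step that the Lagrange basis renders trivial:

```python
import numpy as np

nodes = np.array([1.0, 2.0, 3.0])
values = np.array([1.0, 4.0, 9.0])

# Monomial basis: solve the Vandermonde system V m = y for coefficients m.
V = np.vander(nodes, increasing=True)   # row i is [1, x_i, x_i^2]
m = np.linalg.solve(V, values)
print(m)  # [0. 0. 1.] -- i.e. L(x) = x^2

# Lagrange basis: the matrix of basis values l_j(x_i) is the identity,
# so the "coefficients" are just the values y_i themselves.
```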
We wish to interpolate $f(x) = x^2$ over the domain $1 \le x \le 3$ at the three nodes $\{1, 2, 3\}$:

$$x_0 = 1, \quad y_0 = 1; \qquad x_1 = 2, \quad y_1 = 4; \qquad x_2 = 3, \quad y_2 = 9.$$

The node polynomial $\ell$ is

$$\ell(x) = (x - 1)(x - 2)(x - 3).$$

The barycentric weights are

$$w_0 = \frac{1}{(1-2)(1-3)} = \tfrac{1}{2}, \qquad w_1 = \frac{1}{(2-1)(2-3)} = -1, \qquad w_2 = \frac{1}{(3-1)(3-2)} = \tfrac{1}{2}.$$

The Lagrange basis polynomials are

$$\ell_0(x) = \frac{(x-2)(x-3)}{(1-2)(1-3)} = \tfrac{1}{2}(x-2)(x-3),$$
$$\ell_1(x) = \frac{(x-1)(x-3)}{(2-1)(2-3)} = -(x-1)(x-3),$$
$$\ell_2(x) = \frac{(x-1)(x-2)}{(3-1)(3-2)} = \tfrac{1}{2}(x-1)(x-2).$$

The Lagrange interpolating polynomial is:

$$L(x) = 1 \cdot \tfrac{1}{2}(x-2)(x-3) + 4 \cdot \bigl(-(x-1)(x-3)\bigr) + 9 \cdot \tfrac{1}{2}(x-1)(x-2) = x^2.$$

In (second) barycentric form,

$$L(x) = \frac{\dfrac{1/2}{x-1} \cdot 1 + \dfrac{-1}{x-2} \cdot 4 + \dfrac{1/2}{x-3} \cdot 9}{\dfrac{1/2}{x-1} + \dfrac{-1}{x-2} + \dfrac{1/2}{x-3}}.$$
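The worked example can be checked symbolically; here is a brief sketch using SymPy (an illustrative choice, not part of the article):

```python
import sympy as sp

x = sp.symbols('x')
nodes, values = [1, 2, 3], [1, 4, 9]

# Build L(x) = sum_j y_j * l_j(x) directly from the product formula.
L = sum(y_j * sp.prod([(x - x_m) / (x_j - x_m)
                       for x_m in nodes if x_m != x_j])
        for x_j, y_j in zip(nodes, values))
print(sp.expand(L))  # x**2, recovering f exactly
```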
The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.
But, as can be seen from the construction, each time a node $x_k$ changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see above) or Newton polynomials.
Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes. [5]
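A short numerical sketch of this effect, using Runge's classic function $1/(1 + 25x^2)$ on $[-1, 1]$ and NumPy's Polynomial.fit (which interpolates exactly when the degree matches the node count): the maximum error grows with degree on equispaced nodes but shrinks on Chebyshev nodes.

```python
import numpy as np

f = lambda t: 1.0 / (1.0 + 25.0 * t**2)   # Runge's example on [-1, 1]
xs = np.linspace(-1, 1, 1001)

for n in (5, 10, 20):
    equi = np.linspace(-1, 1, n + 1)                             # equispaced
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    for name, nodes in (("equispaced", equi), ("Chebyshev", cheb)):
        p = np.polynomial.Polynomial.fit(nodes, f(nodes), n)
        print(n, name, float(np.max(np.abs(p(xs) - f(xs)))))
```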
The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
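For instance, integrating each basis polynomial for the three equally spaced nodes $\{0, 1, 2\}$ over $[0, 2]$ recovers the Simpson's rule weights; a small SymPy sketch:

```python
from sympy import symbols, integrate, prod

x = symbols('x')
nodes = [0, 1, 2]   # equally spaced nodes on [0, 2]

# Newton-Cotes weights are the integrals of the Lagrange basis polynomials.
weights = [integrate(prod([(x - x_m) / (x_j - x_m)
                           for x_m in nodes if x_m != x_j]), (x, 0, 2))
           for x_j in nodes]
print(weights)  # [1/3, 4/3, 1/3] -- Simpson's rule
```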
When interpolating a given function $f$ by a polynomial of degree $k$ at the nodes $x_0, \ldots, x_k$ we get the remainder $R(x) = f(x) - L(x)$ which can be expressed as [6]

$$R(x) = f[x_0, \ldots, x_k, x] \, \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k,$$
where $f[x_0, \ldots, x_k, x]$ is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in complex domain as

$$R(x) = \frac{\ell(x)}{2\pi i} \oint_C \frac{f(t)}{(t - x)(t - x_0) \cdots (t - x_k)} \, dt = \frac{\ell(x)}{2\pi i} \oint_C \frac{f(t)}{(t - x)\,\ell(t)} \, dt.$$
The remainder can be bounded as

$$|R(x)| \le \frac{(x_k - x_0)^{k+1}}{(k+1)!} \max_{x_0 \le \xi \le x_k} \bigl|f^{(k+1)}(\xi)\bigr|.$$
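A quick numerical sanity check of this bound (illustrative, with $f = \sin$ on $[0, \pi]$ and $k + 1 = 4$ nodes, where $\max |f^{(4)}| = 1$):

```python
import math
import numpy as np

nodes = np.linspace(0.0, np.pi, 4)                        # k = 3
p = np.polynomial.Polynomial.fit(nodes, np.sin(nodes), 3)

xs = np.linspace(0.0, np.pi, 1001)
actual = float(np.max(np.abs(np.sin(xs) - p(xs))))
bound = np.pi**4 / math.factorial(4)                      # (x_k - x_0)^4 / 4!
print(actual, bound)   # the observed error sits well below the bound
```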
Clearly, $R(x)$ is zero at the nodes. To find $R(x)$ at a point $x_p$, define a new function $F(x) = R(x) - \tilde{R}(x) = f(x) - L(x) - \tilde{R}(x)$ and choose $\tilde{R}(x) = C \cdot \prod_{i=0}^{k} (x - x_i)$, where $C$ is the constant we are required to determine for a given $x_p$. We choose $C$ so that $F(x)$ has $k+2$ zeroes (at all nodes and $x_p$) between $x_0$ and $x_k$ (including endpoints). Assuming that $f(x)$ is $(k+1)$-times differentiable, since $L(x)$ and $\tilde{R}(x)$ are polynomials, and therefore are infinitely differentiable, $F(x)$ will be $(k+1)$-times differentiable. By Rolle's theorem, $F^{(1)}(x)$ has $k+1$ zeroes, $F^{(2)}(x)$ has $k$ zeroes, ..., $F^{(k+1)}(x)$ has 1 zero, say $\xi$, with $x_0 < \xi < x_k$. Explicitly writing $F^{(k+1)}(\xi)$:

$$F^{(k+1)}(\xi) = f^{(k+1)}(\xi) - L^{(k+1)}(\xi) - \tilde{R}^{(k+1)}(\xi) = f^{(k+1)}(\xi) - C \cdot (k+1)!$$

(since $L(x)$ has degree $\le k$, $L^{(k+1)} = 0$, and the highest-degree term of $\tilde{R}(x)$ is $C x^{k+1}$, so $\tilde{R}^{(k+1)} = C \cdot (k+1)!$). Since $F^{(k+1)}(\xi) = 0$, the equation can be rearranged as [7]

$$C = \frac{f^{(k+1)}(\xi)}{(k+1)!}.$$

Since $F(x_p) = 0$, we have

$$R(x_p) = \tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!} \prod_{i=0}^{k} (x_p - x_i).$$
The $d$th derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials,

$$L^{(d)}(x) = \sum_{j=0}^{k} y_j \ell_j^{(d)}(x).$$
Recall (see § Definition above) that each Lagrange basis polynomial is

$$\ell_j(x) = \prod_{\substack{m = 0 \\ m \neq j}}^{k} \frac{x - x_m}{x_j - x_m}.$$
The first derivative can be found using the product rule:

$$\ell_j^{(1)}(x) = \sum_{\substack{i = 0 \\ i \neq j}}^{k} \Biggl[ \frac{1}{x_j - x_i} \prod_{\substack{m = 0 \\ m \neq (i, j)}}^{k} \frac{x - x_m}{x_j - x_m} \Biggr].$$
The second derivative is

$$\ell_j^{(2)}(x) = \sum_{\substack{i = 0 \\ i \neq j}}^{k} \frac{1}{x_j - x_i} \Biggl[ \sum_{\substack{m = 0 \\ m \neq (i, j)}}^{k} \Biggl( \frac{1}{x_j - x_m} \prod_{\substack{l = 0 \\ l \neq (i, j, m)}}^{k} \frac{x - x_l}{x_j - x_l} \Biggr) \Biggr].$$
The third derivative is

$$\ell_j^{(3)}(x) = \sum_{\substack{i = 0 \\ i \neq j}}^{k} \frac{1}{x_j - x_i} \Biggl[ \sum_{\substack{m = 0 \\ m \neq (i, j)}}^{k} \frac{1}{x_j - x_m} \Biggl[ \sum_{\substack{n = 0 \\ n \neq (i, j, m)}}^{k} \frac{1}{x_j - x_n} \prod_{\substack{l = 0 \\ l \neq (i, j, m, n)}}^{k} \frac{x - x_l}{x_j - x_l} \Biggr] \Biggr],$$
and likewise for higher derivatives.
Note that all of these formulas for derivatives are invalid at or near a node. A method of evaluating all orders of derivatives of a Lagrange polynomial efficiently at all points of the domain, including the nodes, is to convert the Lagrange polynomial to power-basis form and then evaluate the derivatives.
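A sketch of that approach using NumPy's polynomial class (an illustrative choice, not prescribed by the article), which holds a power-basis representation internally and differentiates it, so evaluation at the nodes is unproblematic:

```python
import numpy as np

nodes = np.array([1.0, 2.0, 3.0])
values = np.array([1.0, 4.0, 9.0])

# Fit the unique interpolant of degree <= k, then differentiate it.
p = np.polynomial.Polynomial.fit(nodes, values, deg=len(nodes) - 1)
dp = p.deriv()       # first derivative
d2p = p.deriv(2)     # second derivative
print(dp(2.0), d2p(2.0))   # 4.0 and 2.0, since L(x) = x^2 here
```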
The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
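A minimal sketch of Lagrange interpolation at $x = 0$ over a prime field GF($p$), as used to recover a Shamir secret from shares; the share values and modulus below are illustrative.

```python
def recover_secret(shares, p):
    """Evaluate the Lagrange interpolant at x = 0 over GF(p).

    shares is a list of (x_j, y_j) pairs with distinct nonzero x_j."""
    secret = 0
    for j, (x_j, y_j) in enumerate(shares):
        num, den = 1, 1
        for m, (x_m, _) in enumerate(shares):
            if m != j:
                num = num * (-x_m) % p           # factor (0 - x_m)
                den = den * (x_j - x_m) % p
        secret += y_j * num * pow(den, -1, p)    # modular inverse, Python 3.8+
    return secret % p

# Shares of f(x) = 42 + 5x over GF(97): f(1) = 47, f(2) = 52.
print(recover_secret([(1, 47), (2, 52)], 97))    # 42
```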