In mathematics, Birkhoff interpolation is an extension of polynomial interpolation. It refers to the problem of finding a polynomial $p(x)$ of degree $d$ such that only certain derivatives have specified values at specified points:

$$p^{(n_i)}(x_i) = y_i \qquad \text{for } i = 1, \ldots, d,$$

where the data points $(x_i, y_i)$ and the nonnegative integers $n_i$ are given. It differs from Hermite interpolation in that it is possible to specify derivatives of $p(x)$ at some points without specifying the lower derivatives or the polynomial itself. The name refers to George David Birkhoff, who first studied the problem in 1906.[1]
In contrast to Lagrange interpolation and Hermite interpolation, a Birkhoff interpolation problem does not always have a unique solution. For instance, there is no quadratic polynomial $p(x)$ such that $p(-1) = p(1) = 0$ and $p'(0) = 1$. On the other hand, the Birkhoff interpolation problem where the values of $p'(-1)$, $p(0)$ and $p'(1)$ are given always has a unique solution.[2]
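These two statements can be checked directly: write a candidate quadratic $p(x) = c_0 + c_1 x + c_2 x^2$ and inspect the resulting $3 \times 3$ collocation matrix, which is singular in the first case and invertible in the second. Below is a minimal numerical sketch of that check; the helper name `collocation_row` is illustrative, not from any library.

```python
import numpy as np

def collocation_row(x, order, degree=2):
    """Row of the collocation matrix: the `order`-th derivative of the
    monomial basis (1, x, ..., x**degree) evaluated at x."""
    row = np.zeros(degree + 1)
    for j in range(order, degree + 1):
        # j-th monomial differentiated `order` times: j!/(j-order)! * x**(j-order)
        row[j] = np.prod(np.arange(j - order + 1, j + 1)) * x ** (j - order)
    return row

# p(-1) = 0, p(1) = 0, p'(0) = 1: the matrix is singular, so no solution here
A = np.array([collocation_row(-1, 0), collocation_row(1, 0), collocation_row(0, 1)])
# p'(-1), p(0), p'(1) given: the matrix is invertible for these nodes
B = np.array([collocation_row(-1, 1), collocation_row(0, 0), collocation_row(1, 1)])

print(np.linalg.det(A))  # 0.0  -> the first problem is degenerate
print(np.linalg.det(B))  # -4.0 -> the second has a unique solution
```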
An important problem in the theory of Birkhoff interpolation is to classify those problems that have a unique solution. Schoenberg[3] formulates the problem as follows. Let $d$ denote the number of conditions (as above) and let $k$ be the number of interpolation points. Given a $k \times d$ matrix $E = (e_{i,j})$, all of whose entries are either $0$ or $1$, such that exactly $d$ entries are $1$, the corresponding problem is to determine $p(x)$ such that

$$p^{(j)}(x_i) = y_{i,j} \qquad \text{for all } (i,j) \text{ with } e_{i,j} = 1.$$

The matrix $E$ is called the incidence matrix; its rows are indexed by the interpolation points $x_1, \ldots, x_k$ and its columns by the derivative orders $j = 0, 1, \ldots, d-1$. For example, the incidence matrices for the two interpolation problems mentioned in the previous paragraph are:

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
Now the question is: Does a Birkhoff interpolation problem with a given incidence matrix have a unique solution for any choice of the interpolation points?
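For concreteness, here is a sketch of how such a problem can be assembled and solved numerically from its incidence matrix, assuming the convention above (rows index points, columns index derivative orders) and a polynomial with as many coefficients as conditions; `birkhoff_solve` is a hypothetical helper, not a library function.

```python
import numpy as np

def birkhoff_solve(E, xs, ys):
    """Solve p^(j)(x_i) = ys[(i, j)] for all (i, j) with E[i][j] == 1.

    Returns the coefficients (c_0, ..., c_{d-1}) of p in the monomial
    basis, where d is the number of ones in E; raises if the collocation
    matrix is singular, i.e. the problem has no unique solution.
    """
    conditions = [(i, j) for i, row in enumerate(E) for j, e in enumerate(row) if e]
    d = len(conditions)                      # d conditions -> degree d - 1
    M = np.zeros((d, d))
    for r, (i, j) in enumerate(conditions):
        for m in range(j, d):                # j-th derivative of x**m at xs[i]
            M[r, m] = np.prod(np.arange(m - j + 1, m + 1)) * xs[i] ** (m - j)
    rhs = np.array([ys[(i, j)] for (i, j) in conditions])
    return np.linalg.solve(M, rhs)

# The uniquely solvable example: p'(-1) = 0, p(0) = 0, p'(1) = 4
E = [[0, 1, 0], [1, 0, 0], [0, 1, 0]]
coeffs = birkhoff_solve(E, xs=[-1.0, 0.0, 1.0],
                        ys={(0, 1): 0.0, (1, 0): 0.0, (2, 1): 4.0})
print(coeffs)  # [0. 2. 1.] -> p(x) = 2x + x**2
```

A singular collocation matrix for some choice of nodes signals exactly the failure of uniqueness that the question asks about.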
The case with $k = 2$ interpolation points was tackled by George Pólya in 1931.[4] Let $M_m$ denote the sum of the entries in the first $m$ columns of the incidence matrix:

$$M_m = \sum_{i=1}^{k} \sum_{j=0}^{m-1} e_{i,j}.$$

Then the Birkhoff interpolation problem with $k = 2$ has a unique solution if and only if $M_m \geq m$ for all $m$. Schoenberg showed that this is a necessary condition for all values of $k$.
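The Pólya condition is easy to test mechanically. A small sketch, assuming the incidence matrix is given as a list of 0-1 rows:

```python
def polya_condition(E):
    """Return True iff M_m >= m for m = 1, ..., number of columns of E."""
    running = 0
    for m, column in enumerate(zip(*E), start=1):
        running += sum(column)      # M_m: number of ones in the first m columns
        if running < m:
            return False
    return True

print(polya_condition([[1, 0, 0], [0, 1, 0], [1, 0, 0]]))  # True
print(polya_condition([[0, 1, 0], [1, 0, 0], [0, 1, 0]]))  # True
print(polya_condition([[0, 0, 1], [0, 1, 1]]))             # False: M_1 = 0 < 1
```

Note that the first matrix passes the test even though the corresponding problem (with nodes $-1, 0, 1$) has no unique solution: with $k = 3$ points the condition is necessary but not sufficient.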
Consider a differentiable function $f(x)$ on $[a,b]$ such that $f(a) = f(b)$. Let us see that there is no Birkhoff interpolation quadratic polynomial $p(x)$ such that $p(a) = f(a)$, $p(b) = f(b)$ and $p'(c) = f'(c)$, where $c = \tfrac{a+b}{2}$. Since $f(a) = f(b)$, any quadratic taking these values at $a$ and $b$ may be written as $p(x) = A(x-c)^2 + B$ (by completing the square), where $A, B$ are merely the interpolation coefficients. The derivative of the interpolation polynomial is $p'(x) = 2A(x-c)$. This implies $p'(c) = 0$, which is absurd, since $f'(c)$ is not necessarily $0$. The incidence matrix is given by:

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$$
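A quick numerical illustration of this argument, with arbitrarily chosen endpoints: every quadratic taking equal values at $a$ and $b$ has a vanishing derivative at the midpoint.

```python
import numpy as np
from numpy.polynomial import Polynomial

a, b = 0.0, 2.0
c = (a + b) / 2
rng = np.random.default_rng(0)
for _ in range(3):
    A, B = rng.normal(size=2)
    p = Polynomial([B + A * c**2, -2 * A * c, A])   # A*(x - c)**2 + B, expanded
    assert np.isclose(p(a), p(b))                   # equal endpoint values...
    assert np.isclose(p.deriv()(c), 0.0)            # ...force p'(c) = 0
```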
Consider a differentiable function $f(x)$ on $[a,b]$, and write $c = \tfrac{a+b}{2}$. Let us see that there is indeed a Birkhoff interpolation quadratic polynomial $p(x)$ such that $p(c) = f(c)$, $p'(a) = f'(a)$ and $p'(b) = f'(b)$. Construct the interpolating polynomial $q(x)$ of $f'(x)$ at the nodes $a$ and $b$, such that $q(a) = f'(a)$ and $q(b) = f'(b)$. Then the polynomial

$$p(x) = f(c) + \int_c^x q(t)\,dt$$

is the Birkhoff interpolating polynomial. The incidence matrix is given by:

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
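A sketch of this construction, taking $f = \sin$ as a stand-in test function; the names `q` and `p` follow the text above.

```python
import numpy as np
from numpy.polynomial import Polynomial

f, fprime = np.sin, np.cos
a, b = 0.0, np.pi / 2
c = (a + b) / 2

# q: linear interpolant of f' at the nodes a and b
slope = (fprime(b) - fprime(a)) / (b - a)
q = Polynomial([fprime(a) - slope * a, slope])
# p(x) = f(c) + integral of q from c to x, so p(c) = f(c) and p' = q
p = q.integ(lbnd=c) + f(c)

assert np.isclose(p(c), f(c))
assert np.isclose(p.deriv()(a), fprime(a))
assert np.isclose(p.deriv()(b), fprime(b))
```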
Given a natural number $N$ and a differentiable function $f(x)$ on $[a,b]$, is there a polynomial $p(x)$ such that $p(a) = f(a)$ and $p'(x_i) = f'(x_i)$ for $i = 1, \ldots, N$, with $a < x_1 < \cdots < x_N < b$? Construct the Lagrange/Newton polynomial $q(x)$ (the same interpolating polynomial, in two different forms of calculating and expressing it) that satisfies $q(x_i) = f'(x_i)$ for $i = 1, \ldots, N$; then the polynomial

$$p(x) = f(a) + \int_a^x q(t)\,dt$$

is the Birkhoff interpolating polynomial satisfying the above conditions. The incidence matrix is given by:

$$\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 1 & 0 & \cdots & 0 \end{pmatrix}.$$
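A sketch of this construction with $N = 3$ and $f = \sin$; the interior nodes are an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import Polynomial

f, fprime = np.sin, np.cos
a, b = 0.0, np.pi
nodes = np.array([0.5, 1.5, 2.5])            # a < x_1 < x_2 < x_3 < b

# q: Lagrange/Newton interpolant of f' at the nodes (degree N - 1)
q = Polynomial(np.polyfit(nodes, fprime(nodes), deg=len(nodes) - 1)[::-1])
# p(x) = f(a) + integral_a^x q(t) dt, so p(a) = f(a) and p'(x_i) = f'(x_i)
p = q.integ(lbnd=a) + f(a)

assert np.isclose(p(a), f(a))
assert np.allclose(p.deriv()(nodes), fprime(nodes))
```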
Given a natural number $N$ and a $2N$ times differentiable function $f(x)$ on $[a,b]$, is there a polynomial $p(x)$ such that $p(a) = f(a)$, $p(b) = f(b)$ and $p^{(2i)}(a) = f^{(2i)}(a)$, $p^{(2i)}(b) = f^{(2i)}(b)$ for $i = 1, \ldots, N$? Construct $p_N(x)$ as the interpolating polynomial of $f^{(2N)}(x)$ at $a$ and $b$, such that $p_N(a) = f^{(2N)}(a)$ and $p_N(b) = f^{(2N)}(b)$. Define then the iterates, for $i = N, \ldots, 1$,

$$p_{i-1}(x) = \int_a^x \!\! \int_a^s p_i(t)\,dt\,ds + f^{(2i-2)}(a) + \left( f^{(2i-2)}(b) - f^{(2i-2)}(a) - \int_a^b \!\! \int_a^s p_i(t)\,dt\,ds \right) \frac{x-a}{b-a},$$

so that $p_{i-1}'' = p_i$ while $p_{i-1}$ matches $f^{(2i-2)}$ at $a$ and $b$. Then $p_0(x)$ is the Birkhoff interpolating polynomial. The incidence matrix is given by:

$$\begin{pmatrix} 1 & 0 & 1 & 0 & \cdots & 1 & 0 \\ 1 & 0 & 1 & 0 & \cdots & 1 & 0 \end{pmatrix}.$$
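A sketch of the iteration as reconstructed above, for $f = \exp$ and $N = 2$; `linear_through` is an illustrative helper. Each step integrates twice and adds the linear correction that restores the endpoint values.

```python
import numpy as np
from numpy.polynomial import Polynomial

a, b, N = 0.0, 1.0, 2
f_derivs = [np.exp, np.exp, np.exp]        # f^(0), f^(2), f^(4) for f = exp

def linear_through(x0, y0, x1, y1):
    """Linear polynomial taking the value y0 at x0 and y1 at x1."""
    slope = (y1 - y0) / (x1 - x0)
    return Polynomial([y0 - slope * x0, slope])

# p_N: linear interpolant of f^(2N) at a and b
p = linear_through(a, f_derivs[N](a), b, f_derivs[N](b))
for i in range(N - 1, -1, -1):
    g = f_derivs[i]                        # f^(2i)
    pp = p.integ(m=2, lbnd=a)              # double integral; pp(a) = pp'(a) = 0
    # linear correction so the endpoint values match f^(2i)(a), f^(2i)(b);
    # being linear, it leaves p'' = pp'' = previous iterate unchanged
    p = pp + linear_through(a, g(a) - pp(a), b, g(b) - pp(b))

# p is now p_0: check all 2N + 2 Birkhoff conditions
for i, g in enumerate(f_derivs):
    d = p.deriv(2 * i) if i else p
    assert np.isclose(d(a), g(a)) and np.isclose(d(b), g(b))
```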