In algebra, a multilinear polynomial [1] is a multivariate polynomial that is linear (meaning affine) in each of its variables separately, but not necessarily simultaneously. It is a polynomial in which no variable occurs to a power of 2 or higher; that is, each monomial is a constant times a product of distinct variables. For example, $f(x,y,z) = 3xy + 2.5y - 7z$ is a multilinear polynomial of degree 2 (because of the monomial $3xy$), whereas $f(x,y,z) = x^2 + 4y$ is not. The degree of a multilinear polynomial is the maximum number of distinct variables occurring in any monomial.
Multilinear polynomials can be understood as a multilinear map (specifically, a multilinear form) applied to the vectors [1 x], [1 y], etc. The general form can be written as a tensor contraction:
$$f(x_1, \ldots, x_n) = \sum_{a_1=0}^{1} \cdots \sum_{a_n=0}^{1} c_{a_1 \cdots a_n} \, x_1^{a_1} \cdots x_n^{a_n}.$$
For example, in two variables:
$$f(x,y) = c_{00} + c_{10}\,x + c_{01}\,y + c_{11}\,xy = \begin{bmatrix} 1 & x \end{bmatrix} \begin{bmatrix} c_{00} & c_{01} \\ c_{10} & c_{11} \end{bmatrix} \begin{bmatrix} 1 \\ y \end{bmatrix}.$$
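To make the contraction concrete, here is a minimal NumPy sketch (illustrative coefficient values, not from the source) evaluating the two-variable case as the matrix product above:

```python
import numpy as np

# Coefficient matrix: C[i, j] multiplies x^i * y^j, so
# f(x, y) = c00 + c10*x + c01*y + c11*x*y = [1 x] @ C @ [1 y]^T.
C = np.array([[1.0, 2.0],   # c00, c01
              [3.0, 4.0]])  # c10, c11

def f(x, y):
    """Evaluate the bilinear polynomial as a tensor contraction."""
    return np.array([1.0, x]) @ C @ np.array([1.0, y])

print(f(2.0, 5.0))  # 1 + 3*2 + 2*5 + 4*2*5 = 57.0
```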
A multilinear polynomial $f$ is linear (affine) when varying only one variable, say $x_1$:
$$f(x_1, x_2, \ldots, x_n) = x_1 \, a(x_2, \ldots, x_n) + b(x_2, \ldots, x_n),$$
where $a$ and $b$ do not depend on $x_1$. Note that $b$ is generally not zero, so $f$ is linear in the "shaped like a line" sense, but not in the "directly proportional" sense of a multilinear map.
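To see the decomposition explicitly in the two-variable case from above, collect the terms containing $x$:

$$f(x,y) = c_{00} + c_{10}x + c_{01}y + c_{11}xy = x\underbrace{(c_{10} + c_{11}y)}_{a(y)} + \underbrace{(c_{00} + c_{01}y)}_{b(y)}.$$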
All repeated second partial derivatives are zero:
$$\frac{\partial^2 f}{\partial x_i^2} = 0 \quad \text{for all } i.$$
In other words, its Hessian matrix is a symmetric hollow matrix.
In particular, the Laplacian $\nabla^2 f = \sum_i \partial^2 f / \partial x_i^2 = 0$, so $f$ is a harmonic function. This implies that $f$ has maxima and minima only on the boundary of the domain.
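A quick SymPy check of these two properties on a sample multilinear polynomial (a sketch, not from the source):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# A sample multilinear polynomial: each monomial uses distinct variables.
f = 3*x*y + 2*y*z - 7*z + x*y*z + 5

# Every repeated second partial vanishes (the Hessian diagonal is hollow) ...
print([sp.diff(f, v, 2) for v in (x, y, z)])     # [0, 0, 0]
# ... so their sum, the Laplacian, is zero: f is harmonic.
print(sum(sp.diff(f, v, 2) for v in (x, y, z)))  # 0
```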
More generally, every restriction of $f$ to a subset of its coordinates is also multilinear, so $\nabla^2 f = 0$ still holds when one or more variables are fixed. In other words, $f$ is harmonic on every "slice" of the domain along coordinate axes.
When the domain is rectangular in the coordinate axes (e.g. a hypercube), $f$ will have maxima and minima only on the vertices of the domain, i.e. the finite set of points with minimal and maximal coordinate values. The value of the function on these points completely determines the function, since the value on the edges of the boundary can be found by linear interpolation, and the value on the rest of the boundary and the interior is fixed by Laplace's equation, $\nabla^2 f = 0$. [1]
The value of the polynomial at an arbitrary point can be found by repeated linear interpolation along each coordinate axis. Equivalently, it is a weighted mean of the vertex values, where the weights are the Lagrange interpolation polynomials. These weights also constitute a set of generalized barycentric coordinates for the hyperrectangle. Geometrically, the point divides the domain into $2^n$ smaller hyperrectangles, and the weight of each vertex is the (fractional) volume of the hyperrectangle opposite it.
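The following sketch (assumed data layout and helper names, not from the source) evaluates the interpolant by repeated linear interpolation, collapsing one axis at a time:

```python
def lerp(lo, hi, t):
    """Linear interpolation between lo and hi for t in [0, 1]."""
    return (1.0 - t) * lo + t * hi

def multilinear_interpolate(values, point, bounds):
    """Evaluate the multilinear interpolant at `point`.

    values: dict mapping each vertex, written as a tuple of 0/1 flags
            (0 = lower bound, 1 = upper bound per axis), to its value.
    point:  coordinates (x_1, ..., x_n) inside the hyperrectangle.
    bounds: [(a_1, b_1), ..., (a_n, b_n)].
    """
    vals = dict(values)
    for axis in range(len(point) - 1, -1, -1):
        a, b = bounds[axis]
        t = (point[axis] - a) / (b - a)  # fractional position on this axis
        # Merge vertex pairs that differ only in the last remaining axis.
        vals = {v[:-1]: lerp(vals[v[:-1] + (0,)], vals[v[:-1] + (1,)], t)
                for v in vals if v[-1] == 0}
    return vals[()]

# Bilinear example on [0,1]^2 for f(x,y) = 1 + 3x + 2y + 4xy:
corners = {(0, 0): 1.0, (1, 0): 4.0, (0, 1): 3.0, (1, 1): 10.0}
print(multilinear_interpolate(corners, (0.5, 0.5), [(0, 1), (0, 1)]))  # 4.5
```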
Algebraically, the multilinear interpolant on the hyperrectangle $[a_1,b_1] \times \cdots \times [a_n,b_n]$ is:
$$f(\vec{x}) = \sum_{\text{vertices } \vec{v}} f(\vec{v}) \prod_{i=1}^{n} \frac{x_i - \tilde{v}_i}{v_i - \tilde{v}_i},$$
where the sum is taken over the $2^n$ vertices $\vec{v}$, and $\tilde{v}$ denotes the vertex opposite $\vec{v}$. Equivalently,
$$f(\vec{x}) = \frac{1}{V} \sum_{\text{vertices } \vec{v}} f(\vec{v}) \prod_{i=1}^{n} \left| x_i - \tilde{v}_i \right|,$$
where $V = \prod_i (b_i - a_i)$ is the volume of the hyperrectangle.
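The closed-form weights give the same value as repeated interpolation; a sketch using the same illustrative data as above:

```python
from itertools import product

def interpolate_via_weights(values, point, bounds):
    """Weighted mean of vertex values; each vertex's weight is the
    fractional volume of the sub-hyperrectangle opposite it."""
    volume = 1.0
    for a, b in bounds:
        volume *= b - a
    total = 0.0
    for flags in product((0, 1), repeat=len(bounds)):
        weight = 1.0
        for x, (a, b), flag in zip(point, bounds, flags):
            opposite = a if flag == 1 else b  # coordinate of opposite vertex
            weight *= abs(x - opposite)
        total += values[flags] * weight
    return total / volume

corners = {(0, 0): 1.0, (1, 0): 4.0, (0, 1): 3.0, (1, 1): 10.0}
print(interpolate_via_weights(corners, (0.5, 0.5), [(0, 1), (0, 1)]))  # 4.5
```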
The value at the center is the arithmetic mean of the values at the vertices, which is also the mean over the domain boundary and the mean over the interior. The components of the gradient at the center are proportional to the balance of the vertex values along each coordinate axis.
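For the unit hypercube this mean-value statement reads:

$$f\left(\tfrac{1}{2}, \ldots, \tfrac{1}{2}\right) = \frac{1}{2^n} \sum_{\vec{v} \in \{0,1\}^n} f(\vec{v}),$$

since every vertex weight equals $2^{-n}$ at the center.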
The vertex values and the coefficients of the polynomial are related by a linear transformation (specifically, a Möbius transform if the domain is the unit hypercube $\{0,1\}^n$, and a Walsh–Hadamard–Fourier transform if the domain is the symmetric hypercube $\{-1,1\}^n$).
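On the unit hypercube this relationship is the subset-sum (zeta) transform and its Möbius inverse; a sketch with illustrative function names (not from the source):

```python
def coeffs_to_vertex_values(c, n):
    """Zeta transform: evaluate at every 0/1 vertex. f[v] is the sum of
    coefficients c[S] over all monomials S that are subsets of v
    (coefficients and vertices are indexed by n-bit masks)."""
    f = list(c)
    for i in range(n):
        for mask in range(1 << n):
            if mask & (1 << i):
                f[mask] += f[mask ^ (1 << i)]
    return f

def vertex_values_to_coeffs(f, n):
    """Moebius transform: recover monomial coefficients from vertex values."""
    c = list(f)
    for i in range(n):
        for mask in range(1 << n):
            if mask & (1 << i):
                c[mask] -= c[mask ^ (1 << i)]
    return c

# f(x,y) = 1 + 3x + 2y + 4xy, coefficients indexed by bitmask (bit 0 = x):
c = [1.0, 3.0, 2.0, 4.0]
v = coeffs_to_vertex_values(c, 2)        # [1.0, 4.0, 3.0, 10.0]
print(v, vertex_values_to_coeffs(v, 2))  # round-trips to [1.0, 3.0, 2.0, 4.0]
```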
Multilinear polynomials are the interpolants of multilinear or n-linear interpolation on a rectangular grid, a generalization of linear interpolation, bilinear interpolation and trilinear interpolation to an arbitrary number of variables. This is a specific form of multivariate interpolation, not to be confused with piecewise linear interpolation. The resulting polynomial is not a linear function of the coordinates (its degree can be higher than 1), but it is a linear function of the fitted data values.
The determinant, permanent and other immanants of a matrix are homogeneous multilinear polynomials in the elements of the matrix (and also multilinear forms in the rows or columns).
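For instance, in the $2 \times 2$ case:

$$\det\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11}a_{22} - a_{12}a_{21},$$

a homogeneous multilinear polynomial of degree 2: each monomial is a product of distinct entries, and the expression is linear in each row and in each column separately.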
The multilinear polynomials in $n$ variables form a $2^n$-dimensional vector space; the monomials $\prod_{i \in S} x_i$ for $S \subseteq \{1, \ldots, n\}$ form a basis, which is also the basis used in the Fourier analysis of (pseudo-)Boolean functions. Every (pseudo-)Boolean function can be uniquely expressed as a multilinear polynomial (up to a choice of domain and codomain).
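Standard two-variable illustrations (easily verified on the four vertices of $\{0,1\}^2$):

$$\mathrm{AND}(x,y) = xy, \qquad \mathrm{OR}(x,y) = x + y - xy, \qquad \mathrm{XOR}(x,y) = x + y - 2xy.$$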
Multilinear polynomials are important in the study of polynomial identity testing. [2]