In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial expressed as a linear combination of Bernstein basis polynomials. The idea is named after mathematician Sergei Natanovich Bernstein.
Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1], became important in the form of Bézier curves.
A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm.
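To make the evaluation step concrete, here is a minimal Python sketch of de Casteljau's algorithm for a polynomial given by its Bernstein coefficients on [0, 1]; the function name and the example coefficients are illustrative, not from the original text.

```python
def de_casteljau(beta, x):
    """Evaluate the Bernstein-form polynomial with coefficients `beta`
    at a point x in [0, 1] by repeated linear interpolation."""
    b = list(beta)        # working copy of the Bernstein coefficients
    n = len(b) - 1        # degree of the polynomial
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            # Each step forms a convex combination of neighbours,
            # which is the source of the algorithm's numerical stability.
            b[i] = (1 - x) * b[i] + x * b[i + 1]
    return b[0]

# Example: coefficients (0, 0, 1) give B(x) = b_{2,2}(x) = x^2.
print(de_casteljau([0.0, 0.0, 1.0], 0.5))  # prints 0.25
```

Because every intermediate value is a convex combination of the inputs, no cancellation occurs for x in [0, 1].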
The n + 1 Bernstein basis polynomials of degree n are defined as

$$b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu}, \qquad \nu = 0, \ldots, n,$$

where $\tbinom{n}{\nu}$ is a binomial coefficient.
So, for example, $b_{2,5}(x) = \tbinom{5}{2} x^{2} (1-x)^{3} = 10x^{2}(1-x)^{3}$.
The first few Bernstein basis polynomials for blending 1, 2, 3 or 4 values together are:

$$b_{0,0}(x) = 1,$$
$$b_{0,1}(x) = 1 - x, \quad b_{1,1}(x) = x,$$
$$b_{0,2}(x) = (1-x)^{2}, \quad b_{1,2}(x) = 2x(1-x), \quad b_{2,2}(x) = x^{2},$$
$$b_{0,3}(x) = (1-x)^{3}, \quad b_{1,3}(x) = 3x(1-x)^{2}, \quad b_{2,3}(x) = 3x^{2}(1-x), \quad b_{3,3}(x) = x^{3}.$$
The Bernstein basis polynomials of degree n form a basis for the vector space of polynomials of degree at most n with real coefficients.
A linear combination of Bernstein basis polynomials

$$B_{n}(x) = \sum_{\nu=0}^{n} \beta_{\nu} b_{\nu,n}(x)$$

is called a Bernstein polynomial or polynomial in Bernstein form of degree n. [1] The coefficients $\beta_{\nu}$ are called Bernstein coefficients or Bézier coefficients.
The first few Bernstein basis polynomials from above in monomial form are:

$$b_{0,0}(x) = 1,$$
$$b_{0,1}(x) = 1 - x, \quad b_{1,1}(x) = x,$$
$$b_{0,2}(x) = 1 - 2x + x^{2}, \quad b_{1,2}(x) = 2x - 2x^{2}, \quad b_{2,2}(x) = x^{2},$$
$$b_{0,3}(x) = 1 - 3x + 3x^{2} - x^{3}, \quad b_{1,3}(x) = 3x - 6x^{2} + 3x^{3}, \quad b_{2,3}(x) = 3x^{2} - 3x^{3}, \quad b_{3,3}(x) = x^{3}.$$
The Bernstein basis polynomials have the following properties:

- $b_{\nu,n}(x) = 0$ if $\nu < 0$ or $\nu > n$.
- $b_{\nu,n}(x) \ge 0$ for $x \in [0, 1]$.
- $b_{\nu,n}(1 - x) = b_{n-\nu,n}(x)$ (symmetry).
- $b_{\nu,n}(0) = \delta_{\nu,0}$ and $b_{\nu,n}(1) = \delta_{\nu,n}$, where $\delta$ is the Kronecker delta.
- The basis polynomials of degree n form a partition of unity: $\sum_{\nu=0}^{n} b_{\nu,n}(x) = 1$.
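As a quick numerical check of these properties, the following Python sketch (the function name is ours, chosen for illustration) evaluates the basis straight from the definition and verifies the partition of unity, symmetry, and non-negativity at a sample point.

```python
from math import comb

def bernstein(nu, n, x):
    """Bernstein basis polynomial b_{nu,n}(x), straight from the definition."""
    return comb(n, nu) * x**nu * (1 - x)**(n - nu)

n, x = 5, 0.3
# Partition of unity: the degree-n basis polynomials sum to 1.
assert abs(sum(bernstein(nu, n, x) for nu in range(n + 1)) - 1.0) < 1e-12
# Symmetry: b_{nu,n}(1 - x) = b_{n-nu,n}(x).
assert all(abs(bernstein(nu, n, 1 - x) - bernstein(n - nu, n, x)) < 1e-12
           for nu in range(n + 1))
# Non-negativity on [0, 1].
assert all(bernstein(nu, n, x) >= 0 for nu in range(n + 1))
```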
Let ƒ be a continuous function on the interval [0, 1]. Consider the Bernstein polynomial

$$B_{n}(f)(x) = \sum_{\nu=0}^{n} f\left(\frac{\nu}{n}\right) b_{\nu,n}(x).$$

It can be shown that

$$\lim_{n\to\infty} B_{n}(f)(x) = f(x)$$

uniformly on the interval [0, 1]. [4] [1] [5] [6]
Bernstein polynomials thus provide one way to prove the Weierstrass approximation theorem that every real-valued continuous function on a real interval [a, b] can be uniformly approximated by polynomial functions over $\mathbb{R}$. [7]
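The convergence statement above can be observed numerically. The following Python sketch (using the operator $B_n$ as defined above; `bernstein_poly` is an illustrative name) approximates f(x) = sin(πx) and prints the maximum error over a grid; the error shrinks as n grows, though only at the slow O(1/n) rate typical of Bernstein approximation of smooth functions.

```python
from math import comb, sin, pi

def bernstein_poly(f, n, x):
    """Evaluate B_n(f)(x) = sum_{nu=0}^{n} f(nu/n) * b_{nu,n}(x)."""
    return sum(f(nu / n) * comb(n, nu) * x**nu * (1 - x)**(n - nu)
               for nu in range(n + 1))

f = lambda t: sin(pi * t)
for n in (10, 100, 1000):
    # Sup-norm error of B_n(f) against f, sampled on a fine grid.
    err = max(abs(bernstein_poly(f, n, k / 200) - f(k / 200))
              for k in range(201))
    print(n, err)  # the error decreases roughly like 1/n
```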
A more general statement for a function with continuous kth derivative is

$${\left\| B_{n}(f)^{(k)} \right\|}_{\infty} \le \frac{(n)_{k}}{n^{k}} \left\| f^{(k)} \right\|_{\infty} \quad \text{and} \quad \left\| f^{(k)} - B_{n}(f)^{(k)} \right\|_{\infty} \to 0,$$

where additionally

$$\frac{(n)_{k}}{n^{k}} = \left(1 - \frac{0}{n}\right)\left(1 - \frac{1}{n}\right) \cdots \left(1 - \frac{k-1}{n}\right)$$

is an eigenvalue of $B_{n}$; the corresponding eigenfunction is a polynomial of degree k.
This proof follows Bernstein's original proof of 1912. [8] See also Feller (1966) or Koralov & Sinai (2007). [9] [5]
We will first give intuition for Bernstein's original proof. A continuous function on a compact interval must be uniformly continuous. Thus, the value of any continuous function can be uniformly approximated by its value on some finite net of points in the interval. This consideration renders the approximation theorem intuitive, given that polynomials should be flexible enough to match (or nearly match) a finite number of pairs $(x, f(x))$. To do so, we might (1) construct a function close to $f$ on a lattice, and then (2) smooth out the function outside the lattice to make a polynomial.
The probabilistic proof below simply provides a constructive method to create a polynomial which is approximately equal to $f$ on such a point lattice, given that "smoothing out" a function is not always trivial. Taking the expectation of a random variable with a simple distribution is a common way to smooth. Here, we take advantage of the fact that Bernstein polynomials look like binomial expectations. We split the interval into a lattice of n + 1 discrete values. Then, to evaluate any f(x), we evaluate f at one of the lattice points close to x, randomly chosen by the binomial distribution. The expectation of this approximation technique is polynomial, as it is the expectation of a function of a binomial random variable. The proof below illustrates that this achieves a uniform approximation of f. The crux of the proof is to (1) justify replacing an arbitrary point with a binomially chosen lattice point by concentration properties of a binomial distribution, and (2) justify the inference from $K/n \approx x$ to $f(K/n) \approx f(x)$ by uniform continuity.
Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. Then we have the expected value $\operatorname{E}\left[\frac{K}{n}\right] = x$ and

$$P(K = \nu) = \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = b_{\nu,n}(x).$$
By the weak law of large numbers of probability theory,

$$\lim_{n\to\infty} P\left(\left|\frac{K}{n} - x\right| > \delta\right) = 0$$

for every δ > 0. Moreover, this relation holds uniformly in x, which can be seen from its proof via Chebyshev's inequality, taking into account that the variance of K/n, equal to x(1−x)/n, is bounded from above by 1/(4n) irrespective of x:

$$P\left(\left|\frac{K}{n} - x\right| > \delta\right) \le \frac{x(1-x)}{n\delta^{2}} \le \frac{1}{4n\delta^{2}}.$$
Because ƒ, being continuous on a closed bounded interval, must be uniformly continuous on that interval, one infers a statement of the form

$$\lim_{n\to\infty} P\left(\left|f\left(\frac{K}{n}\right) - f(x)\right| > \varepsilon\right) = 0$$

uniformly in x for each ε > 0. Taking into account that ƒ is bounded (on the given interval) one finds that

$$\lim_{n\to\infty} \operatorname{E}\left[\left|f\left(\frac{K}{n}\right) - f(x)\right|\right] = 0$$

uniformly in x. To justify this statement, we use a common method in probability theory to convert from closeness in probability to closeness in expectation. One splits the expectation of $\left|f\left(\frac{K}{n}\right) - f(x)\right|$ into two parts, based on whether or not $\left|f\left(\frac{K}{n}\right) - f(x)\right| \le \varepsilon$. In the part where the difference does not exceed ε, the expectation clearly cannot exceed ε. In the other part, the difference still cannot exceed 2M, where M is an upper bound for |ƒ(x)| (since uniformly continuous functions on a compact interval are bounded). However, by our 'closeness in probability' statement, this event has probability no greater than ε for large n. Thus, this part of the expectation contributes no more than 2M times ε. Then the total expectation is no more than ε + 2Mε, which can be made arbitrarily small by choosing a small ε.
Finally, one observes that the absolute value of the difference between expectations never exceeds the expectation of the absolute value of the difference, a consequence of Hölder's inequality. Thus, using the above expectation, we see that (uniformly in x)

$$\lim_{n\to\infty} \left| \operatorname{E}\left[f\left(\frac{K}{n}\right)\right] - \operatorname{E}[f(x)] \right| \le \lim_{n\to\infty} \operatorname{E}\left[\left|f\left(\frac{K}{n}\right) - f(x)\right|\right] = 0.$$
Noting that our randomness was over K while x is constant, the expectation of f(x) is just equal to f(x). But then we have shown that $\operatorname{E}_{x}\left[f\left(\frac{K}{n}\right)\right]$ converges to f(x). Then we will be done if $\operatorname{E}_{x}\left[f\left(\frac{K}{n}\right)\right]$ is a polynomial in x (the subscript reminding us that x controls the distribution of K). Indeed it is:

$$\operatorname{E}_{x}\left[f\left(\frac{K}{n}\right)\right] = \sum_{\nu=0}^{n} f\left(\frac{\nu}{n}\right) P(K = \nu) = \sum_{\nu=0}^{n} f\left(\frac{\nu}{n}\right) \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = B_{n}(f)(x).$$
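To illustrate the identity just derived, a small Monte Carlo experiment (a sketch under our own naming; none of this code is from the source) samples K from the binomial distribution and compares the empirical mean of f(K/n) with the Bernstein polynomial $B_n(f)(x)$.

```python
import random
from math import comb

def bernstein_poly(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda t: t * t
n, x, trials = 20, 0.3, 200_000
# Draw K ~ Binomial(n, x) by summing Bernoulli trials, then average f(K/n).
mc = sum(f(sum(random.random() < x for _ in range(n)) / n)
         for _ in range(trials)) / trials
# The empirical mean approximates E_x[f(K/n)], i.e. B_n(f)(x).
print(mc, bernstein_poly(f, n, x))  # the two values agree to a few decimals
```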
In the above proof, recall that convergence in each limit involving f depends on the uniform continuity of f, which implies a rate of convergence dependent on f's modulus of continuity ω. It also depends on M, the absolute bound of the function, although this can be bypassed if one bounds ω and the interval size. Thus, the approximation only holds uniformly across x for a fixed f, but one can readily extend the proof to uniformly approximate a set of functions with a set of Bernstein polynomials in the context of equicontinuity.
The probabilistic proof can also be rephrased in an elementary way, using the underlying probabilistic ideas but proceeding by direct verification: [10] [6] [11] [12] [13]
The following identities can be verified:

$$\sum_{k} \binom{n}{k} x^{k} (1-x)^{n-k} = 1 \tag{1}$$
$$\sum_{k} \frac{k}{n} \binom{n}{k} x^{k} (1-x)^{n-k} = x \tag{2}$$
$$\sum_{k} \left(x - \frac{k}{n}\right)^{2} \binom{n}{k} x^{k} (1-x)^{n-k} = \frac{x(1-x)}{n} \tag{3}$$
In fact, by the binomial theorem

$$(1 + t)^{n} = \sum_{k} \binom{n}{k} t^{k},$$

and this equation can be applied twice to $t \frac{d}{dt}$. The identities (1), (2), and (3) follow easily using the substitution $t = \frac{x}{1-x}$.
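These identities are also easy to check symbolically; the following sketch uses the third-party sympy library (an assumption of ours, not part of the original article) to verify (1), (2), and (3) for a fixed degree.

```python
import sympy as sp

n = 8                                   # check the identities at a fixed degree
x = sp.symbols('x')
b = [sp.binomial(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)]

# Identity (1): the basis polynomials sum to 1.
assert sp.expand(sum(b)) == 1
# Identity (2): sum_k (k/n) b_{k,n}(x) = x.
assert sp.expand(sum(sp.Rational(k, n) * b[k] for k in range(n + 1)) - x) == 0
# Identity (3): sum_k (x - k/n)^2 b_{k,n}(x) = x(1-x)/n.
assert sp.expand(sum((x - sp.Rational(k, n))**2 * b[k] for k in range(n + 1))
                 - x * (1 - x) / n) == 0
```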
Within these three identities, use the above basis polynomial notation

$$b_{k,n}(x) = \binom{n}{k} x^{k} (1-x)^{n-k},$$

and let

$$f_{n}(x) = \sum_{k} f\left(\frac{k}{n}\right) b_{k,n}(x).$$
Thus, by identity (1),

$$f_{n}(x) - f(x) = \sum_{k} \left[ f\left(\frac{k}{n}\right) - f(x) \right] b_{k,n}(x),$$

so that

$$|f_{n}(x) - f(x)| \le \sum_{k} \left| f\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x).$$
Since f is uniformly continuous, given $\varepsilon > 0$, there is a $\delta > 0$ such that $|f(a) - f(b)| < \varepsilon$ whenever $|a - b| < \delta$. Moreover, by continuity, $M = \sup |f| < \infty$. But then

$$|f_{n}(x) - f(x)| \le \sum_{|k/n - x| < \delta} \left| f\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x) + \sum_{|k/n - x| \ge \delta} \left| f\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x).$$
The first sum is less than ε. On the other hand, by identity (3) above, and since $|k/n - x| \ge \delta$, the second sum is bounded by $2M$ times

$$\sum_{|k/n - x| \ge \delta} b_{k,n}(x) \le \sum_{k} \frac{(x - k/n)^{2}}{\delta^{2}} b_{k,n}(x) = \frac{x(1-x)}{\delta^{2} n} \le \frac{1}{4\delta^{2} n}.$$
It follows that the polynomials $f_{n}$ tend to f uniformly.
Bernstein polynomials can be generalized to k dimensions – the resulting polynomials have the form $B_{i_1}(x_1)\, B_{i_2}(x_2) \cdots B_{i_k}(x_k)$. [1] In the simplest case only products of the unit interval [0, 1] are considered; but, using affine transformations of the line, Bernstein polynomials can also be defined for products $[a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_k, b_k]$. For a continuous function f on the k-fold product of the unit interval, the proof that $f(x_1, x_2, \ldots, x_k)$ can be uniformly approximated by

$$B_{n}(f)(x_1, \ldots, x_k) = \sum_{i_1=0}^{n} \cdots \sum_{i_k=0}^{n} f\left(\frac{i_1}{n}, \ldots, \frac{i_k}{n}\right) b_{i_1,n}(x_1) \cdots b_{i_k,n}(x_k)$$

is a straightforward extension of Bernstein's proof in one dimension. [14]
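As a sketch of the two-dimensional case (k = 2), with illustrative naming of our own:

```python
from math import comb

def bernstein_2d(f, n, x, y):
    """Two-dimensional Bernstein approximation of f on [0,1] x [0,1]."""
    def b(k, t):
        return comb(n, k) * t**k * (1 - t)**(n - k)
    return sum(f(i / n, j / n) * b(i, x) * b(j, y)
               for i in range(n + 1) for j in range(n + 1))

f = lambda u, v: u * v * (1 - u)
print(bernstein_2d(f, 50, 0.4, 0.7), f(0.4, 0.7))  # close for moderate n
```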