In mathematics, the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of the moduli of smoothness. It was first proved by Hassler Whitney in 1957, [1] and is an important tool in the field of approximation theory for obtaining upper estimates on the errors of best approximation.
Denote the value of the best uniform approximation of a function $f \in C([a,b])$ by algebraic polynomials $P_n$ of degree at most $n$ by

$$E_n(f)_{[a,b]} := \inf_{P_n} \|f - P_n\|_{C([a,b])}.$$
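For instance, in the simplest case $n = 0$ (approximation by constants), the best approximation and its error can be written down explicitly: the optimal constant is the mid-range of $f$, and

$$E_0(f)_{[a,b]} = \tfrac{1}{2}\left(\max_{x \in [a,b]} f(x) - \min_{x \in [a,b]} f(x)\right).$$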
The moduli of smoothness of order $k$ of a function $f \in C([a,b])$ are defined as:

$$\omega_k(t) := \omega_k(t; f; [a,b]) := \sup_{h \in [0,t]} \left\| \Delta_h^k(f; \cdot) \right\|_{C([a,\,b-kh])}, \qquad 0 \le t \le \tfrac{b-a}{k},$$

where $\Delta_h^k(f; x) := \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} f(x + jh)$ is the finite difference of order $k$ with step $h$.
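Two standard properties explain why $\omega_k$ pairs naturally with approximation by polynomials of degree $k-1$. For $k = 1$ it reduces to the ordinary modulus of continuity,

$$\omega_1(t; f; [a,b]) = \sup_{0 \le h \le t}\ \sup_{x \in [a,\,b-h]} |f(x+h) - f(x)|,$$

and $\Delta_h^k$ annihilates algebraic polynomials of degree less than $k$, so that

$$\omega_k(t; f + P; [a,b]) = \omega_k(t; f; [a,b]) \quad \text{for every polynomial } P \text{ of degree at most } k-1.$$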
Theorem: [2] [Whitney, 1957] If $f \in C([a,b])$, then

$$E_{k-1}(f)_{[a,b]} \le W_k\, \omega_k\!\left(\tfrac{b-a}{k}; f; [a,b]\right),$$

where $W_k$ is a constant depending only on $k$. The Whitney constant $W(k)$ is the smallest value of $W_k$ for which the above inequality holds. The theorem is particularly useful when applied on intervals of small length, leading to good estimates on the error of spline approximation.
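As a rough numerical illustration, one can bound $E_{k-1}(f)$ from above by the uniform error of any convenient polynomial (here an ordinary least-squares fit) and approximate the suprema by maxima over a fine grid; the sketch below does this for $f(x) = |x|$ on $[-1,1]$ with $k = 2$.

```python
import math

import numpy as np


def modulus_of_smoothness(f, a, b, k, t, n_h=200, n_x=2000):
    """Approximate omega_k(t; f; [a, b]) by a grid search over h in [0, t] and x."""
    best = 0.0
    for h in np.linspace(0.0, t, n_h):
        if b - k * h < a - 1e-12:          # step too large for the interval
            break
        x = np.linspace(a, b - k * h, n_x)
        # k-th order forward finite difference of f with step h at the points x
        diff = sum((-1) ** (k - j) * math.comb(k, j) * f(x + j * h)
                   for j in range(k + 1))
        best = max(best, float(np.max(np.abs(diff))))
    return best


def best_approx_upper_bound(f, a, b, degree, n_x=2000):
    """Upper bound on E_degree(f): grid sup-norm error of a least-squares fit."""
    x = np.linspace(a, b, n_x)
    y = f(x)
    coeffs = np.polyfit(x, y, degree)
    return float(np.max(np.abs(y - np.polyval(coeffs, x))))


if __name__ == "__main__":
    f = np.abs                      # f(x) = |x|: continuous, not differentiable at 0
    a, b, k = -1.0, 1.0, 2
    E = best_approx_upper_bound(f, a, b, k - 1)
    w = modulus_of_smoothness(f, a, b, k, (b - a) / k)
    print(f"E_{k - 1}(f) <= {E:.4f}, omega_{k}((b-a)/{k}) = {w:.4f}, ratio <= {E / w:.4f}")
```

For this example the fitted line is close to the constant $\tfrac{1}{2}$, so the computed upper estimate of $E_1(f)$ is about $0.5$, while $\omega_2(1; f; [-1,1]) = 2$; the resulting ratio of about $0.25$ is consistent with the theorem.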
The original proof given by Whitney follows an analytic argument which utilizes the properties of moduli of smoothness. However, it can also be proved in a much shorter way using Peetre's K-functionals. [3]
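Here the relevant Peetre K-functional of $f \in C([a,b])$, for the pair $\big(C([a,b]), C^k([a,b])\big)$, is

$$K_k(f, t^k) := \inf_{g \in C^k([a,b])} \left\{ \|f - g\|_{C([a,b])} + t^k\, \|g^{(k)}\|_{C([a,b])} \right\},$$

and a standard result of approximation theory states that it is equivalent to the modulus of smoothness: there exist constants $c_1(k), c_2(k) > 0$ such that $c_1(k)\,\omega_k(t; f; [a,b]) \le K_k(f, t^k) \le c_2(k)\,\omega_k(t; f; [a,b])$ for $0 < t \le \tfrac{b-a}{k}$. Only the upper estimate is needed below.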
Let:

$$G(x) := f(x) - L(x; f; x_0, \dots, x_{k-1}),$$

where $L(x; f; x_0, \dots, x_{k-1})$ is the Lagrange polynomial for $f$ at $k$ equally spaced nodes $x_0 < x_1 < \dots < x_{k-1}$ in $[a,b]$. Since $L(\cdot; f; x_0, \dots, x_{k-1})$ is an algebraic polynomial of degree at most $k-1$,

$$E_{k-1}(f)_{[a,b]} \le \|G\|_{C([a,b])}.$$

Now fix some $g \in C^k([a,b])$ and set $t := \tfrac{b-a}{k}$. Decomposing

$$G = \big((f-g) - L(\cdot;\, f-g;\, x_0, \dots, x_{k-1})\big) + \big(g - L(\cdot;\, g;\, x_0, \dots, x_{k-1})\big),$$

and using that the interpolation operator has norm $\lambda_k$ (the Lebesgue constant of the nodes, depending only on $k$) together with the classical error bound for Lagrange interpolation of a $C^k$ function, one obtains:

$$\|G\|_{C([a,b])} \le (1+\lambda_k)\,\|f-g\|_{C([a,b])} + \frac{(b-a)^k}{k!}\,\|g^{(k)}\|_{C([a,b])}.$$

Therefore:

$$E_{k-1}(f)_{[a,b]} \le c(k)\left(\|f-g\|_{C([a,b])} + t^k\,\|g^{(k)}\|_{C([a,b])}\right), \qquad c(k) := \max\Big\{1+\lambda_k,\ \frac{k^k}{k!}\Big\}.$$

And since we have $K_k(f, t^k) \le c_2(k)\,\omega_k(t; f; [a,b])$ (a property of moduli of smoothness), that is, since $g$ can always be chosen in such a way that

$$\|f-g\|_{C([a,b])} + t^k\,\|g^{(k)}\|_{C([a,b])} \le c_2(k)\,\omega_k\!\left(\tfrac{b-a}{k}; f; [a,b]\right),$$

this completes the proof.
It is important to have sharp estimates of the Whitney constants. It is easily shown that $W(1) = \frac{1}{2}$, and it was first proved by Burkill (1952) that $W(2) \le \frac{1}{2}$, who conjectured that $W(k) \le \frac{1}{2}$ for all $k$. Whitney was also able to prove that $W(2) = \frac{1}{2}$, together with numerical estimates for the next few Whitney constants. [2]
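The value $W(1) = \frac{1}{2}$ follows directly from the formula for approximation by constants:

$$E_0(f)_{[a,b]} = \tfrac{1}{2}\left(\max_{[a,b]} f - \min_{[a,b]} f\right) = \tfrac{1}{2}\,\omega_1\!\left(b-a;\, f;\, [a,b]\right),$$

and the factor $\tfrac{1}{2}$ cannot be decreased, as the example $f(x) = x$ shows.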
In 1964, Brudnyi was able to obtain the estimate $W(k) = O\!\left(k^{2k}\right)$, and in 1982, Sendov proved that $W(k) \le (k+1)k^k$. Then, in 1985, Ivanov and Takev proved that $W(k) = O(k \ln k)$, and Binev proved that $W(k) = O(k)$. Sendov conjectured that $W(k) \le 1$ for all $k$, and in 1985 was able to prove that the Whitney constants are bounded above by an absolute constant, that is, $W(k) \le 6$ for all $k$. Kryakin, Gilewicz, and Shevchuk (2002) [4] were able to show that $W(k) \le 2$ for $k \le 82{,}000$, and that $W(k) \le 2 + \frac{1}{e^2}$ for all $k$.