In mathematics, the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of its moduli of smoothness. It was first proved by Hassler Whitney in 1957,[1] and is an important tool in approximation theory for obtaining upper estimates on the errors of best approximation.
Denote the value of the best uniform approximation of a function $f \in C[a,b]$ by algebraic polynomials $P_n$ of degree at most $n$ by
$$E_n(f) := \inf_{P_n} \| f - P_n \|_{C[a,b]}.$$
The moduli of smoothness of order $k$ of a function $f \in C[a,b]$ are defined as:
$$\omega_k(t) := \omega_k(t; f; [a,b]) := \sup_{h \in [0,t]} \| \Delta_h^k(f; \cdot) \|_{C[a, b-kh]}, \qquad t \in \left[0, \tfrac{b-a}{k}\right],$$
where $\Delta_h^k$ is the finite difference of order $k$:
$$\Delta_h^k(f; x) := \sum_{m=0}^{k} (-1)^{k-m} \binom{k}{m} f(x + mh).$$
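These definitions translate directly into code. The sketch below approximates $\Delta_h^k$ and $\omega_k$ by brute-force sampling; the helper names (`finite_difference`, `modulus_of_smoothness`), the default interval $[0,1]$, and the grid resolutions are illustrative choices, not taken from the text:

```python
import numpy as np
from math import comb

def finite_difference(f, x, h, k):
    """Forward finite difference of order k: sum_{m=0}^{k} (-1)^(k-m) C(k,m) f(x + m h)."""
    return sum((-1) ** (k - m) * comb(k, m) * f(x + m * h) for m in range(k + 1))

def modulus_of_smoothness(f, k, t, a=0.0, b=1.0, h_steps=200, x_steps=500):
    """Approximate omega_k(t; f; [a,b]): the sup over h in [0, t] of the sup-norm
    of the k-th difference over x in [a, b - k h], with both sups taken on grids."""
    omega = 0.0
    for h in np.linspace(0.0, t, h_steps):
        if b - k * h < a:
            continue  # the k-th difference needs x + k h to stay inside [a, b]
        xs = np.linspace(a, b - k * h, x_steps)
        omega = max(omega, max(abs(finite_difference(f, x, h, k)) for x in xs))
    return omega

# Sanity check: for f(x) = x^2 the second difference is exactly 2 h^2,
# so omega_2(t) = 2 t^2.
print(modulus_of_smoothness(lambda x: x * x, 2, 0.25))  # close to 2 * 0.25^2 = 0.125
```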
Theorem: [2] (Whitney, 1957) If $f \in C[a,b]$, then
$$E_{k-1}(f) \le W_k\, \omega_k\!\left(\frac{b-a}{k}\right),$$
where $W_k$ is a constant depending only on $k$. The Whitney constant $W(k)$ is the smallest value of $W_k$ for which the above inequality holds. The theorem is particularly useful when applied on intervals of small length, leading to good estimates on the error of spline approximation.
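In the simplest case $k = 1$ the theorem can be verified directly: the best constant ($n = 0$) approximation of a continuous $f$ is the midpoint of its range, so $E_0(f) = \tfrac12(\max f - \min f)$, while $\omega_1(b-a) = \max f - \min f$, the full oscillation of $f$. A short numerical sketch (the sample function and grid are arbitrary choices made here for illustration):

```python
import numpy as np

# Check E_0(f) = (1/2) * omega_1(f; b - a) on [0, 1] for a sample function,
# matching Whitney's inequality with the sharp constant W(1) = 1/2.
f = lambda x: np.sin(5.0 * x) + 0.3 * x
xs = np.linspace(0.0, 1.0, 10_001)
ys = f(xs)

E0 = (ys.max() - ys.min()) / 2   # error of the best constant approximation
omega1 = ys.max() - ys.min()     # omega_1 at step t = b - a = 1: the oscillation of f
print(E0, 0.5 * omega1)          # the two numbers coincide
```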
The original proof given by Whitney follows an analytic argument which utilizes the properties of moduli of smoothness. However, it can also be proved in a much shorter way using Peetre's K-functionals. [3]
Let (working on the interval $[0,1]$):
$$K(t, f) := \inf_{g \in C^{k}[0,1]} \left\{ \| f - g \| + t\, \| g^{(k)} \| \right\}$$
be Peetre's K-functional, and set
$$Lg(x) := L(x; g; x_0, \dots, x_{k-1}), \qquad x_j := \frac{j}{k},$$
where $L(\cdot\,; g; x_0, \dots, x_{k-1})$ is the Lagrange polynomial for $g$ at the nodes $x_0, \dots, x_{k-1}$.
Now fix some $f \in C[0,1]$ and choose $g \in C^{k}[0,1]$ for which
$$\| f - g \| + k^{-k} \| g^{(k)} \| \le 2 K\!\left(k^{-k}, f\right).$$
Then, by the error formula for Lagrange interpolation:
$$\| g - Lg \| \le \frac{\| g^{(k)} \|}{k!} \max_{x \in [0,1]} \prod_{j=0}^{k-1} \left| x - \frac{j}{k} \right| \le \frac{\| g^{(k)} \|}{k!} \cdot \frac{k!}{k^{k}} = k^{-k} \| g^{(k)} \|.$$
Therefore, since $Lg$ is a polynomial of degree at most $k-1$:
$$E_{k-1}(f) \le \| f - Lg \| \le \| f - g \| + \| g - Lg \| \le \| f - g \| + k^{-k} \| g^{(k)} \| \le 2 K\!\left(k^{-k}, f\right).$$
And since we have $K(t^{k}, f) \le c(k)\, \omega_k(t)$, (a property of moduli of smoothness, namely their equivalence with the K-functional) taking $t = \tfrac{1}{k}$ gives
$$E_{k-1}(f) \le 2 c(k)\, \omega_k\!\left(\frac{1}{k}\right).$$
Since a general interval $[a,b]$ can always be mapped onto $[0,1]$ by a linear change of variable, under which $E_{k-1}$ and $\omega_k$ transform consistently, this completes the proof.
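A key quantitative ingredient in arguments of this kind is the size of the node polynomial of the Lagrange nodes. As an illustration (the equally spaced nodes $x_j = j/k$ on $[0,1]$ are an assumption made here for concreteness), one has $\max_{x \in [0,1]} \prod_{j=0}^{k-1} |x - j/k| = k!/k^{k}$, attained at $x = 1$, which yields a factor $k^{-k}$ when combined with the Lagrange error formula $|g(x) - Lg(x)| \le \frac{\|g^{(k)}\|}{k!} \prod_j |x - x_j|$. A numerical check:

```python
import numpy as np
from math import factorial

# Verify max over [0,1] of |prod_{j<k} (x - j/k)| = k!/k^k for k = 1, ..., 7.
for k in range(1, 8):
    nodes = np.arange(k) / k                   # x_j = j/k, j = 0, ..., k-1
    xs = np.linspace(0.0, 1.0, 100_001)        # grid containing x = 1 exactly
    prod = np.ones_like(xs)
    for xj in nodes:
        prod *= np.abs(xs - xj)
    print(k, prod.max(), factorial(k) / k**k)  # the last two columns agree
```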
It is important to have sharp estimates of the Whitney constants. It is easily shown that $W(1) = \tfrac{1}{2}$, and it was first proved by Burkill (1952) that $W(2) \le 1$; he conjectured that $W(k) \le 1$ for all $k$. Whitney was also able to prove that [2]
$$W(2) = \frac{1}{2},$$
and to obtain explicit numerical upper bounds for several further Whitney constants.
In 1964, Brudnyi was able to obtain the estimate $W(k) = O\!\left(k^{2k}\right)$, and in 1982, Sendov proved that $W(k) \le (k+1)k^{k}$. Then, in 1985, Ivanov and Takev proved that $W(k) = O(k \ln k)$, and Binev proved that $W(k) = O(k)$. Sendov conjectured that $W(k) \le 1$ for all $k$, and in 1985 was able to prove that the Whitney constants are bounded above by an absolute constant, that is, $W(k) \le 6$ for all $k$. Kryakin, Gilewicz, and Shevchuk (2002) [4] were able to show that $W(k) \le 2$ for $k \le 8$, and that $W(k) \le 2 + \frac{1}{e^{2}}$ for all $k$.