In mathematics, Tonelli's theorem in functional analysis is a fundamental result on the weak lower semicontinuity of nonlinear functionals on Lp spaces. As such, it has major implications for functional analysis and the calculus of variations. Roughly, it shows that weak lower semicontinuity for integral functionals is equivalent to convexity of the integral kernel. The result is attributed to the Italian mathematician Leonida Tonelli.
Let \( \Omega \) be a bounded domain in \( n \)-dimensional Euclidean space \( \mathbb{R}^n \) and let \( f \colon \mathbb{R}^m \to \mathbb{R} \cup \{ \pm \infty \} \) be a continuous extended real-valued function. Define a nonlinear functional \( F \) on functions \( u \colon \Omega \to \mathbb{R}^m \) by
\[ F[u] = \int_\Omega f\bigl(u(x)\bigr) \, \mathrm{d}x . \]
Then \( F \) is sequentially weakly lower semicontinuous on the space \( L^p(\Omega; \mathbb{R}^m) \) for \( 1 < p < +\infty \), and weakly-∗ lower semicontinuous on \( L^\infty(\Omega; \mathbb{R}^m) \), if and only if \( f \) is convex.
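The convexity assumption cannot be dropped; the following standard counterexample (an illustration added here, not part of Tonelli's original statement) shows what goes wrong for a non-convex integrand. Take \( \Omega = (0, 1) \), \( m = 1 \), \( f(y) = (y^2 - 1)^2 \), and the oscillating functions \( u_k(x) = \operatorname{sign}\bigl(\sin(2 \pi k x)\bigr) \). Then \( u_k \rightharpoonup 0 \) weakly in every \( L^p(0,1) \) (and weakly-∗ in \( L^\infty \)), and \( F[u_k] = 0 \) for every \( k \) because \( u_k(x) \in \{ -1, +1 \} \) almost everywhere, yet
\[ F[0] = \int_0^1 f(0) \, \mathrm{d}x = 1 > 0 = \liminf_{k \to \infty} F[u_k], \]
so \( F \) is not weakly lower semicontinuous when \( f \) is not convex.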
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space. There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly, and may be extended so as to act on related objects.
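For instance, differentiation is a standard example of an operator whose domain is a set of functions: it maps the space of continuously differentiable functions into the space of continuous functions,
\[ D \colon C^1(\mathbb{R}) \to C(\mathbb{R}), \qquad (D f)(x) = f'(x) . \]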
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function \( f \colon U \to \mathbb{R} \), where \( U \) is an open subset of \( \mathbb{R}^n \), that satisfies Laplace's equation, that is,
\[ \frac{\partial^2 f}{\partial x_1^2} + \frac{\partial^2 f}{\partial x_2^2} + \cdots + \frac{\partial^2 f}{\partial x_n^2} = 0 \]
everywhere on \( U \).
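A simple two-dimensional example is \( f(x, y) = x^2 - y^2 \) on \( U = \mathbb{R}^2 \): its second partial derivatives are \( \partial^2 f / \partial x^2 = 2 \) and \( \partial^2 f / \partial y^2 = -2 \), so
\[ \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 2 - 2 = 0 , \]
and \( f \) is harmonic.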
In mathematical analysis, semicontinuity is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function \( f \) is upper semicontinuous at a point \( x_0 \) if, roughly speaking, the function values for arguments near \( x_0 \) are not much higher than \( f(x_0) \); it is lower semicontinuous at \( x_0 \) if the values for arguments near \( x_0 \) are not much lower than \( f(x_0) \).
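For example, the step function \( f(x) = 1 \) for \( x \ge 0 \) and \( f(x) = 0 \) for \( x < 0 \) is upper semicontinuous at \( x_0 = 0 \), since \( \limsup_{x \to 0} f(x) = 1 \le f(0) \), but it is not lower semicontinuous there, since \( \liminf_{x \to 0} f(x) = 0 < f(0) = 1 \).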
In mathematical analysis, a function of bounded variation, also known as a BV function, is a real-valued function whose total variation is bounded (finite): the graph of a function having this property is well behaved in a precise sense. For a continuous function of a single variable, being of bounded variation means that the distance traveled along the direction of the y-axis by a point moving along the graph, neglecting the contribution of motion along the x-axis, has a finite value. For a continuous function of several variables, the meaning of the definition is the same, except that the continuous path to be considered cannot be the whole graph of the given function, but can be any intersection of the graph itself with a hyperplane parallel to a fixed x-axis and to the y-axis.
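For instance, every monotone function \( f \) on \( [a, b] \) is of bounded variation, with total variation \( |f(b) - f(a)| \), whereas the continuous function
\[ g(x) = x \sin(1/x) \ \text{for } x \in (0, 1], \qquad g(0) = 0 , \]
is not of bounded variation on \( [0, 1] \): its oscillations near the origin contribute a divergent, harmonic-like series to the variation.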
In mathematics, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed in forms such as the linear Fredholm equation of the second kind,
\[ \varphi(x) = f(x) + \lambda \int_a^b K(x, t) \, \varphi(t) \, \mathrm{d}t , \]
in which \( \varphi \) is the unknown function, \( f \) and the kernel \( K \) are given, and \( \lambda \) is a parameter.
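A worked example with a separable kernel: the equation \( \varphi(x) = x + \lambda \int_0^1 x t \, \varphi(t) \, \mathrm{d}t \) can be solved by writing \( c = \int_0^1 t \, \varphi(t) \, \mathrm{d}t \), so that \( \varphi(x) = x (1 + \lambda c) \); substituting back gives \( c = (1 + \lambda c)/3 \), hence \( c = 1/(3 - \lambda) \) and
\[ \varphi(x) = \frac{3 x}{3 - \lambda} \qquad (\lambda \ne 3) . \]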
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function \( u(t) \) of a real variable and produces another function of a real variable \( H(u)(t) \). The Hilbert transform is given by the Cauchy principal value of the convolution with the function \( 1/(\pi t) \). The Hilbert transform has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
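A standard illustration of the 90° phase shift: for a frequency \( \omega > 0 \),
\[ H[\cos(\omega t)] = \sin(\omega t), \qquad H[\sin(\omega t)] = -\cos(\omega t) , \]
so each sinusoidal component is delayed by a quarter period while its amplitude is unchanged.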
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
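With one common choice of norm, for an open set \( \Omega \subseteq \mathbb{R}^n \) the first-order Sobolev space is
\[ W^{1,p}(\Omega) = \{ u \in L^p(\Omega) : \nabla u \in L^p(\Omega; \mathbb{R}^n) \}, \qquad \| u \|_{W^{1,p}} = \bigl( \| u \|_{L^p}^p + \| \nabla u \|_{L^p}^p \bigr)^{1/p} , \]
where \( \nabla u \) is the weak gradient. For example, \( u(x) = |x| \) belongs to \( W^{1,2}(-1, 1) \) even though it is not classically differentiable at \( 0 \): its weak derivative is \( \operatorname{sign}(x) \), which lies in \( L^2(-1, 1) \).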
In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function f, defined on an interval [a, b] ⊂ R, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b]. Functions whose total variation is finite are called functions of bounded variation.
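For a continuously differentiable function \( f \) on \( [a, b] \), the total variation reduces to
\[ V_a^b(f) = \int_a^b | f'(x) | \, \mathrm{d}x ; \]
for instance, \( V_0^{2\pi}(\sin) = \int_0^{2\pi} | \cos x | \, \mathrm{d}x = 4 \).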
In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. Paul Malliavin first initiated the calculus on infinite-dimensional space; significant contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, Shinzo Watanabe, and I. Shigekawa then completed its foundations.
In functional analysis, a branch of mathematics, the Borel functional calculus is a functional calculus, which has particularly broad scope. Thus for instance if T is an operator, applying the squaring function \( s \mapsto s^2 \) to T yields the operator \( T^2 \). Using the functional calculus for larger classes of functions, we can for example define rigorously the "square root" of the (negative) Laplacian operator \( -\Delta \) or the exponential \( e^{i t \Delta} \).
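A concrete instance: if \( T \) is the self-adjoint diagonal operator on \( \ell^2 \) defined by \( T e_n = \lambda_n e_n \) with real, bounded eigenvalues \( \lambda_n \), then for any bounded Borel function \( g \) the operator \( g(T) \) acts by \( g(T) e_n = g(\lambda_n) e_n \); in particular \( T^2 e_n = \lambda_n^2 e_n \) and \( e^{i t T} e_n = e^{i t \lambda_n} e_n \).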
In mathematics, weak convergence in a Hilbert space is convergence of a sequence of points in the weak topology.
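A standard example: if \( (e_n) \) is an orthonormal sequence in a Hilbert space \( H \), then \( e_n \rightharpoonup 0 \) weakly, because \( \langle e_n, x \rangle \to 0 \) for every \( x \in H \) by Bessel's inequality, even though \( \| e_n \| = 1 \) for all \( n \), so the sequence does not converge in norm.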
In mathematical analysis, Trudinger's theorem or the Trudinger inequality is a result of functional analysis on Sobolev spaces. It is named after Neil Trudinger.
In mathematics, a locally integrable function is a function which is integrable on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain: in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
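For example, \( f(x) = 1/x \) is locally integrable on \( (0, 1) \): for every compact interval \( [a, b] \subset (0, 1) \) one has \( \int_a^b x^{-1} \, \mathrm{d}x = \ln(b/a) < \infty \), even though \( \int_0^1 x^{-1} \, \mathrm{d}x = \infty \), so \( f \) is not integrable on the whole interval.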
In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. Young measures quantify the oscillation effect of the sequence in the limit. They have applications in the calculus of variations, especially models from materials science, and in the study of nonlinear partial differential equations, as well as in various optimization problems. They are named after Laurence Chisholm Young, who introduced them in 1937 in one dimension (curves) and in 1942 in higher dimensions.
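For the oscillating sequence \( u_k(x) = \operatorname{sign}(\sin(2 \pi k x)) \) on \( (0, 1) \) used above, the Young measure is \( \nu_x = \tfrac{1}{2} \delta_{-1} + \tfrac{1}{2} \delta_{+1} \) for almost every \( x \): for any continuous \( f \),
\[ f(u_k) \rightharpoonup^{*} \int_{\mathbb{R}} f(y) \, \mathrm{d}\nu_x(y) = \tfrac{1}{2} \bigl( f(-1) + f(1) \bigr) \]
in \( L^\infty(0, 1) \), which records the limiting proportions of the two values between which \( u_k \) oscillates.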
In mathematics, the trace operator extends the notion of the restriction of a function to the boundary of its domain to "generalized" functions in a Sobolev space. This is particularly important for the study of partial differential equations with prescribed boundary conditions, where weak solutions may not be regular enough to satisfy the boundary conditions in the classical sense of functions.
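For a bounded Lipschitz domain \( \Omega \subset \mathbb{R}^n \), the trace theorem provides a bounded linear operator \( T \colon H^1(\Omega) \to L^2(\partial \Omega) \) with \( T u = u|_{\partial \Omega} \) whenever \( u \) is continuous up to the boundary, and \( \| T u \|_{L^2(\partial \Omega)} \le C \| u \|_{H^1(\Omega)} \) for some constant \( C \) depending only on \( \Omega \).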
In the calculus of variations, the notion of polyconvexity is a generalization of the notion of convexity for functions defined on spaces of matrices. The notion of polyconvexity was introduced by John M. Ball as a sufficient condition for proving the existence of energy minimizers in nonlinear elasticity theory. It is satisfied by a large class of hyperelastic stored energy densities, such as Mooney–Rivlin and Ogden materials. Polyconvexity is related to the notions of convexity, quasiconvexity and rank-one convexity through a chain of implications: convexity implies polyconvexity, which implies quasiconvexity, which in turn implies rank-one convexity; the reverse implications do not hold in general.
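A simple example on \( 2 \times 2 \) matrices: \( W(F) = (\det F)^2 \) is polyconvex, since it is a convex function of \( \det F \), but it is not convex in \( F \) itself; taking \( F_1 = \operatorname{diag}(2, 0) \) and \( F_2 = \operatorname{diag}(0, 2) \) gives \( W(F_1) = W(F_2) = 0 \) while
\[ W\!\left( \tfrac{1}{2} F_1 + \tfrac{1}{2} F_2 \right) = W(I) = 1 > \tfrac{1}{2} W(F_1) + \tfrac{1}{2} W(F_2) = 0 . \]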
In mathematics—specifically, in functional analysis—a weakly measurable function taking values in a Banach space is a function whose composition with any element of the dual space is a measurable function in the usual (strong) sense. For separable spaces, the notions of weak and strong measurability agree.
In mathematics, the direct method in the calculus of variations is a general method for constructing a proof of the existence of a minimizer for a given functional, introduced by Stanisław Zaremba and David Hilbert around 1900. The method relies on methods of functional analysis and topology. As well as being used to prove the existence of a solution, direct methods may be used to compute the solution to desired accuracy.
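In outline, the direct method combines compactness with lower semicontinuity: one takes a minimizing sequence \( (u_k) \) with \( F[u_k] \to \inf F \), uses coercivity of \( F \) to conclude that \( (u_k) \) is bounded and hence, in a reflexive space, has a weakly convergent subsequence \( u_{k_j} \rightharpoonup u \), and then uses weak lower semicontinuity, such as that guaranteed by Tonelli's theorem for convex integrands, to obtain
\[ F[u] \le \liminf_{j \to \infty} F[u_{k_j}] = \inf F , \]
so that \( u \) is a minimizer.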
In mathematics, and in particular in mathematical analysis, the Gagliardo–Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that relates the \( L^p \)-norms of different weak derivatives of a function through an interpolation inequality. The theorem is of particular importance in the framework of elliptic partial differential equations and was originally formulated by Emilio Gagliardo and Louis Nirenberg in 1958. The Gagliardo–Nirenberg inequality has found numerous applications in the investigation of nonlinear partial differential equations, and has been generalized to fractional Sobolev spaces by Haim Brezis and Petru Mironescu in the late 2010s.
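One well-known special case is the two-dimensional Ladyzhenskaya inequality: there is a constant \( C > 0 \) such that
\[ \| u \|_{L^4(\mathbb{R}^2)}^2 \le C \, \| u \|_{L^2(\mathbb{R}^2)} \, \| \nabla u \|_{L^2(\mathbb{R}^2)} \qquad \text{for all } u \in H^1(\mathbb{R}^2) , \]
which interpolates the \( L^4 \)-norm of \( u \) between the \( L^2 \)-norms of \( u \) and of its gradient.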
In the calculus of variations, a subfield of mathematics, quasiconvexity is a generalisation of the notion of convexity. It is used to characterise the integrand of a functional and is related to the existence of minimisers. Under some natural conditions, quasiconvexity of the integrand is a necessary and sufficient condition for a functional of the form
\[ F[u] = \int_\Omega f\bigl(x, u(x), \nabla u(x)\bigr) \, \mathrm{d}x \]
to be sequentially weakly lower semicontinuous on a Sobolev space \( W^{1,p}(\Omega; \mathbb{R}^m) \).
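For an integrand depending only on the gradient, the defining condition (due to Morrey) reads: a locally bounded, Borel measurable function \( f \colon \mathbb{R}^{m \times n} \to \mathbb{R} \) is quasiconvex if
\[ f(A) \le \frac{1}{|D|} \int_D f\bigl(A + \nabla \varphi(x)\bigr) \, \mathrm{d}x \]
for every bounded open set \( D \subset \mathbb{R}^n \), every matrix \( A \in \mathbb{R}^{m \times n} \), and every \( \varphi \in W_0^{1, \infty}(D; \mathbb{R}^m) \); that is, affine functions minimize the functional among all competitors with the same boundary values.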