In mathematics, specifically in convex analysis, a convex compactification is a compactification that is simultaneously a convex subset of a locally convex space in functional analysis. Convex compactifications can be used for relaxation (as continuous extension) of various problems in the calculus of variations and in optimization theory. The additional linear structure allows, for example, the development of a differential calculus and more advanced considerations in relaxation in the calculus of variations or optimization theory. [1] It can capture both fast oscillations and concentration effects in optimal controls or in solutions of variational problems. Such controls are known under the names relaxed or chattering controls (or sometimes bang-bang controls) in optimal control problems.
The linear structure gives rise to various maximum principles as first-order necessary optimality conditions, known in optimal-control theory as Pontryagin's maximum principle. In the calculus of variations, the relaxed problems can serve for modelling various microstructures arising in ferroics, i.e. materials exhibiting e.g. ferroelasticity (such as shape-memory alloys) or ferromagnetism. The first-order optimality conditions for the relaxed problems lead to a Weierstrass-type maximum principle.
In partial differential equations, relaxation leads to the concept of measure-valued solutions.
The notion was introduced by Roubíček in 1991. [1]
In mathematics, in general topology, compactification is the process or result of making a topological space into a compact space. A compact space is a space in which every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape".
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
Pierre-Louis Lions is a French mathematician. He is known for a number of contributions to the fields of partial differential equations and the calculus of variations. He was a recipient of the 1994 Fields Medal and the 1991 Prize of the Philip Morris tobacco and cigarette company.
Hilbert's twenty-third problem is the last of the Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert's other 22 problems, his 23rd is not so much a specific "problem" as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state-of-the-art of the theory of the calculus of variations, with some introductory comments decrying the lack of work that had been done on the theory in the mid to late 19th century.
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem. The value of any feasible solution to the primal (minimization) problem is at least as large as the value of any feasible solution to the dual (maximization) problem. Therefore, the optimal value of the primal is an upper bound on the optimal value of the dual, and the optimal value of the dual is a lower bound on the optimal value of the primal. This fact is called weak duality.
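Weak duality can be checked numerically on a small instance. The following sketch (a hypothetical toy linear program, not taken from the text) verifies that the value of a primal-feasible point dominates the value of a dual-feasible point:

```python
# Weak duality for a small LP (hypothetical toy instance, pure Python).
# Primal:  min c.x  s.t.  A x >= b,  x >= 0
# Dual:    max b.y  s.t.  A^T y <= c,  y >= 0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

c = [3.0, 2.0]
A = [[1.0, 1.0],
     [2.0, 1.0]]
b = [4.0, 5.0]

x = [4.0, 0.0]   # a primal-feasible point: A x = [4, 8] >= b
y = [1.0, 1.0]   # a dual-feasible point:  A^T y = [3, 2] <= c

# feasibility checks
assert all(dot(row, x) >= bi for row, bi in zip(A, b)) and all(xi >= 0 for xi in x)
At = list(zip(*A))
assert all(dot(col, y) <= ci for col, ci in zip(At, c)) and all(yi >= 0 for yi in y)

primal_value = dot(c, x)   # value of the minimization problem at x
dual_value = dot(b, y)     # value of the maximization problem at y
print(primal_value, dual_value)
assert primal_value >= dual_value   # weak duality
```

Here the primal value 12 bounds the dual value 9 from above, exactly as weak duality predicts; neither point need be optimal for the inequality to hold.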
In control theory, a bang–bang controller is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often used to control a plant that accepts a binary input, for example a furnace that is either completely on or completely off. Most common residential thermostats are bang–bang controllers. The Heaviside step function in its discrete form is an example of a bang–bang control signal. Due to the discontinuous control signal, systems that include bang–bang controllers are variable structure systems, and bang–bang controllers are thus variable structure controllers.
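The thermostat example above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`make_thermostat`, `setpoint`, `band`), not a real control library: the controller switches fully on below the hysteresis band, fully off above it, and retains its previous state inside it.

```python
# Minimal bang-bang (on/off) controller with hysteresis -- an illustrative
# sketch with hypothetical names, not a production control implementation.

def make_thermostat(setpoint, band):
    """Return a controller that switches a heater fully on/off with a
    hysteresis band of +/- band degrees around the setpoint."""
    state = {"on": False}

    def control(temperature):
        if temperature <= setpoint - band:
            state["on"] = True       # too cold: switch fully on
        elif temperature >= setpoint + band:
            state["on"] = False      # too hot: switch fully off
        # inside the band: keep the previous state (this is the hysteresis)
        return state["on"]

    return control

heater = make_thermostat(setpoint=20.0, band=0.5)
print(heater(19.0))   # below the band: heater switches on
print(heater(20.2))   # inside the band: previous state retained
print(heater(20.6))   # above the band: heater switches off
```

The hysteresis band prevents rapid on/off chattering when the temperature hovers near the setpoint, which is why practical bang–bang controllers are built around a hysteresis element.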
In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.
In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information.
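As a concrete sketch of the idea (a hypothetical toy 0/1 knapsack, not an instance from the text), the capacity constraint can be moved into the objective with a multiplier; for any nonnegative multiplier the relaxed problem separates over the items and its value bounds the true optimum from above:

```python
# Lagrangian relaxation of a 0/1 knapsack (hypothetical toy instance).
# Original:  max sum(v_i x_i)  s.t.  sum(w_i x_i) <= W,  x_i in {0, 1}
# Relaxed:   max sum(v_i x_i) - lam * (sum(w_i x_i) - W)  over x_i in {0, 1}
# For any lam >= 0 the relaxed value is an upper bound on the original optimum.

from itertools import product

values  = [10, 7, 4]
weights = [5, 4, 3]
W = 7

def knapsack_opt():
    # brute force over all 0/1 choices (fine for 3 items)
    best = 0
    for x in product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= W:
            best = max(best, sum(v * xi for v, xi in zip(values, x)))
    return best

def lagrangian_bound(lam):
    # the relaxed problem separates: take item i iff v_i - lam * w_i > 0
    return sum(max(v - lam * w, 0.0) for v, w in zip(values, weights)) + lam * W

opt = knapsack_opt()
bound = lagrangian_bound(1.5)
print(opt, bound)
assert bound >= opt
```

Minimizing the bound over the multiplier (the Lagrangian dual) tightens it; here a single multiplier already gives a valid, easily computed upper bound on the hard combinatorial problem.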
In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable.
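For a 0/1 knapsack (again a hypothetical toy instance), dropping the integrality constraint x_i ∈ {0,1} to 0 ≤ x_i ≤ 1 gives the fractional knapsack, whose LP optimum is reached by a simple greedy on value/weight density and always bounds the integer optimum:

```python
# LP relaxation of a 0/1 knapsack (hypothetical toy instance): relaxing
# x_i in {0, 1} to 0 <= x_i <= 1 yields the fractional knapsack, solved
# exactly by a greedy on value/weight density (the classical Dantzig bound).

from itertools import product

values  = [10, 7, 4]
weights = [5, 4, 3]
W = 7

def integer_opt():
    # brute force over all 0/1 choices (fine for 3 items)
    best = 0
    for x in product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= W:
            best = max(best, sum(v * xi for v, xi in zip(values, x)))
    return best

def lp_relaxation_opt():
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    cap, total = W, 0.0
    for v, w in items:
        take = min(1.0, cap / w)   # fractional amounts are now allowed
        total += take * v
        cap -= take * w
        if cap <= 0:
            break
    return total

print(integer_opt(), lp_relaxation_opt())
```

On this instance the relaxation gives 13.5 against the integer optimum 11; the gap between the two is what branch-and-bound methods exploit when they use the LP relaxation to prune the search.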
In mathematics, a varifold is, loosely speaking, a measure-theoretic generalization of the concept of a differentiable manifold, by replacing differentiability requirements with those provided by rectifiable sets, while maintaining the general algebraic structure usually seen in differential geometry. Varifolds generalize the idea of a rectifiable current, and are studied in geometric measure theory.
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f: U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives.
In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only.
Laurence Chisholm Young was a British mathematician known for his contributions to measure theory, the calculus of variations, optimal control theory, and potential theory. He was the son of William Henry Young and Grace Chisholm Young, both prominent mathematicians. He moved to the US in 1949 but never sought American citizenship.
In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. They are a quantification of the oscillation effect of the sequence in the limit. Young measures have applications in the calculus of variations, especially in models from material science, and in the study of nonlinear partial differential equations, as well as in various optimization problems. They are named after Laurence Chisholm Young, who introduced them in 1937 in one dimension (curves) and later, in 1942, in higher dimensions.
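The oscillation effect that a Young measure quantifies can be seen numerically. The sketch below uses the standard textbook example (assumed here, not taken from the text): the sequence u_n(x) = sign(sin(2πnx)) on [0,1] oscillates ever faster between ±1, and averages of g(u_n) converge to (g(−1) + g(1))/2, i.e. to the integral of g against the Young measure (δ₋₁ + δ₊₁)/2:

```python
# Numerical illustration (assumed textbook example): the oscillating sequence
# u_n(x) = sign(sin(2*pi*n*x)) on [0, 1] generates the Young measure
# (delta_{-1} + delta_{+1}) / 2, so averages of g(u_n) converge to
# (g(-1) + g(1)) / 2 for continuous g; here g(y) = (y + 2)**2.

import math

def u(n, x):
    # the rapidly oscillating sequence, taking only the values +1 and -1
    return 1.0 if math.sin(2 * math.pi * n * x) >= 0 else -1.0

def average_g(n, samples=100000):
    # midpoint-rule approximation of the integral of g(u_n(x)) over [0, 1]
    g = lambda y: (y + 2.0) ** 2
    return sum(g(u(n, (i + 0.5) / samples)) for i in range(samples)) / samples

limit = ((1 + 2.0) ** 2 + (-1 + 2.0) ** 2) / 2   # (g(1) + g(-1)) / 2 = 5.0
print(average_g(50), limit)
```

Note that u_n itself has no pointwise limit; only the statistics of its values converge, which is exactly the information the Young measure records.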
In mathematics, geometric measure theory (GMT) is the study of geometric properties of sets through measure theory. It allows mathematicians to extend tools from differential geometry to a much larger class of surfaces that are not necessarily smooth.
A set-valued function is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Ralph Tyrrell Rockafellar is an American mathematician and one of the leading scholars in optimization theory and related fields of analysis and combinatorics. He is the author of four major books including the landmark text "Convex Analysis" (1970), which has been cited more than 27,000 times according to Google Scholar and remains the standard reference on the subject, and "Variational Analysis" for which the authors received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS).
In economics, non-convexity refers to violations of the convexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers with convex preferences and convex budget sets and on producers with convex production sets; for convex models, the predicted economic behavior is well understood. When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated with market failures, where supply and demand differ or where market equilibria can be inefficient. Non-convex economies are studied with nonsmooth analysis, which is a generalization of convex analysis.
In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If d* is the optimal dual value and p* is the optimal primal value, then the duality gap is equal to p* − d*. This value is always greater than or equal to 0. The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds.