In the mathematical area of game theory and of convex optimization, a minimax theorem is a theorem asserting that
$$\max_{x \in X} \min_{y \in Y} f(x, y) = \min_{y \in Y} \max_{x \in X} f(x, y)$$
under certain conditions on the sets $X$ and $Y$ and on the function $f$. [1] It is always true that the left-hand side is at most the right-hand side (max–min inequality), but equality holds only under certain conditions identified by minimax theorems. The first theorem in this sense is von Neumann's minimax theorem about two-player zero-sum games, published in 1928, [2] which is considered the starting point of game theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved". [3] Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature. [4] [5]
Von Neumann's original theorem [2] was motivated by game theory and applies to the case where
$$X = \Big\{ x \in [0,1]^n : \sum_{i=1}^n x_i = 1 \Big\} \quad \text{and} \quad Y = \Big\{ y \in [0,1]^m : \sum_{j=1}^m y_j = 1 \Big\}$$
are standard simplexes, and $f(x, y) = x^{\mathsf T} A y$ is a bilinear function determined by a payoff matrix $A \in \mathbb{R}^{n \times m}$. Under these assumptions, von Neumann proved that
$$\max_{x \in X} \min_{y \in Y} x^{\mathsf T} A y = \min_{y \in Y} \max_{x \in X} x^{\mathsf T} A y.$$
In the context of two-player zero-sum games, the sets $X$ and $Y$ correspond to the strategy sets of the first and second player, respectively, which consist of lotteries over their actions (so-called mixed strategies), and their payoffs are defined by the payoff matrix $A$. The function $f(x, y) = x^{\mathsf T} A y$ encodes the expected value of the payoff to the first player when the first player plays the strategy $x$ and the second player plays the strategy $y$.
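As a concrete illustration, the value of such a game can be computed by solving one linear program for each player; by the minimax theorem (equivalently, by linear-programming duality) the two optimal values coincide. The following is a minimal sketch, not part of the original text: it uses NumPy and SciPy's linprog, and the matching-pennies payoff matrix is chosen purely as an example.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # example payoff matrix (matching pennies)
n, m = A.shape

# Row player: maximize v subject to (x^T A)_j >= v for all j, x in the simplex.
# Variables are (x_1, ..., x_n, v); linprog minimizes, so the objective is -v.
c = np.concatenate([np.zeros(n), [-1.0]])
A_ub = np.hstack([-A.T, np.ones((m, 1))])                   # v - (x^T A)_j <= 0
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # sum_i x_i = 1
res_max = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])

# Column player: minimize w subject to (A y)_i <= w for all i, y in the simplex.
c = np.concatenate([np.zeros(m), [1.0]])
A_ub = np.hstack([A, -np.ones((n, 1))])                     # (A y)_i - w <= 0
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum_j y_j = 1
res_min = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])

# Both values are 0 for this game (up to solver tolerance): max-min = min-max.
print(-res_max.fun, res_min.fun)
```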
Von Neumann's minimax theorem can be generalized to domains that are compact and convex, and to functions that are concave in their first argument and convex in their second argument (known as concave-convex functions). Formally, let $X \subseteq \mathbb{R}^n$ and $Y \subseteq \mathbb{R}^m$ be compact convex sets. If $f : X \times Y \to \mathbb{R}$ is a continuous function that is concave-convex, i.e.,

$f(\cdot, y) : X \to \mathbb{R}$ is concave for every fixed $y \in Y$, and
$f(x, \cdot) : Y \to \mathbb{R}$ is convex for every fixed $x \in X$,

then we have that
$$\max_{x \in X} \min_{y \in Y} f(x, y) = \min_{y \in Y} \max_{x \in X} f(x, y).$$
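To see why the concavity-convexity hypothesis matters, here is a small numeric sketch (an illustration added for this section, not from the original text): a grid search compares the two values for the concave-convex function $f(x,y) = -x^2 + y^2$, where they agree, and for $f(x,y) = (x-y)^2$, which is convex rather than concave in $x$, where they differ.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")   # F[i, j] = f(x_i, y_j)

def maxmin_and_minmax(F):
    # Compare max_x min_y f with min_y max_x f on the grid.
    return F.min(axis=1).max(), F.max(axis=0).min()

# Concave in x, convex in y: both values are 0.0, as the theorem guarantees.
print(maxmin_and_minmax(-X**2 + Y**2))

# Convex (not concave) in x: max-min is 0.0 but min-max is 0.25,
# so only the strict max-min inequality holds.
print(maxmin_and_minmax((X - Y)**2))
```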
Sion's minimax theorem is a generalization of von Neumann's minimax theorem due to Maurice Sion, [6] relaxing the compactness, continuity, and concavity-convexity requirements: only $Y$ need be compact, and $f$ need only be semicontinuous and quasi-concave-quasi-convex. It states: [6] [7]

Let $X$ be a convex subset of a linear topological space and let $Y$ be a compact convex subset of a linear topological space. If $f$ is a real-valued function on $X \times Y$ with

$f(\cdot, y)$ upper semicontinuous and quasi-concave on $X$ for every fixed $y \in Y$, and
$f(x, \cdot)$ lower semicontinuous and quasi-convex on $Y$ for every fixed $x \in X$,

then we have that
$$\sup_{x \in X} \min_{y \in Y} f(x, y) = \min_{y \in Y} \sup_{x \in X} f(x, y).$$
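As a hedged illustration of the extra generality (an example added here, not from the source): $f(x, y) = x^3 - y^3$ on $[-1,1]^2$ is quasi-concave in $x$ and quasi-convex in $y$, since every slice is monotone, but it is not concave in $x$, so Sion's theorem applies where the concave-convex version does not. A quick grid check confirms that the two values coincide.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 401)
F = xs[:, None] ** 3 - xs[None, :] ** 3   # F[i, j] = x_i**3 - y_j**3
# sup_x min_y f and min_y sup_x f agree (both 0.0), as Sion's theorem predicts.
print(F.min(axis=1).max(), F.max(axis=0).min())
```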