In mathematics, a linear inequality is an inequality that involves a linear function. A linear inequality contains one of the symbols of inequality: < (less than), > (greater than), ≤ (less than or equal to), ≥ (greater than or equal to), or ≠ (not equal to). [1]
A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign.
Two-dimensional linear inequalities are expressions in two variables of the form ax + by < c or ax + by ≤ c,
where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane. [2] The line that determines the half-planes (ax + by = c) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point (x0, y0) which is not on the line and observe whether or not the inequality is satisfied.
For example, [3] to draw the solution set of x + 3y < 9, one first draws the line with equation x + 3y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality.
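A minimal sketch of this point-test in Python (the helper name satisfies is illustrative, not from any particular library):

```python
def satisfies(a, b, c, x, y, strict=True):
    """Check whether (x, y) satisfies ax + by < c (or <= c when strict=False)."""
    value = a * x + b * y
    return value < c if strict else value <= c

# Test x + 3y < 9 at (0, 0), a convenient point not on the line x + 3y = 9.
print(satisfies(1, 3, 9, 0, 0))  # True, so (0, 0)'s half-plane is the solution set
```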
In Rn linear inequalities are the expressions that may be written in the form f(x) < b or f(x) ≤ b, where f is a linear form (also called a linear functional), and b a constant real number.
More concretely, this may be written out as a1x1 + a2x2 + ... + anxn < b or a1x1 + a2x2 + ... + anxn ≤ b. Here x1, x2, ..., xn are called the unknowns, and a1, a2, ..., an are called the coefficients.
Alternatively, these may be written as g(x) < 0 or g(x) ≤ 0, where g is an affine function. [4] That is, a0 + a1x1 + ... + anxn < 0 or a0 + a1x1 + ... + anxn ≤ 0, with a0 = -b.
Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs.
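One way to see this rewriting concretely: negating every coefficient and the constant term turns a "greater than or equal" inequality into an equivalent "less than or equal" one. A small sketch (the helper is hypothetical):

```python
def ge_to_le(coeffs, b):
    """Rewrite a1*x1 + ... + an*xn >= b as (-a1)*x1 + ... + (-an)*xn <= -b."""
    return [-a for a in coeffs], -b

# 2x - y >= 3 becomes -2x + y <= -3; both hold at the point (2, 0).
coeffs, b = ge_to_le([2, -1], 3)
point = (2, 0)
print(sum(a * xi for a, xi in zip(coeffs, point)) <= b)  # True
```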
A system of linear inequalities is a set of linear inequalities in the same variables:
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
Here x1, ..., xn are the unknowns, a11, ..., amn are the coefficients of the system, and b1, ..., bm are the constant terms.
This can be concisely written as the matrix inequality Ax ≤ b, where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants.
In the above systems both strict and non-strict inequalities may be used.
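A short sketch of testing the non-strict matrix inequality numerically with NumPy (the data below are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 3.0],    # encodes x + 3y <= 9
              [-1.0, 0.0]])  # and -x <= 0, i.e., x >= 0
b = np.array([9.0, 0.0])

x = np.array([1.0, 2.0])
print(np.all(A @ x <= b))  # True: (1, 2) satisfies every inequality of the system
```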
Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination. [5]
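A compact sketch of one Fourier–Motzkin elimination step, assuming non-strict inequalities and exact arithmetic (rounding can create spurious inequalities in floating point):

```python
def fourier_motzkin(A, b, var=0):
    """Eliminate variable `var` from the system A x <= b (given as lists of rows).
    Returns (A', b') over the remaining variables, with column `var` removed."""
    pos, neg, zero = [], [], []
    for row, bi in zip(A, b):
        c, rest = row[var], row[:var] + row[var + 1:]
        if c > 0:    # upper bound on the variable: scale so its coefficient is +1
            pos.append(([r / c for r in rest], bi / c))
        elif c < 0:  # lower bound on the variable: scale so its coefficient is -1
            neg.append(([r / -c for r in rest], bi / -c))
        else:
            zero.append((rest, bi))
    # Each (upper bound, lower bound) pair combines into one new inequality.
    new = zero + [([u + l for u, l in zip(ur, lr)], ub + lb)
                  for ur, ub in pos for lr, lb in neg]
    return [r for r, _ in new], [bi for _, bi in new]

# Eliminating x from {x + y <= 4, -x + y <= 2, y <= 3} leaves {y <= 3, 2y <= 6}.
print(fourier_motzkin([[1.0, 1.0], [-1.0, 1.0], [0.0, 1.0]], [4.0, 2.0, 3.0]))
```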
The set of solutions of a real linear inequality constitutes a half-space of the n-dimensional real space, one of the two defined by the corresponding linear equation.
The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone). It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn.
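The convexity claim is easy to spot-check numerically: take two solutions of a system and verify that every point on the segment between them is also a solution. A sketch with made-up data:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # x + y <= 4, x >= 0, y >= 0
b = np.array([4.0, 0.0, 0.0])

p, q = np.array([0.0, 1.0]), np.array([3.0, 1.0])  # two points in the solution set
for t in np.linspace(0.0, 1.0, 5):
    assert np.all(A @ ((1 - t) * p + t * q) <= b)  # the whole segment is feasible
print("all convex combinations of p and q are solutions")
```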
A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function) subject to a number of constraints on the variables which, in general, are linear inequalities. [6] The list of constraints is a system of linear inequalities.
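A small sketch of such a problem with SciPy's linprog (which minimizes, so a maximization objective is negated); the numbers reuse the earlier example inequality:

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + 3y <= 9, x >= 0, y >= 0.
res = linprog(c=[-1.0, -2.0],  # negated objective, since linprog minimizes
              A_ub=[[1.0, 3.0]], b_ub=[9.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum (9, 0) with value 9, at a vertex of the polyhedron
```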
The above definition requires well-defined operations of addition, multiplication and comparison; therefore, the notion of a linear inequality may be extended to ordered rings, and in particular to ordered fields.
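For instance, the rationals form an ordered field, so the earlier point-test carries over verbatim to exact arithmetic; a minimal sketch with Python's Fraction:

```python
from fractions import Fraction

# Evaluate x + 3y < 9 over the ordered field of rationals, with no rounding error.
x, y = Fraction(1, 3), Fraction(5, 2)
print(x + 3 * y < 9)  # True: 1/3 + 15/2 = 47/6 < 9
```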
In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points in the subset, the subset contains the whole line segment that joins them. Equivalently, a convex set or a convex region is a subset that intersects every line in a single line segment. For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example a crescent shape, is not convex.
In mathematics, a linear equation is an equation that may be put in the form a1x1 + ... + anxn + b = 0, where x1, ..., xn are the variables and b, a1, ..., an are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a1, ..., an are required to not all be zero.
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
In geometry, a hyperplane is a generalization of a two-dimensional plane in three-dimensional space to mathematical spaces of arbitrary dimension. Like a plane in space, a hyperplane is a flat hypersurface, a subspace whose dimension is one less than that of the ambient space. Two lower-dimensional examples of hyperplanes are one-dimensional lines in a plane and zero-dimensional points on a line.
In mathematics, a system of linear equations is a collection of two or more linear equations involving the same variables. For example,
3x + 2y - z = 1
2x - 2y + 4z = -2
-x + (1/2)y - z = 0
is a system of three equations in the three variables x, y, z.
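Such a square system can be solved numerically; a sketch with NumPy for the example above:

```python
import numpy as np

A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
print(np.linalg.solve(A, b))  # [ 1. -2. -2.], the unique solution (x, y, z)
```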
In geometry, a normal is an object that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the line perpendicular to the tangent line to the curve at the point.
In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph is a convex set. In simple terms, a convex function's graph is shaped like a cup, while a concave function's graph is shaped like a cap.
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
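In its finite, equally-weighted form the inequality is easy to verify numerically; a sketch using the convex function f(x) = x^2:

```python
import numpy as np

xs = np.array([1.0, 2.0, 7.0])
f = np.square  # a convex function
print(f(xs.mean()) <= f(xs).mean())  # True: f of the mean <= mean of f
```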
In geometry, a half-space is either of the two parts into which a plane divides the three-dimensional Euclidean space. If the space is two-dimensional, then a half-space is called a half-plane. A half-space in a one-dimensional space is called a half-line or ray.
In mathematics, the real coordinate space or real coordinate n-space, of dimension n, denoted Rn, is the set of all ordered n-tuples of real numbers, that is, the set of all sequences of n real numbers, also known as coordinate vectors. Special cases are called the real line R1, the real coordinate plane R2, and the real coordinate three-dimensional space R3. With component-wise addition and scalar multiplication, it is a real vector space.
Interior-point methods are algorithms for solving linear and non-linear convex optimization problems. IPMs combine two advantages of previously-known algorithms: theoretically, their run-time is polynomial, unlike the simplex method, whose worst-case run-time is exponential; practically, they run about as fast as the simplex method, unlike the ellipsoid method, which is polynomial in theory but very slow in practice.
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the n-dimensional Euclidean space Rn. Most texts use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary.
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning the linear programming duality and has played a central role in the development of mathematical optimization. It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming. Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements.
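The lemma can be illustrated by testing both alternative systems with linear programming; for the made-up data below exactly one of the two is feasible, as the lemma predicts (a sketch using SciPy's linprog):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])  # a negative entry makes Ax = b, x >= 0 infeasible here

# System 1: is there an x >= 0 with Ax = b?
sys1 = linprog(c=np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

# System 2: is there a y with A^T y >= 0 and b.y < 0?  The system is
# homogeneous, so restricting y to the box [-1, 1]^2 loses no generality.
sys2 = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(2), bounds=[(-1, 1)] * 2)

print(sys1.status == 0, sys2.fun < 0)  # exactly one prints True (here: False True)
```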
In linear algebra, a cone—sometimes called a linear cone for distinguishing it from other sorts of cones—is a subset of a vector space that is closed under positive scalar multiplication; that is, C is a cone if x ∈ C implies sx ∈ C for every positive scalar s. A cone need not be convex, or even look like a cone in Euclidean space.
Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
A second-order cone program (SOCP) is a convex optimization problem of the form: minimize f·x subject to ||Ai x + bi|| ≤ ci·x + di for i = 1, ..., m, and Fx = g.
A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly easy to optimize using Newton's method. A self-concordant barrier is a particular self-concordant function that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior-point methods for optimization.
In applied mathematics, Graver bases enable iterative solutions of linear and various nonlinear integer programming problems in polynomial time. They were introduced by Jack E. Graver. Their connection to the theory of Gröbner bases was discussed by Bernd Sturmfels. The algorithmic theory of Graver bases and its application to integer programming is described by Shmuel Onn.
A separation oracle is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm. Separation oracles are used as input to ellipsoid methods.