In mathematics, a system of linear equations or a system of polynomial equations is considered underdetermined if there are fewer equations than unknowns [1] (in contrast to an overdetermined system, where there are more equations than unknowns). The terminology can be explained using the concept of constraint counting. Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom.
Therefore, the critical case (between overdetermined and underdetermined) occurs when the number of equations and the number of free variables are equal. For every variable giving a degree of freedom, there exists a corresponding constraint removing a degree of freedom. The underdetermined case, by contrast, occurs when the system has been underconstrained—that is, when the unknowns outnumber the equations.
An underdetermined linear system has either no solution or infinitely many solutions.
For example,
x + y + z = 1
x + y + z = 2
is an underdetermined system without any solution; any system of equations having no solution is said to be inconsistent. On the other hand, the system
x + y + z = 1
x + y + 2z = 3
is consistent and has an infinitude of solutions, such as (x, y, z) = (1, −2, 2), (2, −3, 2), and (3, −4, 2). All of these solutions can be characterized by first subtracting the first equation from the second, to show that all solutions obey z = 2; substituting this into either equation shows that any value of y is possible, with x = −1 − y.
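This solution set can be reproduced symbolically; the following sketch uses SymPy's linsolve (assuming SymPy is available) and expresses the solutions in terms of the free variable y.

```python
# Solve the consistent example system symbolically; linsolve expresses
# the solution set in terms of the remaining free variable.
from sympy import symbols, linsolve

x, y, z = symbols("x y z")
solutions = linsolve([x + y + z - 1, x + y + 2*z - 3], x, y, z)
print(solutions)   # solution set: x = -y - 1, z = 2, with y free
```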
More specifically, according to the Rouché–Capelli theorem, any system of linear equations (underdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution; since in an underdetermined system this rank is necessarily less than the number of unknowns, there is indeed an infinitude of solutions, with the general solution having k free parameters where k is the difference between the number of variables and the rank.
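The rank comparison can be checked numerically; the sketch below applies it to the two example systems using NumPy's matrix_rank (the helper function classify is illustrative, not a standard routine).

```python
import numpy as np

def classify(A, b):
    """Compare the ranks of the coefficient and augmented matrices of A x = b."""
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_Ab > rank_A:
        return "inconsistent"
    return f"consistent with {A.shape[1] - rank_A} free parameter(s)"

# x + y + z = 1, x + y + z = 2: the ranks differ, so there is no solution
print(classify(np.array([[1, 1, 1], [1, 1, 1]]), np.array([1, 2])))
# x + y + z = 1, x + y + 2z = 3: equal ranks, k = 3 - 2 = 1 free parameter
print(classify(np.array([[1, 1, 1], [1, 1, 2]]), np.array([1, 3])))
```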
There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination. See System of linear equations for more details.
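One common way to read off such a description of all solutions is to bring the augmented matrix into reduced row echelon form, which is what Gaussian elimination (with back-substitution) produces; a sketch with SymPy's rref method, applied to the consistent example above:

```python
from sympy import Matrix

# Augmented matrix of x + y + z = 1, x + y + 2z = 3
augmented = Matrix([[1, 1, 1, 1],
                    [1, 1, 2, 3]])
rref_form, pivot_columns = augmented.rref()
print(rref_form)       # rows read as x + y = -1 and z = 2
print(pivot_columns)   # (0, 2): x and z are pivot variables, y is free
```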
A homogeneous underdetermined linear system (one in which all constant terms are zero) always has nontrivial solutions, in addition to the trivial solution in which all the unknowns are zero. There are infinitely many such solutions; the set of all solutions forms a vector space whose dimension is the difference between the number of unknowns and the rank of the matrix of the system.
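For the homogeneous version of the example system, this solution space and its dimension can be computed directly; a sketch with SymPy's nullspace method:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system x + y + z = 0, x + y + 2z = 0
A = Matrix([[1, 1, 1],
            [1, 1, 2]])
basis = A.nullspace()   # basis of the solution space
print(len(basis))       # 1 = 3 unknowns - rank 2
print(basis[0].T)       # a basis vector such as [-1, 1, 0]: x = -y, z = 0
```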
The main property of linear underdetermined systems, of having either no solution or infinitely many, extends to systems of polynomial equations in the following way.
A system of polynomial equations which has fewer equations than unknowns is said to be underdetermined. It has either infinitely many complex solutions (or, more generally, solutions in an algebraically closed field) or is inconsistent. It is inconsistent if and only if 0 = 1 is a linear combination (with polynomial coefficients) of the equations (this is Hilbert's Nullstellensatz). If an underdetermined system of t equations in n variables (t < n) has solutions, then the set of all complex solutions is an algebraic set of dimension at least n − t. If the underdetermined system is chosen at random, the dimension is equal to n − t with probability one.
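A minimal illustration of the dimension count: one polynomial equation (t = 1) in two unknowns (n = 2) typically has a solution set of dimension n − t = 1, as a SymPy computation suggests (the equation x² + y² = 1 is just an illustrative choice):

```python
from sympy import symbols, solve

x, y = symbols("x y")
# Solve a single polynomial equation in two unknowns for x
print(solve(x**2 + y**2 - 1, x))   # [-sqrt(1 - y**2), sqrt(1 - y**2)]
# Over the complex numbers every value of y yields solutions,
# giving a one-parameter (dimension 1) family.
```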
In general, an underdetermined system of linear equations has an infinite number of solutions, if any. However, in optimization problems that are subject to linear equality constraints, only one of the solutions is relevant, namely the one giving the highest or lowest value of an objective function.
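For instance, a common choice of objective is the squared Euclidean norm of the solution vector; NumPy's lstsq returns the minimizer of this objective (the minimum-norm solution) for an underdetermined system, sketched here on the consistent example above:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 3.0])

# Among the infinitely many solutions, lstsq returns the one of smallest norm
x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_min_norm)                      # approximately [-0.5, -0.5, 2.0]
print(np.allclose(A @ x_min_norm, b))  # True: it solves the system exactly
```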
Some problems specify that one or more of the variables are constrained to take on integer values. Such an integer constraint leads to integer programming and Diophantine equation problems, which may have only a finite number of solutions.
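The integer-constrained case can be explored with SymPy's Diophantine solver; the equation below is an illustrative linear Diophantine equation, not one taken from the article:

```python
from sympy import symbols
from sympy.solvers.diophantine import diophantine

x, y = symbols("x y", integer=True)
# Integer solutions of 3x + 4y = 5: a one-parameter family,
# for example x = 4t - 5, y = 5 - 3t for integer t
print(diophantine(3*x + 4*y - 5))
```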
Another kind of constraint, which appears in coding theory, especially in error correcting codes and signal processing (for example compressed sensing), consists in an upper bound on the number of variables which may be different from zero. In error correcting codes, this bound corresponds to the maximal number of errors that may be corrected simultaneously.
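A standard convex surrogate for this sparsity constraint is to minimize the l1-norm of the solution subject to the equations ("basis pursuit"), which can be written as a linear program; the sketch below uses SciPy's linprog, and the random matrix, seed, and problem sizes are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))          # underdetermined: 20 equations, 50 unknowns
x_true = np.zeros(n)
x_true[[2, 7, 15]] = [1.5, -2.0, 0.7]    # a sparse vector with 3 nonzeros
b = A @ x_true

# Split x = u - v with u, v >= 0 and minimize sum(u) + sum(v) = ||x||_1
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
result = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = result.x[:n] - result.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-4))   # typically True when x_true is sparse enough
```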
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents.
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Elementary algebra, also known as college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables.
In mathematics, a polynomial is a mathematical expression consisting of indeterminates and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate x is x² − 4x + 7. An example with three indeterminates is x³ + 2xyz² − yz + 1.
In mathematics, a system of linear equations is a collection of two or more linear equations involving the same variables. For example, the pair of equations x + y = 2 and x − y = 0 is a system of two linear equations in the unknowns x and y.
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729.
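As a small illustration, Cramer's rule can be transcribed directly in code for a square system; the 2 × 2 system below is an arbitrary illustrative example:

```python
import numpy as np

# Square system: 2x + y = 3, x + 3y = 5
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

det_A = np.linalg.det(A)
solution = []
for i in range(A.shape[1]):
    A_i = A.copy()
    A_i[:, i] = b                    # replace column i by the right-hand sides
    solution.append(np.linalg.det(A_i) / det_A)
print(solution)                      # approximately [0.8, 1.4]
```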
In mathematics, an affine algebraic plane curve is the zero set of a polynomial in two variables. A projective algebraic plane curve is the zero set in a projective plane of a homogeneous polynomial in three variables. An affine algebraic plane curve can be completed in a projective algebraic plane curve by homogenizing its defining polynomial. Conversely, a projective algebraic plane curve of homogeneous equation h(x, y, t) = 0 can be restricted to the affine algebraic plane curve of equation h(x, y, 1) = 0. These two operations are each inverse to the other; therefore, the phrase algebraic plane curve is often used without specifying explicitly whether it is the affine or the projective case that is considered.
In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming.
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form a₀(x)y + a₁(x)y′ + ⋯ + aₙ(x)y⁽ⁿ⁾ = b(x), where a₀(x), ..., aₙ(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y⁽ⁿ⁾ are the successive derivatives of an unknown function y of the variable x.
In mathematics, to solve an equation is to find its solutions, which are the values that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values such that, when substituted for the unknowns, the equation becomes an equality. A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set.
In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, called a parametric curve and parametric surface, respectively. In such cases, the equations are collectively called a parametric representation, or parametric system, or parameterization of the object.
In linear algebra, an augmented matrix is a matrix obtained by appending an m × 1 column vector b, on the right, as a further column to an m × n matrix A. This is usually done for the purpose of performing the same elementary row operations on the augmented matrix as are done on the original one when solving a system of linear equations by Gaussian elimination.
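In code this construction is a single column concatenation; a minimal sketch with NumPy, using the consistent example system from above:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 1, 2]])            # coefficient matrix
b = np.array([1, 3])                 # right-hand sides
augmented = np.column_stack([A, b])  # append b as an extra column
print(augmented)                     # [[1 1 1 1]
                                     #  [1 1 2 3]]
```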
In linear algebra, a coefficient matrix is a matrix consisting of the coefficients of the variables in a set of linear equations. The matrix is used in solving systems of linear equations.
In mathematics, particularly in algebra, an indeterminate system is a system of simultaneous equations which has more than one solution. In the case of a linear system, the system may be said to be underspecified, in which case the presence of more than one solution would imply an infinite number of solutions, but that property does not extend to nonlinear systems.
In mathematics, a system of equations is considered overdetermined if there are more equations than unknowns. An overdetermined system is almost always inconsistent when constructed with random coefficients. However, an overdetermined system will have solutions in some cases, for example if some equation occurs several times in the system, or if some equations are linear combinations of the others.
A system of polynomial equations is a set of simultaneous equations f₁ = 0, ..., fₕ = 0 where the fᵢ are polynomials in several variables, say x₁, ..., xₙ, over some field k.
The Rouché–Capelli theorem is a theorem in linear algebra that determines the number of solutions of a system of linear equations, given the ranks of its augmented matrix and coefficient matrix. The theorem is variously known as the Kronecker–Capelli theorem, the Rouché–Fontené theorem, or the Rouché–Frobenius theorem, depending on the country.
In mathematics and particularly in algebra, a system of equations is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, when substituted into each of the equations, they make each equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations.