Linear equation over a ring

In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring", or, typically, "Noetherian integral domain".

In the case of a single equation, the problem splits in two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation

    a_1 x_1 + \cdots + a_k x_k = b

with a1, ..., ak and b in a given ring R, of deciding whether it has a solution with x1, ..., xk in R, and, if so, of providing one. This amounts to deciding whether b belongs to the ideal generated by the ai. The simplest instance of this problem is, for k = 1 and b = 1, to decide whether a1 is a unit in R.
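For concreteness, here is a minimal sketch of this membership test over the integers, where the ideal (a1, ..., ak) is generated by the greatest common divisor of the ai; the function names are illustrative, not standard.

    def egcd(a, b):
        """Extended Euclid: return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
        if b == 0:
            return (abs(a), 1 if a >= 0 else -1, 0)
        g, s, t = egcd(b, a % b)
        return (g, t, s - (a // b) * t)

    def ideal_membership(coeffs, b):
        """Decide whether b lies in the ideal (a_1, ..., a_k) of Z; if it does,
        return (x_1, ..., x_k) with a_1*x_1 + ... + a_k*x_k == b, else None."""
        # Over Z the ideal (a_1, ..., a_k) equals the ideal generated by
        # g = gcd(a_1, ..., a_k), so b is a member iff g divides b.
        g, cs = coeffs[0], [1] + [0] * (len(coeffs) - 1)
        for i in range(1, len(coeffs)):
            g, s, t = egcd(g, coeffs[i])   # g_new = s*g_old + t*a_i
            cs = [s * c for c in cs]
            cs[i] = t                      # invariant: sum of c_j*a_j == g
        if g == 0:                         # all a_i are zero
            return [0] * len(coeffs) if b == 0 else None
        if b % g != 0:
            return None                    # b is not in the ideal
        return [(b // g) * c for c in cs]

    print(ideal_membership([6, 10], 8))    # [8, -4]: 6*8 + 10*(-4) == 8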

The syzygy problem consists, given k elements a1, ..., ak in R, of providing a system of generators of the module of the syzygies of (a1, ..., ak), that is, a system of generators of the submodule of those elements (x1, ..., xk) in Rk that are solutions of the homogeneous equation

    a_1 x_1 + \cdots + a_k x_k = 0.

The simplest case, when k = 1, amounts to finding a system of generators of the annihilator of a1.
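Over the integers, again as an illustrative sketch (reusing the egcd helper above), a generating set of the syzygy module can be computed by reducing the row (a1 ... ak) to (g 0 ... 0) with 2×2 unimodular column operations, of the same kind as those used later in this article for principal ideal domains:

    def syzygy_basis(coeffs):
        """Return generators of the syzygies of (a_1, ..., a_k) over Z, i.e. a
        basis of the solutions of a_1*x_1 + ... + a_k*x_k = 0."""
        k = len(coeffs)
        a = list(coeffs)
        # Columns of U record the unimodular column operations that reduce
        # the row vector a to (gcd, 0, ..., 0); invariant: coeffs . U == a.
        U = [[int(i == j) for j in range(k)] for i in range(k)]
        for j in range(1, k):
            g, s, t = egcd(a[0], a[j])
            if g == 0:
                continue                  # a[0] == a[j] == 0: e_j already works
            u, v = a[j] // g, a[0] // g
            a[0], a[j] = g, 0
            for row in U:                 # 2x2 unimodular op on columns 0 and j
                row[0], row[j] = s * row[0] + t * row[j], -u * row[0] + v * row[j]
        start = 0 if a[0] == 0 else 1     # if all a_i are 0, every e_j is a syzygy
        return [[U[i][j] for i in range(k)] for j in range(start, k)]

    print(syzygy_basis([6, 10]))          # [[-5, 3]]: 6*(-5) + 10*3 == 0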

Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems.
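For example, over the integers, the equation 6x1 + 10x2 = 8 has the particular solution (8, −4), obtained by multiplying Bézout's identity 6·2 + 10·(−1) = 2 by 4, and its module of syzygies is generated by (−5, 3); all solutions are therefore (8 − 5t, −4 + 3t) with t an integer.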

In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem.

A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective.

The article considers the main rings for which linear algebra is effective.

Generalities

To be able to solve the syzygy problem, it is necessary that the module of syzygies be finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result.[1]

Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygy problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations.

This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly.

A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination.
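For instance, over the field of rational numbers, both computations take a few lines with the SymPy library (a sketch; gauss_jordan_solve and nullspace are standard SymPy Matrix methods, and the matrix is an arbitrary example):

    from sympy import Matrix

    # An example system over the field Q: two equations, three unknowns.
    A = Matrix([[1, 2, 3],
                [2, 4, 7]])
    b = Matrix([4, 9])

    # Membership problem: Gauss-Jordan elimination either returns a solution
    # (with free parameters for non-pivot unknowns) or raises if none exists.
    sol, params = A.gauss_jordan_solve(b)
    x0 = sol.subs({p: 0 for p in params})   # a particular solution: (1, 0, 1)

    # Syzygy problem: a basis of the null space of A.
    print(A.nullspace())                    # [Matrix([-2, 1, 0])]
    print(x0)                               # Matrix([[1], [0], [1]])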

Properties of effective rings

Let R be an effective commutative ring.

Over the integers or a principal ideal domain

There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details.

More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, and

  - an algorithm for testing whether an element a divides an element b, and, if it does, for computing the quotient b/a;
  - an algorithm for Bézout's identity, that is, given a and b, for computing s and t such that as + bt is a greatest common divisor of a and b.

It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. This means that the determinant is invertible, and implies that the unimodular matrices are exactly the invertible matrices such that all entries of the inverse matrix belong to the domain.

The above two algorithms imply that, given a and b in the principal ideal domain, there is an algorithm computing a unimodular matrix

    \begin{pmatrix} s & t \\ -u & v \end{pmatrix}

such that

    \begin{pmatrix} s & t \\ -u & v \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \gcd(a,b) \\ 0 \end{pmatrix}.

(This algorithm is obtained by taking for s and t the coefficients of Bézout's identity, and for u and v the quotients of b and a by as + bt; this choice implies that the determinant of the square matrix is 1.)
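Over the integers, for concreteness, this construction is a one-liner on top of the extended Euclidean algorithm (a sketch reusing the egcd helper from above; the function name is illustrative):

    def bezout_unimodular(a, b):
        """Return the 2x2 unimodular matrix described above, mapping the
        column (a, b) to (gcd(a, b), 0). Assumes a, b are not both zero."""
        g, s, t = egcd(a, b)        # Bezout: s*a + t*b == g
        u, v = b // g, a // g       # quotients of b and a by g
        return [[s, t], [-u, v]]    # det = s*v + t*u == (s*a + t*b) / g == 1

    M = bezout_unimodular(6, 10)    # [[2, -1], [-5, 3]]
    # M applied to the column (6, 10) gives (2, 0), and det(M) == 1.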

Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system for getting an algorithm for solving every linear system.
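As an illustration over the integers, SymPy ships a Smith normal form routine (this sketch assumes sympy.matrices.normalforms.smith_normal_form, available in recent SymPy versions; the matrix is an arbitrary example):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    A = Matrix([[6, 10],
                [4, 8]])
    # D = U*A*V for unimodular U, V; the diagonal entries of D (the
    # invariant factors) turn any linear system over Z into independent
    # one-variable equations d_i * y_i = c_i.
    D = smith_normal_form(A, domain=ZZ)
    print(D)  # Matrix([[2, 0], [0, 4]])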

The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field. In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix; see Polynomial greatest common divisor § Bézout's identity and extended GCD algorithm for details.
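For example, SymPy's gcdex implements this extended GCD for univariate polynomials over the rationals (a small sketch; the polynomials are an arbitrary example):

    from sympy import symbols, gcdex, simplify

    x = symbols('x')
    f, g = x**4 - 1, x**3 - 1

    # gcdex returns (s, t, h) with s*f + t*g == h, where h = gcd(f, g) = x - 1.
    s, t, h = gcdex(f, g)
    assert simplify(s*f + t*g - h) == 0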

Over polynomial rings over a field

Linear algebra is effective on a polynomial ring over a field k. This was first proved in 1926 by Grete Hermann. [2] The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow practical computer computation.

Both the proofs that linear algebra is effective on polynomial rings and the computer implementations are presently all based on Gröbner basis theory.
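For instance, with SymPy, ideal membership in a multivariate polynomial ring reduces to a normal-form computation with respect to a Gröbner basis (a sketch; the ideal is an arbitrary example, and groebner and GroebnerBasis.contains are standard SymPy functions):

    from sympy import groebner, symbols

    x, y = symbols('x y')
    f1 = x**2 + y**2 - 1
    f2 = x - y
    G = groebner([f1, f2], x, y, order='lex')

    # p = f1 - (x + y)*f2 lies in the ideal (f1, f2); membership is decided
    # by reducing p to normal form modulo the Groebner basis.
    p = 2*y**2 - 1
    print(G.contains(p))    # True
    print(G.contains(x*y))  # False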

References

  1. Richman, Fred (1974). "Constructive aspects of Noetherian rings". Proc. Amer. Math. Soc. 44 (2): 436–441. doi:10.1090/s0002-9939-1974-0416874-9.
  2. Hermann, Grete (1926). "Die Frage der endlich vielen Schritte in der Theorie der Polynomideale". Mathematische Annalen. 95: 736–788. doi:10.1007/BF01206635. S2CID 115897210. English translation in Communications in Computer Algebra 32/3 (1998): 8–30.