Bareiss algorithm

In mathematics, the Bareiss algorithm, named after Erwin Bareiss, is an algorithm to calculate the determinant or the echelon form of a matrix with integer entries using only integer arithmetic; any divisions that are performed are guaranteed to be exact (there is no remainder). The method can also be used to compute the determinant of matrices with (approximated) real entries, avoiding the introduction of any round-off errors beyond those already present in the input.

History

The algorithm was originally announced by Jack Edmonds in 1966 and published in 1967. The general Bareiss algorithm is distinct from the Bareiss algorithm for Toeplitz matrices.

In some Spanish-speaking countries, this algorithm is also known as Bareiss-Montante, because of René Mario Montante Pardo, a professor of the Universidad Autónoma de Nuevo León, Mexico, who popularized the method among his students.

Overview

The definition of the determinant involves only multiplication, addition, and subtraction, so the determinant of a matrix with integer entries is itself an integer. However, actually computing the determinant from the definition (the Leibniz formula) is impractical, as it requires O(n!) operations.
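
To make the O(n!) cost concrete, here is a minimal Python sketch of the Leibniz formula (the function name det_leibniz and the inversion-counting sign computation are our own choices, not from the source):

    from itertools import permutations

    def det_leibniz(M):
        """Determinant via the Leibniz formula: a sum over all n! permutations."""
        n = len(M)
        total = 0
        for perm in permutations(range(n)):
            # Sign of the permutation: -1 raised to the number of inversions.
            inversions = sum(perm[i] > perm[j]
                             for i in range(n) for j in range(i + 1, n))
            sign = -1 if inversions % 2 else 1
            prod = 1
            for row, col in enumerate(perm):
                prod *= M[row][col]
            total += sign * prod
        return total  # n! terms, each a product of n entries

Already at n = 12 this is about 479 million terms, which is why elimination-based methods are used instead.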

Gaussian elimination has O(n^3) complexity, but it introduces division, which results in round-off errors when implemented with floating-point numbers.

Round-off errors can be avoided if all the numbers are kept as exact integer fractions instead of floating point. But then the size of each entry can grow exponentially with the number of rows. [1]
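
As a rough illustration (our own experiment, not from the source), exact rational Gaussian elimination in Python shows this growth directly:

    from fractions import Fraction
    import random

    random.seed(0)
    n = 10
    M = [[Fraction(random.randint(-9, 9)) for _ in range(n)] for _ in range(n)]

    A = [row[:] for row in M]
    for k in range(n - 1):
        # Assume some nonzero pivot exists in column k (true for random input).
        p = next(i for i in range(k, n) if A[i][k] != 0)
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]

    bits = max(max(abs(x.numerator), x.denominator).bit_length()
               for row in A for x in row)
    print(f"input entries fit in 4 bits; after elimination: up to {bits} bits")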

Bareiss raises the question of performing an integer-preserving elimination while keeping the magnitudes of the intermediate coefficients reasonably small. Two algorithms are suggested: [2] [3]

  1. Division-free algorithm — performs matrix reduction to triangular form without any division operation (see the sketch after this list).
  2. Fraction-free algorithm — uses division to keep the intermediate entries smaller, but thanks to Sylvester's identity the transformation is still integer-preserving (the division has zero remainder).
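
A short Python sketch (our own) of the division-free update makes the size problem of item 1 visible: the entries' bit lengths roughly double at every major step, because each update multiplies two current entries together:

    import random

    random.seed(0)
    n = 8
    A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]

    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Same cross-multiplication as Bareiss, but without the
                # exact division by the previous pivot.
                A[i][j] = A[i][j] * A[k][k] - A[i][k] * A[k][j]
        bits = max(abs(A[i][j]).bit_length()
                   for i in range(k + 1, n) for j in range(k + 1, n))
        print(f"after step {k + 1}: entries up to {bits} bits")

The fraction-free algorithm of item 2 adds exactly one exact division per update to cancel this doubling, which is the heart of the Bareiss algorithm below.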

For completeness, Bareiss also suggests fraction-producing, multiplication-free elimination methods. [2]

The algorithm

The program structure of this algorithm is a simple triple loop, as in standard Gaussian elimination. However, in this case the matrix is modified so that each M_{k,k} entry contains the leading principal minor [M]_{k,k}. The correctness of the algorithm is easily shown by induction on k. [4]

  • Input: M — an n-square matrix,
    assuming its leading principal minors [M]_{k,k} are all non-zero.
  • Let M_{0,0} = 1 (Note: M_{0,0} is a special variable, not an entry of the matrix).
  • For k from 1 to n−1:
    • For i from k+1 to n:
      • For j from k+1 to n:
        • Set M_{i,j} := (M_{i,j} · M_{k,k} − M_{i,k} · M_{k,j}) / M_{k−1,k−1} (the division is exact).
  • Output: The matrix is modified in place;
    each M_{k,k} entry contains the leading principal minor [M]_{k,k},
    and entry M_{n,n} contains the determinant of the original M.

If the assumption about principal minors turns out to be false, e.g. if M_{k−1,k−1} = 0 and some M_{i,k−1} ≠ 0 (i = k, ..., n), then we can exchange the (k−1)-th row with the i-th row and change the sign of the final answer.
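
Putting the pseudocode and this row-exchange rule together, here is a runnable Python sketch (the name bareiss_determinant is our own; it assumes a nonempty square matrix of Python integers):

    def bareiss_determinant(M):
        """Determinant of an integer matrix by fraction-free elimination."""
        n = len(M)
        A = [row[:] for row in M]  # work on a copy
        sign = 1
        prev = 1  # plays the role of the special variable M_{0,0} = 1
        for k in range(n - 1):
            # Row exchange when the pivot vanishes, flipping the sign.
            if A[k][k] == 0:
                for i in range(k + 1, n):
                    if A[i][k] != 0:
                        A[k], A[i] = A[i], A[k]
                        sign = -sign
                        break
                else:
                    return 0  # the whole column is zero: singular matrix
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    # Exact by Sylvester's identity: // leaves no remainder.
                    A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
                A[i][k] = 0
            prev = A[k][k]  # becomes the exact divisor for the next step
        return sign * A[n - 1][n - 1]

For example, bareiss_determinant([[3, 1, -2], [1, 0, 4], [-5, 2, 1]]) returns -49, and every value computed along the way is an integer.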

Analysis

During execution of the Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix. This allows the Hadamard inequality to be used to bound the size of these integers. In other respects, the Bareiss algorithm may be viewed as a variant of Gaussian elimination, and it needs roughly the same number of arithmetic operations.

It follows that, for an n × n matrix with entries of absolute value at most 2^L, the Bareiss algorithm runs in O(n^3) elementary operations with an O(n^(n/2) 2^(nL)) bound on the absolute value of the intermediate values. Its computational complexity is thus O(n^5 L^2 (log(n)^2 + L^2)) when using elementary arithmetic, or O(n^4 L (log(n) + L) log(log(n) + L)) when using fast multiplication.
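
As a concrete check of these bounds (our own experiment; the variable names are arbitrary), one can instrument the elimination loop and compare the largest intermediate value against the Hadamard-style estimate n^(n/2) 2^(nL):

    import math
    import random

    random.seed(0)
    n, L = 8, 16
    M = [[random.randint(-2**L, 2**L) for _ in range(n)] for _ in range(n)]

    A = [row[:] for row in M]
    prev, max_bits = 1, 0
    for k in range(n - 1):
        assert A[k][k] != 0  # random matrices essentially never need pivoting
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
                max_bits = max(max_bits, abs(A[i][j]).bit_length())
        prev = A[k][k]

    bound_bits = n / 2 * math.log2(n) + n * L
    print(f"largest intermediate: {max_bits} bits; bound: about {bound_bits:.0f} bits")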


References

  1. Middeke, J.; Jeffrey, D. J.; Koutschan, C. (2020), "Common Factors in Fraction-Free Matrix Decompositions", Mathematics in Computer Science, 15 (4): 589–608, arXiv:2005.12380, doi:10.1007/s11786-020-00495-9
  2. Bareiss, Erwin H. (1968), "Sylvester's Identity and multistep integer-preserving Gaussian elimination", Mathematics of Computation, 22 (103): 565–578, doi:10.2307/2004533, JSTOR 2004533
  3. Bareiss, Erwin H. (1966), Multistep Integer-Preserving Gaussian Elimination. (Contains a clearer picture of the operations sequence.)
  4. Yap, Chee Keng (2000), Fundamental Problems of Algorithmic Algebra, Oxford University Press