Zassenhaus algorithm

In mathematics, the Zassenhaus algorithm[1] is a method to calculate a basis for the intersection and sum of two subspaces of a vector space. It is named after Hans Zassenhaus, but no publication of this algorithm by him is known.[2] It is used in computer algebra systems.[3]

Algorithm

Input

Let $V$ be a vector space and $U$, $W$ two finite-dimensional subspaces of $V$ with the following spanning sets:

$$U = \langle u_1, \ldots, u_n \rangle$$

and

$$W = \langle w_1, \ldots, w_k \rangle.$$

Finally, let $B_1, \ldots, B_m$ be linearly independent vectors so that $u_i$ and $w_i$ can be written as

$$u_i = \sum_{j=1}^{m} a_{i,j} B_j$$

and

$$w_i = \sum_{j=1}^{m} c_{i,j} B_j.$$

Output

The algorithm computes a basis of the sum $U + W$ and a basis of the intersection $U \cap W$.

Algorithm

The algorithm creates the following block matrix of size $(n+k) \times 2m$:

$$\begin{pmatrix}
a_{1,1} & \cdots & a_{1,m} & a_{1,1} & \cdots & a_{1,m} \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{n,1} & \cdots & a_{n,m} & a_{n,1} & \cdots & a_{n,m} \\
c_{1,1} & \cdots & c_{1,m} & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
c_{k,1} & \cdots & c_{k,m} & 0 & \cdots & 0
\end{pmatrix}$$

Using elementary row operations, this matrix is transformed to row echelon form. Then it has the following shape:

$$\begin{pmatrix}
b_{1,1} & \cdots & b_{1,m} & \bullet & \cdots & \bullet \\
\vdots & & \vdots & \vdots & & \vdots \\
b_{q,1} & \cdots & b_{q,m} & \bullet & \cdots & \bullet \\
0 & \cdots & 0 & d_{1,1} & \cdots & d_{1,m} \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & d_{\ell,1} & \cdots & d_{\ell,m} \\
0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0
\end{pmatrix}$$

Here, $\bullet$ stands for arbitrary numbers, and the vectors $(b_{p,1}, \ldots, b_{p,m})$ for every $p \in \{1, \ldots, q\}$ and $(d_{p,1}, \ldots, d_{p,m})$ for every $p \in \{1, \ldots, \ell\}$ are nonzero.

Then $(y_1, \ldots, y_q)$ with

$$y_i := \sum_{j=1}^{m} b_{i,j} B_j$$

is a basis of $U + W$, and $(z_1, \ldots, z_\ell)$ with

$$z_i := \sum_{j=1}^{m} d_{i,j} B_j$$

is a basis of $U \cap W$.
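
As a concrete illustration, here is a minimal Python sketch of the procedure: build the block matrix, row-reduce it, and read the two halves off the nonzero rows. The function names (zassenhaus, rref) are ours, not taken from any particular library; generators are assumed to be given as coordinate tuples with respect to $B_1, \ldots, B_m$, and exact Fraction arithmetic is used to avoid rounding issues.

    from fractions import Fraction

    def rref(rows):
        """Bring a list of equal-length rows of Fractions into reduced row echelon form, in place."""
        pivot_row = 0
        n_cols = len(rows[0]) if rows else 0
        for col in range(n_cols):
            # Find a row at or below pivot_row with a nonzero entry in this column.
            pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col] != 0), None)
            if pivot is None:
                continue
            rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
            scale = rows[pivot_row][col]
            rows[pivot_row] = [x / scale for x in rows[pivot_row]]
            # Eliminate the pivot column from every other row.
            for r in range(len(rows)):
                if r != pivot_row and rows[r][col] != 0:
                    factor = rows[r][col]
                    rows[r] = [x - factor * y for x, y in zip(rows[r], rows[pivot_row])]
            pivot_row += 1

    def zassenhaus(us, ws):
        """Given generators us of U and ws of W (coordinate tuples of length m),
        return (basis of U + W, basis of U ∩ W) via the Zassenhaus block matrix."""
        m = len(us[0])
        # Rows (u, u) for the generators of U, rows (w, 0) for the generators of W.
        block = [[Fraction(x) for x in u] + [Fraction(x) for x in u] for u in us]
        block += [[Fraction(x) for x in w] + [Fraction(0)] * m for w in ws]
        rref(block)
        sum_basis, int_basis = [], []
        for row in block:
            left, right = row[:m], row[m:]
            if any(left):
                sum_basis.append(left)    # row of shape (b, •): left half is a basis row of U + W
            elif any(right):
                int_basis.append(right)   # row of shape (0, d): right half is a basis row of U ∩ W
        return sum_basis, int_basis

Using reduced row echelon form here, rather than plain row echelon form, changes nothing in the result; it only normalizes the basis vectors that are read off.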

Proof of correctness

First, we define $\pi_1 \colon V \times V \to V, \ (a, b) \mapsto a$ to be the projection to the first component.

Let $H := \{(u, u) \mid u \in U\} + \{(w, 0) \mid w \in W\} \subseteq V \times V$. Then $\pi_1(H) = U + W$ and $H \cap (\{0\} \times V) = \{0\} \times (U \cap W)$.

Also, $H \cap (\{0\} \times V)$ is the kernel of $\pi_1|_H$, the projection restricted to $H$. Therefore, $\dim H = \dim(U + W) + \dim(U \cap W)$.
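
Spelled out, this equality is the rank–nullity theorem applied to the restricted projection $\pi_1|_H \colon H \to V$:

$$\dim H = \dim(\operatorname{im} \pi_1|_H) + \dim(\ker \pi_1|_H) = \dim(U + W) + \dim(U \cap W).$$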

The Zassenhaus algorithm calculates a basis of $H$: the rows of the block matrix span $H$, and elementary row operations do not change the row space. In the first $m$ columns of the row echelon form, the rows $(b_{p,1}, \ldots, b_{p,m})$ give a basis of $\pi_1(H) = U + W$.

The rows of the form $(0, d_p)$ with $d_p := (d_{p,1}, \ldots, d_{p,m}) \neq 0$ are obviously in $H \cap (\{0\} \times V)$. Because the matrix is in row echelon form, they are also linearly independent. All nonzero rows, $(b_p, \bullet)$ and $(0, d_p)$, form a basis of $H$, and since the $q$ rows of the first kind already account for $\dim(U + W)$, there are exactly $\ell = \dim H - \dim(U + W) = \dim(U \cap W)$ rows of the form $(0, d_p)$. Therefore, the corresponding vectors $z_p = \sum_{j=1}^{m} d_{p,j} B_j$ form a basis of $U \cap W$.

Example

Consider the two subspaces $U = \langle (1, -1, 0, 1), (0, 0, 1, -1) \rangle$ and $W = \langle (5, 0, -3, 3), (0, 5, -3, -2) \rangle$ of the vector space $\mathbb{R}^4$.

Using the standard basis, we create the following matrix of dimension $(2 + 2) \times (2 \cdot 4)$:

$$\begin{pmatrix}
1 & -1 & 0 & 1 & 1 & -1 & 0 & 1 \\
0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 \\
5 & 0 & -3 & 3 & 0 & 0 & 0 & 0 \\
0 & 5 & -3 & -2 & 0 & 0 & 0 & 0
\end{pmatrix}$$

Using elementary row operations, we transform this matrix into the following matrix:

$$\begin{pmatrix}
1 & 0 & 0 & 0 & \bullet & \bullet & \bullet & \bullet \\
0 & 1 & 0 & -1 & \bullet & \bullet & \bullet & \bullet \\
0 & 0 & 1 & -1 & \bullet & \bullet & \bullet & \bullet \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 1
\end{pmatrix}$$

(Some entries have been replaced by "$\bullet$" because they are irrelevant to the result.)

Therefore, $((1, 0, 0, 0), (0, 1, 0, -1), (0, 0, 1, -1))$ is a basis of $U + W$, and $((1, -1, 0, 1))$ is a basis of $U \cap W$.
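
Assuming the zassenhaus sketch from the Algorithm section, this example can be reproduced mechanically (the entries are converted to integers only for readable output):

    >>> sum_basis, int_basis = zassenhaus(
    ...     [(1, -1, 0, 1), (0, 0, 1, -1)],
    ...     [(5, 0, -3, 3), (0, 5, -3, -2)])
    >>> [[int(x) for x in v] for v in sum_basis]
    [[1, 0, 0, 0], [0, 1, 0, -1], [0, 0, 1, -1]]
    >>> [[int(x) for x in v] for v in int_basis]
    [[1, -1, 0, 1]]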


References

  1. Luks, Eugene M.; Rákóczi, Ferenc; Wright, Charles R. B. (April 1997), "Some algorithms for nilpotent permutation groups", Journal of Symbolic Computation, 23 (4): 335–354, doi:10.1006/jsco.1996.0092.
  2. Fischer, Gerd (2012), Lernbuch Lineare Algebra und Analytische Geometrie [Textbook of Linear Algebra and Analytic Geometry] (in German), Vieweg+Teubner, pp. 207–210, doi:10.1007/978-3-8348-2379-3, ISBN 978-3-8348-2378-6.
  3. The GAP Group (February 13, 2015), "24 Matrices", GAP Reference Manual, Release 4.7, retrieved 2015-06-11.