Berezinian

In mathematics and theoretical physics, the Berezinian or superdeterminant is a generalization of the determinant to the case of supermatrices. It is named after Felix Berezin. The Berezinian plays a role analogous to that of the determinant when considering coordinate changes for integration on a supermanifold.

Definition

The Berezinian is uniquely determined by two defining properties:

\[
\operatorname{Ber}(XY) = \operatorname{Ber}(X)\operatorname{Ber}(Y)
\]

\[
\operatorname{Ber}(e^{X}) = e^{\operatorname{str}(X)},
\]

where str(X) denotes the supertrace of X. Unlike the classical determinant, the Berezinian is defined only for invertible supermatrices.
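
For illustration, consider a block-diagonal even supermatrix written as an exponential, with a and d ordinary square matrices acting on the even and odd parts respectively. The second defining property gives

\[
\operatorname{Ber}\!\left(\exp\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}\right)
= e^{\operatorname{str}\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}}
= e^{\operatorname{tr}(a) - \operatorname{tr}(d)}
= \det\!\left(e^{a}\right)\det\!\left(e^{d}\right)^{-1},
\]

which already exhibits the inverse of the determinant of the block acting on the odd part, as in the explicit formulas below.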

The simplest case to consider is the Berezinian of a supermatrix with entries in a field K. Such supermatrices represent linear transformations of a super vector space over K. A particular even supermatrix is a block matrix of the form

\[
X = \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix}.
\]

Such a matrix is invertible if and only if both A and D are invertible matrices over K. The Berezinian of X is given by

\[
\operatorname{Ber}(X) = \det(A)\det(D)^{-1}.
\]
For a motivation of the negative exponent see the substitution formula in the odd case.
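
For example, over K = R with a (2|1)-dimensional super vector space,

\[
\operatorname{Ber}\begin{pmatrix} 2 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}
= \det\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}\det(5)^{-1}
= \frac{6}{5}.
\]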

More generally, consider matrices with entries in a supercommutative algebra R. An even supermatrix is then of the form

\[
X = \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\]

where A and D have even entries and B and C have odd entries. Such a matrix is invertible if and only if both A and D are invertible in the commutative ring R0 (the even subalgebra of R). In this case the Berezinian is given by

\[
\operatorname{Ber}(X) = \det\!\left(A - BD^{-1}C\right)\det(D)^{-1}
\]

or, equivalently, by

\[
\operatorname{Ber}(X) = \det(A)\det\!\left(D - CA^{-1}B\right)^{-1}.
\]
These formulas are well-defined since we are only taking determinants of matrices whose entries are in the commutative ring R0. The matrix

\[
D - CA^{-1}B
\]

is known as the Schur complement of A relative to

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}.
\]
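
As a minimal worked example, take a (1|1) supermatrix with a and d invertible even elements and β and γ odd elements of R. Both formulas give the same result, using βγ = −γβ and (γβ)² = 0:

\[
\det\!\left(a - \beta d^{-1}\gamma\right)\det(d)^{-1}
= \left(a - \beta d^{-1}\gamma\right)d^{-1}
= \frac{a}{d} - \frac{\beta\gamma}{d^{2}},
\]

\[
\det(a)\det\!\left(d - \gamma a^{-1}\beta\right)^{-1}
= a\left(d - \gamma a^{-1}\beta\right)^{-1}
= \frac{a}{d}\left(1 - \frac{\gamma\beta}{ad}\right)^{-1}
= \frac{a}{d} + \frac{\gamma\beta}{d^{2}}
= \frac{a}{d} - \frac{\beta\gamma}{d^{2}}.
\]

For ordinary commuting entries the two expressions would not agree, so the supercommutativity of R is essential here.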

An odd matrix X can only be invertible if the number of even dimensions equals the number of odd dimensions. In this case, invertibility of X is equivalent to the invertibility of JX, where

\[
J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}.
\]

Then the Berezinian of X is defined as

\[
\operatorname{Ber}(X) = \operatorname{Ber}(JX).
\]
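
Concretely, writing the odd supermatrix in block form with A and D having odd entries and B and C even entries, one has

\[
X = \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\qquad
JX = \begin{pmatrix} C & D \\ -A & -B \end{pmatrix},
\]

so the diagonal blocks of JX have even entries and its off-diagonal blocks have odd entries, and the even-case formula yields

\[
\operatorname{Ber}(X) = \operatorname{Ber}(JX)
= \det\!\left(C - DB^{-1}A\right)\det(-B)^{-1}.
\]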

Properties

The Berezinian takes values in the group of units of the even subalgebra R0. It is multiplicative and compatible with inversion and with the supertranspose: for invertible even supermatrices X and Y,

\[
\operatorname{Ber}(XY) = \operatorname{Ber}(X)\operatorname{Ber}(Y),
\qquad
\operatorname{Ber}(X^{-1}) = \operatorname{Ber}(X)^{-1},
\qquad
\operatorname{Ber}(X^{st}) = \operatorname{Ber}(X),
\]

where X^st denotes the supertranspose of X.

Berezinian module

The determinant of an endomorphism of a free module M can be defined as the induced action on the 1-dimensional highest exterior power of M. In the supersymmetric case there is no highest exterior power, but there is still a similar definition of the Berezinian, as follows.

Suppose that M is a free module of dimension (p,q) over R. Let A be the (super)symmetric algebra S*(M*) of the dual M* of M. Then an automorphism of M acts on the Ext module

\[
\operatorname{Ext}^{p}_{A}(R, A)
\]

(which has dimension (1,0) if q is even and dimension (0,1) if q is odd) as multiplication by the Berezinian.
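
In the simplest odd case this can be checked directly (a sketch that verifies only the dimension count, not the action): take M free of dimension (0,1), so that A = S*(M*) = R[θ] with θ odd and θ² = 0, and R = A/(θ) as an A-module. Then

\[
\operatorname{Ext}^{0}_{A}(R, A)
= \operatorname{Hom}_{A}\!\left(A/(\theta),\, A\right)
= \{\, a \in A : \theta a = 0 \,\}
= R\theta,
\]

which is free of dimension (0,1), in agreement with the statement above for odd q. Similarly, for M of dimension (1,0) one has A = R[x] with x even, and Ext^1_A(R, A) is isomorphic to R, of dimension (1,0).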

See also

References