Supertrace


In the theory of superalgebras, if A is a commutative superalgebra, V is a free right A-supermodule, and T is an endomorphism of V, then the supertrace of T, written str(T), is defined by the following trace diagram:


[Trace diagram for str(T) omitted.]

More concretely, if we write out T in block matrix form with respect to the decomposition into even and odd subspaces,

$$T = \begin{pmatrix} T_{00} & T_{01} \\ T_{10} & T_{11} \end{pmatrix},$$

then the supertrace is

$$\operatorname{str}(T) = \operatorname{tr}(T_{00}) - (-1)^{|T|}\operatorname{tr}(T_{11}),$$

where |T| denotes the grading of T; for even T this is the ordinary trace of T00 minus the ordinary trace of T11.
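In the simplest case, where the entries of T are ordinary numbers (the purely even situation A = R) and T is even, the formula str(T) = tr(T00) − tr(T11) can be computed directly. The following NumPy sketch is only illustrative; the function name `supertrace` and the example matrix are arbitrary choices, not standard notation.

```python
import numpy as np

def supertrace(T, p):
    """Supertrace of a (p+q) x (p+q) block matrix with p even and q odd
    basis directions: str(T) = tr(T00) - tr(T11)."""
    T00 = T[:p, :p]  # even-even block
    T11 = T[p:, p:]  # odd-odd block
    return np.trace(T00) - np.trace(T11)

# p = 2 even directions, q = 1 odd direction.
T = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])
print(supertrace(T, p=2))  # (1 + 4) - 5 = 0.0
```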

Let us show that the supertrace does not depend on a basis. Suppose e1, ..., ep are the even basis vectors and ep+1, ..., ep+q are the odd basis vectors. Then, the components of T, which are elements of A, are defined by

$$T(e_j) = e_i T^i_j.$$

The grading of $T^i_j$ is the sum of the gradings of T, ei, and ej, mod 2.

A change of basis to e1', ..., ep', ep+1', ..., ep+q' is given by the (even, invertible) supermatrix A,

$$e_i' = e_j A^j_i,$$

and the inverse supermatrix

$$e_i = e_j' (A^{-1})^j_i,$$

where of course A A⁻¹ = A⁻¹ A = 1 (the identity).

We can now check explicitly that the supertrace is basis independent; write |i| for the grading of ei. In the case where T is even, we have

$$\operatorname{str}(A^{-1}TA) = \sum_{i,j,k} (-1)^{|i|}\,(A^{-1})^i_j\, T^j_k\, A^k_i = \sum_{i,j,k} (-1)^{|k|}\, A^k_i\, (A^{-1})^i_j\, T^j_k = \sum_k (-1)^{|k|}\, T^k_k = \operatorname{str}(T),$$

where the second equality uses the supercommutativity of A: the element $A^k_i$ has grading |k| + |i|, while $(A^{-1})^i_j T^j_k$ has grading |i| + |k|, so moving $A^k_i$ to the front produces the sign $(-1)^{|i|+|k|}$, turning $(-1)^{|i|}$ into $(-1)^{|k|}$. In the case where T is odd, the factor being moved has grading |i| + |k| + 1, so no sign appears at all, and we have

$$\operatorname{str}(A^{-1}TA) = \sum_{i,j,k} (A^{-1})^i_j\, T^j_k\, A^k_i = \sum_{i,j,k} A^k_i\, (A^{-1})^i_j\, T^j_k = \sum_k T^k_k = \operatorname{str}(T).$$
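Under the simplifying assumption of real entries (so A = R has no odd elements, and an even invertible change of basis is necessarily block diagonal), basis independence can be sanity-checked numerically. The matrices below are arbitrary illustrative choices.

```python
import numpy as np

def supertrace(T, p):
    """str(T) = tr(T00) - tr(T11) for a block matrix with p even directions."""
    return np.trace(T[:p, :p]) - np.trace(T[p:, p:])

p, q = 2, 2
# Over the reals, an even and invertible change of basis is block diagonal.
A = np.zeros((p + q, p + q))
A[:p, :p] = [[2.0, 1.0], [0.0, 3.0]]
A[p:, p:] = [[1.0, 1.0], [0.0, 2.0]]

rng = np.random.default_rng(0)
T = rng.standard_normal((p + q, p + q))  # an arbitrary endomorphism

lhs = supertrace(np.linalg.inv(A) @ T @ A, p)
print(np.isclose(lhs, supertrace(T, p)))  # True
```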

The ordinary trace is not basis independent, so the appropriate trace to use in the Z2-graded setting is the supertrace.

The supertrace satisfies the property

$$\operatorname{str}(T_1 T_2) = (-1)^{|T_1||T_2|}\operatorname{str}(T_2 T_1)$$

for all T1, T2 in End(V). In particular, the supertrace of a supercommutator $[T_1, T_2] = T_1 T_2 - (-1)^{|T_1||T_2|} T_2 T_1$ is zero.
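The sign in this rule is easy to see with supermatrices over an ordinary field, where an odd element is one whose only nonzero blocks are off-diagonal: for two odd elements the supertrace picks up a minus sign under exchange, so their supercommutator (here an anticommutator) has vanishing supertrace. The sketch below uses arbitrary random blocks.

```python
import numpy as np

def supertrace(T, p):
    return np.trace(T[:p, :p]) - np.trace(T[p:, p:])

p = q = 2
rng = np.random.default_rng(1)

def odd_supermatrix():
    # Over an ordinary field an odd supermatrix has only off-diagonal blocks.
    T = np.zeros((p + q, p + q))
    T[:p, p:] = rng.standard_normal((p, q))
    T[p:, :p] = rng.standard_normal((q, p))
    return T

T1, T2 = odd_supermatrix(), odd_supermatrix()
# |T1| = |T2| = 1, so str(T1 T2) = -str(T2 T1) ...
print(np.isclose(supertrace(T1 @ T2, p), -supertrace(T2 @ T1, p)))  # True
# ... and the supercommutator of two odd elements, T1 T2 + T2 T1, is traceless:
print(np.isclose(supertrace(T1 @ T2 + T2 @ T1, p), 0.0))  # True
```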

In fact, one can define a supertrace more generally for any associative superalgebra E over a commutative superalgebra A as a linear map tr: E → A which vanishes on supercommutators. [1] Such a supertrace is not uniquely defined; it can always, at the very least, be modified by multiplication by an element of A.

Physics applications

In supersymmetric quantum field theories, in which the action integral is invariant under a set of symmetry transformations (known as supersymmetry transformations) whose algebras are superalgebras, the supertrace has a variety of applications. In such a context, the supertrace of the mass matrix for the theory can be written as a sum over spins of the traces of the mass matrices for particles of different spin: [2]

$$\operatorname{str}[M^2] = \sum_s (-1)^{2s}(2s+1)\operatorname{tr} m_s^2.$$
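As a toy illustration of this sum rule, with entirely hypothetical numbers: for an unbroken supersymmetric spectrum consisting of two real scalars and one Majorana fermion, all of mass m, the weighted contributions cancel.

```python
# str M^2 = sum_s (-1)^{2s} (2s+1) tr m_s^2, for a hypothetical degenerate
# spectrum: two real scalars (spin 0) and one Majorana fermion (spin 1/2).
m = 100.0
str_M2 = ((-1) ** 0 * 1 * 2 * m ** 2     # spin 0: two scalar degrees of freedom
          + (-1) ** 1 * 2 * 1 * m ** 2)  # spin 1/2: weight (2s+1) = 2
print(str_M2)  # 0.0
```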

In anomaly-free theories where only renormalizable terms appear in the superpotential, the above supertrace can be shown to vanish, even when supersymmetry is spontaneously broken.

The contribution to the effective potential arising at one loop (sometimes referred to as the Coleman–Weinberg potential [3]) can also be written in terms of a supertrace. If M is the mass matrix for a given theory, the one-loop potential can be written as

$$V_{\text{eff}}^{(1)} = \frac{1}{64\pi^2}\operatorname{str}\!\left[M^4 \ln\frac{M^2}{\Lambda^2}\right] = \frac{1}{64\pi^2}\left(\operatorname{tr}\!\left[m_B^4 \ln\frac{m_B^2}{\Lambda^2}\right] - \operatorname{tr}\!\left[m_F^4 \ln\frac{m_F^2}{\Lambda^2}\right]\right),$$

where $m_B$ and $m_F$ are the respective tree-level mass matrices for the separate bosonic and fermionic degrees of freedom in the theory, and Λ is a cutoff scale.
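When the mass matrices are diagonalized, the supertrace above reduces to a weighted sum over squared mass eigenvalues. The function below is a sketch under that assumption, with purely illustrative inputs.

```python
import numpy as np

def one_loop_potential(mB2, mF2, cutoff2):
    """Sketch of V^(1) = (1/64 pi^2) str[M^4 ln(M^2 / Lambda^2)], evaluated
    on the eigenvalues mB2, mF2 of the squared bosonic and fermionic mass
    matrices; cutoff2 is the squared cutoff scale Lambda^2."""
    mB2, mF2 = np.asarray(mB2), np.asarray(mF2)
    bosons = np.sum(mB2 ** 2 * np.log(mB2 / cutoff2))
    fermions = np.sum(mF2 ** 2 * np.log(mF2 / cutoff2))
    return (bosons - fermions) / (64 * np.pi ** 2)

# Degenerate bosonic and fermionic spectra give no one-loop correction:
print(one_loop_potential([1.0, 4.0], [1.0, 4.0], cutoff2=100.0))  # 0.0
```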


References

  1. Berline, N.; Getzler, E.; Vergne, M. (1992). Heat Kernels and Dirac Operators. Springer-Verlag. p. 39. ISBN 0-387-53340-0.
  2. Martin, Stephen P. (1998). "A Supersymmetry Primer". Perspectives on Supersymmetry. World Scientific. pp. 1–98. arXiv:hep-ph/9709356. doi:10.1142/9789812839657_0001. ISBN 978-981-02-3553-6. ISSN 1793-1339.
  3. Coleman, Sidney; Weinberg, Erick (1973). "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking". Physical Review D. 7 (6): 1888–1910. arXiv:hep-th/0507214. doi:10.1103/PhysRevD.7.1888. ISSN 0556-2821.