In matrix analysis, Stahl's theorem is a theorem proved in 2011 by Herbert Stahl concerning Laplace transforms for special matrix functions.[1] It originated in 1975 as the Bessis–Moussa–Villani (BMV) conjecture of Daniel Bessis, Pierre Moussa, and Marcel Villani.[2] In 2004, Elliott H. Lieb and Robert Seiringer gave two important reformulations of the BMV conjecture.[3] In 2015, Alexandre Eremenko gave a simplified proof of Stahl's theorem.[4]
In 2023, Otte Heinävaara proved a structure theorem for Hermitian matrices, introducing tracial joint spectral measures, which implies Stahl's theorem as a corollary.[5]
Let $\operatorname{tr}$ denote the trace of a matrix. If $A$ and $B$ are $n \times n$ Hermitian matrices and $B$ is positive semidefinite, define $f(t) = \operatorname{tr}\exp(A - tB)$ for all real $t \geq 0$. Then $f$ can be represented as the Laplace transform of a non-negative Borel measure $\mu$ on $[0, \infty)$. In other words, for all real $t \geq 0$,
$$f(t) = \int_{[0,\infty)} e^{-ts}\, d\mu(s),$$
for some non-negative measure $\mu$ depending upon $A$ and $B$.[6]
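For example, in the special case where $A$ and $B$ commute, they can be simultaneously unitarily diagonalized with eigenvalue pairs $(a_i, b_i)$, where each $b_i \geq 0$ because $B$ is positive semidefinite, and
$$f(t) = \operatorname{tr}\exp(A - tB) = \sum_{i=1}^{n} e^{a_i} e^{-t b_i} = \int_{[0,\infty)} e^{-ts}\, d\mu(s), \qquad \mu = \sum_{i=1}^{n} e^{a_i}\,\delta_{b_i},$$
so $\mu$ is a non-negative discrete measure on $[0, \infty)$ and the representation is immediate; the content of the theorem lies in the general, non-commuting case.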