Let $\mathbf{H}_n$ denote the space of $n \times n$ Hermitian matrices, $\mathbf{H}_n^+$ denote the set consisting of positive semi-definite $n \times n$ Hermitian matrices and $\mathbf{H}_n^{++}$ denote the set of positive definite Hermitian matrices. For operators on an infinite-dimensional Hilbert space we require that they be trace class and self-adjoint, in which case similar definitions apply, but we discuss only matrices, for simplicity.
For any real-valued function $f$ on an interval $I \subseteq \mathbb{R}$, one may define a matrix function $f(A)$ for any operator $A \in \mathbf{H}_n$ with eigenvalues $\lambda$ in $I$ by defining it on the eigenvalues and corresponding projectors $P$ as
$$f(A) \equiv \sum_j f(\lambda_j) P_j,$$
given the spectral decomposition $A = \sum_j \lambda_j P_j$.
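As a minimal numerical sketch of this definition (the function name `matrix_function` and the example matrix are illustrative, not from the source), a matrix function of a Hermitian matrix can be computed directly from its spectral decomposition:

```python
import numpy as np

def matrix_function(f, A):
    """Apply a real function f to a Hermitian matrix A through its
    spectral decomposition A = sum_j lambda_j P_j."""
    lam, U = np.linalg.eigh(A)           # real eigenvalues, orthonormal eigenvectors
    return (U * f(lam)) @ U.conj().T     # sum_j f(lambda_j) u_j u_j^*

# Example: the matrix square root of a positive definite matrix
A = np.array([[2.0, 1.0], [1.0, 2.0]])
S = matrix_function(np.sqrt, A)
assert np.allclose(S @ S, A)
```

The construction only requires that $f$ be defined on the eigenvalues of $A$, which is exactly the "eigenvalues in $I$" hypothesis above.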
A function $f: I \to \mathbb{R}$ defined on an interval $I \subseteq \mathbb{R}$ is said to be operator monotone if for all $n$, and all $A, B \in \mathbf{H}_n$ with eigenvalues in $I$, the following holds:
$$A \succeq B \implies f(A) \succeq f(B),$$
where the inequality $A \succeq B$ means that the operator $A - B$ is positive semi-definite. One may check that $f(A) = A^2$ is, in fact, not operator monotone!
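A concrete counterexample makes this failure of operator monotonicity for $t \mapsto t^2$ explicit (the particular matrices below are a standard illustrative choice, not from the source):

```python
import numpy as np

# A >= B: their difference [[1,1],[1,1]] has eigenvalues 0 and 2,
# so it is positive semi-definite.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.linalg.eigvalsh(A - B).min() >= -1e-12

# Yet A^2 - B^2 = [[4,3],[3,2]] has determinant -1 < 0, hence a
# negative eigenvalue: f(t) = t^2 is not operator monotone.
assert np.linalg.eigvalsh(A @ A - B @ B).min() < 0
```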
Operator convex
A function $f: I \to \mathbb{R}$ is said to be operator convex if for all $n$, all $A, B \in \mathbf{H}_n$ with eigenvalues in $I$, and all $0 \le \lambda \le 1$, the following holds:
$$f(\lambda A + (1 - \lambda) B) \preceq \lambda f(A) + (1 - \lambda) f(B).$$
Note that the operator $\lambda A + (1 - \lambda) B$ has eigenvalues in $I$, since $A$ and $B$ have eigenvalues in $I$.
A function $f$ is operator concave if $-f$ is operator convex; that is, the inequality above for $f$ is reversed.
Joint convexity
A function $g: I \times J \to \mathbb{R}$, defined on intervals $I, J \subseteq \mathbb{R}$, is said to be jointly convex if for all $n$, all $A_1, A_2 \in \mathbf{H}_n$ with eigenvalues in $I$, all $B_1, B_2 \in \mathbf{H}_n$ with eigenvalues in $J$, and any $0 \le \lambda \le 1$, the following holds:
$$g(\lambda A_1 + (1 - \lambda) A_2,\ \lambda B_1 + (1 - \lambda) B_2) \preceq \lambda g(A_1, B_1) + (1 - \lambda) g(A_2, B_2).$$
A function $g$ is jointly concave if $-g$ is jointly convex, i.e. the inequality above for $g$ is reversed.
Trace function
Given a function $f: \mathbb{R} \to \mathbb{R}$, the associated trace function on $\mathbf{H}_n$ is given by
$$A \mapsto \operatorname{Tr} f(A) = \sum_j f(\lambda_j),$$
where $A$ has eigenvalues $\lambda_j$ and $\operatorname{Tr}$ stands for the trace of the operator.
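The identity $\operatorname{Tr} f(A) = \sum_j f(\lambda_j)$ can be checked directly; a short sketch (taking $f = \exp$ for concreteness, with an illustrative matrix):

```python
import numpy as np

# Tr f(A) depends only on the eigenvalues of A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, U = np.linalg.eigh(A)
fA = (U * np.exp(lam)) @ U.T            # f(A) for f = exp
assert np.isclose(np.trace(fA), np.sum(np.exp(lam)))
```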
Convexity and monotonicity of the trace function
Let $f: \mathbb{R} \to \mathbb{R}$ be continuous, and let $n$ be any integer. Then, if $t \mapsto f(t)$ is monotone increasing, so is $A \mapsto \operatorname{Tr} f(A)$ on $\mathbf{H}_n$.
Likewise, if $t \mapsto f(t)$ is convex, so is $A \mapsto \operatorname{Tr} f(A)$ on $\mathbf{H}_n$, and it is strictly convex if $f$ is strictly convex.
Löwner–Heinz theorem
For $-1 \le p \le 0$, the function $f(t) = -t^p$ is operator monotone and operator concave.
For $0 \le p \le 1$, the function $f(t) = t^p$ is operator monotone and operator concave.
For $1 \le p \le 2$, the function $f(t) = t^p$ is operator convex. Furthermore,
$f(t) = \log(t)$ is operator concave and operator monotone, while
$f(t) = t \log(t)$ is operator convex.
The original proof of this theorem is due to K. Löwner, who gave a necessary and sufficient condition for $f$ to be operator monotone.[5] An elementary proof of the theorem is discussed in [1] and a more general version of it in [6].
Klein's inequality
For all Hermitian $n \times n$ matrices $A$ and $B$ and all differentiable convex functions $f: \mathbb{R} \to \mathbb{R}$ with derivative $f'$, or for all positive-definite Hermitian $n \times n$ matrices $A$ and $B$, and all differentiable convex functions $f: (0,\infty) \to \mathbb{R}$, the following inequality holds:
$$\operatorname{Tr}\left[f(A) - f(B) - (A - B) f'(B)\right] \ge 0.$$
In either case, if $f$ is strictly convex, equality holds if and only if $A = B$. A popular choice in applications is $f(t) = t \log t$, see below.
Proof
Let $C = A - B$ so that, for $0 < t < 1$,
$$B + tC = (1 - t) B + t A$$
varies from $B$ to $A$. Define
$$\phi(t) = \operatorname{Tr}[f(B + tC)].$$
By convexity and monotonicity of trace functions, $\phi$ is convex, and so for all $0 < t < 1$,
$$\phi(1) - \phi(0) \ge \frac{\phi(t) - \phi(0)}{t},$$
which is,
$$\operatorname{Tr}[f(A)] - \operatorname{Tr}[f(B)] \ge \frac{1}{t}\left(\operatorname{Tr}[f(B + tC)] - \operatorname{Tr}[f(B)]\right),$$
and, in fact, the right hand side is monotone decreasing in $t$.
Taking the limit $t \to 0$ yields,
$$\operatorname{Tr}[f(A)] - \operatorname{Tr}[f(B)] \ge \operatorname{Tr}[C f'(B)],$$
which with rearrangement and substitution is Klein's inequality:
$$\operatorname{Tr}[f(A) - f(B) - (A - B) f'(B)] \ge 0.$$
Note that if $f$ is strictly convex and $C \neq 0$, then $\phi$ is strictly convex. The final assertion follows from this and the fact that $\frac{\phi(t) - \phi(0)}{t}$ is monotone decreasing in $t$.
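Klein's inequality is easy to probe numerically for the popular choice $f(t) = t \log t$, $f'(t) = \log t + 1$. The helper `funm_h` and the random positive definite matrices below are illustrative assumptions, not part of the source:

```python
import numpy as np

def funm_h(f, A):
    """f(A) for Hermitian A via the spectral decomposition."""
    lam, U = np.linalg.eigh(A)
    return (U * f(lam)) @ U.conj().T

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)); A = X @ X.T + np.eye(3)   # positive definite
Y = rng.standard_normal((3, 3)); B = Y @ Y.T + np.eye(3)

f  = lambda t: t * np.log(t)        # f(t) = t log t
fp = lambda t: np.log(t) + 1        # its derivative

# Klein: Tr[f(A) - f(B) - (A - B) f'(B)] >= 0
gap = np.trace(funm_h(f, A) - funm_h(f, B) - (A - B) @ funm_h(fp, B))
assert gap >= -1e-10
```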
Golden–Thompson inequality
In 1965, S. Golden [7] and C.J. Thompson [8] independently discovered that
for any matrices $A, B \in \mathbf{H}_n$,
$$\operatorname{Tr} e^{A + B} \le \operatorname{Tr}\left(e^A e^B\right).$$
This inequality can be generalized for three operators:[9] for non-negative operators $A, B, C \in \mathbf{H}_n^+$,
$$\operatorname{Tr} e^{\ln A - \ln B + \ln C} \le \int_0^\infty \operatorname{Tr}\left[A (B + t)^{-1} C (B + t)^{-1}\right] \mathrm{d}t.$$
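A quick numerical sketch of the two-operator Golden–Thompson inequality (random symmetric matrices and the `expm_h` helper are illustrative assumptions):

```python
import numpy as np

def expm_h(A):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    lam, U = np.linalg.eigh(A)
    return (U * np.exp(lam)) @ U.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # Hermitian
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2

lhs = np.trace(expm_h(A + B))
rhs = np.trace(expm_h(A) @ expm_h(B))
assert lhs <= rhs + 1e-9       # Tr e^{A+B} <= Tr(e^A e^B)
```

Note that equality holds when $A$ and $B$ commute, since then $e^{A+B} = e^A e^B$.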
Peierls–Bogoliubov inequality
Let $R, F \in \mathbf{H}_n$ be such that $\operatorname{Tr} e^R = 1$. Defining $g = \operatorname{Tr} F e^R$, we have
$$\operatorname{Tr} e^F e^R \ge \operatorname{Tr} e^{F + R} \ge e^g.$$
The proof of this inequality follows from the above combined with Klein's inequality. Take $f(x) = e^x$, $A = R + F$, and $B = R + gI$.[10]
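The Peierls–Bogoliubov inequality can be sketched numerically as follows; normalizing $R$ by a multiple of the identity enforces $\operatorname{Tr} e^R = 1$ (all matrices below are illustrative):

```python
import numpy as np

def funm_h(f, A):
    lam, U = np.linalg.eigh(A)
    return (U * f(lam)) @ U.conj().T

rng = np.random.default_rng(5)
R = rng.standard_normal((3, 3)); R = (R + R.T) / 2
# Shift R by (log Tr e^R) I so that Tr e^R = 1.
R = R - np.log(np.trace(funm_h(np.exp, R))) * np.eye(3)
assert np.isclose(np.trace(funm_h(np.exp, R)), 1.0)

F = rng.standard_normal((3, 3)); F = (F + F.T) / 2
g = np.trace(F @ funm_h(np.exp, R))

# Peierls-Bogoliubov: Tr e^{F+R} >= e^g
assert np.trace(funm_h(np.exp, F + R)) >= np.exp(g) - 1e-9
```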
Gibbs variational principle
Let $H$ be a self-adjoint operator such that $e^{-H}$ is trace class. Then for any $\gamma \ge 0$ with $\operatorname{Tr} \gamma = 1$,
$$\operatorname{Tr} \gamma H + \operatorname{Tr} \gamma \ln \gamma \ge -\ln \operatorname{Tr} e^{-H},$$
with equality if and only if $\gamma = e^{-H} / \operatorname{Tr} e^{-H}$.
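In finite dimensions the variational principle, including its equality case at the Gibbs state, can be checked directly (the helper `funm_h`, the random Hamiltonian, and the comparison state are illustrative assumptions):

```python
import numpy as np

def funm_h(f, A):
    lam, U = np.linalg.eigh(A)
    return (U * f(lam)) @ U.conj().T

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 3)); H = (H + H.T) / 2
Z = np.trace(funm_h(np.exp, -H))        # partition function Tr e^{-H}
free_energy = -np.log(Z)

def gibbs_functional(gamma):
    # Tr(gamma H) + Tr(gamma ln gamma)
    return np.trace(gamma @ H) + np.trace(funm_h(lambda t: t * np.log(t), gamma))

gamma_star = funm_h(np.exp, -H) / Z     # the Gibbs state: equality case
assert np.isclose(gibbs_functional(gamma_star), free_energy)

gamma = np.eye(3) / 3                   # another density matrix: strict inequality
assert gibbs_functional(gamma) >= free_energy - 1e-10
```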
Lieb's concavity theorem
The following theorem was proved by E. H. Lieb in [9]. It proves and generalizes a conjecture of E. P. Wigner, M. M. Yanase, and Freeman Dyson.[11] Six years later other proofs were given by T. Ando [12] and B. Simon,[3] and several more have been given since then.
For all $m \times n$ matrices $K$, and all $q$ and $r$ such that $0 \le q \le 1$ and $0 \le r \le 1$, with $q + r \le 1$, the real valued map on $\mathbf{H}_m^+ \times \mathbf{H}_n^+$ given by
$$F(A, B, K) = \operatorname{Tr}\left(K^* A^q K B^r\right)$$
is jointly concave in $(A, B)$.
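Joint concavity can be spot-checked at a midpoint: for any admissible pairs, $F$ evaluated at the average should dominate the average of the values. The random positive definite matrices and parameter choices below are illustrative:

```python
import numpy as np

def powm_h(A, p):
    """A^p for positive definite Hermitian A."""
    lam, U = np.linalg.eigh(A)
    return (U * lam**p) @ U.conj().T

rng = np.random.default_rng(3)
def rand_pd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + 0.1 * np.eye(n)

n, q, r = 3, 0.4, 0.5                    # 0 <= q, r <= 1 with q + r <= 1
K = rng.standard_normal((n, n))
F = lambda A, B: np.trace(K.T @ powm_h(A, q) @ K @ powm_h(B, r))

A1, A2, B1, B2 = rand_pd(n), rand_pd(n), rand_pd(n), rand_pd(n)
mid = F((A1 + A2) / 2, (B1 + B2) / 2)
assert mid >= (F(A1, B1) + F(A2, B2)) / 2 - 1e-10   # joint concavity at lambda = 1/2
```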
Lieb's theorem
For a fixed Hermitian matrix $L \in \mathbf{H}_n$, the function
$$f(A) = \operatorname{Tr} \exp\{L + \ln A\}$$
is concave on $\mathbf{H}_n^{++}$.
The theorem and proof are due to E. H. Lieb,[9] Thm 6, where he obtains this theorem as a corollary of Lieb's concavity theorem. The most direct proof is due to H. Epstein;[13] see the M. B. Ruskai papers [14][15] for a review of this argument.
Effros's theorem and its extension
If $f$ is an operator convex function, and $A$ and $B$ are commuting bounded linear operators, i.e. the commutator $[A, B] = 0$, the perspective
$$g(A, B) := f(A B^{-1}) B$$
is jointly convex, i.e. if $A = \lambda A_1 + (1 - \lambda) A_2$ and $B = \lambda B_1 + (1 - \lambda) B_2$ with $[A_i, B_i] = 0$ ($i = 1, 2$) and $0 \le \lambda \le 1$, then
$$g(A, B) \preceq \lambda g(A_1, B_1) + (1 - \lambda) g(A_2, B_2).$$
Ebadian et al. later extended the inequality to the case where $A$ and $B$ do not commute.[25]
Von Neumann's trace inequality and related results
Von Neumann's trace inequality, named after its originator John von Neumann, states that for any complex $n \times n$ matrices $A$ and $B$ with singular values $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_n$ and $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_n$ respectively,[26]
$$\left|\operatorname{Tr}(A B)\right| \le \sum_{i=1}^n \alpha_i \beta_i,$$
with equality if and only if $A$ and $B$ share singular vectors.[27]
A simple corollary to this is the following result:[28] For Hermitian positive semi-definite complex matrices $A$ and $B$, where now the eigenvalues are sorted decreasingly ($a_1 \ge \cdots \ge a_n$ and $b_1 \ge \cdots \ge b_n$, respectively),
$$\operatorname{Tr}(A B) \le \sum_{i=1}^n a_i b_i.$$
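Von Neumann's inequality is straightforward to check numerically, since `numpy.linalg.svd` already returns singular values in decreasing order (the random complex matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

alpha = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing
beta  = np.linalg.svd(B, compute_uv=False)

# |Tr(AB)| <= sum_i alpha_i beta_i
assert abs(np.trace(A @ B)) <= np.dot(alpha, beta) + 1e-9
```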
References
E. Carlen, Trace Inequalities and Quantum Entropy: An Introductory Course, Contemp. Math. 529 (2010), 73–140. doi:10.1090/conm/529/10428
B. Simon, Trace Ideals and their Applications, Cambridge Univ. Press (1979); second edition, Amer. Math. Soc., Providence, RI (2005).
M. Ohya, D. Petz, Quantum Entropy and Its Use, Springer (1993).
Löwner, Karl (1934). "Über monotone Matrixfunktionen". Mathematische Zeitschrift (in German). 38 (1): 177–216. doi:10.1007/bf01170633. ISSN 0025-5874. S2CID 121439134.
D. Ruelle, Statistical Mechanics: Rigorous Results, World Scientific (1969).
Wigner, Eugene P.; Yanase, Mutsuo M. (1964). "On the Positive Semidefinite Nature of a Certain Matrix Expression". Canadian Journal of Mathematics. 16: 397–406. doi:10.4153/cjm-1964-041-x. ISSN 0008-414X. S2CID 124032721.
E. H. Lieb, W. E. Thirring, Inequalities for the Moments of the Eigenvalues of the Schrödinger Hamiltonian and Their Relation to Sobolev Inequalities, in Studies in Mathematical Physics, edited by E. Lieb, B. Simon, and A. Wightman, Princeton University Press, 269–303 (1976).
Z. Allen-Zhu, Y. Lee, L. Orecchia, Using Optimization to Obtain a Width-Independent, Parallel, Simpler, and Faster Positive SDP Solver, in ACM-SIAM Symposium on Discrete Algorithms, 1824–1831 (2016).
L. Lafleche, C. Saffirio, Strong Semiclassical Limit from Hartree and Hartree–Fock to Vlasov–Poisson Equation, arXiv:2003.02926 [math-ph].
V. Bosboom, M. Schlottbom, F. L. Schwenninger, On the unique solvability of radiative transfer equations with polarization, Journal of Differential Equations (2024).
Mirsky, L. (December 1975). "A trace inequality of John von Neumann". Monatshefte für Mathematik. 79 (4): 303–306. doi:10.1007/BF01647331. S2CID 122252038.
Carlsson, Marcus (2021). "von Neumann's trace inequality for Hilbert–Schmidt operators". Expositiones Mathematicae. 39 (1): 149–157. doi:10.1016/j.exmath.2020.05.001.
This page is based on a Wikipedia article. Text is available under the CC BY-SA 4.0 license; additional terms may apply. Images, videos and audio are available under their respective licenses.