In statistical mechanics, the transfer-matrix method is a mathematical technique used to rewrite the partition function in a simpler form. It was introduced in 1941 by Hendrik Kramers and Gregory Wannier. [1] [2] In many one-dimensional lattice models, the partition function is first written as an N-fold summation over all possible microstates, with each term containing a further summation over each component's contribution to the energy of the system within that microstate.
Higher-dimensional models contain even more summations. For systems with more than a few particles, such expressions can quickly become too complex to work out directly, even by computer.
Instead, the partition function can be rewritten in an equivalent way. The basic idea is to write the partition function in the form

$$\mathcal{Z} = \mathbf{v}_0 \cdot \left( \prod_{k=1}^{N} W_k \right) \cdot \mathbf{v}_{N+1},$$

where $\mathbf{v}_0$ and $\mathbf{v}_{N+1}$ are vectors of dimension p and the p × p matrices $W_k$ are the so-called transfer matrices. In some cases, particularly for systems with periodic boundary conditions, the partition function may be written more simply as

$$\mathcal{Z} = \operatorname{tr} \left( \prod_{k=1}^{N} W_k \right),$$

where "tr" denotes the matrix trace. In either case, the partition function may be evaluated exactly by eigenanalysis. If the matrices are all the same matrix W, so that $\mathcal{Z} = \operatorname{tr}(W^N)$, the partition function may be approximated as the Nth power of the largest eigenvalue of W: the trace is the sum of the eigenvalues, the eigenvalues of $W^N$ are the Nth powers of the eigenvalues of W, and for large N the sum $\sum_i \lambda_i^N$ is dominated by the largest eigenvalue.
The transfer-matrix method is used when the total system can be broken into a sequence of subsystems that interact only with adjacent subsystems. For example, a three-dimensional cubical lattice of spins in an Ising model can be decomposed into a sequence of two-dimensional planar lattices of spins that interact only adjacently. The dimension p of the p × p transfer matrix equals the number of states the subsystem may have; the transfer matrix $W_k$ itself encodes the statistical weight associated with a particular state of subsystem k − 1 being adjacent to a particular state of subsystem k.
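For illustration, a minimal numerical sketch of the method for a uniform one-dimensional Ising chain with periodic boundary conditions; the coupling J, field h, inverse temperature beta, and chain length N below are illustrative choices, not values from the text:

```python
import numpy as np
from itertools import product

# 1D Ising chain with periodic boundaries: p = 2 states per site.
J, h, beta, N = 1.0, 0.5, 1.0, 10   # illustrative parameters

# Transfer matrix W[s, s'] = exp(beta * (J*s*s' + h*(s + s')/2)), s, s' in {+1, -1}.
spins = np.array([1.0, -1.0])
W = np.exp(beta * (J * np.outer(spins, spins)
                   + h * (spins[:, None] + spins[None, :]) / 2))

# Partition function as the trace of the N-th matrix power.
Z_transfer = np.trace(np.linalg.matrix_power(W, N))

# Brute-force check: sum exp(-beta * E) over all 2^N microstates.
Z_brute = 0.0
for config in product([1, -1], repeat=N):
    E = -sum(J * config[i] * config[(i + 1) % N] + h * config[i]
             for i in range(N))
    Z_brute += np.exp(-beta * E)

# Largest-eigenvalue approximation, dominant for large N.
lam_max = np.linalg.eigvalsh(W).max()
print(Z_transfer, Z_brute, lam_max**N)  # first two agree; third is close
```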
Importantly, transfer-matrix methods allow one to tackle probabilistic lattice models from an algebraic perspective, enabling, for instance, the use of results from representation theory.
As an example of an observable that can be calculated from this method, the probability of a particular state m occurring at position x is given by

$$\Pr{}_x(m) = \frac{\operatorname{tr} \left( \left( \prod_{k=1}^{x-1} W_k \right) \hat{P}_m \left( \prod_{k=x}^{N} W_k \right) \right)}{\operatorname{tr} \left( \prod_{k=1}^{N} W_k \right)},$$

where $\hat{P}_m$ is the projection matrix for state m, having elements $(\hat{P}_m)_{m' m''} = \delta_{m m'} \delta_{m m''}$.
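A sketch of how this formula can be evaluated numerically, reusing the uniform Ising chain from the previous example (parameters again illustrative):

```python
import numpy as np

# Same uniform Ising transfer matrix as above (illustrative parameters).
J, h, beta, N = 1.0, 0.5, 1.0, 10
spins = np.array([1.0, -1.0])
W = np.exp(beta * (J * np.outer(spins, spins)
                   + h * (spins[:, None] + spins[None, :]) / 2))

# Projection matrix for state m: (P_m)_{m'm''} = delta_{mm'} delta_{mm''}.
def projector(m, p=2):
    P = np.zeros((p, p))
    P[m, m] = 1.0
    return P

x = 3  # position of interest
num = np.trace(np.linalg.matrix_power(W, x - 1) @ projector(0)
               @ np.linalg.matrix_power(W, N - x + 1))
Z = np.trace(np.linalg.matrix_power(W, N))
print(num / Z)  # probability that site x is in state m = 0 (spin +1)
```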
Transfer-matrix methods have been critical for many exact solutions of problems in statistical mechanics, including the Zimm–Bragg and Lifson–Roig models of the helix-coil transition, transfer matrix models for protein–DNA binding, as well as the famous exact solution of the two-dimensional Ising model by Lars Onsager.
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
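A small numpy illustration of the determinant–invertibility link:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.det(A))          # 5.0, nonzero, so A is invertible
print(np.linalg.inv(A) @ A)      # identity matrix (up to rounding)

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # second row is twice the first
print(np.linalg.det(B))          # 0.0, so B is singular (not invertible)
```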
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices that are traceless, Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma, they are occasionally denoted by tau when used in connection with isospin symmetries.
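These four properties are easy to verify numerically; a brief sketch:

```python
import numpy as np

# The three Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

I = np.eye(2)
for s in (sx, sy, sz):
    assert np.isclose(np.trace(s), 0)       # traceless
    assert np.allclose(s, s.conj().T)       # Hermitian
    assert np.allclose(s @ s, I)            # involutory: s^2 = I
    assert np.allclose(s @ s.conj().T, I)   # unitary
```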
In mathematics, a symmetric matrix $M$ with real entries is positive-definite if the real number $\mathbf{x}^\mathsf{T} M \mathbf{x}$ is positive for every nonzero real column vector $\mathbf{x}$, where $\mathbf{x}^\mathsf{T}$ is the row vector transpose of $\mathbf{x}$. More generally, a Hermitian matrix $M$ is positive-definite if the real number $\mathbf{z}^{*} M \mathbf{z}$ is positive for every nonzero complex column vector $\mathbf{z}$, where $\mathbf{z}^{*}$ denotes the conjugate transpose of $\mathbf{z}$.
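A sketch of three equivalent numerical checks (eigenvalues, Cholesky factorization, and the defining quadratic form); the matrix M is an arbitrary example:

```python
import numpy as np

M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])      # symmetric

# Positive-definite iff all eigenvalues are positive ...
print(np.linalg.eigvalsh(M))     # [1. 3.] -> positive-definite

# ... equivalently, iff a Cholesky factorization exists.
L = np.linalg.cholesky(M)        # raises LinAlgError if M is not PD

# Direct check of the defining quadratic form for a sample vector.
x = np.array([1.0, -2.0])
print(x @ M @ x > 0)             # True
```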
In linear algebra, the trace of a square matrix A, denoted tr(A), is the sum of the elements on its main diagonal, $\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$. It is only defined for a square matrix.
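A brief numpy illustration, also showing the trace–eigenvalue identity used earlier:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.trace(A))                       # 5.0 = 1 + 4
print(np.linalg.eigvals(A).sum().real)   # also 5.0: trace = sum of eigenvalues
```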
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling, followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any matrix. It is related to the polar decomposition.
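A brief numpy sketch; the input matrix is an arbitrary random example:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: A = U S V^T
print(np.allclose(U.T @ U, np.eye(3)))       # columns of U are orthonormal
print(s)                                     # singular values, descending
```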
In linear algebra, the Cayley–Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation.
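A sketch of the theorem for a 2 × 2 matrix, where the characteristic polynomial is t² − tr(A)t + det(A):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

t, d = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: p(A) = A^2 - tr(A) A + det(A) I = 0.
print(A @ A - t * A + d * np.eye(2))   # zero matrix (up to rounding)
```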
In linear algebra, an invertible matrix is a square matrix which has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by an inverse to undo the operation. Invertible matrices are the same size as their inverse.
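A brief illustration of the "undo" property:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])

y = A @ x                      # apply the map
print(np.linalg.inv(A) @ y)    # the inverse undoes it: recovers x
```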
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.
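A numpy sketch (np.poly returns the monic characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

coeffs = np.poly(A)                  # monic characteristic polynomial: t^2 - 5t - 2
print(np.isclose(-coeffs[1], np.trace(A)))       # True: trace among the coefficients
print(np.isclose(coeffs[2], np.linalg.det(A)))   # True here (in general (-1)^n det A)
print(np.roots(coeffs))              # its roots ...
print(np.linalg.eigvals(A))          # ... are the eigenvalues of A
```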
In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include the Wishart ensemble, the Wishart–Laguerre ensemble, and LOE, LUE, or LSE.
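One standard construction of a Wishart sample, as a sketch (parameters illustrative): if the n rows of X are independent draws from N(0, Σ), then XᵀX is Wishart-distributed with n degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 10                      # dimension and degrees of freedom
Sigma = np.eye(p)                 # scale matrix

# A Wishart(Sigma, n) sample: X^T X with n rows drawn from N(0, Sigma).
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X

print(S)                          # symmetric positive semi-definite; E[S] = n * Sigma
```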
In mathematics, the determinant of an m-by-m skew-symmetric matrix can always be written as the square of a polynomial in the matrix entries, a polynomial with integer coefficients that only depends on m. When m is odd, the polynomial is zero, and when m is even, it is a nonzero polynomial of degree m/2, and is unique up to multiplication by ±1. The convention on skew-symmetric tridiagonal matrices, given below in the examples, then determines one specific polynomial, called the Pfaffian polynomial. The value of this polynomial, when applied to the entries of a skew-symmetric matrix, is called the Pfaffian of that matrix. The term Pfaffian was introduced by Cayley, who indirectly named them after Johann Friedrich Pfaff.
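A sketch for the smallest nontrivial case, m = 4, where the Pfaffian has the closed form pf(A) = a₀₁a₂₃ − a₀₂a₁₃ + a₀₃a₁₂ (entries illustrative):

```python
import numpy as np

# A 4x4 skew-symmetric matrix built from its upper-triangular entries.
a = {(0, 1): 1.0, (0, 2): 2.0, (0, 3): 3.0,
     (1, 2): 4.0, (1, 3): 5.0, (2, 3): 6.0}
A = np.zeros((4, 4))
for (i, j), v in a.items():
    A[i, j], A[j, i] = v, -v

# Closed form for m = 4.
pf = a[(0, 1)] * a[(2, 3)] - a[(0, 2)] * a[(1, 3)] + a[(0, 3)] * a[(1, 2)]

print(pf**2, np.linalg.det(A))   # pf(A)^2 equals det(A): 64.0 both
```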
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution. Simple cases, where observations are complete, can be dealt with by using the sample covariance matrix. The sample covariance matrix (SCM) is an unbiased and efficient estimator of the covariance matrix if the space of covariance matrices is viewed as an extrinsic convex cone in Rp×p; however, measured using the intrinsic geometry of positive-definite matrices, the SCM is a biased and inefficient estimator. In addition, if the random variable has a normal distribution, the sample covariance matrix has a Wishart distribution and a slightly differently scaled version of it is the maximum likelihood estimate. Cases involving missing data, heteroscedasticity, or autocorrelated residuals require deeper considerations. Another issue is the robustness to outliers, to which sample covariance matrices are highly sensitive.
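A brief sketch contrasting the unbiased and maximum-likelihood scalings (the true covariance here is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(2), true_cov, size=5000)

# Unbiased sample covariance (divides by n - 1).
S = np.cov(X, rowvar=False)
print(S)             # close to true_cov for a large sample

# Maximum-likelihood scaling for normal data divides by n instead.
S_ml = np.cov(X, rowvar=False, bias=True)
```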
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
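As a sketch of a classic RMT phenomenon, the spectrum of a large random symmetric (GOE-like) matrix, normalized as below, concentrates approximately on [−2, 2] (Wigner's semicircle law):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A random symmetric matrix with entry variance of order 1/n.
G = rng.normal(size=(n, n)) / np.sqrt(n)
H = (G + G.T) / np.sqrt(2)

eig = np.linalg.eigvalsh(H)
print(eig.min(), eig.max())   # close to -2 and +2 for large n
```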
The Kramers–Wannier duality is a symmetry in statistical physics. It relates the free energy of a two-dimensional square-lattice Ising model at a low temperature to that of another Ising model at a high temperature. It was discovered by Hendrik Kramers and Gregory Wannier in 1941. With the aid of this duality Kramers and Wannier found the exact location of the critical point for the Ising model on the square lattice.
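The duality can be made concrete with its standard analytic form, sinh(2K)·sinh(2K*) = 1, where K = J/(k_B T) (this formula is standard but not quoted above); the self-dual point gives the critical coupling:

```python
import math

# Self-dual point: sinh(2*Kc) = 1 gives the exact critical coupling
# of the square-lattice Ising model.
Kc = 0.5 * math.log(1 + math.sqrt(2))
print(Kc)                          # 0.4406867935...
print(math.sinh(2 * Kc))           # 1.0: self-dual

# Duality map: any K maps to a dual K* on the other side of Kc.
K = 0.3                            # a "high-temperature" coupling (K < Kc)
K_star = 0.5 * math.asinh(1 / math.sinh(2 * K))
print(K_star)                      # > Kc: the dual "low-temperature" coupling
```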
A vertex model is a type of statistical mechanics model in which the Boltzmann weights are associated with a vertex in the model. This contrasts with a nearest-neighbour model, such as the Ising model, in which the energy, and thus the Boltzmann weight of a statistical microstate is attributed to the bonds connecting two neighbouring particles. The energy associated with a vertex in the lattice of particles is thus dependent on the state of the bonds which connect it to adjacent vertices. It turns out that every solution of the Yang–Baxter equation with spectral parameters in a tensor product of vector spaces yields an exactly-solvable vertex model.
In statistical mechanics, the corner transfer matrix describes the effect of adding a quadrant to a lattice. Introduced by Rodney Baxter in 1968 as an extension of the Kramers–Wannier row-to-row transfer matrix, it provides a powerful method of studying lattice models. Calculations with corner transfer matrices led Baxter to the exact solution of the hard hexagon model in 1980.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
In quantum mechanics, and especially quantum information theory, the purity of a normalized quantum state is a scalar defined as $\gamma = \operatorname{tr}(\rho^2)$, where $\rho$ is the density matrix of the state and $\operatorname{tr}$ is the trace operation. The purity defines a measure on quantum states, giving information on how much a state is mixed.
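A brief sketch for a qubit, contrasting a pure state with the maximally mixed state:

```python
import numpy as np

# Pure state |0><0|: purity 1.
rho_pure = np.array([[1.0, 0.0],
                     [0.0, 0.0]])
print(np.trace(rho_pure @ rho_pure))    # 1.0

# Maximally mixed qubit state I/2: purity 1/d = 0.5.
rho_mixed = np.eye(2) / 2
print(np.trace(rho_mixed @ rho_mixed))  # 0.5
```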
For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose $\{X_k\}$ is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter t:

$$\Pr\left\{ \lambda_{\max}\left( \sum_k X_k \right) \ge t \right\}.$$
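A Monte Carlo sketch of the quantity being bounded, for an illustrative choice of random positive semi-definite summands:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t, trials = 5, 20, 45.0, 2000   # illustrative parameters

# Estimate Pr{ lambda_max(sum_k X_k) >= t } for X_k = x x^T, x ~ N(0, I).
count = 0
for _ in range(trials):
    S = np.zeros((d, d))
    for _ in range(n):
        x = rng.normal(size=d)
        S += np.outer(x, x)
    if np.linalg.eigvalsh(S).max() >= t:
        count += 1
print(count / trials)   # empirical tail probability
```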
In mathematics, the Hadamard product is a binary operation that takes in two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or German mathematician Issai Schur.
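A brief numpy illustration of the contrast with the matrix product:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)   # Hadamard (elementwise) product: [[5, 12], [21, 32]]
print(A @ B)   # ordinary matrix product, different: [[19, 22], [43, 50]]
```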
In mathematics, there are many kinds of inequalities involving matrices and linear operators on Hilbert spaces. This article covers some important operator inequalities connected with traces of matrices.