Transfer-matrix method

In statistical mechanics, the transfer-matrix method is a mathematical technique for rewriting the partition function in a simpler form. It was introduced in 1941 by Hans Kramers and Gregory Wannier.[1][2] In many one-dimensional lattice models, the partition function is first written as an n-fold summation over every possible microstate, together with an additional summation of each component's contribution to the energy of the system within each microstate.

Overview

Higher-dimensional models contain even more summations. For systems with more than a few particles, such expressions can quickly become too complex to work out directly, even by computer.

Instead, the partition function can be rewritten in an equivalent way. The basic idea is to write the partition function in the form

$$\mathcal{Z} = \mathbf{v}_0 \cdot \left( \prod_{k=1}^{N+1} W_k \right) \cdot \mathbf{v}_{N+1},$$

where v0 and vN+1 are vectors of dimension p and the p × p matrices Wk are the so-called transfer matrices. In some cases, particularly for systems with periodic boundary conditions, the partition function may be written more simply as

$$\mathcal{Z} = \operatorname{tr}\left( \prod_{k=1}^{N+1} W_k \right),$$

where "tr" denotes the matrix trace. In either case, the partition function may be evaluated exactly using eigenanalysis. If the matrices are all the same matrix W, the partition function may be approximated as the Nth power of the largest eigenvalue of W, since the trace of W^N is the sum of the Nth powers of the eigenvalues of W, and for large N this sum is dominated by the largest eigenvalue.
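As a concrete illustration (not part of the original article), here is a minimal sketch in Python for the one-dimensional Ising model with a uniform coupling, so that every transfer matrix equals a single symmetric matrix W with entries W[s, s'] = exp(βJ·s·s'); the coupling value and chain length below are arbitrary choices for demonstration:

```python
import itertools
import numpy as np

beta_J = 0.7   # coupling times inverse temperature (illustrative value)
N = 10         # number of spins in the chain
spins = [+1, -1]

# Uniform transfer matrix: W[s, s'] = exp(beta_J * s * s')
W = np.array([[np.exp(beta_J * s * t) for t in spins] for s in spins])
mpow = np.linalg.matrix_power

# Open chain: Z = v . W^(N-1) . v with v = (1, 1), one matrix factor per bond
v = np.ones(2)
Z_open = v @ mpow(W, N - 1) @ v

# Brute-force check: sum the Boltzmann weight over all 2^N microstates
Z_brute = sum(
    np.exp(beta_J * sum(s[i] * s[i + 1] for i in range(N - 1)))
    for s in itertools.product(spins, repeat=N)
)
assert np.isclose(Z_open, Z_brute)

# Periodic chain: Z = tr(W^N).  Since tr(W^N) is the sum of the Nth powers
# of the eigenvalues of W, the largest eigenvalue dominates for large N.
Z_periodic = np.trace(mpow(W, N))
lam_max = np.linalg.eigvalsh(W).max()   # W is symmetric here
print(np.log(Z_periodic) / N, np.log(lam_max))  # close already at N = 10
```

The brute-force sum grows as 2^N while the matrix product costs only N multiplications of a 2 × 2 matrix, which is the practical point of the method.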

The transfer-matrix method is used when the total system can be broken into a sequence of subsystems that interact only with adjacent subsystems. For example, a three-dimensional cubical lattice of spins in an Ising model can be decomposed into a sequence of two-dimensional planar lattices of spins that interact only adjacently. The dimension p of the p × p transfer matrix equals the number of states the subsystem may have; the transfer matrix itself, Wk, encodes the statistical weight associated with a particular state of subsystem k − 1 being adjacent to another state of subsystem k.

Importantly, transfer-matrix methods make it possible to tackle probabilistic lattice models from an algebraic perspective, allowing, for instance, the use of results from representation theory.

As an example of an observable that can be calculated with this method, the probability that a particular state m occurs at position x is given by

$$\Pr(m, x) = \frac{\mathbf{v}_0 \cdot \left( W_1 \cdots W_x \right) \hat{P}_m \left( W_{x+1} \cdots W_{N+1} \right) \cdot \mathbf{v}_{N+1}}{\mathcal{Z}},$$

where $\hat{P}_m$ is the projection matrix for state m, having elements $(\hat{P}_m)_{ij} = \delta_{im}\,\delta_{jm}$.
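A short sketch of this observable for the same uniform one-dimensional Ising chain used above (coupling, chain length, and site index are illustrative choices): inserting the projector at site x picks out the Boltzmann weight of configurations whose spin at x is in the chosen state.

```python
import itertools
import numpy as np

beta_J = 0.7
N = 8
x = 3                      # site at which we ask for the state probability

spins = [+1, -1]
W = np.array([[np.exp(beta_J * s * t) for t in spins] for s in spins])
v = np.ones(2)             # free-boundary vectors
mpow = np.linalg.matrix_power

Z = v @ mpow(W, N - 1) @ v

def prob(m_index):
    """Probability that site x is in state spins[m_index] (open chain)."""
    P = np.zeros((2, 2))
    P[m_index, m_index] = 1.0          # projector onto state m
    return (v @ mpow(W, x - 1) @ P @ mpow(W, N - x) @ v) / Z

# The projectors resolve the identity, so the probabilities sum to 1.
assert np.isclose(prob(0) + prob(1), 1.0)

# Brute-force check: fraction of total Boltzmann weight with s_x = +1
w_up = sum(
    np.exp(beta_J * sum(s[i] * s[i + 1] for i in range(N - 1)))
    for s in itertools.product(spins, repeat=N) if s[x - 1] == +1
)
assert np.isclose(prob(0), w_up / Z)
```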

Transfer-matrix methods have been critical for many exact solutions of problems in statistical mechanics, including the Zimm–Bragg and Lifson–Roig models of the helix-coil transition, transfer matrix models for protein-DNA binding, as well as the famous exact solution of the two-dimensional Ising model by Lars Onsager.


References

  1. Kramers, H. A.; Wannier, G. H. (1941). "Statistics of the Two-Dimensional Ferromagnet. Part I". Physical Review. 60 (3): 252–262. Bibcode:1941PhRv...60..252K. doi:10.1103/PhysRev.60.252. ISSN 0031-899X.
  2. Kramers, H. A.; Wannier, G. H. (1941). "Statistics of the Two-Dimensional Ferromagnet. Part II". Physical Review. 60 (3): 263–276. Bibcode:1941PhRv...60..263K. doi:10.1103/PhysRev.60.263. ISSN 0031-899X.
