Lattice model (physics)

A three-dimensional lattice filled with two molecules A and B, here shown as black and white spheres. Lattices such as this are used, for example, in the Flory–Huggins solution theory.

In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum such as the continuum of space or spacetime. Lattice models originally arose in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Lattice models are now popular in theoretical physics for several reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory. Lattice models are also well suited to study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model.

The exact solution of many of these models (when they are solvable) includes the presence of solitons. Techniques for solving them include the inverse scattering transform, the method of Lax pairs, the Yang–Baxter equation and quantum groups. The solution of these models has given insights into the nature of phase transitions, magnetization and scaling behaviour, as well as into the nature of quantum field theory.

Physical lattice models frequently occur as an approximation to a continuum theory, either to provide an ultraviolet cutoff that prevents divergences or to enable numerical computation. An example of a continuum theory widely studied by lattice methods is lattice QCD, a discretization of quantum chromodynamics. Digital physics goes further and regards nature as fundamentally discrete at the Planck scale, where the holographic principle imposes an upper bound on the density of information. More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers.


Mathematical description

A number of lattice models can be described by the following data (a minimal code sketch of this data follows the list):

- a lattice $\Lambda$, often taken to be a cubic lattice in $d$-dimensional Euclidean space $\mathbb{R}^d$ or its periodic counterpart, together with an edge set $E$ of nearest-neighbour pairs, so that $(\Lambda, E)$ is a graph;
- a spin-variable space $S$;
- the configuration space $\mathcal{C}$ of functions $\sigma: \Lambda \to S$;
- an energy functional $E: \mathcal{C} \to \mathbb{R}$, typically depending only on the spins at neighbouring sites, possibly together with a coupling to an external field.
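
As an illustration of this data, here is a minimal Python sketch; the class name and field names are ours, chosen for exposition, not part of any standard library:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List, Tuple

Site = Hashable           # a lattice site, e.g. a tuple of integer coordinates
Config = Dict[Site, int]  # a configuration sigma: Lambda -> S

@dataclass
class LatticeModel:
    sites: List[Site]                  # the lattice Lambda
    edges: List[Tuple[Site, Site]]     # nearest-neighbour pairs, the edge set E
    spin_space: List[int]              # the spin-variable space S
    energy: Callable[[Config], float]  # the energy functional E: C -> R
```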

Examples

The Ising model is given by the graph $(\Lambda, E)$ where $\Lambda$ is an infinite cubic lattice in $\mathbb{R}^d$ or a periodic cubic lattice in the torus $T^d$, and $E$ is the edge set of nearest neighbours (the same letter is used for the energy functional, but the two usages are distinguishable from context). The spin-variable space is $S = \mathbb{Z}/2\mathbb{Z} = \{+1, -1\}$. The energy functional is

$$E[\sigma] = -\sum_{\langle i\, j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i,$$

where the first sum runs over pairs of nearest neighbours and $h$ is an external magnetic field.
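
As a concrete check of the formula, here is a minimal NumPy sketch of this energy functional on a periodic cubic lattice; the function name is ours:

```python
import numpy as np

def ising_energy(sigma, h=0.0):
    """E[sigma] = -sum_<ij> sigma_i sigma_j - h sum_i sigma_i on a
    periodic cubic lattice, with sigma an array of +/-1 entries."""
    E = 0.0
    for axis in range(sigma.ndim):
        # np.roll enforces periodic boundary conditions; shifting by one
        # along each axis counts every nearest-neighbour bond exactly once.
        E -= np.sum(sigma * np.roll(sigma, 1, axis=axis))
    E -= h * np.sum(sigma)
    return E

# A random configuration on a 4x4x4 periodic lattice:
rng = np.random.default_rng(0)
sigma = rng.choice([-1, 1], size=(4, 4, 4))
print(ising_energy(sigma, h=0.1))
```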

The spin-variable space can often be described as a coset. For example, for the Potts model we have $S = \mathbb{Z}/n\mathbb{Z}$. In the limit $n \to \infty$, we obtain the XY model, which has $S = SO(2)$. Generalising the XY model to higher dimensions gives the $n$-vector model, which has $S = S^{n-1} = SO(n)/SO(n-1)$.

Solvable models

We specialise to a lattice with a finite number of points, and a finite spin-variable space. This can be achieved by making the lattice periodic, with period $N$ in $d$ dimensions. Then the configuration space $\mathcal{C}$ is also finite. We can define the partition function

$$Z(\beta, h) = \sum_{\sigma \in \mathcal{C}} e^{-\beta E[\sigma]},$$

and there are no issues of convergence (like those which emerge in field theory), since the sum is finite. In theory, this sum can be computed to obtain an expression dependent only on the parameters $\beta$ and $h$. In practice, this is often difficult due to non-linear interactions between sites. Models with a closed-form expression for the partition function are known as exactly solvable.
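
For a small periodic chain the sum can be carried out literally. Here is a brute-force sketch, illustrative only since the cost grows as $2^N$; the function name is ours:

```python
import itertools
import numpy as np

def partition_function(N, beta, h=0.0):
    """Z(beta, h) as an explicit sum over all 2^N configurations of the
    periodic 1D Ising chain; the sum is finite, so convergence is not an issue."""
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=N):
        s = np.array(spins)
        # Periodic 1D Ising energy: -sum_i s_i s_{i+1} - h sum_i s_i
        E = -np.sum(s * np.roll(s, 1)) - h * np.sum(s)
        Z += np.exp(-beta * E)
    return Z

print(partition_function(N=10, beta=0.5, h=0.1))
```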

Examples of exactly solvable models are the periodic 1D Ising model, and the periodic 2D Ising model with vanishing external magnetic field $h = 0$; but for dimension $d \geq 3$, the Ising model remains unsolved.
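
For the periodic 1D Ising model, for instance, the closed form follows from the transfer-matrix method: encoding the Boltzmann weight of each bond in a $2 \times 2$ matrix gives

$$Z(\beta, h) = \operatorname{Tr} T^N = \lambda_+^N + \lambda_-^N, \qquad T = \begin{pmatrix} e^{\beta(1+h)} & e^{-\beta} \\ e^{-\beta} & e^{\beta(1-h)} \end{pmatrix},$$

where $\lambda_\pm = e^{\beta}\cosh\beta h \pm \sqrt{e^{2\beta}\sinh^2\beta h + e^{-2\beta}}$ are the eigenvalues of $T$.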

Mean field theory

Due to the difficulty of deriving exact solutions, obtaining analytic results often requires resorting to mean field theory. The mean field may be spatially varying or global.

Global mean field

The configuration space $\mathcal{C}$ of functions $\sigma$ is replaced by the convex hull of the spin space $S$, when $S$ has a realisation in terms of a subset of $\mathbb{R}^m$. We'll denote this convex hull by $\langle S \rangle$. This arises because, in passing to the mean value of the field, we have $\sigma \mapsto \bar\sigma := \frac{1}{|\Lambda|} \sum_{i \in \Lambda} \sigma_i$.

As the number of lattice sites $N \to \infty$, the possible values of $\bar\sigma$ fill out the convex hull of $S$. By making a suitable approximation, the energy functional becomes a function of the mean field, that is, $E[\sigma] \mapsto E(\bar\sigma)$. The partition function then becomes

$$Z(\beta, h) = \int_{\langle S \rangle} d\bar\sigma\, e^{-\beta E(\bar\sigma)}.$$

As $N \to \infty$, that is, in the thermodynamic limit, the saddle point approximation tells us the integral is asymptotically dominated by the value at which $E(\bar\sigma)$ is minimised:

$$Z \approx e^{-\beta E(\bar\sigma_0)},$$

where $\bar\sigma_0$ is the argument minimising $E(\bar\sigma)$.

A simpler, but less mathematically rigorous, approach which nevertheless sometimes gives correct results comes from linearising the theory about the mean field $\bar\sigma$. Writing configurations as $\sigma_i = \bar\sigma + \delta\sigma_i$, truncating terms of order $\delta\sigma^2$, and then summing over configurations allows computation of the partition function.

Such an approach to the periodic Ising model in $d$ dimensions provides insight into phase transitions.
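
For the Ising model on a cubic lattice in $d$ dimensions (coordination number $q = 2d$), this linearisation yields the standard mean-field self-consistency condition $m = \tanh(\beta(qm + h))$, which predicts a transition at $\beta_c q = 1$. A short fixed-point iteration (the function name is ours) illustrates the two phases:

```python
import numpy as np

def mean_field_magnetisation(beta, d=3, h=0.0, tol=1e-12, max_iter=100_000):
    """Solve m = tanh(beta * (2*d*m + h)) by fixed-point iteration,
    starting from a small symmetry-breaking seed."""
    q = 2 * d
    m = 0.1
    for _ in range(max_iter):
        m_new = np.tanh(beta * (q * m + h))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# The mean-field critical point for d = 3 is beta_c = 1/6.
print(mean_field_magnetisation(beta=0.30))  # beta > beta_c: m != 0 (ordered)
print(mean_field_magnetisation(beta=0.10))  # beta < beta_c: m -> 0 (disordered)
```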

Spatially varying mean field

Suppose the continuum limit of the lattice $\Lambda$ is $\mathbb{R}^d$. Instead of averaging over all of $\Lambda$, we average over neighbourhoods of each point $x \in \mathbb{R}^d$. This gives a spatially varying mean field $\bar\sigma: \mathbb{R}^d \to \langle S \rangle$. We relabel $\bar\sigma$ as $\varphi$ to bring the notation closer to field theory. This allows the partition function to be written as a path integral

$$Z = \int \mathcal{D}\varphi\, e^{-\beta F[\varphi]},$$

where the free energy $F[\varphi]$ is a Wick-rotated version of the action in quantum field theory.
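
For the Ising universality class, for example, $F[\varphi]$ is commonly taken to have the Landau–Ginzburg form (a standard ansatz; the couplings $r$ and $u$ are phenomenological and temperature-dependent):

$$\beta F[\varphi] = \int d^d x \left[ \tfrac{1}{2} (\nabla \varphi)^2 + \tfrac{r}{2} \varphi^2 + \tfrac{u}{4!} \varphi^4 - h \varphi \right].$$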

Examples

Condensed matter physics

Examples include the Ising model, the Potts model, the XY model, the $n$-vector model and the Heisenberg model, all of which describe interacting spins on a lattice.

Polymer physics

Lattice models such as the bond fluctuation model are used to simulate the structure and dynamics of polymers.

High energy physics

Examples include lattice QCD and, more generally, lattice gauge theory and lattice field theory.
