Exact diagonalization

Exact diagonalization (ED) is a numerical technique used in physics to determine the eigenstates and energy eigenvalues of a quantum Hamiltonian. In this technique, a Hamiltonian for a discrete, finite system is expressed in matrix form and diagonalized using a computer. Exact diagonalization is only feasible for systems with a few tens of particles, due to the exponential growth of the Hilbert space dimension with the size of the quantum system. It is frequently employed to study lattice models, including the Hubbard model, Ising model, Heisenberg model, t-J model, and SYK model. [1] [2]
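
For illustration, the following Python sketch constructs the dense Hamiltonian matrix of a short transverse-field Ising chain and diagonalizes it with NumPy. The model, chain length, and couplings are arbitrary example choices, not part of the method itself:

```python
# A minimal ED sketch for a transverse-field Ising chain,
# H = -J * sum_i sz_i sz_{i+1} - h * sum_i sx_i, on N sites.
# Model and couplings are illustrative assumptions.
import numpy as np

N, J, h = 8, 1.0, 0.5                        # chain length and couplings (assumed)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i via Kronecker products."""
    out = op if i == 0 else id2
    for j in range(1, N):
        out = np.kron(out, op if j == i else id2)
    return out

H = np.zeros((2**N, 2**N))
for i in range(N - 1):                       # nearest-neighbour zz coupling
    H -= J * site_op(sz, i) @ site_op(sz, i + 1)
for i in range(N):                           # transverse field
    H -= h * site_op(sx, i)

energies, states = np.linalg.eigh(H)         # full spectrum and eigenvectors
print("ground-state energy:", energies[0])
```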

Expectation values from exact diagonalization

After determining the eigenstates $|n\rangle$ and energies $E_n$ of a given Hamiltonian, exact diagonalization can be used to obtain expectation values of observables. For example, if $\mathcal{O}$ is an observable, its thermal expectation value is

$$\langle \mathcal{O} \rangle = \frac{1}{Z} \sum_n e^{-\beta E_n} \langle n | \mathcal{O} | n \rangle,$$

where $Z = \sum_n e^{-\beta E_n}$ is the partition function and $\beta$ is the inverse temperature. If the observable can be written down in the initial basis for the problem, then this sum can be evaluated after transforming to the basis of eigenstates.
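
Concretely, this sum can be evaluated directly from the eigendecomposition. In the minimal sketch below, energies and states are assumed to come from a diagonalization such as the one above, and obs is the observable as a matrix in the original basis:

```python
# Thermal expectation value <O> = (1/Z) * sum_n exp(-beta*E_n) * <n|O|n>.
import numpy as np

def thermal_expectation(energies, states, obs, beta):
    weights = np.exp(-beta * (energies - energies.min()))  # shift avoids overflow
    Z = weights.sum()                                      # partition function
    # Diagonal matrix elements <n|O|n>; eigenvectors are the columns of `states`.
    obs_diag = np.einsum('in,ij,jn->n', states.conj(), obs, states)
    return (weights * obs_diag).real.sum() / Z

# Example: thermal magnetization of site 0 at an assumed inverse temperature.
# m0 = thermal_expectation(energies, states, site_op(sz, 0), beta=1.0)
```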

Green's functions may be evaluated similarly. For example, in terms of the eigenstates and energies, the retarded Green's function of two operators $A$ and $B$ can be written in its Lehmann representation,

$$G^{R}_{AB}(\omega) = \frac{1}{Z} \sum_{n,m} \frac{\langle n | A | m \rangle \langle m | B | n \rangle}{\omega + i0^{+} + E_n - E_m} \left( e^{-\beta E_n} \pm e^{-\beta E_m} \right),$$

with the upper sign for fermionic and the lower sign for bosonic operators.
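
A sketch of evaluating this double sum numerically follows, with the infinitesimal $0^{+}$ replaced by a small broadening eta (an assumed regularization) and the fermionic sign chosen for definiteness:

```python
# Lehmann-representation evaluation of G^R_AB(omega) on a frequency grid.
# Cost is O(dim^2) per frequency, so this is feasible only for small systems.
import numpy as np

def retarded_greens_function(energies, states, A, B, omegas, beta, eta=1e-2):
    weights = np.exp(-beta * (energies - energies.min()))  # shifted for stability
    Z = weights.sum()                                      # partition function
    Anm = states.conj().T @ A @ states                     # Anm[n, m] = <n|A|m>
    Bmn = states.conj().T @ B @ states                     # Bmn[m, n] = <m|B|n>
    numerators = Anm * Bmn.T * (weights[:, None] + weights[None, :])
    dE = energies[:, None] - energies[None, :]             # E_n - E_m
    return np.array([(numerators / (w + 1j * eta + dE)).sum() / Z
                     for w in omegas])
```
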
Exact diagonalization can also be used to determine the time evolution of a system after a quench. Suppose the system has been prepared in an initial state $|\psi(0)\rangle$, and then for time $t > 0$ evolves under a new Hamiltonian, $H$. The state at time $t$ is

$$|\psi(t)\rangle = e^{-iHt/\hbar} |\psi(0)\rangle = \sum_n e^{-iE_n t/\hbar} \langle n | \psi(0) \rangle \, |n\rangle,$$

where $|n\rangle$ and $E_n$ are the eigenstates and energies of $H$.
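
A minimal sketch of this expansion, assuming $\hbar = 1$ and reusing energies and states from diagonalizing the post-quench Hamiltonian:

```python
# |psi(t)> = sum_n exp(-i*E_n*t) * <n|psi(0)> * |n>, with hbar = 1.
import numpy as np

def evolve(energies, states, psi0, t):
    coeffs = states.conj().T @ psi0                  # overlaps <n|psi(0)>
    return states @ (np.exp(-1j * energies * t) * coeffs)
```

Once the post-quench Hamiltonian has been diagonalized, the state at any later time follows at the cost of two matrix-vector products.
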
Memory requirements

The dimension of the Hilbert space describing a quantum system scales exponentially with system size. For example, consider a system of $N$ spins localized on fixed lattice sites. The dimension of the on-site basis is 2, because the state of each spin can be described as a superposition of spin-up and spin-down, denoted $|\uparrow\rangle$ and $|\downarrow\rangle$. The full system then has dimension $2^N$, and the Hamiltonian represented as a dense matrix has $2^N \times 2^N$ entries. This implies that computation time and memory requirements scale very unfavorably in exact diagonalization. In practice, the memory requirements can be reduced by exploiting symmetries of the problem, imposing conservation laws, working with sparse matrices, or using other techniques.

Number of sites | Number of states | Hamiltonian size in memory
----------------|------------------|---------------------------
4               | 16               | 2048 B
9               | 512              | 2 MB
16              | 65,536           | 34 GB
25              | 33,554,432       | 9 PB
36              | 6.872 × 10^10    | 40 ZB

Naive estimates for memory requirements in exact diagonalization of a spin-1/2 system performed on a computer. It is assumed the Hamiltonian is stored as a matrix of double-precision floating-point numbers.
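
The entries above follow from a one-line estimate: $2^N$ basis states give a dense matrix of $(2^N)^2$ double-precision entries at 8 bytes each. For example:

```python
# Reproduce the table's naive estimates: a dense 2^N x 2^N Hamiltonian
# of 8-byte doubles occupies (2^N)^2 * 8 bytes.
for n_sites in (4, 9, 16, 25, 36):
    dim = 2 ** n_sites
    mem_bytes = dim * dim * 8          # e.g. 36 sites -> ~3.8e22 B, i.e. tens of ZB
    print(f"{n_sites:2d} sites: {dim:.3e} states, {mem_bytes:.2e} B")
```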

Comparison with other techniques

Exact diagonalization is useful for extracting exact information about finite systems. Often, however, small systems are studied to gain insight into infinite lattice systems. If the diagonalized system is too small, its properties will not reflect the properties of the system in the thermodynamic limit, and the simulation is said to suffer from finite-size effects.

Unlike some other numerically exact techniques, such as Auxiliary-field Monte Carlo, exact diagonalization obtains Green's functions directly in real time, as opposed to imaginary time. Its results therefore do not need to be numerically continued from imaginary to real frequencies. This is an advantage, because numerical analytic continuation is an ill-posed and difficult optimization problem. [3]

Applications

Exact diagonalization is often used as a solver for dynamical mean-field theory calculations. [4] It has also been applied to finite-lattice studies of quantum Hamiltonian field theories such as the Ising model, [5] to the antiferromagnetic spin-1/2 Heisenberg model on the square lattice in a magnetic field, [6] to finite-size effects in the two-dimensional Hubbard model, [7] to fermion and boson models with infinite-range random interactions such as the SYK model, [8] and to simulating spectra of resonant inelastic x-ray scattering. [9]

Implementations

Numerous software packages implementing exact diagonalization of quantum Hamiltonians exist. These include ALPS, DoQo, EdLib, edrixs, Quanty, and many others.

Generalizations

Exact diagonalization results from many small clusters can be combined to obtain more accurate information about systems in the thermodynamic limit using the numerical linked cluster expansion. [10]
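
As a rough sketch of how such a combination works in one dimension, the weight of an $n$-site open chain is its property minus the weights of all embedded subchains, and the per-site property in the thermodynamic limit is approximated by the sum of weights. Here P is assumed to be a user-supplied function returning an extensive property (for example, the thermal energy) of an $n$-site chain computed by exact diagonalization:

```python
# Minimal 1D NLCE sketch: per-site weights W_n = P(n) - sum_{m<n} (n-m+1)*W_m
# for open chains of n sites; the per-site property is approximated by sum_n W_n.
def nlce_per_site(P, n_max):
    weights = []
    for n in range(1, n_max + 1):
        w = P(n)
        for m, wm in enumerate(weights, start=1):   # (n-m+1) embeddings of an
            w -= (n - m + 1) * wm                   # m-site chain in an n-site chain
        weights.append(w)
    return sum(weights)
```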

Related Research Articles

<span class="mw-page-title-main">Quantum decoherence</span> Loss of quantum coherence

Quantum decoherence is the loss of quantum coherence. It has been studied to understand how quantum systems convert to systems that can be explained by classical mechanics. Beginning with attempts to extend the understanding of quantum mechanics, the theory has developed in several directions, and experimental studies have confirmed some of its key predictions. Quantum computing relies on quantum coherence and is one of the primary practical applications of the concept.

<span class="mw-page-title-main">Lattice model (physics)</span>

In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum such as the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are quite popular in theoretical physics, for many reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory. Lattice models are also ideal for study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model. The exact solution of many of these models includes the presence of solitons. Techniques for solving them include the inverse scattering transform and the method of Lax pairs, the Yang–Baxter equation, and quantum groups. The solution of these models has given insight into the nature of phase transitions, magnetization and scaling behaviour, as well as into the nature of quantum field theory. Physical lattice models frequently occur as an approximation to a continuum theory, either to give an ultraviolet cutoff to the theory to prevent divergences or to perform numerical computations. An example of a continuum theory that is widely studied by lattice models is lattice QCD, a discretization of quantum chromodynamics. Digital physics, moreover, considers nature to be fundamentally discrete at the Planck scale, which imposes an upper limit on the density of information, as expressed by the holographic principle. More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers.

The Ising model, named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic "spins" that can be in one of two states. The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.

In quantum mechanics, einselection, short for "environment-induced superselection", is a name coined by Wojciech H. Zurek for a process which is claimed to explain the appearance of wavefunction collapse and the emergence of classical descriptions of reality from quantum descriptions. In this approach, classicality is described as an emergent property induced in open quantum systems by their environments. Due to entangling interactions with the environment, which in effect monitors selected observables of the system, the vast majority of states in the Hilbert space of an open quantum system become highly unstable. After a decoherence time, which for macroscopic objects is typically many orders of magnitude shorter than any other dynamical timescale, a generic quantum state decays into an uncertain state which can be expressed as a mixture of simple pointer states. In this way the environment induces effective superselection rules. Thus, einselection precludes the stable existence of pure superpositions of pointer states. These pointer states are stable despite environmental interaction. The einselected states lack coherence, and therefore do not exhibit the quantum behaviours of entanglement and superposition.

The Bose–Hubbard model gives a description of the physics of interacting spinless bosons on a lattice. It is closely related to the Hubbard model that originated in solid-state physics as an approximate description of superconducting systems and the motion of electrons between the atoms of a crystalline solid. The model was introduced by Gersch and Knollman in 1963 in the context of granular superconductors. The model rose to prominence in the 1980s after it was found to capture the essence of the superfluid-insulator transition in a way that was much more mathematically tractable than fermionic metal-insulator models.

The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.

In applied mathematics, the numerical sign problem is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical methods fail because of the near-cancellation of the positive and negative contributions to the integral. Each has to be integrated to very high precision in order for their difference to be obtained with useful accuracy.

<span class="mw-page-title-main">Kicked rotator</span>

The kicked rotator, also spelled as kicked rotor, is a paradigmatic model for both Hamiltonian chaos and quantum chaos. It describes a free rotating stick in an inhomogeneous "gravitation like" field that is periodically switched on in short pulses. The model is described by the Hamiltonian

$$\mathcal{H}(p, \theta, t) = \frac{p^2}{2} + K \cos(\theta) \sum_{n=-\infty}^{\infty} \delta(t - nT),$$

where $p$ is the angular momentum, $\theta$ the angular position, $K$ the kicking strength, and $T$ the kicking period.

Dynamical mean-field theory (DMFT) is a method to determine the electronic structure of strongly correlated materials. In such materials, the approximation of independent electrons, which is used in density functional theory and usual band structure calculations, breaks down. Dynamical mean-field theory, a non-perturbative treatment of local interactions between electrons, bridges the gap between the nearly free electron gas limit and the atomic limit of condensed-matter physics.

Coherent states have been introduced in a physical context, first as quasi-classical states in quantum mechanics, then as the backbone of quantum optics and they are described in that spirit in the article Coherent states. However, they have generated a huge variety of generalizations, which have led to a tremendous amount of literature in mathematical physics. In this article, we sketch the main directions of research on this line. For further details, we refer to several existing surveys.

The eigenstate thermalization hypothesis is a set of ideas which purports to explain when and why an isolated quantum mechanical system can be accurately described using equilibrium statistical mechanics. In particular, it is devoted to understanding how systems which are initially prepared in far-from-equilibrium states can evolve in time to a state which appears to be in thermal equilibrium. The phrase "eigenstate thermalization" was first coined by Mark Srednicki in 1994, after similar ideas had been introduced by Josh Deutsch in 1991. The principal philosophy underlying the eigenstate thermalization hypothesis is that instead of explaining the ergodicity of a thermodynamic system through the mechanism of dynamical chaos, as is done in classical mechanics, one should instead examine the properties of matrix elements of observable quantities in individual energy eigenstates of the system.

The Lieb–Robinson bound is a theoretical upper limit on the speed at which information can propagate in non-relativistic quantum systems. It demonstrates that information cannot travel instantaneously in quantum theory, even when the relativity limits of the speed of light are ignored. The existence of such a finite speed was discovered mathematically by Elliott H. Lieb and Derek W. Robinson in 1972. It turns the locality properties of physical systems into the existence of, and an upper bound for, this speed. The bound is now known as the Lieb–Robinson bound and the speed is known as the Lieb–Robinson velocity. This velocity is always finite but not universal, depending on the details of the system under consideration. For finite-range, e.g. nearest-neighbor, interactions, this velocity is a constant independent of the distance travelled. In long-range interacting systems, this velocity remains finite, but it can increase with the distance travelled.

The Harrow–Hassidim–Lloyd algorithm or HHL algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.

In statistical mechanics, Lee–Yang theory, sometimes also known as Yang–Lee theory, is a scientific theory which seeks to describe phase transitions in large physical systems in the thermodynamic limit based on the properties of small, finite-size systems. The theory revolves around the complex zeros of partition functions of finite-size systems and how these may reveal the existence of phase transitions in the thermodynamic limit.

Hamiltonian truncation is a numerical method used to study quantum field theories (QFTs) in $d$ spacetime dimensions. Hamiltonian truncation is an adaptation of the Rayleigh–Ritz method from quantum mechanics. It is closely related to the exact diagonalization method used to treat spin systems in condensed matter physics. The method is typically used to study QFTs on spacetimes of the form $\mathbb{R} \times M$, for a spatial manifold $M$, specifically to compute the spectrum of the Hamiltonian along the time direction $\mathbb{R}$. A key feature of Hamiltonian truncation is that an explicit ultraviolet cutoff is introduced, akin to the lattice spacing a in lattice Monte Carlo methods. Since Hamiltonian truncation is a nonperturbative method, it can be used to study strong-coupling phenomena like spontaneous symmetry breaking.

Phase space crystal is the state of a physical system that displays discrete symmetry in phase space instead of real space. For a single-particle system, the phase space crystal state refers to the eigenstate of the Hamiltonian for a closed quantum system or the eigenoperator of the Liouvillian for an open quantum system. For a many-body system, phase space crystal is the solid-like crystalline state in phase space. The general framework of phase space crystals is to extend the study of solid state physics and condensed matter physics into phase space of dynamical systems. While real space has Euclidean geometry, phase space is embedded with classical symplectic geometry or quantum noncommutative geometry.

The Aubry–André model is a toy model of a one-dimensional crystal with periodically varying onsite energies. The model is employed to study both quasicrystals and the Anderson localization metal-insulator transition in disordered systems. It was first developed by Serge Aubry and Gilles André in 1980.

In quantum physics, exceptional points are singularities in the parameter space where two or more eigenstates coalesce. These points appear in dissipative systems, which make the Hamiltonian describing the system non-Hermitian.

The quantum boomerang effect is a quantum mechanical phenomenon whereby wavepackets launched through disordered media return, on average, to their starting points, as a consequence of Anderson localization and the inherent symmetries of the system. At early times, the initial parity asymmetry of the nonzero momentum leads to asymmetric behavior: nonzero displacement of the wavepackets from their origin. At long times, inherent time-reversal symmetry and the confining effects of Anderson localization lead to correspondingly symmetric behavior: both zero final velocity and zero final displacement.

Quantum computational chemistry is an emerging field that exploits quantum computing to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient.

References

  1. Weiße, Alexander; Fehske, Holger (2008). "Exact Diagonalization Techniques". Computational Many-Particle Physics. Lecture Notes in Physics. Vol. 739. Springer. pp. 529–544. Bibcode:2008LNP...739..529W. doi:10.1007/978-3-540-74686-7_18. ISBN   978-3-540-74685-0.
  2. Prelovšek, Peter (2017). "The Finite Temperature Lanczos Method and its Applications". The Physics of Correlated Insulators, Metals, and Superconductors. Modeling and Simulation. Vol. 7. Forschungszentrum Jülich. ISBN   978-3-95806-224-5.
  3. Bergeron, Dominic; Tremblay, A.-M. S. (5 August 2016). "Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation". Physical Review E. 94 (2): 023303. arXiv: 1507.01012 . Bibcode:2016PhRvE..94b3303B. doi:10.1103/PhysRevE.94.023303. PMID   27627408. S2CID   13294476.
  4. Medvedeva, Darya; Iskakov, Sergei; Krien, Friedrich; Mazurenko, Vladimir V.; Lichtenstein, Alexander I. (29 December 2017). "Exact diagonalization solver for extended dynamical mean-field theory". Physical Review B. 96 (23): 235149. arXiv: 1709.09176 . Bibcode:2017PhRvB..96w5149M. doi:10.1103/PhysRevB.96.235149. S2CID   119347649.
  5. Hamer, C. J.; Barber, M. N. (1 January 1981). "Finite-lattice methods in quantum Hamiltonian field theory. I. The Ising model". Journal of Physics A: Mathematical and General. 14 (1): 241–257. Bibcode:1981JPhA...14..241H. doi:10.1088/0305-4470/14/1/024.
  6. Lüscher, Andreas; Läuchli, Andreas M. (5 May 2009). "Exact diagonalization study of the antiferromagnetic spin-1/2 Heisenberg model on the square lattice in a magnetic field". Physical Review B. 79 (19): 195102. arXiv: 0812.3420 . Bibcode:2009PhRvB..79s5102L. doi:10.1103/PhysRevB.79.195102. S2CID   117436360.
  7. Nakano, Hiroki; Takahashi, Yoshinori; Imada, Masatoshi (15 March 2007). "Drude Weight of the Two-Dimensional Hubbard Model –Reexamination of Finite-Size Effect in Exact Diagonalization Study–". Journal of the Physical Society of Japan. 76 (3): 034705. arXiv: cond-mat/0701735 . Bibcode:2007JPSJ...76c4705N. doi:10.1143/JPSJ.76.034705. S2CID   118346915.
  8. Fu, Wenbo; Sachdev, Subir (15 July 2016). "Numerical study of fermion and boson models with infinite-range random interactions". Physical Review B. 94 (3): 035135. arXiv: 1603.05246 . Bibcode:2016PhRvB..94c5135F. doi:10.1103/PhysRevB.94.035135. S2CID   7332664.
  9. Wang, Y.; Fabbris, G.; Dean, M.P.M.; Kotliar, G. (2019). "EDRIXS: An open source toolkit for simulating spectra of resonant inelastic x-ray scattering". Computer Physics Communications. 243: 151–165. arXiv: 1812.05735 . Bibcode:2019CoPhC.243..151W. doi:10.1016/j.cpc.2019.04.018. S2CID   118949898.
  10. Tang, Baoming; Khatami, Ehsan; Rigol, Marcos (March 2013). "A short introduction to numerical linked-cluster expansions". Computer Physics Communications. 184 (3): 557–564. arXiv: 1207.3366 . Bibcode:2013CoPhC.184..557T. doi:10.1016/j.cpc.2012.10.008. S2CID   11638727.