Numerical sign problem

In applied mathematics, the numerical sign problem is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical methods fail because of the near-cancellation of the positive and negative contributions to the integral. Each of these contributions has to be integrated to very high precision in order for their difference to be obtained with useful accuracy.
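
The severity of this cancellation can be seen in a minimal sketch (an illustrative toy, not taken from the literature): the integral of $\cos\big(\omega(x_1+\cdots+x_d)\big)$ over the unit hypercube shrinks exponentially with the dimension $d$, while the statistical noise of a plain Monte Carlo estimate does not, so the relative error of the naive estimate explodes.

```python
# Toy illustration of the sign problem for an oscillatory integral:
# I(d) = integral over [0,1]^d of cos(omega * (x_1 + ... + x_d)) dx.
# The integrand is O(1), but positive and negative regions nearly cancel,
# so the exact value decays exponentially with d while the sampling noise does not.
import numpy as np

rng = np.random.default_rng(0)
omega, n_samples = 6.0, 200_000

for d in (1, 2, 4, 8):
    x = rng.random((n_samples, d))                 # uniform points in [0,1]^d
    samples = np.cos(omega * x.sum(axis=1))        # O(1) positive and negative values
    estimate = samples.mean()
    stderr = samples.std(ddof=1) / np.sqrt(n_samples)
    # exact value: real part of the d-th power of the one-dimensional integral
    exact = (((np.exp(1j * omega) - 1.0) / (1j * omega)) ** d).real
    print(f"d={d}  exact={exact:+.2e}  estimate={estimate:+.2e}  stat. error={stderr:.1e}")
```

With these settings the exact value falls far below the statistical error already by $d = 4$, so the naive estimate carries essentially no information about the answer.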

The sign problem is one of the major unsolved problems in the physics of many-particle systems. It often arises in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions.

Overview

In physics, the sign problem is typically (but not exclusively) encountered in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions. Because the particles are strongly interacting, perturbation theory is inapplicable, and one is forced to use brute-force numerical methods. Because the particles are fermions, their wavefunction changes sign when any two fermions are interchanged (due to the anti-symmetry of the wave function, see Pauli principle). So unless there are cancellations arising from some symmetry of the system, the quantum-mechanical sum over all multi-particle states involves an integral over a function that is highly oscillatory, and hence hard to evaluate numerically, particularly in high dimension. Since the dimension of the integral is given by the number of particles, the sign problem becomes severe in the thermodynamic limit. The field-theoretic manifestation of the sign problem is discussed below.
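
The cancellations forced by antisymmetry can be made concrete with a toy sketch (an illustration only; the matrix below is a generic positive array, not a physical model): for a matrix with strictly positive entries, the sign-weighted sum over permutations that fermionic exchange produces (a determinant) is vastly smaller than the same sum with all signs dropped (a permanent), so it emerges only from near-complete cancellation of large terms.

```python
# Compare the signed (determinant-like) and unsigned (permanent-like) sums over
# permutations of a strictly positive matrix: the signed sum is tiny by comparison,
# i.e. it is produced by massive cancellations between O(1) terms.
import itertools
import math
import numpy as np

rng = np.random.default_rng(1)

def signed_and_unsigned_sums(a):
    """Sum of products a[0, p(0)] * ... * a[n-1, p(n-1)] over all permutations p,
    once with the fermionic sign (-1)^p and once without it."""
    n = a.shape[0]
    signed, unsigned = 0.0, 0.0
    for perm in itertools.permutations(range(n)):
        term = math.prod(a[i, perm[i]] for i in range(n))
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        signed += (-1) ** inversions * term
        unsigned += term
    return signed, unsigned

for n in (2, 4, 6, 8):
    a = rng.random((n, n)) + 0.5                   # strictly positive "matrix elements"
    signed, unsigned = signed_and_unsigned_sums(a)
    print(f"n={n}:  |signed sum| / unsigned sum = {abs(signed) / unsigned:.1e}")
```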

The sign problem is one of the major unsolved problems in the physics of many-particle systems, impeding progress in many areas, from the numerical simulation of many-electron systems in condensed matter to lattice QCD at non-zero density. [1] [2]

The sign problem in field theory [note 1]

In a field-theory approach to multi-particle systems, the fermion density is controlled by the value of the fermion chemical potential $\mu$. One evaluates the partition function $Z$ by summing over all classical field configurations, weighted by $e^{-S}$, where $S$ is the action of the configuration. The sum over fermion fields can be performed analytically, and one is left with a sum over the bosonic fields $\sigma$ (which may have been originally part of the theory, or have been produced by a Hubbard–Stratonovich transformation to make the fermion action quadratic):

$$Z = \int D\sigma\, \rho[\sigma],$$

where $D\sigma$ represents the measure for the sum over all configurations of the bosonic fields, weighted by

$$\rho[\sigma] = \det\big(M(\mu,\sigma)\big)\, e^{-S[\sigma]},$$

where $S[\sigma]$ is now the action of the bosonic fields, and $M(\mu,\sigma)$ is a matrix that encodes how the fermions were coupled to the bosons. The expectation value of an observable $A[\sigma]$ is therefore an average over all configurations weighted by $\rho[\sigma]$:

$$\langle A \rangle = \frac{\int D\sigma\, A[\sigma]\, \rho[\sigma]}{\int D\sigma\, \rho[\sigma]}.$$

If $\rho[\sigma]$ is positive, then it can be interpreted as a probability measure, and $\langle A \rangle$ can be calculated by performing the sum over field configurations numerically, using standard techniques such as Monte Carlo importance sampling.
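
For a positive weight this is straightforward in practice. The following minimal sketch (a hypothetical single-variable "bosonic" toy with action $S(\phi) = \phi^2/2 + 0.1\,\phi^4$, chosen only for illustration) samples configurations with probability proportional to $e^{-S(\phi)}$ using the Metropolis algorithm and averages an observable over them:

```python
# Minimal Metropolis sampling of a positive weight rho(phi) = exp(-S(phi))
# for a hypothetical one-variable action S(phi) = phi^2/2 + 0.1*phi^4.
import numpy as np

rng = np.random.default_rng(4)

def action(phi):
    return 0.5 * phi**2 + 0.1 * phi**4

phi, history = 0.0, []
for step in range(200_000):
    proposal = phi + rng.uniform(-1.0, 1.0)        # symmetric random-walk proposal
    # accept with probability min(1, rho(proposal) / rho(phi))
    if rng.random() < np.exp(action(phi) - action(proposal)):
        phi = proposal
    history.append(phi)

history = np.array(history[20_000:])               # discard thermalization steps
mean_phi2 = np.mean(history**2)
err_phi2 = np.std(history**2) / np.sqrt(history.size)   # naive error; ignores autocorrelation
print(f"<phi^2> = {mean_phi2:.4f} +/- {err_phi2:.4f}")
```

When the weight becomes complex, as described next, this construction breaks down, because a complex number cannot serve as a sampling probability.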

The sign problem arises when $\rho[\sigma]$ is non-positive. This typically occurs in theories of fermions when the fermion chemical potential $\mu$ is nonzero, i.e. when there is a nonzero background density of fermions. If $\mu \neq 0$, there is no particle–antiparticle symmetry, and $\det M(\mu,\sigma)$, and hence the weight $\rho[\sigma]$, is in general a complex number, so Monte Carlo importance sampling cannot be used to evaluate the integral.

Reweighting procedure

A field theory with a non-positive weight can be transformed to one with a positive weight by incorporating the non-positive part (sign or complex phase) of the weight into the observable. For example, one could decompose the weighting function into its modulus and phase:

$$\rho[\sigma] = p[\sigma]\, e^{i\theta[\sigma]},$$

where $p[\sigma]$ is real and positive, so

$$\langle A \rangle = \frac{\int D\sigma\, A[\sigma]\, p[\sigma]\, e^{i\theta[\sigma]}}{\int D\sigma\, p[\sigma]\, e^{i\theta[\sigma]}} = \frac{\big\langle A\, e^{i\theta} \big\rangle_p}{\big\langle e^{i\theta} \big\rangle_p}.$$

Note that the desired expectation value is now a ratio where the numerator and denominator are expectation values that both use a positive weighting function $p[\sigma]$. However, the phase $e^{i\theta[\sigma]}$ is a highly oscillatory function in the configuration space, so if one uses Monte Carlo methods to evaluate the numerator and denominator, each of them will evaluate to a very small number, whose exact value is swamped by the noise inherent in the Monte Carlo sampling process. The "badness" of the sign problem is measured by the smallness of the denominator $\langle e^{i\theta}\rangle_p$: if it is much less than 1, then the sign problem is severe. It can be shown [5] that

$$\big\langle e^{i\theta} \big\rangle_p \propto e^{-f V / T},$$

where $V$ is the volume of the system, $T$ is the temperature, and $f$ is an energy density. The number of Monte Carlo sampling points needed to obtain an accurate result therefore rises exponentially as the volume of the system becomes large, and as the temperature goes to zero.
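
The loss of signal can be reproduced in a toy sketch (an assumed model, not a lattice simulation): take a "volume" of $V$ independent sites with per-site complex weight $e^{-\phi^2/2 + i\mu\phi}$, so that $p$ is a product of unit Gaussians and $\theta = \mu\sum_j \phi_j$. In this model $\langle e^{i\theta}\rangle_p = e^{-\mu^2 V/2}$ and $\langle \phi_1^2\rangle = 1-\mu^2$ exactly, so one can watch the denominator, and with it the reconstructed observable, sink into the Monte Carlo noise as $V$ grows.

```python
# Phase reweighting for a toy complex weight on V independent Gaussian "sites":
# rho = prod_j exp(-phi_j^2/2 + i*mu*phi_j),  p = |rho|,  theta = mu * sum_j phi_j.
# Exact results: <e^{i theta}>_p = exp(-mu^2 V / 2)  and  <phi_1^2> = 1 - mu^2.
import numpy as np

rng = np.random.default_rng(2)
mu, n_samples = 0.5, 100_000

for volume in (4, 16, 64):
    phi = rng.standard_normal((n_samples, volume))       # sample the positive weight p
    phase = np.exp(1j * mu * phi.sum(axis=1))            # e^{i theta[phi]}
    numerator = np.mean(phi[:, 0] ** 2 * phase)          # <A e^{i theta}>_p with A = phi_1^2
    denominator = np.mean(phase)                         # <e^{i theta}>_p
    print(f"V={volume:3d}  |<e^(i theta)>| = {abs(denominator):.2e}"
          f"  (exact {np.exp(-0.5 * mu**2 * volume):.2e})"
          f"  <phi_1^2> = {(numerator / denominator).real:+.3f}  (exact {1 - mu**2:+.3f})")
```

At $V = 64$ the exact denominator is already comparable to or below the sampling noise of $10^5$ points, so the reconstructed observable becomes unreliable; this is the sign problem in miniature.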

The decomposition of the weighting function into modulus and phase is just one example (although it has been advocated as the optimal choice since it minimizes the variance of the denominator [6]). In general one could write

$$\rho[\sigma] = p[\sigma]\, \frac{\rho[\sigma]}{p[\sigma]},$$

where $p[\sigma]$ can be any positive weighting function (for example, the weighting function of the $\mu = 0$ theory). [7] The badness of the sign problem is then measured by

$$\left\langle \frac{\rho[\sigma]}{p[\sigma]} \right\rangle_p,$$

which again goes to zero exponentially in the large-volume limit.
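
A brief sketch of the standard argument for this exponential suppression (stated under the assumption that both the full theory and the auxiliary $p$-weighted theory have well-defined free-energy densities): the average of $\rho/p$ in the $p$-ensemble is a ratio of partition functions,

$$\left\langle \frac{\rho[\sigma]}{p[\sigma]} \right\rangle_p = \frac{\int D\sigma\, \rho[\sigma]}{\int D\sigma\, p[\sigma]} = \frac{Z}{Z_p} = e^{-V (f - f_p)/T},$$

where $f$ and $f_p$ are the free-energy densities of the two theories. Whenever $f > f_p$, as is generically the case at nonzero $\mu$, the ratio vanishes exponentially as $V \to \infty$, in line with the estimate quoted above.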

Methods for reducing the sign problem

The sign problem is NP-hard, implying that a full and generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time. [8] If (as is generally suspected) there are no polynomial-time solutions to NP problems (see P versus NP problem), then there is no generic solution to the sign problem. This leaves open the possibility that there may be solutions that work in specific cases, where the oscillations of the integrand have a structure that can be exploited to reduce the numerical errors.

In systems with a moderate sign problem, such as field theories at a sufficiently high temperature or in a sufficiently small volume, the sign problem is not too severe and useful results can be obtained by various methods, such as more carefully tuned reweighting, analytic continuation from imaginary $\mu$ to real $\mu$, or Taylor expansion in powers of $\mu$. [3] [9]
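
The analytic-continuation idea can be illustrated with a toy sketch (a single-site version of the Gaussian weight used in the reweighting example above, for which $\langle\phi^2\rangle(\mu) = 1-\mu^2$ exactly): at imaginary $\mu = i\mu_I$ the weight $e^{-\phi^2/2 - \mu_I\phi}$ is real and positive and can be sampled directly, and the measurements are then fitted as a function of $\mu^2$ and continued to real $\mu$.

```python
# Analytic continuation from imaginary to real chemical potential for the
# one-site Gaussian toy weight exp(-phi^2/2 + i*mu*phi), where <phi^2> = 1 - mu^2.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 200_000
mu_imag = np.array([0.2, 0.4, 0.6, 0.8])           # imaginary chemical potentials mu = i*mu_I

# At mu = i*mu_I the weight is a real Gaussian with mean -mu_I: no sign problem,
# so <phi^2> can be "measured" by direct sampling.
measured = [np.mean((rng.standard_normal(n_samples) - m) ** 2) for m in mu_imag]

# Fit the measurements as a polynomial in mu^2 (here mu^2 = -mu_I^2 < 0) ...
coeffs = np.polyfit(-mu_imag**2, measured, deg=1)

# ... and evaluate the fit at a real chemical potential, mu^2 > 0.
mu_real = 0.5
prediction = np.polyval(coeffs, mu_real**2)
print(f"continued <phi^2> at mu={mu_real}: {prediction:.3f}  (exact {1 - mu_real**2:.3f})")
```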

Current approaches

There are various proposals for solving systems with a severe sign problem, among them meron-cluster algorithms, [5] complex-path methods that deform the domain of integration into the complex plane, [10] stochastic quantization, [11] Majorana representations with suitable time-reversal symmetries, [12] [13] fixed-node Monte Carlo, [14] and diagrammatic Monte Carlo. [15]

Footnotes

  1. Sources for this section include Chandrasekharan & Wiese (1999) [5] and Kieu & Griffin (1994), [6] in addition to those cited.

References

  1. Loh, E. Y.; Gubernatis, J. E.; Scalettar, R. T.; White, S. R.; Scalapino, D. J.; Sugar, R. L. (1990). "Sign problem in the numerical simulation of many-electron systems". Physical Review B. 41 (13): 9301–9307. Bibcode:1990PhRvB..41.9301L. doi:10.1103/PhysRevB.41.9301. PMID   9993272.
  2. de Forcrand, Philippe (2010). "Simulating QCD at finite density". Proceedings of Science. LAT2010: 010. arXiv:1005.0539. Bibcode:2010arXiv1005.0539D.
  3. Philipsen, O. (2008). "Lattice calculations at non-zero chemical potential: The QCD phase diagram". Proceedings of Science. 77: 011. doi:10.22323/1.077.0011.
  4. Anagnostopoulos, K. N.; Nishimura, J. (2002). "New approach to the complex-action problem and its application to a nonperturbative study of superstring theory". Physical Review D. 66 (10): 106008. arXiv: hep-th/0108041 . Bibcode:2002PhRvD..66j6008A. doi:10.1103/PhysRevD.66.106008. S2CID   119384615.
  5. Chandrasekharan, Shailesh; Wiese, Uwe-Jens (1999). "Meron-Cluster Solution of Fermion Sign Problems". Physical Review Letters. 83 (16): 3116–3119. arXiv:cond-mat/9902128. Bibcode:1999PhRvL..83.3116C. doi:10.1103/PhysRevLett.83.3116. S2CID 119061060.
  6. Kieu, T. D.; Griffin, C. J. (1994). "Monte Carlo simulations with indefinite and complex-valued measures". Physical Review E. 49 (5): 3855–3859. arXiv:hep-lat/9311072. Bibcode:1994PhRvE..49.3855K. doi:10.1103/PhysRevE.49.3855. PMID 9961673. S2CID 46652412.
  7. Barbour, I. M.; Morrison, S. E.; Klepfish, E. G.; Kogut, J. B.; Lombardo, M.-P. (1998). "Results on Finite Density QCD". Nuclear Physics B - Proceedings Supplements. 60 (1998): 220–233. arXiv: hep-lat/9705042 . Bibcode:1998NuPhS..60..220B. doi:10.1016/S0920-5632(97)00484-2. S2CID   16172956.
  8. Troyer, Matthias; Wiese, Uwe-Jens (2005). "Computational Complexity and Fundamental Limitations to Fermionic Quantum Monte Carlo Simulations". Physical Review Letters. 94 (17): 170201. arXiv: cond-mat/0408370 . Bibcode:2005PhRvL..94q0201T. doi:10.1103/PhysRevLett.94.170201. PMID   15904269. S2CID   11394699.
  9. Schmidt, Christian (2006). "Lattice QCD at Finite Density". Proceedings of Science. LAT2006: 021. arXiv:hep-lat/0610116. Bibcode:2006slft.confE..21S. doi:10.22323/1.032.0021. S2CID 14890549.
  10. Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo; Warrington, Neill (2022). "Complex paths around the sign problem". Reviews of Modern Physics. 94: 015006. arXiv: 2007.05436 . doi:10.1103/RevModPhys.94.015006.
  11. Aarts, Gert (2009). "Can Stochastic Quantization Evade the Sign Problem? The Relativistic Bose Gas at Finite Chemical Potential". Physical Review Letters. 102 (13): 131601. arXiv: 0810.2089 . Bibcode:2009PhRvL.102m1601A. doi:10.1103/PhysRevLett.102.131601. PMID   19392346. S2CID   12719451.
  12. Li, Zi-Xiang; Jiang, Yi-Fan; Yao, Hong (2015). "Solving the fermion sign problem in quantum Monte Carlo simulations by Majorana representation". Physical Review B. 91 (24): 241117. arXiv: 1408.2269 . Bibcode:2015PhRvB..91x1117L. doi:10.1103/PhysRevB.91.241117. S2CID   86865851.
  13. Li, Zi-Xiang; Jiang, Yi-Fan; Yao, Hong (2016). "Majorana-Time-Reversal Symmetries: A Fundamental Principle for Sign-Problem-Free Quantum Monte Carlo Simulations". Physical Review Letters. 117 (26): 267002. arXiv: 1601.05780 . Bibcode:2016PhRvL.117z7002L. doi:10.1103/PhysRevLett.117.267002. PMID   28059531. S2CID   24661656.
  14. Van Bemmel, H. J. M.; Ten Haaf, D. F. B.; Van Saarloos, W.; Van Leeuwen, J. M. J.; An, G. (1994). "Fixed-Node Quantum Monte Carlo Method for Lattice Fermions" (PDF). Physical Review Letters. 72 (15): 2442–2445. Bibcode:1994PhRvL..72.2442V. doi:10.1103/PhysRevLett.72.2442. hdl: 1887/5478 . PMID   10055881.
  15. Houcke, Kris Van; Kozik, Evgeny; Prokof'ev, Nikolay V.; Svistunov, Boris Vladimirovich (2010-01-01). "Diagrammatic Monte Carlo". Physics Procedia. 6: 95–105. arXiv: 0802.2923 . Bibcode:2010PhPro...6...95V. doi:10.1016/j.phpro.2010.09.034. hdl: 1854/LU-3234513 . ISSN   1875-3892. S2CID   16490610.