In applied mathematics, the numerical sign problem is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical methods fail because of the near-cancellation of the positive and negative contributions to the integral. Each has to be integrated to very high precision in order for their difference to be obtained with useful accuracy.
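The severity of this cancellation can be seen in a small numerical sketch. The example below (a hypothetical toy integral, with dimension, frequency and sample count chosen arbitrarily for illustration, written in plain NumPy) estimates a 20-dimensional oscillatory integral by naive Monte Carlo; the exact value is many orders of magnitude smaller than the statistical error of the estimate, so the result is pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 5.0, 100_000     # number of variables, frequency, Monte Carlo samples (toy choices)

# Naive Monte Carlo estimate of I = \int_{[0,1]^d} cos(k*(x_1 + ... + x_d)) dx
x = rng.random((n, d))
samples = np.cos(k * x.sum(axis=1))
estimate = samples.mean()
stderr = samples.std(ddof=1) / np.sqrt(n)

# Exact value, Re[((e^{ik} - 1)/(ik))^d]: tiny because the positive and negative
# regions of the integrand cancel almost perfectly.
exact = (((np.exp(1j * k) - 1.0) / (1j * k)) ** d).real

print(f"exact value : {exact:.3e}")
print(f"MC estimate : {estimate:.3e} +/- {stderr:.1e}")
```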
The sign problem is one of the major unsolved problems in the physics of many-particle systems. It often arises in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions.
In physics the sign problem is typically (but not exclusively) encountered in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions. Because the particles are strongly interacting, perturbation theory is inapplicable, and one is forced to use brute-force numerical methods. Because the particles are fermions, their wavefunction changes sign when any two fermions are interchanged (due to the anti-symmetry of the wave function, see Pauli principle). So unless there are cancellations arising from some symmetry of the system, the quantum-mechanical sum over all multi-particle states involves an integral over a function that is highly oscillatory, hence hard to evaluate numerically, particularly in high dimension. Since the dimension of the integral is given by the number of particles, the sign problem becomes severe in the thermodynamic limit. The field-theoretic manifestation of the sign problem is discussed below.
The sign problem is one of the major unsolved problems in the physics of many-particle systems, impeding progress in many areas: in condensed matter physics, it prevents the numerical solution of systems with a high density of strongly correlated electrons, such as the Hubbard model; in nuclear physics, it obstructs ab initio calculations of dense nuclear matter, and hence of the interiors of neutron stars; and in particle physics, it prevents lattice QCD from being used to predict the phases and properties of quark matter at nonzero baryon density.
In a field-theory approach to multi-particle systems, the fermion density is controlled by the value of the fermion chemical potential $\mu$. One evaluates the partition function $Z$ by summing over all classical field configurations, weighted by $e^{-S}$, where $S$ is the action of the configuration. The sum over fermion fields can be performed analytically, and one is left with a sum over the bosonic fields $\phi$ (which may have been originally part of the theory, or have been produced by a Hubbard–Stratonovich transformation to make the fermion action quadratic)

$$Z = \int D\phi \; \rho[\phi]$$
where $D\phi$ represents the measure for the sum over all configurations of the bosonic fields, weighted by

$$\rho[\phi] = \det\!\big(M(\mu,\phi)\big)\, e^{-S[\phi]}$$
where $S[\phi]$ is now the action of the bosonic fields, and $M(\mu,\phi)$ is a matrix that encodes how the fermions were coupled to the bosons. The expectation value of an observable $A[\phi]$ is therefore an average over all configurations weighted by $\rho[\phi]$:

$$\langle A \rangle_\rho = \frac{\int D\phi \; A[\phi]\, \rho[\phi]}{\int D\phi \; \rho[\phi]}$$
If $\rho[\phi]$ is positive, then it can be interpreted as a probability measure, and $\langle A \rangle_\rho$ can be calculated by performing the sum over field configurations numerically, using standard techniques such as Monte Carlo importance sampling.
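As a minimal sketch of this procedure (a single-site scalar "field" with a quartic action, an assumption made purely for illustration), the following estimates $\langle \phi^2 \rangle$ by Metropolis importance sampling with the positive weight $e^{-S[\phi]}$ and checks the answer against direct quadrature:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 0.5                                   # quartic coupling; an arbitrary toy value
def action(phi):                            # S[phi] for a single-site scalar "field"
    return 0.5 * phi**2 + lam * phi**4

# Metropolis importance sampling of the positive weight rho[phi] = exp(-S[phi])
n_steps, step, phi = 200_000, 1.0, 0.0
chain = []
for _ in range(n_steps):
    trial = phi + step * rng.uniform(-1.0, 1.0)
    if rng.random() < np.exp(action(phi) - action(trial)):  # accept with prob. min(1, e^{-dS})
        phi = trial
    chain.append(phi)
chain = np.array(chain[20_000:])            # discard thermalization

# Cross-check <phi^2> against direct numerical quadrature of the same weight
grid = np.linspace(-5.0, 5.0, 4001)
w = np.exp(-action(grid))
print("Monte Carlo <phi^2> :", (chain**2).mean())
print("quadrature  <phi^2> :", (grid**2 * w).sum() / w.sum())
```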
The sign problem arises when $\rho[\phi]$ is non-positive. This typically occurs in theories of fermions when the fermion chemical potential $\mu$ is nonzero, i.e. when there is a nonzero background density of fermions. If $\mu \neq 0$, there is no particle–antiparticle symmetry, and $\det\!\big(M(\mu,\phi)\big)$, and hence the weight $\rho[\phi]$, is in general a complex number, so Monte Carlo importance sampling cannot be used to evaluate the integral.
A field theory with a non-positive weight can be transformed to one with a positive weight by incorporating the non-positive part (sign or complex phase) of the weight into the observable. For example, one could decompose the weighting function into its modulus and phase:

$$\rho[\phi] = p[\phi]\, e^{i\theta[\phi]}$$
where $p[\phi]$ is real and positive, so

$$\langle A \rangle_\rho = \frac{\big\langle A[\phi]\, e^{i\theta[\phi]} \big\rangle_p}{\big\langle e^{i\theta[\phi]} \big\rangle_p}$$
Note that the desired expectation value is now a ratio where the numerator and denominator are expectation values that both use a positive weighting function $p[\phi]$. However, the phase $e^{i\theta[\phi]}$ is a highly oscillatory function in the configuration space, so if one uses Monte Carlo methods to evaluate the numerator and denominator, each of them will evaluate to a very small number, whose exact value is swamped by the noise inherent in the Monte Carlo sampling process. The "badness" of the sign problem is measured by the smallness of the denominator $\langle e^{i\theta[\phi]} \rangle_p$: if it is much less than 1, then the sign problem is severe. It can be shown [5] that

$$\big\langle e^{i\theta[\phi]} \big\rangle_p \propto e^{-f V / T}$$
where $V$ is the volume of the system, $T$ is the temperature, and $f$ is an energy density. The number of Monte Carlo sampling points needed to obtain an accurate result therefore rises exponentially as the volume of the system becomes large, and as the temperature goes to zero.
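This exponential suppression can be reproduced in a toy model. In the sketch below the weight on each of $V$ independent sites is assumed to be $e^{-\phi_i^2/2 + i\mu\phi_i}$ (a Gaussian toy chosen only because $\langle e^{i\theta}\rangle_p = e^{-V\mu^2/2}$ is known exactly); the measured average phase falls below its own Monte Carlo error already at modest $V$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n = 1.0, 100_000    # toy "chemical potential" and number of samples per volume

# Toy model: V independent sites, rho[phi] = prod_i exp(-phi_i^2/2 + i*mu*phi_i),
# so p[phi] is a product of unit Gaussians and theta[phi] = mu * sum_i phi_i.
for V in (1, 5, 10, 20, 40):
    phi = rng.standard_normal((n, V))
    phase = np.exp(1j * mu * phi.sum(axis=1))
    measured = phase.real.mean()
    error = phase.real.std(ddof=1) / np.sqrt(n)
    exact = np.exp(-V * mu**2 / 2)          # <e^{i*theta}>_p for this Gaussian toy
    print(f"V = {V:3d}   exact = {exact:.2e}   measured = {measured:+.2e} +/- {error:.1e}")
```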
The decomposition of the weighting function into modulus and phase is just one example (although it has been advocated as the optimal choice since it minimizes the variance of the denominator [6] ). In general one could write

$$\rho[\phi] = p[\phi]\, \frac{\rho[\phi]}{p[\phi]}$$
where $p[\phi]$ can be any positive weighting function (for example, the weighting function of the $\mu = 0$ theory). [7] The badness of the sign problem is then measured by

$$\left\langle \frac{\rho[\phi]}{p[\phi]} \right\rangle_p$$
which again goes to zero exponentially in the large-volume limit.
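To see why, note that this quantity is simply a ratio of partition functions: writing $Z_\rho = \int D\phi\,\rho[\phi]$ and $Z_p = \int D\phi\, p[\phi]$,

$$\left\langle \frac{\rho[\phi]}{p[\phi]} \right\rangle_p = \frac{\int D\phi \; \rho[\phi]}{\int D\phi \; p[\phi]} = \frac{Z_\rho}{Z_p} = e^{-V \Delta f / T}$$

where $\Delta f$ is the difference between the free-energy densities of the two ensembles, consistent with the scaling quoted above.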
The sign problem is NP-hard, implying that a full and generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time. [8] If (as is generally suspected) there are no polynomial-time solutions to NP problems (see P versus NP problem), then there is no generic solution to the sign problem. This leaves open the possibility that there may be solutions that work in specific cases, where the oscillations of the integrand have a structure that can be exploited to reduce the numerical errors.
In systems with a moderate sign problem, such as field theories at a sufficiently high temperature or in a sufficiently small volume, the sign problem is not too severe and useful results can be obtained by various methods, such as more carefully tuned reweighting, analytic continuation from imaginary $\mu$ to real $\mu$, or Taylor expansion in powers of $\mu$. [3] [9]
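As a sketch of the analytic-continuation idea (again using the hypothetical single-site Gaussian weight $e^{-\phi^2/2 + i\mu\phi}$, for which the exact answer is known), one can measure an observable at several imaginary values of $\mu$, where the weight is real and positive, fit it as a polynomial in $\mu^2$, and continue the fit to real $\mu$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-site weight exp(-phi^2/2 + i*mu*phi). At imaginary mu = i*mu_I the
# weight exp(-phi^2/2 - mu_I*phi) is real and positive (a Gaussian of mean -mu_I),
# so <phi^2> can be measured there without a sign problem; exactly, <phi^2> = 1 + mu_I^2.
def measure_phi2_at_imaginary_mu(mu_I, n=200_000):
    phi = rng.standard_normal(n) - mu_I      # direct sampling of the shifted Gaussian
    return (phi**2).mean()

mu_I_values = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
phi2_values = np.array([measure_phi2_at_imaginary_mu(m) for m in mu_I_values])

# <phi^2> is analytic in mu^2, so fit it as a polynomial in mu_I^2 and
# continue mu_I^2 -> -mu^2 to reach real chemical potential.
coeffs = np.polyfit(mu_I_values**2, phi2_values, deg=1)

mu = 0.7                                     # target real "chemical potential"
continued = np.polyval(coeffs, -mu**2)
print("analytic continuation :", continued)
print("exact result          :", 1 - mu**2)  # <phi^2>_rho = 1 + (i*mu)^2 for this toy
```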
There are various proposals for solving systems with a severe sign problem, including Meron-cluster algorithms, the fixed-node method of quantum Monte Carlo, stochastic quantization with complex Langevin dynamics, and deformation of the integration domain onto Lefschetz thimbles.