Connes' embedding problem, formulated by Alain Connes in the 1970s, is a major problem in von Neumann algebra theory. Since then, the problem has been reformulated in several different areas of mathematics. Dan Voiculescu, while developing his free entropy theory, found that Connes' embedding problem is related to the existence of microstates. Some results in von Neumann algebra theory can be obtained assuming a positive solution to the problem. The problem is also connected to basic questions in quantum theory, which led to the realization that it has important implications in computer science.
The problem admits a number of equivalent formulations. [1] Notably, it is equivalent to long-standing problems in other fields, including Kirchberg's QWEP conjecture in C*-algebra theory and Tsirelson's problem in quantum information theory.
In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen announced a result in quantum complexity theory, MIP*=RE, [2] that implies a negative answer to Connes' embedding problem. [3] [4] However, in September 2020 an error was discovered in an earlier result their proof relied on; a new proof avoiding that result was published as a preprint the same month. [5] A broad outline was published in Communications of the ACM in November 2021, [6] and an article explaining the connection between MIP*=RE and the Connes embedding problem appeared in October 2022. [7]
Let $\omega$ be a free ultrafilter on the natural numbers and let $R$ be the hyperfinite type II1 factor with trace $\tau$. One can construct the ultrapower $R^\omega$ as follows: let $\ell^\infty(R)$ be the von Neumann algebra of norm-bounded sequences $(x_n)_n$ in $R$ and let $I_\omega = \{ (x_n)_n \in \ell^\infty(R) : \lim_{n \to \omega} \tau(x_n^* x_n) = 0 \}$. The quotient $R^\omega = \ell^\infty(R)/I_\omega$ turns out to be a II1 factor with trace $\tau_{R^\omega}(x) = \lim_{n \to \omega} \tau(x_n)$, where $(x_n)_n$ is any representative sequence of $x$.
Connes' embedding problem asks whether every type II1 factor on a separable Hilbert space can be embedded into some $R^\omega$.
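One standard, equivalent way to state the embedding condition, in terms of the microstates mentioned in the introduction (stated informally here; the choice of degree bound $d$ and the optional norm bound are conventions of this sketch), is that a separable II1 factor $(M, \tau)$ embeds into some $R^\omega$ if and only if every finite tuple of self-adjoint elements admits matricial microstates:
\[
\forall\, x_1, \dots, x_k \in M \text{ self-adjoint},\ \forall\, \varepsilon > 0,\ \forall\, d \in \mathbb{N}:\quad
\exists\, n \in \mathbb{N},\ \exists\, \text{self-adjoint } A_1, \dots, A_k \in M_n(\mathbb{C})
\]
\[
\text{such that } \big| \tau(x_{i_1} \cdots x_{i_m}) - \operatorname{tr}_n(A_{i_1} \cdots A_{i_m}) \big| < \varepsilon
\quad \text{for all words with } m \le d,
\]
where $\operatorname{tr}_n$ is the normalized trace on $M_n(\mathbb{C})$; a norm bound such as $\|A_j\| \le \|x_j\|$ can additionally be arranged.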
A positive solution to the problem would imply that invariant subspaces exist for a large class of operators in type II1 factors (Uffe Haagerup) and that all countable discrete groups are hyperlinear. A positive solution to the problem would be implied by equality between free entropy and free entropy defined by microstates (Dan Voiculescu). In January 2020, a group of researchers [2] claimed to have resolved the problem in the negative, i.e., to have shown that there exist type II1 von Neumann factors that do not embed into an ultrapower of the hyperfinite II1 factor.
The isomorphism class of $R^\omega$ is independent of the ultrafilter $\omega$ if and only if the continuum hypothesis is true (Ge-Hadwin and Farah-Hart-Sherman), but such an embedding property does not depend on the ultrafilter because von Neumann algebras acting on separable Hilbert spaces are, roughly speaking, very small.
The ultraproduct is a mathematical construction that appears mainly in abstract algebra and mathematical logic, in particular in model theory and set theory. An ultraproduct is a quotient of the direct product of a family of structures. All factors need to have the same signature. The ultrapower is the special case of this construction in which all factors are equal.
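As a sketch of the construction (standard model-theoretic notation; writing the ultrafilter as $\mathcal{U}$ is an assumption of this illustration), the ultraproduct of a family $(M_i)_{i \in I}$ is the quotient
\[
\prod_{i \in I} M_i \Big/ \sim_{\mathcal{U}},
\qquad
(a_i)_{i \in I} \sim_{\mathcal{U}} (b_i)_{i \in I}
\iff
\{\, i \in I : a_i = b_i \,\} \in \mathcal{U},
\]
and the ultrapower is the special case in which $M_i = M$ for all $i$.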
In mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra.
In functional analysis, a state of an operator system is a positive linear functional of norm 1. States in functional analysis generalize the notion of density matrices in quantum mechanics, which represent quantum states, both mixed states and pure states. Density matrices in turn generalize state vectors, which only represent pure states. For M an operator system in a C*-algebra A with identity, the set of all states of M, sometimes denoted by S(M), is convex and weak-* closed in the Banach dual space M*. Thus the set of all states of M with the weak-* topology forms a compact Hausdorff space, known as the state space of M.
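For example (standard facts; the symbol $\rho$ for a density operator is a notational choice of this illustration), a state $\varphi$ on a unital C*-algebra $A$ satisfies
\[
\varphi(a^* a) \ge 0 \quad \text{for all } a \in A, \qquad \varphi(1) = 1,
\]
and on $A = B(H)$ every density operator $\rho$ (positive, trace one) gives a state via
\[
\varphi_\rho(a) = \operatorname{Tr}(\rho\, a).
\]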
The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance of the same physical variable, and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
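One commonly quoted classical form of the relation (given here as an illustration for a variable $x$ with linear response function $\chi$; sign and normalization conventions vary between texts) is
\[
S_x(\omega) = \frac{2 k_B T}{\omega}\, \operatorname{Im} \chi(\omega),
\]
where $S_x(\omega)$ is the power spectral density of the fluctuations of $x$ and $\chi(\omega)$ is the admittance describing the response of $x$ to a conjugate perturbation.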
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function $1/(\pi t)$. The Hilbert transform has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency. The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
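Explicitly (standard definitions, with p.v. denoting the Cauchy principal value; the frequency-domain identity depends on the chosen Fourier transform convention):
\[
H(u)(t) = \frac{1}{\pi}\, \operatorname{p.v.} \int_{-\infty}^{\infty} \frac{u(\tau)}{t - \tau}\, d\tau,
\qquad
\widehat{H(u)}(\omega) = -i\, \operatorname{sgn}(\omega)\, \hat{u}(\omega),
\]
where the second identity expresses the ±90° phase shift in the frequency domain.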
The Hamiltonian constraint arises from any theory that admits a Hamiltonian formulation and is reparametrisation-invariant. The Hamiltonian constraint of general relativity is an important non-trivial example.
A Tsirelson bound is an upper limit to quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities, a natural question to ask is how large the violation can be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, only constrained by "no-signalling", were considered, and much research has been dedicated to the question of why this is the case.
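For the CHSH inequality, for instance (a standard worked example; the labels $A_i$, $B_j$ denote the ±1-valued measurement settings of the two parties), the classical, quantum, and no-signalling bounds are
\[
\big| \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \big|
\;\le\;
\begin{cases}
2 & \text{(local hidden variables)} \\
2\sqrt{2} & \text{(quantum mechanics, Tsirelson's bound)} \\
4 & \text{(no-signalling only)}
\end{cases}
\]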
In mathematics, a Dirac comb is a periodic function with the formula $\operatorname{Ш}_T(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)$ for some given period $T$. Here t is a real variable and the sum extends over all integers k. The Dirac delta function and the Dirac comb are tempered distributions. The graph of the function resembles a comb, hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function.
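A useful standard identity (a consequence of the Poisson summation formula, with the second equality understood in the sense of tempered distributions) is the Fourier-series expansion of the comb:
\[
\operatorname{Ш}_T(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT) = \frac{1}{T} \sum_{n=-\infty}^{\infty} e^{2\pi i n t / T}.
\]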
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
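Written out, the convolution mentioned above is the standard integral
\[
y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau,
\]
so the impulse response h(t) completely determines the output for any input.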
In mathematics, a crossed product is a basic method of constructing a new von Neumann algebra from a von Neumann algebra acted on by a group. It is related to the semidirect product construction for groups.
In the theory of von Neumann algebras, a subfactor of a factor $M$ is a subalgebra that is a factor and contains $1$. The theory of subfactors led to the discovery of the Jones polynomial in knot theory.
In the theory of von Neumann algebras, a part of the mathematical field of functional analysis, Tomita–Takesaki theory is a method for constructing modular automorphisms of von Neumann algebras from the polar decomposition of a certain involution. It is essential for the theory of type III factors, and has led to a good structure theory for these previously intractable objects.
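In outline (standard notation; here $\Omega$ is assumed to be a cyclic and separating vector for the von Neumann algebra $M$), the involution in question and its polar decomposition are
\[
S\, x\,\Omega = x^*\,\Omega \quad (x \in M), \qquad S = J\, \Delta^{1/2},
\]
and the resulting modular automorphism group $\sigma_t(x) = \Delta^{it}\, x\, \Delta^{-it}$ satisfies $\Delta^{it} M \Delta^{-it} = M$ and $J M J = M'$.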
In mathematics, affiliated operators were introduced by Murray and von Neumann in the theory of von Neumann algebras as a technique for using unbounded operators to study modules generated by a single vector. Later Atiyah and Singer showed that index theorems for elliptic operators on closed manifolds with infinite fundamental group could naturally be phrased in terms of unbounded operators affiliated with the von Neumann algebra of the group. Algebraic properties of affiliated operators have proved important in L2 cohomology, an area between analysis and geometry that evolved from the study of such index theorems.
In many-body theory, the term Green's function is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators.
In probability theory, a standard probability space, also called Lebesgue–Rokhlin probability space or just Lebesgue space, is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Informally, it is a probability space consisting of an interval and/or a finite or countable number of atoms.
In mathematics, a commutation theorem for traces explicitly identifies the commutant of a specific von Neumann algebra acting on a Hilbert space in the presence of a trace.
Quantum characteristics are phase-space trajectories that arise in the phase space formulation of quantum mechanics through the Wigner transform of Heisenberg operators of canonical coordinates and momenta. These trajectories obey the Hamilton equations in quantum form and play the role of characteristics in terms of which time-dependent Weyl's symbols of quantum operators can be expressed. In the classical limit, quantum characteristics reduce to classical trajectories. The knowledge of quantum characteristics is equivalent to the knowledge of quantum dynamics.
In quantum mechanics, the Redfield equation is a Markovian master equation that describes the time evolution of the reduced density matrix ρ of a strongly coupled quantum system that is weakly coupled to an environment. The equation is named in honor of Alfred G. Redfield, who first applied it, doing so for nuclear magnetic resonance spectroscopy. It is also known as the Redfield relaxation theory.
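In the energy eigenbasis of the system Hamiltonian, the equation is commonly written in the schematic form below (the explicit expression for the Redfield relaxation tensor $R_{abcd}$, which encodes the bath correlation functions, is omitted here):
\[
\frac{d\rho_{ab}(t)}{dt} = -\,i\,\omega_{ab}\,\rho_{ab}(t) + \sum_{c,d} R_{abcd}\,\rho_{cd}(t),
\qquad \omega_{ab} = (E_a - E_b)/\hbar.
\]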
This is a glossary for the terminology in a mathematical field of functional analysis.
In many-body physics, the problem of analytic continuation is that of numerically extracting the spectral density of a Green's function given its values on the imaginary axis. It is a necessary post-processing step for calculating dynamical properties of physical systems from quantum Monte Carlo simulations, which often compute Green's function values only at imaginary times or Matsubara frequencies.
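The underlying relation between the Matsubara Green's function and the spectral density $A(\omega)$ is the standard spectral representation (fermionic Matsubara frequencies $i\omega_n$ shown; the bosonic case is analogous):
\[
G(i\omega_n) = \int_{-\infty}^{\infty} d\omega\, \frac{A(\omega)}{i\omega_n - \omega},
\]
and the analytic-continuation problem is to invert this relation numerically for $A(\omega)$ given noisy data for $G(i\omega_n)$.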