In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable (orthogonal) states. [1] QSL theorems are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. [2] Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, [3] a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems, and their evolution is also subject to QSLs. [4] [5] Quite remarkably, it was shown that environmental effects, such as non-Markovian dynamics, can speed up quantum processes, [6] which was verified in a cavity QED experiment. [7]
QSLs have been used to explore the limits of computation [8] [9] and complexity. In 2017, QSLs were studied in a quantum oscillator at high temperature. [10] In 2018, it was shown that QSLs are not restricted to the quantum domain and that similar bounds hold in classical systems. [11] [12] In 2021, both the Mandelstam–Tamm and the Margolus–Levitin QSL bounds were concurrently tested in a single experiment, [13] which indicated there are "two different regimes: one where the Mandelstam-Tamm limit constrains the evolution at all times, and a second where a crossover to the Margolus-Levitin limit occurs at longer times."
In quantum sensing, QSLs impose fundamental constraints on the maximum achievable time resolution of quantum sensors. These limits stem from the requirement that quantum states must evolve to orthogonal states to extract precise information. For example, in applications like Ramsey interferometry, the QSL determines the minimum time required for phase accumulation during control sequences, directly impacting the sensor's temporal resolution and sensitivity. [14]
The speed limit theorems can be stated for pure states and for mixed states; they take a simpler form for pure states. An arbitrary pure state can be written as a linear combination of energy eigenstates:

$|\psi(0)\rangle = \sum_n c_n |E_n\rangle.$

The task is to provide a lower bound for the time interval $\Delta t_\perp$ required for the initial state $|\psi(0)\rangle$ to evolve into a state orthogonal to $|\psi(0)\rangle$. The time evolution of a pure state is given by the Schrödinger equation:

$|\psi(t)\rangle = \sum_n c_n e^{-iE_n t/\hbar} |E_n\rangle.$

Orthogonality is obtained when

$\langle\psi(0)|\psi(t)\rangle = \sum_n |c_n|^2 e^{-iE_n t/\hbar} = 0,$

and the minimum time interval $\Delta t_\perp$ required to achieve this condition is called the orthogonalization interval [2] or orthogonalization time. [15]
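As a concrete numerical illustration (not taken from the cited references), the overlap $\langle\psi(0)|\psi(t)\rangle$ can be evaluated for an assumed spectrum and the orthogonalization time read off as the first zero of its magnitude. The sketch below uses natural units with $\hbar = 1$ and an equal superposition of $N$ equally spaced levels $E_n = n\omega$; for that choice the first zero occurs at $t = 2\pi/(N\omega)$.

import numpy as np

hbar = 1.0                        # natural units (assumption for this sketch)
omega = 1.0                       # level spacing of the assumed spectrum E_n = n*omega
N = 4                             # number of levels in the equal superposition
E = omega * np.arange(N)          # energy eigenvalues
c = np.full(N, 1 / np.sqrt(N))    # equal-superposition amplitudes

def overlap(t):
    # <psi(0)|psi(t)> = sum_n |c_n|^2 exp(-i E_n t / hbar)
    return np.sum(np.abs(c) ** 2 * np.exp(-1j * E * t / hbar))

# Scan in time and report the first point where the overlap (nearly) vanishes.
ts = np.linspace(0, 2 * np.pi / omega, 20001)
mags = np.array([abs(overlap(t)) for t in ts])
t_perp = ts[np.argmax(mags < 1e-3)]   # index of the first near-zero of |overlap|
print(f"first (approximate) orthogonalization time: {t_perp:.4f}")
print(f"analytic value 2*pi/(N*omega):              {2 * np.pi / (N * omega):.4f}")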
For pure states, the Mandelstam–Tamm theorem states that the minimum time required for a state to evolve into an orthogonal state is bounded below:

$\Delta t_\perp \ge \frac{\pi\hbar}{2\,\Delta E},$

where

$(\Delta E)^2 = \langle\psi|H^2|\psi\rangle - \left(\langle\psi|H|\psi\rangle\right)^2$

is the variance of the system's energy and $H$ is the Hamiltonian operator. The quantum evolution is independent of the particular Hamiltonian used to transport the quantum system along a given curve in the projective Hilbert space; the distance along this curve is measured by the Fubini–Study metric. [16] This is sometimes called the quantum angle, as it can be understood as the arccos of the inner product of the initial and final states.
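A short worked check in the notation above (an illustrative example, not drawn from the references): for an equal superposition of two eigenstates with energies $0$ and $\hbar\omega$,

\[
  |\psi(t)\rangle = \tfrac{1}{\sqrt{2}}\bigl(|E_0\rangle + e^{-i\omega t}|E_1\rangle\bigr),
  \qquad
  \langle\psi(0)|\psi(t)\rangle = \tfrac{1}{2}\bigl(1 + e^{-i\omega t}\bigr),
\]

which first vanishes at $\omega t = \pi$, so $\Delta t_\perp = \pi/\omega$. Since

\[
  \Delta E = \sqrt{\tfrac{1}{2}(\hbar\omega)^2 - \bigl(\tfrac{1}{2}\hbar\omega\bigr)^2}
           = \tfrac{1}{2}\hbar\omega,
\]

the Mandelstam–Tamm bound gives $\Delta t_\perp \ge \pi\hbar/(2\,\Delta E) = \pi/\omega$, and is therefore saturated by this state.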
The Mandelstam–Tamm limit can also be stated for mixed states and for time-varying Hamiltonians. In this case, the Bures metric must be employed in place of the Fubini–Study metric. A mixed state can be understood as a sum over pure states, weighted by classical probabilities; likewise, the Bures metric is a weighted sum of the Fubini–Study metric. For a time-varying Hamiltonian $H(t)$ and time-varying density matrix $\rho(t)$, the variance of the energy is given by

$\sigma^2_{H(t)} = \operatorname{tr}\!\left(\rho(t)\,H(t)^2\right) - \left[\operatorname{tr}\!\left(\rho(t)\,H(t)\right)\right]^2.$

The Mandelstam–Tamm limit then takes the form

$\frac{1}{\hbar}\int_0^\tau \sigma_{H(t)}\,dt \;\ge\; D\bigl(\rho(0),\rho(\tau)\bigr),$

where $D\bigl(\rho(0),\rho(\tau)\bigr)$ is the Bures distance between the starting and ending states. The Bures distance is geodesic, giving the shortest possible distance of any continuous curve connecting two points, with $ds$ understood as an infinitesimal path length along a curve parametrized by $t$. Equivalently, the time $\tau$ taken to evolve from $\rho(0)$ to $\rho(\tau)$ is bounded as

$\tau \ge \frac{\hbar}{\overline{\sigma_H}}\, D\bigl(\rho(0),\rho(\tau)\bigr),$
where

$\overline{\sigma_H} = \frac{1}{\tau}\int_0^\tau \sigma_{H(t)}\,dt$

is the time-averaged uncertainty in energy. For a pure state $|\psi(t)\rangle$ evolving under a time-varying Hamiltonian, the time $\tau$ taken to evolve from one pure state to another pure state orthogonal to it is bounded as [17]

$\tau \ge \frac{\pi\hbar}{2\,\overline{\sigma_H}}.$

This follows, as for a pure state, one has the density matrix $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$. The quantum angle (Fubini–Study distance) is then $D = \arccos\bigl|\langle\psi(0)|\psi(\tau)\rangle\bigr|$, and so one concludes $D = \pi/2$ when the initial and final states are orthogonal.
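A numerical illustration of the mixed-state bound (a sketch with assumed parameters, not taken from the references): propagate a slightly mixed qubit state under a fixed two-level Hamiltonian, compute the Bures angle to the initial state from the Uhlmann fidelity, and confirm it never exceeds the accumulated $\sigma_H t/\hbar$.

import numpy as np
from scipy.linalg import expm, sqrtm

hbar, omega = 1.0, 1.0
H = np.diag([0.0, hbar * omega])                  # assumed two-level Hamiltonian

# Slightly mixed initial state: mostly the equal superposition |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = 0.9 * np.outer(plus, plus.conj()) + 0.1 * np.eye(2) / 2

# Energy uncertainty (constant here, since H is time-independent).
sigma_H = np.sqrt((np.trace(rho0 @ H @ H) - np.trace(rho0 @ H) ** 2).real)

def bures_angle(rho, sig):
    # D = arccos(sqrt(F)) with Uhlmann fidelity F = (tr sqrt(sqrt(rho) sig sqrt(rho)))^2
    s = sqrtm(rho)
    fid = np.trace(sqrtm(s @ sig @ s)).real ** 2
    return np.arccos(np.sqrt(min(fid, 1.0)))

for t in np.linspace(0.0, np.pi / omega, 6):
    U = expm(-1j * H * t / hbar)
    rho_t = U @ rho0 @ U.conj().T
    lhs = sigma_H * t / hbar                      # integral of sigma_H dt / hbar
    rhs = bures_angle(rho0, rho_t)                # Bures distance actually traversed
    print(f"t = {t:5.3f}   sigma_H*t/hbar = {lhs:5.3f}   Bures angle = {rhs:5.3f}")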
For the case of a pure state, Margolus and Levitin [3] obtain a different limit, that

$\Delta t_\perp \ge \frac{\pi\hbar}{2\,E},$

where $E$ is the average energy, $E = \langle\psi|H|\psi\rangle$. This form applies when the Hamiltonian is not time-dependent, and the ground-state energy is defined to be zero.
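The two pure-state bounds can be compared on a simple assumed spectrum (equally spaced levels with ground energy zero, $\hbar = 1$): for an equal superposition of $N$ levels both bounds hold, and for $N = 2$ both are saturated. This is an illustrative sketch, not a calculation from the cited papers.

import numpy as np

hbar, omega = 1.0, 1.0

def bounds_and_actual(N):
    # Mandelstam-Tamm bound, Margolus-Levitin bound, and exact orthogonalization
    # time for an equal superposition of N levels E_n = n*omega (ground energy zero).
    E = omega * np.arange(N)
    p = np.full(N, 1.0 / N)                    # occupation probabilities |c_n|^2
    mean_E = np.sum(p * E)
    delta_E = np.sqrt(np.sum(p * E ** 2) - mean_E ** 2)
    mt = np.pi * hbar / (2 * delta_E)          # Mandelstam-Tamm
    ml = np.pi * hbar / (2 * mean_E)           # Margolus-Levitin
    t_perp = 2 * np.pi / (N * omega)           # first zero of the overlap
    return mt, ml, t_perp

for N in (2, 3, 5, 10):
    mt, ml, t_perp = bounds_and_actual(N)
    print(f"N={N:2d}  MT bound={mt:.3f}  ML bound={ml:.3f}  actual t_perp={t_perp:.3f}")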
The Margolus–Levitin theorem can also be generalized to the case where the Hamiltonian varies with time, and the system is described by a mixed state. [17] In this form, it is given by

$\tau \ge \frac{\pi\hbar}{2\,\overline{E}}, \qquad \overline{E} = \frac{1}{\tau}\int_0^\tau \operatorname{tr}\!\bigl(\rho(t)\,H(t)\bigr)\,dt,$

with the ground state defined so that it has energy zero at all times.
This provides a result for time-varying states. Although it also provides a bound for mixed states, that bound can be so loose as to be uninformative. [18] The Margolus–Levitin theorem has not yet been established in time-dependent quantum systems whose Hamiltonians are driven by arbitrary time-dependent parameters, except for the adiabatic case. [19]
In addition to the original Margolus–Levitin limit, a dual bound exists for quantum systems with a bounded energy spectrum. This dual bound, also known as the Ness–Alberti–Sagi limit or the Ness limit, depends on the difference between the state's mean energy and the energy of the highest occupied eigenstate. In bounded systems, the minimum time required for a state to evolve to an orthogonal state is bounded by

$\Delta t_\perp \ge \frac{\pi\hbar}{2\,(E_{\max} - E)},$

where $E_{\max}$ is the energy of the highest occupied eigenstate and $E$ is the mean energy of the state. This bound complements the original Margolus–Levitin limit and the Mandelstam–Tamm limit, forming a trio of constraints on quantum evolution speed. [20]
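A short worked example (with an assumed spectrum, not drawn from [20]) shows when the dual bound is the binding one. Take equally spaced levels $E_n = n\hbar\omega$, $n = 0,\dots,N-1$, and an equal superposition of the two highest levels, so that

\[
  E = \bigl(N - \tfrac{3}{2}\bigr)\hbar\omega, \qquad
  E_{\max} = (N-1)\,\hbar\omega, \qquad
  E_{\max} - E = \tfrac{1}{2}\hbar\omega .
\]

The overlap $\tfrac{1}{2}\bigl(e^{-i(N-2)\omega t} + e^{-i(N-1)\omega t}\bigr)$ first vanishes at $\Delta t_\perp = \pi/\omega$. The Margolus–Levitin bound $\pi\hbar/(2E) = \pi/\bigl((2N-3)\,\omega\bigr)$ becomes very loose for large $N$, whereas the dual bound $\pi\hbar/\bigl(2(E_{\max}-E)\bigr) = \pi/\omega$ is saturated.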
A 2009 result by Lev B. Levitin and Tommaso Toffoli states that the precise bound for the Mandelstam–Tamm theorem is attained only for a qubit state. [15] This is a two-level state in an equal superposition

$|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|E_0\rangle + e^{i\varphi}\,|E_1\rangle\right)$

for energy eigenstates $|E_0\rangle$ and $|E_1\rangle$. The states $|E_0\rangle$ and $|E_1\rangle$ are unique up to degeneracy of the energy level and an arbitrary phase factor $e^{i\varphi}$. This result is sharp, in that this state also satisfies the Margolus–Levitin bound, in that $E = \Delta E$ and so $\Delta t_\perp = \frac{\pi\hbar}{2E} = \frac{\pi\hbar}{2\,\Delta E}$. This result establishes that the combined limits are strict:

$\Delta t_\perp \ge \max\!\left(\frac{\pi\hbar}{2E},\; \frac{\pi\hbar}{2\,\Delta E}\right).$
Levitin and Toffoli also provide a bound for the average energy in terms of the maximum. For any pure state $|\psi\rangle$ the average energy is bounded as

where $E_{\max}$ is the maximum energy eigenvalue appearing in $|\psi\rangle$. (This is the quarter-pinched sphere theorem in disguise, transported to complex projective space.) Thus, one has the bound
The strict lower bound is again attained for the qubit state with $E = E_{\max}/2$.
The quantum speed limit bounds establish an upper bound on the rate at which computation can be performed. Computational machinery is constructed out of physical matter that follows quantum mechanics, and each operation, if it is to be unambiguous, must be a transition of the system from one state to an orthogonal state. Suppose the computing machinery is a physical system evolving under a Hamiltonian that does not change with time. Then, according to the Margolus–Levitin theorem, the number of operations per unit time per unit energy is bounded above by

$\frac{2}{\pi\hbar}.$
This establishes a strict upper limit on the number of calculations that can be performed by physical matter. The processing rate of all forms of computation cannot be higher than about 6 × 10³³ operations per second per joule of energy. This includes "classical" computers, since even classical computers are still made of matter that follows quantum mechanics. [21] [22]
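The quoted rate follows directly from the Margolus–Levitin form: at most $2E/(\pi\hbar)$ orthogonal transitions per second, i.e. $2/(\pi\hbar)$ per second per joule. A two-line check of the arithmetic:

import numpy as np

hbar = 1.054571817e-34                     # reduced Planck constant, J*s
ops_per_second_per_joule = 2 / (np.pi * hbar)
print(f"{ops_per_second_per_joule:.2e}")   # ~6.04e+33, i.e. about 6 x 10^33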
This bound is not merely a fanciful limit: it has practical ramifications for quantum-resistant cryptography. For a computer operating at this limit, a brute-force search to break a 128-bit encryption key requires only modest resources. Brute-forcing a 256-bit key requires planetary-scale computers, while a brute-force search of 512-bit keys is effectively unattainable within the lifetime of the universe, even if galactic-sized computers were applied to the problem.
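These statements can be made roughly quantitative. The sketch below assumes an idealized computer running at the Margolus–Levitin rate, that testing one key costs a single operation, and that the attacker converts an assumed mass budget entirely into energy (1 kg, an Earth mass, and a rough galactic stellar mass stand in for "modest", "planetary", and "galactic" resources); the numbers are order-of-magnitude illustrations only.

import numpy as np

hbar = 1.054571817e-34            # J*s
c = 2.998e8                       # m/s
age_universe_s = 4.35e17          # ~13.8 billion years in seconds

# Assumed mass budgets (illustrative only).
masses = {"1 kg": 1.0, "Earth (~6e24 kg)": 6.0e24, "galaxy (~1e42 kg)": 1.0e42}

def brute_force_time(key_bits, mass_kg):
    # Seconds to enumerate 2**key_bits keys at the Margolus-Levitin rate,
    # assuming one operation per key and total conversion of mass to energy.
    energy = mass_kg * c ** 2                     # joules
    ops_per_second = 2 * energy / (np.pi * hbar)  # Margolus-Levitin rate
    return 2.0 ** key_bits / ops_per_second

for bits in (128, 256, 512):
    for label, m in masses.items():
        t = brute_force_time(bits, m)
        print(f"{bits}-bit key, {label:>17}: {t:.2e} s "
              f"({t / age_universe_s:.2e} universe lifetimes)")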
The Bekenstein bound limits the amount of information that can be stored within a volume of space. The maximal rate of change of information within that volume of space is given by the quantum speed limit. This product of limits is sometimes called the Bremermann–Bekenstein limit; it is saturated by Hawking radiation. [1] That is, Hawking radiation is emitted at the maximal allowed rate set by these bounds.
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. These are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces, and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space.
The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system. Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
In quantum mechanics, a density matrix is a matrix that describes an ensemble of physical systems as quantum states. It allows for the calculation of the probabilities of the outcomes of any measurements performed upon the systems of the ensemble using the Born rule. It is a generalization of the more usual state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent mixed ensembles. Mixed ensembles arise in quantum mechanics in two different situations: when the preparation of the systems leads randomly to different pure states, and when one wants to describe a subsystem that is entangled with another system.
Quantum decoherence is the loss of quantum coherence. Quantum decoherence has been studied to understand how quantum systems convert to systems which can be explained by classical mechanics. Beginning out of attempts to extend the understanding of quantum mechanics, the theory has developed in several directions and experimental studies have confirmed some of the key issues. Quantum computing relies on quantum coherence and is one of the primary practical applications of the concept.
In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one. In effect, it is describing a complicated unsolved system using a simple, solvable system.
In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems.
In physics, the Bekenstein bound is an upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level. It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite.
The Wigner quasiprobability distribution is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space.
In quantum mechanics, the Hellmann–Feynman theorem relates the derivative of the total energy with respect to a parameter to the expectation value of the derivative of the Hamiltonian with respect to that same parameter. According to the theorem, once the spatial distribution of the electrons has been determined by solving the Schrödinger equation, all the forces in the system can be calculated using classical electrostatics.
In functional analysis and quantum information science, a positive operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of projection-valued measures (PVM) and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurement described by PVMs.
In quantum optics, the Jaynes–Cummings model is a theoretical model that describes the system of a two-level atom interacting with a quantized mode of an optical cavity, with or without the presence of light. It was originally developed to study the interaction of atoms with the quantized electromagnetic field in order to investigate the phenomena of spontaneous emission and absorption of photons in a cavity. It is named after Edwin Thompson Jaynes and Fred Cummings, who developed the model in the 1960s, and was confirmed experimentally in 1987.
Resonance fluorescence is the process in which a two-level atom system interacts with the quantum electromagnetic field if the field is driven at a frequency near to the natural frequency of the atom.
The Ghirardi–Rimini–Weber theory (GRW) is a spontaneous collapse theory in quantum mechanics, proposed in 1986 by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber.
In quantum mechanics, and especially quantum information theory, the purity of a normalized quantum state is a scalar defined as $\gamma \equiv \operatorname{tr}(\rho^2),$ where $\rho$ is the density matrix of the state and $\operatorname{tr}$ is the trace operation. The purity defines a measure on quantum states, giving information on how much a state is mixed.
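A minimal numerical illustration of this definition (not part of the quoted source): the purity distinguishes a pure qubit state from the maximally mixed one.

import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())      # pure state |+><+|
rho_mixed = np.eye(2) / 2                   # maximally mixed qubit

purity = lambda rho: np.trace(rho @ rho).real
print(purity(rho_pure))    # 1.0  (pure state)
print(purity(rho_mixed))   # 0.5  (the minimum 1/d for a qubit, d = 2)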
This is a glossary for the terminology often encountered in undergraduate quantum mechanics courses.
The Koopman–von Neumann (KvN) theory is a description of classical mechanics as an operatorial theory similar to quantum mechanics, based on a Hilbert space of complex, square-integrable wavefunctions. As its name suggests, the KvN theory is loosely related to work by Bernard Koopman and John von Neumann in 1931 and 1932, respectively. As explained in this entry, however, the historical origins of the theory and its name are complicated.
In quantum probability, the Belavkin equation, also known as the Belavkin–Schrödinger equation, quantum filtering equation, or stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and subsequently studied by Viacheslav Belavkin in 1988.
The quantum Fisher information is a central quantity in quantum metrology and is the quantum analogue of the classical Fisher information. It is one of the central quantities used to qualify the utility of an input state, especially in Mach–Zehnder interferometer-based phase or parameter estimation. It has also been shown that the quantum Fisher information can be a sensitive probe of a quantum phase transition. The quantum Fisher information $F_Q[\varrho, A]$ of a state $\varrho$ with respect to the observable $A$ is defined as

$F_Q[\varrho, A] = 2\sum_{k,l} \frac{(\lambda_k - \lambda_l)^2}{\lambda_k + \lambda_l}\,\bigl|\langle k|A|l\rangle\bigr|^2,$

where $\lambda_k$ and $|k\rangle$ are the eigenvalues and eigenvectors of the density matrix $\varrho$, and the sum runs over pairs with $\lambda_k + \lambda_l > 0$.
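A small numerical check of this definition (an illustrative sketch, not from the source): for a pure state the quantum Fisher information reduces to four times the variance of the observable, which the spectral formula reproduces.

import numpy as np

def qfi(rho, A, tol=1e-12):
    # Quantum Fisher information F_Q[rho, A] from the spectral decomposition of rho.
    lam, vecs = np.linalg.eigh(rho)
    F = 0.0
    for k in range(len(lam)):
        for l in range(len(lam)):
            if lam[k] + lam[l] > tol:
                a_kl = vecs[:, k].conj() @ A @ vecs[:, l]
                F += 2 * (lam[k] - lam[l]) ** 2 / (lam[k] + lam[l]) * abs(a_kl) ** 2
    return F

# Pure qubit state |+> and observable sigma_z.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
sigma_z = np.diag([1.0, -1.0])

variance = plus @ sigma_z @ sigma_z @ plus - (plus @ sigma_z @ plus) ** 2
print(qfi(rho, sigma_z))   # ~4.0
print(4 * variance)        # 4 * Var(sigma_z) in |+>, also 4.0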
Bell diagonal states are a class of bipartite qubit states that are frequently used in quantum information and quantum computation theory.