In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable (orthogonal) states. [1] QSL theorems are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. [2] Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, [3] a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems, and their evolution is also subject to QSLs. [4] [5] Remarkably, environmental effects such as non-Markovian dynamics can speed up quantum processes, [6] a prediction verified in a cavity QED experiment. [7]
QSLs have been used to explore the limits of computation [8] [9] and complexity. In 2017, QSLs were studied in a quantum oscillator at high temperature. [10] In 2018, it was shown that QSLs are not restricted to the quantum domain and that similar bounds hold in classical systems. [11] [12] In 2021, both the Mandelstam–Tamm and the Margolus–Levitin QSL bounds were concurrently tested in a single experiment, [13] which indicated that there are "two different regimes: one where the Mandelstam-Tamm limit constrains the evolution at all times, and a second where a crossover to the Margolus-Levitin limit occurs at longer times."
The speed limit theorems can be stated for pure states and for mixed states; they take a simpler form for pure states. An arbitrary pure state can be written as a linear combination of energy eigenstates:

$$|\psi\rangle = \sum_n c_n\,|E_n\rangle, \qquad H|E_n\rangle = E_n|E_n\rangle, \qquad \sum_n |c_n|^2 = 1.$$
The task is to provide a lower bound for the time interval $\Delta t_\perp$ required for the initial state $|\psi(0)\rangle$ to evolve into a state orthogonal to $|\psi(0)\rangle$. The time evolution of a pure state is given by the Schrödinger equation:

$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle, \qquad\text{so that}\qquad |\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\,|E_n\rangle.$$
Orthogonality is obtained when

$$\langle\psi(0)|\psi(t)\rangle = \sum_n |c_n|^2\, e^{-iE_n t/\hbar} = 0,$$

and the minimum time interval $\Delta t_\perp$ required to achieve this condition is called the orthogonalization interval [2] or orthogonalization time. [14]
For pure states, the Mandelstam–Tamm theorem states that the minimum time $\Delta t_\perp$ required for a state to evolve into an orthogonal state is bounded below:

$$\Delta t_\perp \;\geq\; \frac{\pi\hbar}{2\,\Delta E},$$
where

$$\Delta E^2 = \langle\psi|H^2|\psi\rangle - \left(\langle\psi|H|\psi\rangle\right)^2$$

is the variance of the system's energy and $H$ is the Hamiltonian operator. The quantum evolution is independent of the particular Hamiltonian used to transport the quantum system along a given curve in the projective Hilbert space; the distance along this curve is measured by the Fubini–Study metric. [15] This distance is sometimes called the quantum angle, as it can be understood as the arccosine of the absolute value of the inner product of the initial and final states.
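The bound can be checked numerically. The following sketch (assuming NumPy; the two-level spectrum and the equal superposition are illustrative choices, not part of the theorem) computes $\Delta E$, the Mandelstam–Tamm bound, and the first time at which the evolved state becomes orthogonal to the initial one:

```python
import numpy as np

hbar = 1.0                                  # natural units
E = 2.0
energies = np.array([0.0, E])               # two-level spectrum, ground state at zero
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)    # equal superposition of eigenstates

# Energy variance <H^2> - <H>^2 in the initial state
probs = np.abs(psi0) ** 2
mean_E = probs @ energies
dE = np.sqrt(probs @ energies**2 - mean_E**2)
mt_bound = np.pi * hbar / (2 * dE)

# Evolve |psi(t)> = sum_n c_n exp(-i E_n t / hbar) |E_n> and scan the
# overlap with |psi(0)> for its first (numerical) zero
ts = np.linspace(0.0, 5.0, 100001)
overlap = np.array([np.abs(psi0.conj() @ (np.exp(-1j * energies * t / hbar) * psi0))
                    for t in ts])
t_perp = ts[np.argmax(overlap < 1e-4)]

print(f"Mandelstam-Tamm bound: {mt_bound:.4f}")   # pi/2 ~ 1.5708 here
print(f"first orthogonal time: {t_perp:.4f}")     # agrees: this state saturates it
```

For this state the two printed times agree; the fact that the equal superposition of two energy eigenstates saturates the bound is made precise by the Levitin–Toffoli result discussed below.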
The Mandelstam–Tamm limit can also be stated for mixed states and for time-varying Hamiltonians. In this case, the Bures metric must be employed in place of the Fubini–Study metric. A mixed state can be understood as a sum over pure states, weighted by classical probabilities; likewise, the Bures metric is a weighted sum of the Fubini–Study metric. For a time-varying Hamiltonian $H(t)$ and time-varying density matrix $\rho(t)$, the variance of the energy is given by

$$\Delta E^2(t) = \operatorname{tr}\!\left[\rho(t)\,H^2(t)\right] - \left(\operatorname{tr}\!\left[\rho(t)\,H(t)\right]\right)^2.$$

The Mandelstam–Tamm limit then takes the form

$$\int_0^\tau \Delta E(t)\,dt \;\geq\; \hbar\, D\!\left(\rho(0),\rho(\tau)\right),$$
where $D(\rho(0),\rho(\tau))$ is the Bures distance between the starting and ending states. The Bures distance is geodesic: it is the length of the shortest continuous curve connecting the two points, with $ds$ understood as an infinitesimal path length along a curve parametrized by $t$. Equivalently, the time $\tau$ taken to evolve from $\rho(0)$ to $\rho(\tau)$ is bounded as

$$\tau \;\geq\; \frac{\hbar\, D\!\left(\rho(0),\rho(\tau)\right)}{\overline{\Delta E}},$$

where

$$\overline{\Delta E} = \frac{1}{\tau}\int_0^\tau \Delta E(t)\,dt$$

is the time-averaged uncertainty in energy. For a pure state evolving under a time-varying Hamiltonian, the time $\tau$ taken to evolve from one pure state to another pure state orthogonal to it is bounded as [16]

$$\tau \;\geq\; \frac{\pi\hbar}{2\,\overline{\Delta E}}.$$
This follows because, for a pure state, the density matrix is $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$. The quantum angle (Fubini–Study distance) is then $D = \arccos\left|\langle\psi(0)|\psi(\tau)\rangle\right|$, and so $D = \pi/2$ when the initial and final states are orthogonal.
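As a concrete illustration of the mixed-state form, the sketch below (assuming NumPy and SciPy; the Hamiltonian and the initial mixed state are arbitrary illustrative choices) computes the Bures angle $D$ via the Uhlmann fidelity and checks $\tau \geq \hbar D/\overline{\Delta E}$ for unitary evolution under a constant Hamiltonian, for which $\Delta E$ is conserved and equals its time average:

```python
import numpy as np
from scipy.linalg import sqrtm, expm

hbar = 1.0

def bures_angle(rho1, rho2):
    """Bures angle D = arccos(sqrt(F)), with Uhlmann fidelity F."""
    s = sqrtm(rho1)
    fidelity = np.real(np.trace(sqrtm(s @ rho2 @ s))) ** 2
    return np.arccos(np.sqrt(np.clip(fidelity, 0.0, 1.0)))

H = np.array([[0.0, 1.0],
              [1.0, 0.5]])              # illustrative constant Hamiltonian
rho0 = np.diag([0.7, 0.3])              # illustrative mixed initial state

tau = 1.0
U = expm(-1j * H * tau / hbar)
rho_tau = U @ rho0 @ U.conj().T

# Constant H: Delta E is conserved, so the time average equals the
# initial value tr(rho H^2) - tr(rho H)^2
dE = np.sqrt(np.real(np.trace(rho0 @ H @ H)) - np.real(np.trace(rho0 @ H)) ** 2)

print(f"tau           = {tau:.4f}")
print(f"hbar * D / dE = {hbar * bures_angle(rho0, rho_tau) / dE:.4f}")  # <= tau
```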
For the case of a pure state, Margolus and Levitin [3] obtain a different limit:

$$\Delta t_\perp \;\geq\; \frac{\pi\hbar}{2\,\langle E\rangle},$$

where $\langle E\rangle = \langle\psi|H|\psi\rangle$ is the average energy. This form applies when the Hamiltonian is not time-dependent and the ground-state energy is defined to be zero.
The Margolus–Levitin theorem can also be generalized to the case where the Hamiltonian varies with time and the system is described by a mixed state. [16] In this form, it is given by

$$\tau \;\geq\; \frac{\pi\hbar}{2\,\overline{\langle E\rangle}}, \qquad \overline{\langle E\rangle} = \frac{1}{\tau}\int_0^\tau \operatorname{tr}\!\left[\rho(t)\,H(t)\right] dt,$$

with the ground state defined so that it has energy zero at all times.
This provides a result for time-varying states. Although it also provides a bound for mixed states, that bound can be so loose as to be uninformative. [17] The Margolus–Levitin theorem has not yet been established for time-dependent quantum systems whose Hamiltonians are driven by arbitrary time-dependent parameters, except in the adiabatic case. [18]
In addition to the original Margolus–Levitin limit, a dual bound exists for quantum systems with a bounded energy spectrum. This dual bound, also known as the Ness–Alberti–Sagi limit or the Ness limit, depends on the difference between the state's mean energy and the energy of the highest occupied eigenstate. In bounded systems, the minimum time required for a state to evolve to an orthogonal state is bounded by

$$\Delta t_\perp \;\geq\; \frac{\pi\hbar}{2\left(E_{\max} - \langle E\rangle\right)},$$

where $E_{\max}$ is the energy of the highest occupied eigenstate and $\langle E\rangle$ is the mean energy of the state. This bound complements the original Margolus–Levitin limit and the Mandelstam–Tamm limit, forming a trio of constraints on the speed of quantum evolution. [19]
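A short sketch (assuming NumPy; the three-level spectrum and occupation probabilities are illustrative choices) shows how the trio combines, with the tightest constraint given by the maximum of the three:

```python
import numpy as np

hbar = 1.0
energies = np.array([0.0, 1.0, 3.0])             # ground state set to zero
probs = np.array([0.36, 0.36, 0.28])             # occupation probabilities

mean_E = probs @ energies                        # <E> = 1.2
dE = np.sqrt(probs @ energies**2 - mean_E**2)    # Delta E = 1.2
E_max = energies[probs > 0].max()                # highest occupied level

mt   = np.pi * hbar / (2 * dE)                   # Mandelstam-Tamm
ml   = np.pi * hbar / (2 * mean_E)               # Margolus-Levitin
ness = np.pi * hbar / (2 * (E_max - mean_E))     # dual (Ness) bound

print(f"MT {mt:.3f}, ML {ml:.3f}, dual {ness:.3f}")
print(f"orthogonalization time >= {max(mt, ml, ness):.3f}")
```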
A 2009 result by Lev B. Levitin and Tommaso Toffoli states that the precise bound for the Mandelstam–Tamm theorem is attained only for a qubit state. [14] This is a two-level state in an equal superposition

$$|\psi(t)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{i\varphi}\,e^{-iEt/\hbar}\,|E\rangle\right)$$

for energy eigenstates $|0\rangle$ and $|E\rangle$ with energies $0$ and $E$. The states $|0\rangle$ and $|E\rangle$ are unique up to degeneracy of the energy level $E$ and an arbitrary phase factor $\varphi$. This result is sharp, in that this state also satisfies the Margolus–Levitin bound: here $\langle E\rangle = \Delta E = E/2$, and so the two bounds coincide. This result establishes that the combined limits are strict:

$$\Delta t_\perp \;\geq\; \max\!\left(\frac{\pi\hbar}{2\,\Delta E},\; \frac{\pi\hbar}{2\,\langle E\rangle}\right),$$

with equality for the qubit state above.
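As a quick check (a standard computation, not specific to the cited paper), the overlap of this state with its initial value is

$$\langle\psi(0)|\psi(t)\rangle = \frac{1}{2}\left(1 + e^{-iEt/\hbar}\right) = e^{-iEt/2\hbar}\cos\!\left(\frac{Et}{2\hbar}\right),$$

which first vanishes at $Et/2\hbar = \pi/2$, i.e. at $\Delta t_\perp = \pi\hbar/E = \pi\hbar/(2\langle E\rangle) = \pi\hbar/(2\Delta E)$, so both bounds are met with equality.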
Levitin and Toffoli also provide a bound for the average energy in terms of the maximum. For any pure state the average energy and the energy spread are constrained by

$$\Delta E^2 \;\leq\; \langle E\rangle\left(E_{\max} - \langle E\rangle\right) \;\leq\; \frac{E_{\max}^2}{4},$$

where $E_{\max}$ is the maximum energy eigenvalue appearing in $|\psi\rangle$. (This is the quarter-pinched sphere theorem in disguise, transported to complex projective space.) Thus, one has the bound

$$\Delta t_\perp \;\geq\; \frac{\pi\hbar}{2\,\Delta E} \;\geq\; \frac{\pi\hbar}{E_{\max}}.$$

The strict lower bound is again attained for the qubit state, with $\langle E\rangle = \Delta E = E_{\max}/2$.
The quantum speed limit bounds establish an upper bound on the rate at which computation can be performed. Computational machinery is constructed out of physical matter that follows quantum mechanics, and each operation, if it is to be unambiguous, must be a transition of the system from one state to an orthogonal state. Suppose the computing machinery is a physical system evolving under a Hamiltonian that does not change with time. Then, according to the Margolus–Levitin theorem, the number of operations per unit time per unit energy is bounded above by

$$\frac{\nu}{\langle E\rangle} \;\leq\; \frac{2}{\pi\hbar},$$

where $\nu$ is the number of orthogonal transitions per unit time and $\langle E\rangle$ is the average energy above the ground state.
This establishes a strict upper limit on the number of calculations that can be performed by physical matter. The processing rate of all forms of computation cannot be higher than about $6\times10^{33}$ operations per second per joule of energy. This includes "classical" computers, since even classical computers are made of matter that follows quantum mechanics. [20] [21]
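The quoted figure is simple arithmetic on the Margolus–Levitin bound; a one-line check (assuming only the CODATA value of $\hbar$):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s (CODATA 2018)
print(f"{2 / (math.pi * hbar):.2e} ops per second per joule")  # ~6.04e33
```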
This bound is not merely a fanciful limit: it has practical ramifications for quantum-resistant cryptography. Even for a computer operating at this limit, a brute-force search to break a 128-bit encryption key would require only modest resources. Brute-forcing a 256-bit key would require planetary-scale computers, while a brute-force search of 512-bit keys is effectively unattainable within the lifetime of the universe, even if galactic-sized computers were applied to the problem.
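The following back-of-the-envelope sketch makes these claims concrete under loudly stated assumptions: a computer running at the Margolus–Levitin rate above, one orthogonal transition per key tried, and the computer's entire mass-energy $E = mc^2$ devoted to the search; the masses are illustrative round numbers, not precise figures.

```python
import math

hbar, c = 1.054571817e-34, 2.99792458e8      # SI units
rate_per_joule = 2 / (math.pi * hbar)        # ops per second per joule

cases = {
    "128-bit key, 1 kg computer":        (2**128, 1.0),
    "256-bit key, Earth-mass computer":  (2**256, 6.0e24),
    "512-bit key, galaxy-mass computer": (2**512, 2.0e42),
}
for label, (ops, mass_kg) in cases.items():
    seconds = ops / (rate_per_joule * mass_kg * c**2)
    print(f"{label}: ~{seconds:.0e} s")

# Prints roughly 6e-13 s, 4e+01 s, and 1e+61 s; for comparison, the
# age of the universe is about 4e17 s.
```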
The Bekenstein bound limits the amount of information that can be stored within a volume of space. The maximal rate of change of information within that volume of space is given by the quantum speed limit. This product of limits is sometimes called the Bremermann–Bekenstein limit; it is saturated by Hawking radiation. [1] That is, Hawking radiation is emitted at the maximal allowed rate set by these bounds.