The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows: a physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum.
In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged.
At the 1911 Solvay conference, Einstein gave a lecture on the quantum hypothesis, which states that $E = nh\nu$ for atomic oscillators. After Einstein's lecture, Hendrik Lorentz commented that, classically, if a simple pendulum is shortened by holding the wire between two fingers and sliding it down, its energy seems to change smoothly as the pendulum is shortened. This appears to show that the quantum hypothesis is invalid for macroscopic systems, and if macroscopic systems do not follow it, then as a macroscopic system becomes microscopic, it seems the quantum hypothesis would be invalidated. Einstein replied that although both the energy and the frequency would change, their ratio would still be conserved, thus saving the quantum hypothesis. [2]
Before the conference, Einstein had just read a paper by Paul Ehrenfest on the adiabatic hypothesis. [3] We know that he had read it because he mentioned it in a letter to Michele Besso written before the conference. [4] [5]
| Diabatic | Adiabatic |
|---|---|
| Rapidly changing conditions prevent the system from adapting its configuration during the process, hence the spatial probability density remains unchanged. Typically there is no eigenstate of the final Hamiltonian with the same functional form as the initial state. The system ends in a linear combination of states that sum to reproduce the initial probability density. | Gradually changing conditions allow the system to adapt its configuration, hence the probability density is modified by the process. If the system starts in an eigenstate of the initial Hamiltonian, it will end in the corresponding eigenstate of the final Hamiltonian. [6] |
At some initial time $t_0$ a quantum-mechanical system has an energy given by the Hamiltonian $\hat{H}(t_0)$; the system is in an eigenstate of $\hat{H}(t_0)$ labelled $\psi(x,t_0)$. Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian $\hat{H}(t_1)$ at some later time $t_1$. The system will evolve according to the time-dependent Schrödinger equation, to reach a final state $\psi(x,t_1)$. The adiabatic theorem states that the modification to the system depends critically on the time $\tau = t_1 - t_0$ during which the modification takes place.
For a truly adiabatic process we require $\tau \to \infty$; in this case the final state $\psi(x,t_1)$ will be an eigenstate of the final Hamiltonian $\hat{H}(t_1)$, with a modified configuration:
$$|\psi(x,t_1)|^2 \neq |\psi(x,t_0)|^2.$$
The degree to which a given change approximates an adiabatic process depends on both the energy separation between $\psi(x,t_0)$ and adjacent states, and the ratio of the interval $\tau$ to the characteristic timescale of the evolution of $\psi(x,t_0)$ for a time-independent Hamiltonian, $\tau_{\mathrm{int}} = 2\pi\hbar/E_0$, where $E_0$ is the energy of $\psi(x,t_0)$.
Conversely, in the limit $\tau \to 0$ we have infinitely rapid, or diabatic, passage; the configuration of the state remains unchanged:
$$|\psi(x,t_1)|^2 = |\psi(x,t_0)|^2.$$
The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of corresponds to ). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap. [7]
The term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process), more precisely these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) Adiabatic in the context of thermodynamics is often used as a synonym for fast process.
The definition used in classical and quantum mechanics [8] is instead closer to the thermodynamical concept of a quasistatic process: a process that is almost always at equilibrium, i.e. slower than the time scales of the internal energy-exchange interactions (a "normal" atmospheric heat wave is quasi-static, whereas a pressure wave is not). In the context of mechanics, "adiabatic" is often used as a synonym for a slow process.
In the quantum world, "adiabatic" means, for example, that the time scale of electron-photon interactions is much faster, almost instantaneous, compared with the average time scale of electron and photon propagation. Therefore, we can model the interactions as a piece of continuous propagation of electrons and photons (i.e. states at equilibrium) plus a quantum jump between states (i.e. instantaneous).
In this heuristic context the adiabatic theorem essentially says that quantum jumps are preferably avoided and the system tries to conserve its state and its quantum numbers. [9]
The quantum-mechanical concept of adiabaticity is related to the adiabatic invariant; it is often used in the old quantum theory and has no direct relation to heat exchange.
As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available in the Adiabatic invariant page and here. [10]
The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian.
If the spring constant $k$ is increased adiabatically, then the system at time $t$ will be in an instantaneous eigenstate $\psi(t)$ of the current Hamiltonian $\hat{H}(t)$, corresponding to the initial eigenstate of $\hat{H}(0)$. For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. Figure 1 shows how a harmonic oscillator, initially in its ground state, $n = 0$, remains in the ground state as the potential energy curve is compressed; the functional form of the state adapts to the slowly varying conditions.
For a rapidly increased spring constant, the system undergoes a diabatic process in which it has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, $\hat{H}(t_1)$, that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of $\hat{H}(t_1)$ which sum to reproduce the form of the initial state.
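A minimal numerical sketch of this contrast (not part of the original text; it assumes $m = \hbar = 1$ and an assumed frequency jump $\omega_1 \to \omega_2$): for a sudden change, the probability of remaining in the ground state is just the squared overlap of the old and new ground-state Gaussians, for which the closed form $2\sqrt{\omega_1\omega_2}/(\omega_1+\omega_2)$ is standard, whereas an adiabatic change keeps the ground-state population at 1.

```python
import numpy as np

def ground_state(omega, x, m=1.0, hbar=1.0):
    """Ground-state wavefunction of a harmonic oscillator (m = hbar = 1 by default)."""
    return (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega * x**2 / (2 * hbar))

omega1, omega2 = 1.0, 4.0              # spring constant suddenly increased by a factor of 16
x, dx = np.linspace(-10, 10, 20001, retstep=True)

# Sudden (diabatic) change: the wavefunction has no time to adapt, so the new
# ground-state population is the squared overlap of the old and new ground states.
overlap = np.sum(ground_state(omega1, x) * ground_state(omega2, x)) * dx
p_sudden = overlap ** 2
p_closed_form = 2 * np.sqrt(omega1 * omega2) / (omega1 + omega2)

print(f"sudden change : ground-state population = {p_sudden:.4f}")
print(f"closed form   : 2*sqrt(w1*w2)/(w1+w2)    = {p_closed_form:.4f}")
# Adiabatic change: the population would remain 1, since the state follows the
# instantaneous ground state of the slowly varying Hamiltonian.
```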
For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. [11] The states, labelled $|1\rangle$ and $|2\rangle$ using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states:
$$|\Psi\rangle = c_1(t)\,|1\rangle + c_2(t)\,|2\rangle.$$
With the field absent, the energetic separation of the diabatic states is equal to $\hbar\omega_0$; the energy of state $|1\rangle$ increases with increasing magnetic field (a low-field-seeking state), while the energy of state $|2\rangle$ decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written
$$\mathbf{H}(t) = \begin{pmatrix} E_1(t) & a \\ a^* & E_2(t) \end{pmatrix},$$
where $\mu$ is the magnetic moment of the atom, assumed to be the same for the two diabatic states (so that $E_1(t)$ and $E_2(t)$ vary linearly with the field $B(t)$), and $a$ is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ($E_1(t)$ and $E_2(t)$); however, as $\mathbf{H}(t)$ is not a diagonal matrix, it is clear that these states are not eigenstates of $\mathbf{H}(t)$, due to the off-diagonal coupling constant.
The eigenvectors of the matrix are the eigenstates of the system, which we will label $|\phi_1(t)\rangle$ and $|\phi_2(t)\rangle$, with corresponding eigenvalues
$$\varepsilon_{1,2}(t) = \frac{E_1(t)+E_2(t)}{2} \mp \frac{1}{2}\sqrt{\bigl(E_1(t)-E_2(t)\bigr)^2 + 4|a|^2}.$$
It is important to realise that the eigenvalues $\varepsilon_1(t)$ and $\varepsilon_2(t)$ are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies $E_1(t)$ and $E_2(t)$ correspond to the expectation values for the energy of the system in the diabatic states $|1\rangle$ and $|2\rangle$.
Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in one of the diabatic states in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in the magnetic field will ensure the system remains in an eigenstate of the Hamiltonian throughout the process (it follows the red curve). A diabatic increase in the magnetic field will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to the other adiabatic eigenstate. For finite magnetic-field slew rates there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.
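A small sketch of the avoided crossing (not from the source; it assumes the concrete linear parametrization $E_{1,2}(t) = \pm\mu B/2$ and illustrative values for $\mu$ and $a$): diagonalizing the 2×2 matrix above over a range of fields shows that the adiabatic energies never become degenerate and that the minimum splitting, $2|a|$, occurs where the diabatic energies cross.

```python
import numpy as np

mu, a = 1.0, 0.2                               # magnetic moment and coupling (assumed values)
for B in np.linspace(-2.0, 2.0, 9):
    E1, E2 = +mu * B / 2.0, -mu * B / 2.0      # assumed linear diabatic energies
    H = np.array([[E1, a],
                  [a,  E2]])
    eps_low, eps_high = np.linalg.eigvalsh(H)  # adiabatic (eigen)energies
    print(f"B = {B:+.2f}   diabatic: {E1:+.3f} / {E2:+.3f}   adiabatic: {eps_low:+.3f} / {eps_high:+.3f}")

# The adiabatic gap is 2*sqrt(((E1 - E2)/2)**2 + a**2); for a != 0 it never closes and
# reaches its minimum 2*|a| where the diabatic energies cross (the avoided crossing).
```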
These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules.
Under a slowly changing Hamiltonian $\hat{H}(t)$ with instantaneous eigenstates $|n(t)\rangle$ and corresponding energies $E_n(t)$, a quantum system evolves from the initial state
$$|\psi(0)\rangle = \sum_n c_n(0)\,|n(0)\rangle$$
to the final state
$$|\psi(t)\rangle = \sum_n c_n(t)\,|n(t)\rangle,$$
where the coefficients undergo the change of phase
$$c_n(t) = c_n(0)\, e^{i\theta_n(t)}\, e^{i\gamma_n(t)}$$
with the dynamical phase
$$\theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n(t')\,dt'$$
and geometric phase
$$\gamma_n(t) = i\int_0^t \langle n(t')|\dot{n}(t')\rangle\,dt'.$$
In particular, $|c_n(t)|^2 = |c_n(0)|^2$, so if the system begins in an eigenstate of $\hat{H}(0)$, it remains in an eigenstate of $\hat{H}(t)$ during the evolution, with a change of phase only.
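As a quick consistency check (not part of the original statement), specializing to a time-independent Hamiltonian recovers ordinary stationary-state evolution: the eigenstates do not move, so $\langle n|\dot{n}\rangle = 0$ and
$$\gamma_n(t) = 0, \qquad \theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n\,dt' = -\frac{E_n t}{\hbar},$$
$$c_n(t) = c_n(0)\,e^{i\theta_n(t)}\,e^{i\gamma_n(t)} = c_n(0)\,e^{-iE_n t/\hbar},$$
which is the familiar phase evolution of a stationary state.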
This proof is partly inspired by one given by Sakurai in Modern Quantum Mechanics. [12] The instantaneous eigenstates $|n(t)\rangle$ and energies $E_n(t)$, by assumption, satisfy the time-independent Schrödinger equation at all times $t$:
$$\hat{H}(t)\,|n(t)\rangle = E_n(t)\,|n(t)\rangle.$$
Thus, they constitute a basis that can be used to expand the state
$$|\psi(t)\rangle = \sum_n c_n(t)\,|n(t)\rangle$$
at any time $t$. The evolution of the system is governed by the time-dependent Schrödinger equation
$$i\hbar\,\frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}(t)\,|\psi(t)\rangle,$$
where the overdot denotes $d/dt$ (see Notation for differentiation § Newton's notation). Insert the expansion of $|\psi(t)\rangle$, use $\hat{H}(t)|n(t)\rangle = E_n(t)|n(t)\rangle$, differentiate with the product rule, take the inner product with $\langle m(t)|$ and use orthonormality of the eigenstates to obtain
$$\dot{c}_m(t) = -\frac{i}{\hbar}E_m(t)\,c_m(t) - \sum_n c_n(t)\,\langle m(t)|\dot{n}(t)\rangle.$$
This coupled first-order differential equation is exact and expresses the time-evolution of the coefficients in terms of inner products $\langle m(t)|\dot{n}(t)\rangle$ between the eigenstates and the time-differentiated eigenstates. But it is possible to re-express the inner products for $m \neq n$ in terms of matrix elements of the time-differentiated Hamiltonian $\dot{\hat{H}}(t)$. To do so, differentiate both sides of the time-independent Schrödinger equation with respect to time using the product rule to get
$$\dot{\hat{H}}(t)|n(t)\rangle + \hat{H}(t)|\dot{n}(t)\rangle = \dot{E}_n(t)|n(t)\rangle + E_n(t)|\dot{n}(t)\rangle.$$
Again take the inner product with $\langle m(t)|$ and use $\langle m(t)|\hat{H}(t) = E_m(t)\langle m(t)|$ and orthonormality to find
$$\langle m(t)|\dot{n}(t)\rangle = \frac{\langle m(t)|\dot{\hat{H}}(t)|n(t)\rangle}{E_n(t) - E_m(t)} \qquad (m \neq n).$$
Insert this into the differential equation for the coefficients to obtain
$$\dot{c}_m(t) = -\left(\frac{i}{\hbar}E_m(t) + \langle m(t)|\dot{m}(t)\rangle\right) c_m(t) - \sum_{n \neq m} c_n(t)\,\frac{\langle m(t)|\dot{\hat{H}}(t)|n(t)\rangle}{E_n(t) - E_m(t)}.$$
This differential equation describes the time-evolution of the coefficients, but now in terms of matrix elements of $\dot{\hat{H}}(t)$. To arrive at the adiabatic theorem, neglect the sum on the right hand side. This is valid if the rate of change of the Hamiltonian is small and there is a finite gap between the energies. This is known as the adiabatic approximation. Under the adiabatic approximation,
$$\dot{c}_m(t) = -\left(\frac{i}{\hbar}E_m(t) + \langle m(t)|\dot{m}(t)\rangle\right) c_m(t),$$
which integrates precisely to the adiabatic theorem,
$$c_m(t) = c_m(0)\, e^{i\theta_m(t)}\, e^{i\gamma_m(t)},$$
with the phases defined in the statement of the theorem.

The dynamical phase is real because it involves an integral over a real energy. To see that the geometric phase is purely real, differentiate the normalization $\langle n(t)|n(t)\rangle = 1$ of the eigenstates and use the product rule to find that
$$\langle \dot{n}(t)|n(t)\rangle + \langle n(t)|\dot{n}(t)\rangle = 2\,\operatorname{Re}\,\langle n(t)|\dot{n}(t)\rangle = 0.$$
Thus, $\langle n(t)|\dot{n}(t)\rangle$ is purely imaginary, so the geometric phase is purely real.
Proof with the details of the adiabatic approximation. [13] [14] We are going to formulate the statement of the theorem as follows: for a Hamiltonian $\hat{H}(t)$ that varies slowly over a time interval $T$, a system prepared at $t = 0$ in the $n$-th instantaneous eigenstate $\psi_n(0)$ remains, up to phase factors, in the $n$-th instantaneous eigenstate $\psi_n(t)$, with an error that vanishes in the limit $T \to \infty$.
And now we are going to prove the theorem. Consider the time-dependent Schrödinger equation
$$i\hbar\,\frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}(t)\,|\psi(t)\rangle$$
with Hamiltonian $\hat{H}(t)$. We would like to know the relation between an initial state $|\psi(0)\rangle$ and its final state $|\psi(T)\rangle$ at $t = T$ in the adiabatic limit $T \to \infty$.

First redefine time as $\lambda = t/T \in [0,1]$:
$$i\hbar\,\frac{\partial}{\partial\lambda}|\psi(\lambda)\rangle = T\,\hat{H}(\lambda)\,|\psi(\lambda)\rangle.$$
At every point in time $\hat{H}(\lambda)$ can be diagonalized, with eigenvalues $E_n(\lambda)$ and eigenvectors $|\psi_n(\lambda)\rangle$. Since the eigenvectors form a complete basis at any time, we can expand $|\psi(\lambda)\rangle$ as
$$|\psi(\lambda)\rangle = \sum_n c_n(\lambda)\,|\psi_n(\lambda)\rangle\, e^{i T \theta_n(\lambda)}, \qquad \text{where} \quad \theta_n(\lambda) = -\frac{1}{\hbar}\int_0^{\lambda} E_n(\lambda')\,d\lambda'.$$
The phase $\theta_n(\lambda)$ is called the dynamic phase factor. By substitution into the Schrödinger equation, another equation for the variation of the coefficients can be obtained:
$$i\hbar\sum_n\left(\dot{c}_n|\psi_n\rangle + c_n|\dot{\psi}_n\rangle + i T c_n \dot{\theta}_n|\psi_n\rangle\right)e^{i T \theta_n} = \sum_n T\, c_n E_n |\psi_n\rangle\, e^{i T \theta_n}.$$
The term $\dot{\theta}_n$ gives $-E_n/\hbar$, and so the third term of the left side cancels out with the right side, leaving
$$\sum_n\left(\dot{c}_n|\psi_n\rangle + c_n|\dot{\psi}_n\rangle\right)e^{i T \theta_n} = 0.$$
Now taking the inner product with an arbitrary eigenfunction $\langle\psi_m|$, the $\langle\psi_m|\psi_n\rangle$ on the left gives $\delta_{nm}$, which is 1 only for m = n and otherwise vanishes. The remaining part gives
$$\dot{c}_m = -\sum_n c_n\,\langle\psi_m|\dot{\psi}_n\rangle\, e^{i T(\theta_n - \theta_m)}.$$
For $T \to \infty$ the factor $e^{i T(\theta_n - \theta_m)}$ will oscillate faster and faster and intuitively will eventually suppress nearly all terms on the right side. The only exceptions are when $\theta_n - \theta_m$ has a critical point, i.e. $E_n(\lambda) = E_m(\lambda)$. This is trivially true for $m = n$. Since the adiabatic theorem assumes a gap between the eigenenergies at any time, this cannot hold for $m \neq n$. Therefore, only the $m = n$ term will remain in the limit $T \to \infty$.

In order to show this more rigorously we first need to remove the $m = n$ term. This can be done by defining
$$b_m(\lambda) = c_m(\lambda)\, e^{\int_0^{\lambda}\langle\psi_m|\dot{\psi}_m\rangle\,d\lambda'}.$$
We obtain:
$$\dot{b}_m = -\sum_{n \neq m} b_n\, e^{\int_0^{\lambda}\left(\langle\psi_m|\dot{\psi}_m\rangle - \langle\psi_n|\dot{\psi}_n\rangle\right)d\lambda'}\,\langle\psi_m|\dot{\psi}_n\rangle\, e^{i T(\theta_n - \theta_m)}.$$
This equation can be integrated:
$$b_m(1) - b_m(0) = -\int_0^1 \sum_{n \neq m} b_n\, e^{\int_0^{\lambda}\left(\langle\psi_m|\dot{\psi}_m\rangle - \langle\psi_n|\dot{\psi}_n\rangle\right)d\lambda'}\,\langle\psi_m|\dot{\psi}_n\rangle\, e^{i T(\theta_n - \theta_m)}\,d\lambda$$
or written in vector notation
$$\vec{b}(1) - \vec{b}(0) = -\int_0^1 \hat{K}(\lambda, T)\,\vec{b}(\lambda)\,d\lambda.$$
Here $\hat{K}(\lambda, T)$ is a matrix whose entries contain the rapidly oscillating factors $e^{i T(\theta_n - \theta_m)}$, so the integral is basically a Fourier transform in $T$. It follows from the Riemann–Lebesgue lemma that such integrals of smooth functions vanish as $T \to \infty$. As a last step, take the norm on both sides of the above equation:
$$\left\|\vec{b}(1) - \vec{b}(0)\right\| \le \int_0^1 \left\|\hat{K}(\lambda, T)\right\|\,\left\|\vec{b}(\lambda)\right\|\,d\lambda$$
and apply Grönwall's inequality to bound $\|\vec{b}(\lambda)\|$ uniformly in $T$. Since the oscillating contributions vanish, it follows that $\vec{b}(1) - \vec{b}(0) \to 0$ for $T \to \infty$. This concludes the proof of the adiabatic theorem.

In the adiabatic limit the eigenstates of the Hamiltonian evolve independently of each other. If the system is prepared in an eigenstate $|\psi_n(0)\rangle$, its time evolution is given by:
$$|\psi(\lambda)\rangle = e^{i T \theta_n(\lambda)}\, e^{i\gamma_n(\lambda)}\, |\psi_n(\lambda)\rangle, \qquad \gamma_n(\lambda) = i\int_0^{\lambda}\langle\psi_n|\dot{\psi}_n\rangle\,d\lambda'.$$
So, for an adiabatic process, a system starting from the nth eigenstate also remains in that nth eigenstate, as it does for time-independent processes, only picking up a couple of phase factors. The new phase factor $\gamma_n$ can be canceled out by an appropriate choice of gauge for the eigenfunctions. However, if the adiabatic evolution is cyclic, then $\gamma_n$ becomes a gauge-invariant physical quantity, known as the Berry phase.
Generic proof in parameter space |
Let's start from a parametric Hamiltonian $\hat{H}(\vec{R}(t))$, where the parameters $\vec{R}(t)$ are slowly varying in time; the definition of "slow" here is set essentially by the energy separation between the eigenstates (through the uncertainty principle, this separation defines a timescale that shall always be much shorter than the time scale considered). This way we also clearly identify that, while slowly varying, the eigenstates remain clearly separated in energy (e.g. also when we generalize this to the case of bands, as in the TKNN formula, the bands shall remain clearly separated). Given that they do not intersect, the states are ordered, and in this sense this is also one of the meanings of the name topological order.

We have the instantaneous Schrödinger equation:
$$i\hbar\,\frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}(\vec{R}(t))\,|\psi(t)\rangle$$
and instantaneous eigenstates:
$$\hat{H}(\vec{R})\,|\phi_n(\vec{R})\rangle = E_n(\vec{R})\,|\phi_n(\vec{R})\rangle.$$
The generic solution is
$$|\psi(t)\rangle = \sum_n a_n(t)\,|\phi_n(\vec{R}(t))\rangle.$$
Plugging this into the full Schrödinger equation and multiplying by a generic eigenvector $\langle\phi_m(\vec{R})|$:
$$\dot{a}_m = -\frac{i}{\hbar}E_m(\vec{R}(t))\,a_m - \sum_n a_n\,\langle\phi_m(\vec{R})|\nabla_{\vec{R}}\phi_n(\vec{R})\rangle\cdot\dot{\vec{R}},$$
and if we introduce the adiabatic approximation,
$$\langle\phi_m(\vec{R})|\nabla_{\vec{R}}\phi_n(\vec{R})\rangle\cdot\dot{\vec{R}} \simeq 0 \qquad \text{for each } m \neq n,$$
we have
$$\dot{a}_m = -\frac{i}{\hbar}E_m(\vec{R}(t))\,a_m - a_m\,\langle\phi_m(\vec{R})|\nabla_{\vec{R}}\phi_m(\vec{R})\rangle\cdot\dot{\vec{R}}$$
and
$$a_m(t) = e^{i\theta_m(t)}\,e^{i\gamma_m(t)}\,a_m(0),$$
where
$$\theta_m(t) = -\frac{1}{\hbar}\int_0^t E_m(\vec{R}(t'))\,dt', \qquad \gamma_m(t) = i\int_C \langle\phi_m(\vec{R})|\nabla_{\vec{R}}\phi_m(\vec{R})\rangle\cdot d\vec{R},$$
and $C$ is the path in the parameter space traced out by $\vec{R}(t)$.

This is the same as the statement of the theorem, but in terms of the coefficients of the total wave function and its initial state. [15] Now this is slightly more general than the other proofs, given that we consider a generic set of parameters, and we see that the Berry phase $\gamma_m$ acts as a local geometric quantity in the parameter space. Finally, integrals of local geometric quantities can give topological invariants, as in the case of the Gauss–Bonnet theorem. [16] In fact, if the path C is closed, then the Berry phase persists under gauge transformations and becomes a physical quantity.
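An illustrative numerical sketch of a Berry phase in parameter space (the standard spin-1/2 example, not taken from the text above): for a spin aligned with a magnetic field whose direction traces a closed cone of opening angle $\theta$, the Berry phase of the aligned eigenstate equals minus half the enclosed solid angle, $\gamma = -\Omega/2 = -\pi(1-\cos\theta)$. The script discretizes the loop and accumulates the phase from overlaps of neighbouring instantaneous eigenstates.

```python
import numpy as np

def aligned_eigenstate(theta, phi):
    """Instantaneous eigenstate of n.sigma with eigenvalue +1, for field direction (theta, phi)."""
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

theta = 0.7                                  # cone opening angle (assumed value)
phis = np.linspace(0.0, 2 * np.pi, 2001)     # closed loop in parameter space
states = [aligned_eigenstate(theta, p) for p in phis]

# Discretized Berry phase: gamma = -Im sum log <n_k | n_{k+1}> around the closed loop.
gamma = 0.0
for k in range(len(states) - 1):
    gamma -= np.angle(np.vdot(states[k], states[k + 1]))

solid_angle = 2 * np.pi * (1 - np.cos(theta))
print(f"numerical Berry phase : {gamma:+.6f}")
print(f"-(solid angle)/2      : {-solid_angle / 2:+.6f}")
```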
Often a solid crystal is modeled as a set of independent valence electrons moving in a mean, perfectly periodic potential generated by a rigid lattice of ions. With the adiabatic theorem we can also include the motion of the valence electrons across the crystal and the thermal motion of the ions, as in the Born–Oppenheimer approximation. [17]
This picture explains many phenomena in the scope of solid-state physics.
We will now pursue a more rigorous analysis. [18] Making use of bra–ket notation, the state vector of the system at time $t$ can be written $|\psi(t)\rangle$, where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator:
$$\psi(x,t) = \langle x|\psi(t)\rangle.$$
It is instructive to examine the limiting cases, in which $\tau$ is very large (adiabatic, or gradual, change) and very small (diabatic, or sudden, change).
Consider a system Hamiltonian undergoing continuous change from an initial value $\hat{H}_0$, at time $t_0$, to a final value $\hat{H}_1$, at time $t_1$, where $\tau = t_1 - t_0$. The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation
$$\hat{U}(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t}\hat{H}(t')\,\hat{U}(t',t_0)\,dt',$$
which is equivalent to the Schrödinger equation
$$i\hbar\,\frac{\partial}{\partial t}\hat{U}(t,t_0) = \hat{H}(t)\,\hat{U}(t,t_0),$$
along with the initial condition $\hat{U}(t_0,t_0) = 1$. Given knowledge of the system wave function at $t_0$, the evolution of the system up to a later time $t$ can be obtained using
$$|\psi(t)\rangle = \hat{U}(t,t_0)\,|\psi(t_0)\rangle.$$
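A brief sketch (with an assumed toy two-level Hamiltonian, ħ = 1) of how this operator can be built numerically: the integral equation above is equivalent to composing many short-time propagators, with later times applied from the left.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0

def H(t):
    """Assumed toy two-level Hamiltonian: a linear energy sweep with a constant coupling."""
    return np.array([[0.5 * t, 0.2],
                     [0.2, -0.5 * t]])

def time_evolution_operator(t0, t1, steps=4000):
    """Approximate U(t1, t0) as a time-ordered product of short-time propagators."""
    edges = np.linspace(t0, t1, steps + 1)
    dt = edges[1] - edges[0]
    U = np.eye(2, dtype=complex)
    for t in edges[:-1]:
        # later times act later, so each new short-time propagator multiplies from the left
        U = expm(-1j * H(t + dt / 2.0) * dt / hbar) @ U
    return U

U = time_evolution_operator(-10.0, 10.0)
print("unitarity check, max |U^dag U - 1| =", np.max(np.abs(U.conj().T @ U - np.eye(2))))
psi0 = np.array([1.0, 0.0], dtype=complex)   # wave function at t0
print("populations of U @ psi0            =", np.abs(U @ psi0) ** 2)
```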
The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of $\hat{U}(t_1,t_0)$ on $\tau$.
To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and the definition $|0\rangle \equiv |\psi(t_0)\rangle$, we have:
$$\zeta = \langle 0|\hat{U}^{\dagger}(t_1,t_0)\,\hat{U}(t_1,t_0)|0\rangle - \langle 0|\hat{U}^{\dagger}(t_1,t_0)|0\rangle\,\langle 0|\hat{U}(t_1,t_0)|0\rangle.$$
We can expand $\hat{U}(t_1,t_0)$:
$$\hat{U}(t_1,t_0) = 1 + \frac{1}{i\hbar}\int_{t_0}^{t_1}\hat{H}(t)\,dt + \frac{1}{(i\hbar)^2}\int_{t_0}^{t_1}dt'\int_{t_0}^{t'}dt''\,\hat{H}(t')\,\hat{H}(t'') + \cdots$$
In the perturbative limit we can take just the first two terms and substitute them into our equation for $\zeta$, recognizing that
$$\bar{H} \equiv \frac{1}{\tau}\int_{t_0}^{t_1}\hat{H}(t)\,dt$$
is the system Hamiltonian, averaged over the interval $t_0 \to t_1$; we have:
$$\zeta = \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar{H}\right)\left(1 - \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle - \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle\,\langle 0|\left(1 - \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle.$$
After expanding the products and making the appropriate cancellations, we are left with:
$$\zeta = \frac{\tau^2}{\hbar^2}\left(\langle 0|\bar{H}^2|0\rangle - \langle 0|\bar{H}|0\rangle\,\langle 0|\bar{H}|0\rangle\right),$$
giving
$$\zeta = \frac{\tau^2\,\Delta\bar{H}^2}{\hbar^2},$$
where $\Delta\bar{H}$ is the root mean square deviation of the system Hamiltonian averaged over the interval of interest.
The sudden approximation is valid when $\zeta \ll 1$ (the probability of finding the system in a state other than that in which it started approaches zero), thus the validity condition is given by
$$\tau \ll \frac{\hbar}{\Delta\bar{H}},$$
which is a statement of the time-energy form of the Heisenberg uncertainty principle.
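A rough numerical illustration (assuming ΔH̄ ≈ 1 eV, a typical electronic energy scale, which is not a value from the text): the sudden approximation then requires the change to take place well within a femtosecond.

```python
hbar_eV_s = 6.582119569e-16      # reduced Planck constant in eV*s
delta_H_bar = 1.0                # assumed RMS deviation of the averaged Hamiltonian, in eV
tau_limit = hbar_eV_s / delta_H_bar
print(f"sudden approximation requires tau << {tau_limit:.2e} s (well under a femtosecond)")
```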
In the limit $\tau \to 0$ we have infinitely rapid, or diabatic, passage:
$$\lim_{\tau \to 0}\hat{U}(t_1,t_0) = 1.$$
The functional form of the system remains unchanged:
$$|\langle x|\psi(t_1)\rangle|^2 = |\langle x|\psi(t_0)\rangle|^2.$$
This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged:
$$P_D = 1 - \zeta.$$
In the limit $\tau \to \infty$ we have infinitely slow, or adiabatic, passage. The system evolves, adapting its form to the changing conditions:
$$|\langle x|\psi(t_1)\rangle|^2 \neq |\langle x|\psi(t_0)\rangle|^2.$$
If the system is initially in an eigenstate of $\hat{H}(t_0)$, after a period $\tau$ it will have passed into the corresponding eigenstate of $\hat{H}(t_1)$.
This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state:
$$P_A = \zeta.$$
In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, [19] for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time).
The key figure of merit in this approach is the Landau–Zener velocity:
$$v_{LZ} = \frac{\dfrac{\partial}{\partial t}|E_2 - E_1|}{\dfrac{\partial}{\partial q}|E_2 - E_1|} \approx \frac{dq}{dt},$$
where $q$ is the perturbation variable (electric or magnetic field, molecular bond length, or any other perturbation to the system), and $E_1$ and $E_2$ are the energies of the two diabatic (crossing) states. A large $v_{LZ}$ results in a large diabatic transition probability and vice versa.
Using the Landau–Zener formula the probability, $P_D$, of a diabatic transition is given by
$$P_D = e^{-2\pi\Gamma}, \qquad \Gamma = \frac{a^2/\hbar}{\left|\dfrac{\partial}{\partial t}(E_2 - E_1)\right|},$$
where $a$ is the off-diagonal coupling between the diabatic states introduced above.
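A minimal sketch of evaluating this result for assumed parameter values (ħ = 1, a chosen coupling $a$, and a constant sweep rate of the diabatic energy difference):

```python
import numpy as np

hbar = 1.0
a = 0.2         # off-diagonal coupling between the diabatic states (assumed value)
dEdt = 1.0      # sweep rate d(E2 - E1)/dt at the crossing (assumed value)

Gamma = a**2 / (hbar * abs(dEdt))
P_diabatic = np.exp(-2 * np.pi * Gamma)
print(f"Gamma = {Gamma:.3f}   P_diabatic = {P_diabatic:.3f}")
# Slower sweeps (smaller dEdt) give larger Gamma and an exponentially smaller diabatic
# transition probability, i.e. more nearly adiabatic behaviour.
```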
For a transition involving a nonlinear change in the perturbation variable or a time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide variety of numerical solution algorithms for ordinary differential equations.
The equations to be solved can be obtained from the time-dependent Schrödinger equation:
$$i\hbar\,\dot{\underline{c}}^A(t) = \mathbf{H}_A(t)\,\underline{c}^A(t),$$
where $\underline{c}^A(t)$ is a vector containing the adiabatic state amplitudes, $\mathbf{H}_A(t)$ is the time-dependent adiabatic Hamiltonian, [11] and the overdot represents a time derivative.
Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system,
$$P_D = |c_2^A(t_1)|^2$$
for a system that began with $|c_1^A(t_0)|^2 = 1$.
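A sketch of such a numerical treatment for the two-state case (assumed linear sweep, ħ = 1, and written in the diabatic basis for simplicity rather than the adiabatic basis used above): integrating the time-dependent Schrödinger equation with an ODE solver and reading off the probability of remaining in the initial diabatic state reproduces the Landau–Zener value.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar, a, alpha = 1.0, 0.2, 1.0                # coupling and sweep rate (assumed values)

def schrodinger(t, y):
    """i*hbar dc/dt = H(t) c for a linear sweep, written in the diabatic basis."""
    c = y[:2] + 1j * y[2:]
    H = np.array([[alpha * t / 2.0, a],
                  [a, -alpha * t / 2.0]])
    dc = -1j / hbar * (H @ c)
    return np.concatenate([dc.real, dc.imag])

T = 100.0                                      # start and finish far from the crossing at t = 0
y0 = np.array([1.0, 0.0, 0.0, 0.0])            # begin in diabatic state |1>
sol = solve_ivp(schrodinger, (-T, T), y0, rtol=1e-9, atol=1e-12, max_step=0.05)

c_final = sol.y[:2, -1] + 1j * sol.y[2:, -1]
P_numeric = np.abs(c_final[0]) ** 2            # probability of staying in |1> (diabatic transition)
P_landau_zener = np.exp(-2 * np.pi * a**2 / (hbar * alpha))
print(f"numerical P_D = {P_numeric:.4f}   Landau-Zener P_D = {P_landau_zener:.4f}")
```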