The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows: a physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum.
In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged.
Diabatic process: Rapidly changing conditions prevent the system from adapting its configuration during the process, hence the spatial probability density remains unchanged. Typically there is no eigenstate of the final Hamiltonian with the same functional form as the initial state. The system ends in a linear combination of states that sum to reproduce the initial probability density.

Adiabatic process: Gradually changing conditions allow the system to adapt its configuration, hence the probability density is modified by the process. If the system starts in an eigenstate of the initial Hamiltonian, it will end in the corresponding eigenstate of the final Hamiltonian.
At some initial time $t_0$ a quantum-mechanical system has an energy given by the Hamiltonian $\hat{H}(t_0)$; the system is in an eigenstate of $\hat{H}(t_0)$ labelled $\psi(x,t_0)$. Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian $\hat{H}(t_1)$ at some later time $t_1$. The system will evolve according to the time-dependent Schrödinger equation, to reach a final state $\psi(x,t_1)$. The adiabatic theorem states that the modification to the system depends critically on the time $\tau = t_1 - t_0$ during which the modification takes place.
For a truly adiabatic process we require $\tau \to \infty$; in this case the final state $\psi(x,t_1)$ will be an eigenstate of the final Hamiltonian $\hat{H}(t_1)$, with a modified configuration:

$$|\psi(x,t_1)|^2 \neq |\psi(x,t_0)|^2.$$
The degree to which a given change approximates an adiabatic process depends on both the energy separation between $\psi(x,t_0)$ and adjacent states, and the ratio of the interval $\tau$ to the characteristic time-scale of the evolution of $\psi(x,t_0)$ for a time-independent Hamiltonian, $\tau_{int} = 2\pi\hbar/E_0$, where $E_0$ is the energy of $\psi(x,t_0)$.
Conversely, in the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged:

$$|\psi(x,t_1)|^2 = |\psi(x,t_0)|^2.$$
The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of $\hat{H}$ is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of $\hat{H}(t_1)$ corresponds to $\psi(t_0)$). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap.
Note that the term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process); more precisely, these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) "Adiabatic" in the context of thermodynamics is often used as a synonym for a fast process.
The classical and quantum-mechanical definition is instead closer to the thermodynamic concept of a quasistatic process: a process that is almost always at equilibrium (i.e., slower than the time scales of the internal energy-exchange interactions; for example, a normal atmospheric heat wave is quasi-static, while a pressure wave is not). "Adiabatic" in the context of mechanics is often used as a synonym for a slow process.
In the quantum world, "adiabatic" means, for example, that the time scale of electron–photon interactions is much faster, or almost instantaneous, with respect to the average time scale of electron and photon propagation. Therefore, we can model the interactions as continuous propagation of electrons and photons (i.e., states at equilibrium) plus quantum jumps between states (i.e., instantaneous transitions).
The adiabatic theorem in this heuristic context says essentially that quantum jumps are preferably avoided and the system tries to conserve its state and quantum numbers.
The quantum-mechanical concept of adiabaticity is related to the adiabatic invariant; it is often used in the old quantum theory and has no direct relation to heat exchange.
As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available on the Adiabatic invariant page.
The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian.
If the spring constant $k$ is increased adiabatically, then the system at time $t$ will be in an instantaneous eigenstate $\psi(t)$ of the current Hamiltonian $\hat{H}(t)$, corresponding to the initial eigenstate of $\hat{H}(0)$. For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. Figure 1 shows how a harmonic oscillator, initially in its ground state, $n = 0$, remains in the ground state as the potential energy curve is compressed, with the functional form of the state adapting to the slowly varying conditions.
For a rapidly increased spring constant, the system undergoes a diabatic process in which it has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of the new Hamiltonian which sum to reproduce the form of the initial state.
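This decomposition can be illustrated numerically. The sketch below (units $\hbar = m = 1$ and both oscillator frequencies are assumed values) computes the overlap of the old ground state with the ground state of the stiffened oscillator on a grid, and compares it with the closed-form Gaussian-overlap result $|\langle 0'|0\rangle|^2 = 2\sqrt{\omega_1\omega_2}/(\omega_1+\omega_2)$:

```python
import numpy as np

# Sketch (assumed units hbar = m = 1, assumed frequencies): after a sudden
# increase of the spring constant, the old ground state is no longer an
# eigenstate of the new Hamiltonian. Its overlap with the new ground state
# follows the Gaussian-overlap formula 2*sqrt(w1*w2)/(w1 + w2).
w1, w2 = 1.0, 4.0                      # oscillator frequencies before/after
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def ground_state(w):
    """Normalized harmonic-oscillator ground state for frequency w."""
    return (w / np.pi) ** 0.25 * np.exp(-w * x**2 / 2)

overlap = np.sum(ground_state(w1) * ground_state(w2)) * dx
p_ground = overlap**2                  # population left in the new ground state
analytic = 2 * np.sqrt(w1 * w2) / (w1 + w2)
print(p_ground, analytic)   # both ≈ 0.8; the remaining 20% is spread over excited states
```

The missing population ends up in the even excited states of the new Hamiltonian, which together reconstruct the shape of the original wavefunction.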
For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. The states, labelled $|1\rangle$ and $|2\rangle$ using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states:

$$|\Psi\rangle = c_1(t)|1\rangle + c_2(t)|2\rangle.$$
With the field absent, the energetic separation of the diabatic states is equal to $\hbar\omega_0$; the energy of state $|1\rangle$ increases with increasing magnetic field (a low-field-seeking state), while the energy of state $|2\rangle$ decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written

$$\mathbf{H} = \begin{pmatrix} \mu B(t) + \hbar\omega_0/2 & a \\ a^* & -\mu B(t) - \hbar\omega_0/2 \end{pmatrix},$$
where $\mu$ is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and $a$ is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ($E_1(t)$ and $E_2(t)$); however, as $\mathbf{H}$ is not a diagonal matrix, it is clear that these states are not eigenstates of the new Hamiltonian that includes the magnetic field contribution.
The eigenvectors of the matrix $\mathbf{H}$ are the eigenstates of the system, which we will label $|\phi_1(t)\rangle$ and $|\phi_2(t)\rangle$, with corresponding eigenvalues

$$\varepsilon_{1,2}(t) = \mp\frac{1}{2}\sqrt{4|a|^2 + (\hbar\omega_0 + 2\mu B(t))^2}.$$
It is important to realise that the eigenvalues $\varepsilon_1(t)$ and $\varepsilon_2(t)$ are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies $E_1(t)$ and $E_2(t)$ correspond to the expectation values for the energy of the system in the diabatic states $|1\rangle$ and $|2\rangle$.
Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field will ensure the system remains in an eigenstate of the Hamiltonian throughout the process (follows the red curve). A diabatic increase in magnetic field will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to state . For finite magnetic field slew rates there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.
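The avoided crossing can be checked directly by diagonalizing the 2 × 2 Hamiltonian. This is a minimal sketch in units where $\hbar = 1$, with all parameter values ($\mu$, $\omega_0$, $a$) assumed for illustration:

```python
import numpy as np

# Minimal sketch of the two-level avoided crossing (hbar = 1, all parameter
# values assumed): the Hamiltonian has diabatic energies +/-(mu*B + w0/2)
# on the diagonal and a constant coupling a off the diagonal.
mu, w0, a = 1.0, 1.0, 0.2

def adiabatic_energies(B):
    """Eigenvalues of the field-dependent Hamiltonian, sorted ascending."""
    H = np.array([[mu * B + w0 / 2, a],
                  [a, -(mu * B + w0 / 2)]])
    return np.linalg.eigvalsh(H)

# Where the diabatic energies cross (mu*B + w0/2 = 0), the adiabatic
# eigenvalues remain split by 2|a| instead of becoming degenerate.
B_cross = -w0 / (2 * mu)
eps_lo, eps_hi = adiabatic_energies(B_cross)
print(eps_hi - eps_lo)   # gap at the avoided crossing: 2*|a| = 0.4
```

Sweeping `B` through `B_cross` and plotting both the eigenvalues and the diagonal elements reproduces the red/blue curves described for Figure 2: the adiabatic levels repel while the diabatic energies cross.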
These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules.
Under a slowly changing Hamiltonian $H(t)$ with instantaneous eigenstates $|n(t)\rangle$ and corresponding energies $E_n(t)$, a quantum system evolves from the initial state

$$|\psi(0)\rangle = \sum_n c_n(0)\,|n(0)\rangle$$

to the final state

$$|\psi(t)\rangle = \sum_n c_n(t)\,|n(t)\rangle,$$

where the coefficients undergo the change of phase

$$c_n(t) = c_n(0)\, e^{i\theta_n(t)}\, e^{i\gamma_n(t)}$$

with the dynamical phase

$$\theta_n(t) = -\frac{1}{\hbar}\int_0^t E_n(t')\,dt'$$

and geometric phase

$$\gamma_n(t) = i\int_0^t \langle n(t')|\dot{n}(t')\rangle\,dt'.$$

In particular, $|c_n(t)|^2 = |c_n(0)|^2$, so if the system begins in an eigenstate of $H(0)$, it remains in an eigenstate of $H(t)$ during the evolution, with a change of phase only.
This proof is partly inspired by one given by Sakurai in Modern Quantum Mechanics. The instantaneous eigenstates $|n(t)\rangle$ and energies $E_n(t)$, by assumption, satisfy the time-independent Schrödinger equation

$$H(t)\,|n(t)\rangle = E_n(t)\,|n(t)\rangle$$
at all times $t$. Thus, they constitute a basis that can be used to expand the state

$$|\psi(t)\rangle = \sum_n c_n(t)\, e^{i\theta_n(t)}\,|n(t)\rangle$$
at any time $t$. The evolution of the system is governed by the time-dependent Schrödinger equation

$$i\hbar\,\partial_t|\psi(t)\rangle = H(t)\,|\psi(t)\rangle,$$
where $\partial_t = \partial/\partial t$ (see Notation for differentiation § Newton's notation). Insert the expansion of $|\psi(t)\rangle$, use $H(t)|n(t)\rangle = E_n(t)|n(t)\rangle$, differentiate with the product rule, take the inner product with $\langle m(t)|$ and use orthonormality of the eigenstates to obtain

$$\dot{c}_m(t) = -\sum_n c_n(t)\, e^{i(\theta_n(t)-\theta_m(t))}\,\langle m(t)|\dot{n}(t)\rangle.$$
This coupled first-order differential equation is exact and expresses the time-evolution of the coefficients in terms of inner products between the eigenstates and the time-differentiated eigenstates. But it is possible to re-express the inner products for $n \neq m$ in terms of matrix elements of the time-differentiated Hamiltonian $\dot{H}(t)$. To do so, differentiate both sides of the time-independent Schrödinger equation with respect to time using the product rule to get

$$\dot{H}(t)\,|n(t)\rangle + H(t)\,|\dot{n}(t)\rangle = \dot{E}_n(t)\,|n(t)\rangle + E_n(t)\,|\dot{n}(t)\rangle.$$
Again take the inner product with $\langle m(t)|$ for $m \neq n$, and use hermiticity of $H(t)$ and orthonormality, to find

$$\langle m(t)|\dot{n}(t)\rangle = \frac{\langle m(t)|\dot{H}(t)|n(t)\rangle}{E_n(t) - E_m(t)}.$$
Insert this into the differential equation for the coefficients to obtain

$$\dot{c}_m(t) = -c_m(t)\,\langle m(t)|\dot{m}(t)\rangle - \sum_{n \neq m} c_n(t)\, e^{i(\theta_n(t)-\theta_m(t))}\,\frac{\langle m(t)|\dot{H}(t)|n(t)\rangle}{E_n(t) - E_m(t)}.$$
This differential equation describes the time-evolution of the coefficients, but now in terms of matrix elements of $\dot{H}(t)$. To arrive at the adiabatic theorem, neglect the second term on the right-hand side. This is valid if the rate of change of the Hamiltonian is small and there is a finite gap between the energies. This is known as the adiabatic approximation. Under the adiabatic approximation,

$$\dot{c}_m(t) = -c_m(t)\,\langle m(t)|\dot{m}(t)\rangle,$$
which integrates precisely to the adiabatic theorem

$$c_m(t) = c_m(0)\, e^{i\gamma_m(t)},$$
with the phases defined in the statement of the theorem.
The dynamical phase $\theta_n(t)$ is real because it involves an integral over a real energy. To see that the geometric phase is real, too, differentiate the normalization $\langle n(t)|n(t)\rangle = 1$ of the eigenstates and use the product rule to find that

$$0 = \frac{d}{dt}\langle n(t)|n(t)\rangle = \langle \dot{n}(t)|n(t)\rangle + \langle n(t)|\dot{n}(t)\rangle = 2\,\mathrm{Re}\left(\langle n(t)|\dot{n}(t)\rangle\right).$$
Thus, $\langle n(t)|\dot{n}(t)\rangle$ is purely imaginary, so $i\langle n(t)|\dot{n}(t)\rangle$, and thus the geometric phase, are purely real.
Proof with the details of the adiabatic approximation

We are going to formulate the statement of the theorem as follows:
And now we are going to prove the theorem.
Consider the time-dependent Schrödinger equation
with Hamiltonian $H(t)$. We would like to know the relation between an initial state $|\psi(0)\rangle$ and its final state $|\psi(T)\rangle$ at $t = T$ in the adiabatic limit $T \to \infty$.
First redefine time as $\lambda = t/T \in [0,1]$:

$$i\hbar\,\frac{\partial}{\partial\lambda}|\psi(\lambda)\rangle = T\,H(\lambda)\,|\psi(\lambda)\rangle.$$
At every point in time $H(\lambda)$ can be diagonalized, with eigenvalues $E_n(\lambda)$ and eigenvectors $|\psi_n(\lambda)\rangle$. Since the eigenvectors form a complete basis at any time, we can expand $|\psi(\lambda)\rangle$ as:

$$|\psi(\lambda)\rangle = \sum_n c_n(\lambda)\,|\psi_n(\lambda)\rangle\, e^{-iT\theta_n(\lambda)}, \qquad \theta_n(\lambda) = \frac{1}{\hbar}\int_0^\lambda E_n(\lambda')\,d\lambda'.$$
The phase $\theta_n(\lambda)$ is called the dynamic phase factor. By substitution into the Schrödinger equation, another equation for the variation of the coefficients can be obtained:

$$i\hbar\sum_n\left(\dot{c}_n|\psi_n\rangle + c_n|\dot{\psi}_n\rangle - i\,c_n T\dot{\theta}_n|\psi_n\rangle\right)e^{-iT\theta_n} = \sum_n c_n\, T E_n\,|\psi_n\rangle\, e^{-iT\theta_n}.$$
The term $\hbar\dot{\theta}_n$ gives $E_n$, and so the third term of the left side cancels out with the right side, leaving

$$\sum_n \dot{c}_n\,|\psi_n\rangle\, e^{-iT\theta_n} = -\sum_n c_n\,|\dot{\psi}_n\rangle\, e^{-iT\theta_n}.$$
Now taking the inner product with an arbitrary eigenfunction $\langle\psi_m|$, the $\langle\psi_m|\psi_n\rangle$ on the left gives $\delta_{nm}$, which is 1 only for $m = n$ and otherwise vanishes. The remaining part gives

$$\dot{c}_m = -\sum_n c_n\,\langle\psi_m|\dot{\psi}_n\rangle\, e^{iT(\theta_m - \theta_n)}.$$
For $T \to \infty$ the factor $e^{iT(\theta_m - \theta_n)}$ will oscillate faster and faster and intuitively will eventually suppress nearly all terms on the right side. The only exceptions are when $\theta_m - \theta_n$ has a critical point, i.e. $E_m(\lambda) = E_n(\lambda)$. This is trivially true for $m = n$. Since the adiabatic theorem assumes a gap between the eigenenergies at any time, this cannot hold for $m \neq n$. Therefore, only the $m = n$ term will remain in the limit $T \to \infty$.
In order to show this more rigorously we first need to remove the diagonal term $\langle\psi_m|\dot{\psi}_m\rangle$. This can be done by defining

$$b_m(\lambda) = c_m(\lambda)\, e^{\int_0^\lambda \langle\psi_m(\lambda')|\dot{\psi}_m(\lambda')\rangle\, d\lambda'}.$$
This equation can be integrated:
or written in vector notation

$$\vec{b}(\lambda) - \vec{b}(0) = -\int_0^\lambda \hat{K}(T,\lambda')\,\vec{b}(\lambda')\,d\lambda'.$$

Here $\hat{K}(T,\lambda')$ is a matrix whose off-diagonal elements, up to bounded phase factors, are

$$K_{mn}(T,\lambda') = \langle\psi_m(\lambda')|\dot{\psi}_n(\lambda')\rangle\, e^{iT(\theta_m(\lambda') - \theta_n(\lambda'))}, \qquad m \neq n,$$

and whose diagonal elements vanish.
It follows from the Riemann–Lebesgue lemma that $\hat{K}(T,\lambda') \to 0$ as $T \to \infty$. As a last step, take the norm on both sides of the above equation:
and apply Grönwall's inequality to obtain
Since $\hat{K}(T,\lambda') \to 0$, it follows that $\vec{b}(\lambda) - \vec{b}(0) \to 0$ for $T \to \infty$. This concludes the proof of the adiabatic theorem.
In the adiabatic limit the eigenstates of the Hamiltonian evolve independently of each other. If the system is prepared in an eigenstate $|n(0)\rangle$, its time evolution is given by:

$$|\psi(t)\rangle = e^{i\theta_n(t)}\, e^{i\gamma_n(t)}\,|n(t)\rangle.$$
So, for an adiabatic process, a system starting from the nth eigenstate also remains in that nth eigenstate, as it does for time-independent processes, only picking up a couple of phase factors. The new phase factor $\gamma_n(t)$ can be canceled out by an appropriate choice of gauge for the eigenfunctions. However, if the adiabatic evolution is cyclic, then $\gamma_n$ becomes a gauge-invariant physical quantity, known as the Berry phase.
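The Berry phase of a cyclic evolution can be computed numerically as a discretized product of overlaps (a Wilson loop). The sketch below, with all parameter values assumed, transports the standard spin-1/2 eigenstate of a magnetic field once around a cone of opening angle $\theta$; the expected Berry phase is $\gamma = -\pi(1 - \cos\theta)$, minus half the solid angle enclosed by the loop:

```python
import numpy as np

# Sketch (parameter values assumed): Berry phase of a spin-1/2 eigenstate
# as the field direction is carried once around a cone of opening angle
# theta. Expected: gamma = -pi*(1 - cos(theta)) = -(solid angle)/2.
theta = np.pi / 3
N = 4000
phis = np.linspace(0.0, 2.0 * np.pi, N + 1)   # closed azimuthal loop

def state(phi):
    """Spin eigenstate aligned with the field direction (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.sin(theta / 2) * np.exp(1j * phi)])

# Discretized Wilson loop: gamma = -arg(product of successive overlaps).
prod = 1.0 + 0.0j
for k in range(N):
    prod *= np.vdot(state(phis[k]), state(phis[k + 1]))
gamma = -np.angle(prod)
print(gamma, -np.pi * (1 - np.cos(theta)))   # both ≈ -pi/2
```

The product of overlaps is gauge invariant under any smooth single-valued rephasing of the states, which is exactly the property that makes the cyclic Berry phase a physical quantity.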
Let's start from a parametric Hamiltonian $H(\vec{R}(t))$, where the parameters $\vec{R}(t)$ are slowly varying in time; the definition of "slow" here is set essentially by the distance in energy between the eigenstates (through the uncertainty principle, we can define a timescale that shall always be much shorter than the time scale considered).
This way we also make clear that, while slowly varying, the eigenstates remain clearly separated in energy (e.g., when we generalize this to the case of bands, as in the TKNN formula, the bands shall remain clearly separated). Given that they do not intersect, the states are ordered, and in this sense this is also one of the meanings of the name topological order.
We have the instantaneous Schrödinger equation:
And instantaneous eigenstates:
The generic solution:
Plugging into the full Schrödinger equation and multiplying by a generic eigenvector:
And if we introduce the adiabatic approximation:
And C is the path in the parameter space,
This is the same as the statement of the theorem but in terms of the coefficients of the total wave function and its initial state.
Now, this is slightly more general than the other proofs, given that we consider a generic set of parameters, and we see that the Berry phase acts as a local geometric quantity in the parameter space. Finally, integrals of local geometric quantities can give topological invariants, as in the case of the Gauss–Bonnet theorem. In fact, if the path $C$ is closed, then the Berry phase persists under gauge transformation and becomes a physical quantity.
Often a solid crystal is modeled as a set of independent valence electrons moving in a mean, perfectly periodic potential generated by a rigid lattice of ions. With the adiabatic theorem we can instead also include the motion of the valence electrons across the crystal and the thermal motion of the ions, as in the Born–Oppenheimer approximation.
This does explain many phenomena in the scope of:
We will now pursue a more rigorous analysis. Making use of bra–ket notation, the state vector of the system at time $t$ can be written

$$|\psi(t)\rangle = \sum_n c_n^A(t)\, e^{-iE_n t/\hbar}\,|\phi_n\rangle,$$
where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator:

$$\psi(x,t) = \langle x|\psi(t)\rangle.$$
It is instructive to examine the limiting cases, in which $\tau$ is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change).
Consider a system Hamiltonian undergoing continuous change from an initial value $\hat{H}_0$, at time $t_0$, to a final value $\hat{H}_1$, at time $t_1$, where $\tau = t_1 - t_0$. The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation

$$\hat{U}(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t}\hat{H}(t')\,\hat{U}(t',t_0)\,dt',$$
which is equivalent to the Schrödinger equation, along with the initial condition $\hat{U}(t_0,t_0) = 1$. Given knowledge of the system wave function at $t_0$, the evolution of the system up to a later time $t$ can be obtained using

$$|\psi(t)\rangle = \hat{U}(t,t_0)\,|\psi(t_0)\rangle.$$
The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of $\hat{U}(t_1,t_0)$ on $\tau$.
To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and the definition $|0\rangle \equiv |\psi(t_0)\rangle$, we have:

$$\zeta = \langle 0|\hat{U}^\dagger(t_1,t_0)\,\hat{U}(t_1,t_0)|0\rangle - \langle 0|\hat{U}^\dagger(t_1,t_0)|0\rangle\,\langle 0|\hat{U}(t_1,t_0)|0\rangle.$$
We can expand $\hat{U}(t_1,t_0)$:

$$\hat{U}(t_1,t_0) = 1 + \frac{1}{i\hbar}\int_{t_0}^{t_1}\hat{H}(t)\,dt + \frac{1}{(i\hbar)^2}\int_{t_0}^{t_1}dt'\int_{t_0}^{t'}dt''\,\hat{H}(t')\,\hat{H}(t'') + \cdots$$
In the perturbative limit we can take just the first two terms and substitute them into our equation for $\zeta$, recognizing that

$$\bar{H} \equiv \frac{1}{\tau}\int_{t_0}^{t_1}\hat{H}(t)\,dt$$

is the system Hamiltonian, averaged over the interval $t_0 \to t_1$; we have:

$$\zeta = \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar{H}\right)\left(1 - \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle - \langle 0|\left(1 + \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle\,\langle 0|\left(1 - \tfrac{i}{\hbar}\tau\bar{H}\right)|0\rangle.$$
After expanding the products and making the appropriate cancellations, we are left with:

$$\zeta = \frac{\tau^2}{\hbar^2}\left(\langle 0|\bar{H}^2|0\rangle - \langle 0|\bar{H}|0\rangle\langle 0|\bar{H}|0\rangle\right) = \frac{\tau^2\,\Delta\bar{H}^2}{\hbar^2},$$

where $\Delta\bar{H}$ is the root mean square deviation of the system Hamiltonian averaged over the interval of interest.
The sudden approximation is valid when $\zeta \ll 1$ (the probability of finding the system in a state other than that in which it started approaches zero); thus the validity condition is given by

$$\tau \ll \frac{\hbar}{\Delta\bar{H}},$$

which is a statement of the time–energy form of the Heisenberg uncertainty principle.
In the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage:

$$\lim_{\tau \to 0}\hat{U}(t_1,t_0) = 1.$$

The functional form of the system remains unchanged:

$$|\langle x|\psi(t_1)\rangle|^2 = |\langle x|\psi(t_0)\rangle|^2.$$

This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged:

$$P_D = 1 - \zeta.$$
In the limit $\tau \to \infty$ we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions,

$$|\langle x|\psi(t_1)\rangle|^2 \neq |\langle x|\psi(t_0)\rangle|^2.$$

If the system is initially in an eigenstate of $\hat{H}(t_0)$, after a period $\tau$ it will have passed into the corresponding eigenstate of $\hat{H}(t_1)$. This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state:
In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time).
The key figure of merit in this approach is the Landau–Zener velocity:

$$v_{LZ} = \frac{\frac{\partial}{\partial t}|E_2 - E_1|}{\frac{\partial}{\partial q}|E_2 - E_1|} \approx \frac{dq}{dt},$$
where $q$ is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and $E_1$ and $E_2$ are the energies of the two diabatic (crossing) states. A large $v_{LZ}$ results in a large diabatic transition probability and vice versa.
Using the Landau–Zener formula the probability, $P_D$, of a diabatic transition is given by

$$P_D = e^{-2\pi\Gamma}, \qquad \Gamma = \frac{a^2/\hbar}{\left|\frac{\partial}{\partial t}(E_2 - E_1)\right|},$$

where $a$ is the coupling between the two diabatic states.
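The formula can be evaluated directly to confirm the stated trend that a larger slew rate gives a larger diabatic transition probability. This is a sketch with assumed values for the coupling and the sweep rates, in units where $\hbar = 1$:

```python
import numpy as np

# Sketch (hbar = 1, values assumed): Landau-Zener diabatic transition
# probability P_D = exp(-2*pi*Gamma) with Gamma = a^2 / |d(E2 - E1)/dt|.
hbar = 1.0
a = 0.25                               # diabatic coupling

def P_diabatic(slew):
    """P_D for a given energy-gap slew rate |d(E2 - E1)/dt|."""
    Gamma = (a**2 / hbar) / slew
    return np.exp(-2.0 * np.pi * Gamma)

P_slow, P_fast = P_diabatic(0.2), P_diabatic(20.0)
print(P_slow, P_fast)   # slow sweep: mostly adiabatic; fast sweep: mostly diabatic
```

In the slow-sweep limit $\Gamma$ grows and $P_D \to 0$ (adiabatic following), while in the fast-sweep limit $\Gamma \to 0$ and $P_D \to 1$ (diabatic passage), matching the two limiting cases discussed above.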
For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide variety of numerical solution algorithms for ordinary differential equations.
The equations to be solved can be obtained from the time-dependent Schrödinger equation:

$$i\hbar\,\dot{\underline{c}}^A(t) = \mathbf{H}_A(t)\,\underline{c}^A(t),$$

where $\underline{c}^A(t)$ is a vector containing the adiabatic state amplitudes, $\mathbf{H}_A(t)$ is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative.
Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system,

$$P_D = |c_2(t_1)|^2$$

for a system that began with $|c_1(t_0)|^2 = 1$.
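This procedure can be sketched for the linear two-level Landau–Zener model (all parameter values assumed, $\hbar = 1$), integrating the Schrödinger equation in the diabatic basis with a fixed-step fourth-order Runge–Kutta scheme and comparing the surviving diabatic population against the closed-form Landau–Zener result:

```python
import numpy as np

# Sketch (hbar = 1, all values assumed): integrate i*dc/dt = H(t) c for the
# linear Landau-Zener model in the diabatic basis,
#   H(t) = [[alpha*t, a], [a, -alpha*t]],
# starting in diabatic state |1> well before the crossing, and compare the
# surviving population |c1|^2 with the Landau-Zener formula.
alpha, a = 1.0, 0.25

def rhs(t, c):
    H = np.array([[alpha * t, a], [a, -alpha * t]], dtype=complex)
    return -1j * (H @ c)

T, n = 60.0, 120000                    # sweep from t = -T to t = +T
dt = 2.0 * T / n
c = np.array([1.0, 0.0], dtype=complex)
t = -T
for _ in range(n):                     # fixed-step RK4
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c = c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

P_numeric = abs(c[0])**2                         # diabatic survival probability
P_LZ = np.exp(-2 * np.pi * a**2 / (2 * alpha))   # Gamma = a^2 / |d(E2-E1)/dt|
print(P_numeric, P_LZ)   # the two values agree to within a few percent
```

A general-purpose adaptive integrator (e.g. `scipy.integrate.solve_ivp`) would serve equally well here; the fixed-step loop is kept only to keep the sketch dependency-free.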