In quantum physics, Fermi's golden rule is a formula that describes the transition rate (the probability of a transition per unit time) from one energy eigenstate of a quantum system to a group of energy eigenstates in a continuum, as a result of a weak perturbation. This transition rate is effectively independent of time (so long as the strength of the perturbation is independent of time) and is proportional to the strength of the coupling between the initial and final states of the system (described by the square of the matrix element of the perturbation) as well as the density of states. It is also applicable when the final state is discrete, i.e. it is not part of a continuum, if there is some decoherence in the process, like relaxation or collision of the atoms, or like noise in the perturbation, in which case the density of states is replaced by the reciprocal of the decoherence bandwidth.
Although the rule is named after Enrico Fermi, most of the work leading to it is due to Paul Dirac, who twenty years earlier had formulated a virtually identical equation, including its three components: a constant, the matrix element of the perturbation, and an energy difference. [1] [2] It was given this name because, on account of its importance, Fermi called it "golden rule No. 2". [3]
Most uses of the term Fermi's golden rule are referring to "golden rule No. 2", but Fermi's "golden rule No. 1" is of a similar form and considers the probability of indirect transitions per unit time. [4]
Fermi's golden rule describes a system that begins in an eigenstate of an unperturbed Hamiltonian H0 and considers the effect of a perturbing Hamiltonian H' applied to the system. If H' is time-independent, the system goes only into those states in the continuum that have the same energy as the initial state. If H' is oscillating sinusoidally as a function of time (i.e. it is a harmonic perturbation) with an angular frequency ω, the transition is into states with energies that differ by ħω from the energy of the initial state.
In both cases, the transition probability per unit of time from the initial state $|i\rangle$ to a set of final states $|f\rangle$ is essentially constant. It is given, to first-order approximation, by
$$\Gamma_{i \to f} = \frac{2\pi}{\hbar}\,\big|\langle f|H'|i\rangle\big|^2\,\rho(E_f),$$
where $\langle f|H'|i\rangle$ is the matrix element (in bra–ket notation) of the perturbation $H'$ between the final and initial states, and $\rho(E_f)$ is the density of states (number of continuum states divided by $dE$ in the infinitesimally small energy interval $E$ to $E + dE$) at the energy $E_f$ of the final states. This transition probability is also called "decay probability" and is related to the inverse of the mean lifetime. Thus, the probability of finding the system in state $|i\rangle$ is proportional to $e^{-\Gamma_{i\to f}\,t}$.
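As a quick numerical illustration of how the formula is used, the sketch below evaluates $\Gamma$ and the corresponding lifetime for an assumed matrix element and density of states; the values are arbitrary and chosen only to show the units working out, not taken from any particular physical system.

```python
# Minimal sketch of the golden-rule rate Gamma = (2*pi/hbar) * |M|^2 * rho(E_f).
# The matrix element and density of states are made-up illustrative values.
import scipy.constants as const

hbar = const.hbar          # J*s
eV = const.electron_volt   # J per eV

M = 1e-3 * eV              # assumed coupling matrix element |<f|H'|i>|, in joules
rho = 1.0 / eV             # assumed density of final states: one state per eV

gamma = 2 * const.pi / hbar * M**2 * rho   # transition rate, 1/s
lifetime = 1 / gamma                       # mean lifetime tau = 1/Gamma

print(f"transition rate  Gamma = {gamma:.3e} 1/s")
print(f"mean lifetime    tau   = {lifetime:.3e} s")
```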
The standard way to derive the equation is to start with time-dependent perturbation theory and to take the limit for absorption under the assumption that the time of the measurement is much larger than the time needed for the transition. [5] [6]
Derivation in time-dependent perturbation theory
Statement of the problem

The golden rule is a straightforward consequence of the Schrödinger equation, solved to lowest order in the perturbation $H'$ of the Hamiltonian. The total Hamiltonian is the sum of an "original" Hamiltonian $H_0$ and a perturbation: $H = H_0 + H'(t)$. In the interaction picture, we can expand an arbitrary quantum state's time evolution in terms of energy eigenstates of the unperturbed system, $H_0|n\rangle = E_n|n\rangle$, with $\langle m|n\rangle = \delta_{mn}$.

Discrete spectrum of final states

We first consider the case where the final states are discrete. The expansion of a state in the perturbed system at a time $t$ is
$$|\psi(t)\rangle = \sum_n a_n(t)\, e^{-iE_n t/\hbar}\, |n\rangle.$$
The coefficients $a_n(t)$ are yet unknown functions of time yielding the probability amplitudes in the Dirac picture. This state obeys the time-dependent Schrödinger equation:
$$i\hbar\,\frac{\partial}{\partial t}|\psi(t)\rangle = \left(H_0 + H'\right)|\psi(t)\rangle.$$
Expanding the Hamiltonian and the state, we see that, to first order,
$$i\hbar \sum_n \dot a_n(t)\, e^{-iE_n t/\hbar}\, |n\rangle = \sum_n a_n(t)\, e^{-iE_n t/\hbar}\, H'\,|n\rangle,$$
where $E_n$ and $|n\rangle$ are the stationary eigenvalues and eigenfunctions of $H_0$. This equation can be rewritten as a system of differential equations specifying the time evolution of the coefficients $a_k(t)$:
$$i\hbar\, \dot a_k(t) = \sum_n \langle k|H'|n\rangle\, a_n(t)\, e^{-i(E_n - E_k)t/\hbar}.$$
This equation is exact, but normally cannot be solved in practice.

For a weak constant perturbation $H'$ that turns on at $t = 0$, we can use perturbation theory. Namely, if $H' = 0$, it is evident that $a_k(t) = a_k(0)$, which simply says that the system stays in the initial state $|i\rangle$. For states $k \neq i$, $a_k(t)$ becomes non-zero due to $H'$, and these amplitudes are assumed to be small because the perturbation is weak. The coefficient $a_i(t)$, which is unity in the unperturbed state, will have only a weak contribution from $H'$. Hence, one can plug the zeroth-order form $a_n(t) \approx \delta_{ni}$ into the above equation to get the first correction for the amplitudes $a_k(t)$:
$$i\hbar\, \dot a_k(t) = \langle k|H'|i\rangle\, e^{-i(E_i - E_k)t/\hbar},$$
whose integral can be expressed as
$$a_k(t) = \langle k|H'|i\rangle\, \frac{1 - e^{i\omega_{ki} t}}{\hbar\,\omega_{ki}}, \qquad \omega_{ki} = \frac{E_k - E_i}{\hbar},$$
for a state with $a_i(0) = 1$, $a_k(0) = 0$, transitioning to a state with amplitude $a_k(t)$. The probability of transition from the initial state ($i$th) to the final state ($f$th) is given by
$$P_{i\to f}(t) = |a_f(t)|^2 = \frac{4\,|\langle f|H'|i\rangle|^2}{\hbar^2\,\omega_{fi}^2}\, \sin^2\!\left(\frac{\omega_{fi}\, t}{2}\right).$$

It is important to study a periodic perturbation with a given frequency, since arbitrary perturbations can be constructed from periodic perturbations of different frequencies. Since $H'(t)$ must be Hermitian, we must assume
$$H'(t) = F\, e^{-i\omega t} + F^\dagger e^{i\omega t},$$
where $F$ is a time-independent operator. The solution for this case is [7]
$$a_k(t) = -\frac{F_{ki}}{\hbar}\,\frac{e^{i(\omega_{ki} - \omega)t} - 1}{\omega_{ki} - \omega} - \frac{F^\dagger_{ki}}{\hbar}\,\frac{e^{i(\omega_{ki} + \omega)t} - 1}{\omega_{ki} + \omega}.$$
This expression is valid only when the denominators in the above expression are non-zero, i.e., for a given initial state with energy $E_i$, the final-state energy $E_k$ must be such that $E_k \neq E_i \pm \hbar\omega$. Not only must the denominators be non-zero, they must also not be small, since $a_k(t)$ is supposed to be small.

Consider now the case where the perturbation frequency is such that
$$\omega_{ki} - \omega = \varepsilon,$$
where $\varepsilon$ is a small quantity. Unlike the previous case, not all terms in the sum over $n$ in the above exact equation for $a_k$ matter: $a_k$ depends only on $a_i$ and vice versa. Thus, omitting all other terms, we can write
$$i\hbar\, \dot a_k = F_{ki}\, e^{i\varepsilon t}\, a_i, \qquad i\hbar\, \dot a_i = F^*_{ki}\, e^{-i\varepsilon t}\, a_k.$$
The two independent solutions are
$$a_k(t) = e^{i\varepsilon t/2}\left(A\, e^{i\Omega t} + B\, e^{-i\Omega t}\right), \qquad \Omega = \sqrt{\frac{\varepsilon^2}{4} + \frac{|F_{ki}|^2}{\hbar^2}},$$
with a corresponding expression for $a_i$, where the constants $A$ and $B$ are fixed by the normalization condition. If the system at $t = 0$ is in the state $|i\rangle$, then the probability of finding the system in the state $|k\rangle$ is given by
$$|a_k(t)|^2 = \frac{|F_{ki}|^2}{\hbar^2\,\Omega^2}\, \sin^2(\Omega t),$$
which is a periodic function of time (with period $\pi/\Omega$); this function varies between $0$ and $|F_{ki}|^2/(\hbar^2\Omega^2)$. At the exact resonance, i.e., $\varepsilon = 0$, the above formula reduces to
$$|a_k(t)|^2 = \sin^2\!\left(\frac{|F_{ki}|\, t}{\hbar}\right),$$
which varies periodically between $0$ and $1$; that is to say, the system periodically switches from one state to the other. The situation is different if the final states are in the continuous spectrum.
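The near-resonance two-level result quoted above can be checked numerically. The sketch below (with $\hbar = 1$ and arbitrary, made-up values of the coupling $F$ and detuning $\varepsilon$) integrates the two coupled amplitude equations and compares $|a_k(t)|^2$ with the Rabi-type formula $(|F|^2/\hbar^2\Omega^2)\sin^2(\Omega t)$; the printed deviation should be at the level of the integrator tolerance.

```python
# Numerical check of the two-level (discrete final state) result:
# integrate  i da_i/dt = conj(F) e^{-i eps t} a_k,  i da_k/dt = F e^{i eps t} a_i
# (hbar = 1) and compare |a_k(t)|^2 with (F^2/Omega^2) sin^2(Omega t).
import numpy as np
from scipy.integrate import solve_ivp

F = 0.3      # coupling matrix element F_ki (illustrative)
eps = 0.4    # detuning epsilon = omega_ki - omega (illustrative)

def rhs(t, y):
    ai, ak = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dai = -1j * np.conj(F) * np.exp(-1j * eps * t) * ak
    dak = -1j * F * np.exp(1j * eps * t) * ai
    return [dai.real, dai.imag, dak.real, dak.imag]

t = np.linspace(0, 40, 400)
sol = solve_ivp(rhs, (0, 40), [1, 0, 0, 0], t_eval=t, rtol=1e-9, atol=1e-12)
p_numeric = sol.y[2]**2 + sol.y[3]**2

Omega = np.sqrt(eps**2 / 4 + F**2)
p_rabi = (F**2 / Omega**2) * np.sin(Omega * t)**2

print("max |numeric - Rabi formula| =", np.max(np.abs(p_numeric - p_rabi)))
```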
Continuous spectrum of final states

Since the continuous spectrum lies above the discrete spectrum, and as is clear from the previous section, the major role is played by the energies lying near the resonance energy $E_i + \hbar\omega$, i.e., by final states with $\omega_{ki} - \omega \approx 0$. In this case, it is sufficient to keep only the first term in the expression for $a_k(t)$. Assuming that the perturbation is turned on at time $t = 0$, we then have
$$a_k(t) = -\frac{F_{ki}}{\hbar}\,\frac{e^{i(\omega_{ki} - \omega)t} - 1}{\omega_{ki} - \omega}.$$
The squared modulus of $a_k(t)$ is
$$|a_k(t)|^2 = \frac{4\,|F_{ki}|^2}{\hbar^2}\,\frac{\sin^2\!\big((\omega_{ki} - \omega)t/2\big)}{(\omega_{ki} - \omega)^2}.$$
Therefore, the transition probability per unit time, for large $t$, is given by
$$\Gamma_{i\to k} = \frac{2\pi}{\hbar}\,|F_{ki}|^2\,\delta(E_k - E_i - \hbar\omega).$$
Note that the delta function in the expression above arises from the following argument. The time derivative of $|a_k(t)|^2$ is
$$\frac{d}{dt}|a_k(t)|^2 = \frac{2\,|F_{ki}|^2}{\hbar^2}\,\frac{\sin\!\big((\omega_{ki} - \omega)t\big)}{\omega_{ki} - \omega},$$
which behaves like $\frac{2\pi}{\hbar^2}|F_{ki}|^2\,\delta(\omega_{ki} - \omega) = \frac{2\pi}{\hbar}|F_{ki}|^2\,\delta(E_k - E_i - \hbar\omega)$ at large $t$ (for more information, please see Sinc function § Relationship to the Dirac delta distribution). The constant decay rate of the golden rule follows; summing over the continuum of final states with density of states $\rho(E_f)$ gives the rate quoted in the introduction. [8] As a constant, it underlies the exponential particle decay laws of radioactivity. (For excessively long times, however, the secular growth of the $a_k(t)$ terms invalidates lowest-order perturbation theory, which requires $a_k \ll a_i$.)
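To see numerically how the constant rate emerges, one can sum the first-order probability $4|V|^2\sin^2\!\big((E_k - E_i)t/2\hbar\big)/(E_k - E_i)^2$ over a dense, evenly spaced band of final states and watch the total grow linearly in $t$ with slope $\Gamma = (2\pi/\hbar)|V|^2\rho$. A minimal sketch, assuming a flat band and a constant coupling $V$ (all values illustrative, $\hbar = 1$):

```python
# Sum the first-order transition probability over a discretized continuum and
# compare its growth with Gamma * t, Gamma = (2*pi/hbar) * V^2 * rho.  hbar = 1.
import numpy as np

hbar = 1.0
V = 0.005                              # constant coupling to every final state
dE = 0.01                              # level spacing -> density of states rho = 1/dE
rho = 1.0 / dE
E_final = np.arange(-10.0, 10.0, dE)   # quasi-continuum of final-state energies
E_i = 0.0                              # initial-state energy

gamma = 2 * np.pi / hbar * V**2 * rho  # golden-rule rate
x = (E_final - E_i) / hbar

for t in (5.0, 10.0, 20.0, 40.0):
    # 4 V^2 sin^2(x t / 2) / (hbar x)^2, written with np.sinc to handle x = 0
    p_total = np.sum(V**2 * t**2 * np.sinc(x * t / (2 * np.pi))**2)
    print(f"t = {t:5.1f}   summed P(t) = {p_total:.4f}   Gamma*t = {gamma * t:.4f}")
```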
Only the magnitude of the matrix element enters Fermi's golden rule. The phase of this matrix element, however, contains separate information about the transition process. It appears in expressions that complement the golden rule in the semiclassical Boltzmann equation approach to electron transport. [9]
While the golden rule is commonly stated and derived in the terms above, the final-state (continuum) wave function is often rather vaguely described, and not normalized correctly (and the normalization is used in the derivation). The problem is that in order to produce a continuum there can be no spatial confinement (which would necessarily discretize the spectrum), and therefore the continuum wave functions must have infinite extent, which in turn means that their normalization is infinite, not unity. If the interactions depend on the energy of the continuum state, but not on any other quantum numbers, it is usual to normalize continuum wave functions with energy label $E$ by writing
$$\langle E'|E\rangle = \delta(E - E'),$$
where $\delta$ is the Dirac delta function, so that effectively a factor of the square root of the density of states is included into $|E\rangle$. [10] In this case, the continuum wave function has dimensions of $[\text{energy}]^{-1/2}$, and the golden rule is now
$$\Gamma_{i\to f} = \frac{2\pi}{\hbar}\,\big|\langle E_f|H'|i\rangle\big|^2, \qquad E_f = E_i,$$
where $|E_f\rangle$ refers to the continuum state with the same energy as the discrete state $|i\rangle$. For example, correctly normalized continuum wave functions for the case of a free electron in the vicinity of a hydrogen atom are available in Bethe and Salpeter. [11]
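As a concrete, standard textbook illustration of this normalization convention (not taken from the references above), consider right-moving free-particle states in one dimension:

```latex
% 1D free-particle example of the <E'|E> = delta(E - E') convention.
% Plane waves normalized on the wavevector k:
%   psi_k(x) = e^{ikx}/sqrt(2 pi),   <k'|k> = delta(k - k'),   E = hbar^2 k^2 / 2m.
% Rescaling by |dk/dE|^{1/2} gives the energy-normalized states:
\[
  \psi_E(x) = \sqrt{\frac{m}{2\pi\hbar^2 k}}\; e^{ikx},
  \qquad
  \langle E'|E\rangle
  = \left|\frac{dk}{dE}\right|\,\delta(k - k')
  = \delta(E - E'),
\]
% so the continuum state absorbs a factor |dk/dE|^{1/2}, the square root of the
% density of states in k per unit energy, and carries a dimension of
% [energy]^{-1/2} (times [length]^{-1/2} from the plane-wave factor).
```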
The following paraphrases the treatment of Cohen-Tannoudji. [10] As before, the total Hamiltonian is the sum of an "original" Hamiltonian $H_0$ and a perturbation: $H = H_0 + H'(t)$. We can still expand an arbitrary quantum state's time evolution in terms of energy eigenstates of the unperturbed system, but these now consist of discrete states and continuum states. We assume that the interactions depend on the energy of the continuum state, but not on any other quantum numbers. The expansion in the relevant states in the Dirac picture is
$$|\psi(t)\rangle = a_i(t)\, e^{-iE_i t/\hbar}\, |i\rangle + \int dE_c\; a_c(t)\, e^{-iE_c t/\hbar}\, |c\rangle,$$
where $E_i$ and $E_c$ are the energies of the states $|i\rangle$ and $|c\rangle$, respectively. The integral is over the continuum of energies $E_c$, i.e. $|c\rangle$ is in the continuum.
Substituting into the time-dependent Schrödinger equation and premultiplying by $\langle i|$ produces
$$i\hbar\, \dot a_i(t) = \int dE_c\; H'_{ic}\, a_c(t)\, e^{-i(E_c - E_i)t/\hbar},$$
where $H'_{ic} = \langle i|H'|c\rangle$, and premultiplying by $\langle c|$ produces
$$i\hbar\, \dot a_c(t) = H'_{ci}\, a_i(t)\, e^{-i(E_i - E_c)t/\hbar}.$$
We made use of the normalization $\langle c'|c\rangle = \delta(E_c - E_{c'})$. Integrating the latter and substituting into the former gives
$$\dot a_i(t) = -\frac{1}{\hbar^2} \int dE_c\; |H'_{ci}|^2 \int_0^t dt'\; a_i(t')\, e^{i\omega_{ic}(t - t')}.$$
It can be seen here that $a_i$ at time $t$ depends on $a_i$ at all earlier times $t' < t$, i.e. it is non-Markovian. We make the Markov approximation, i.e. that it only depends on $a_i$ at time $t$ (which is less restrictive than the approximation $a_i \approx 1$ used above, and allows the perturbation to be strong):
$$\dot a_i(t) = -\frac{1}{\hbar^2} \int dE_c\; |H'_{ci}|^2\, a_i(t) \int_0^t dt'\; e^{i\omega_{ic}(t - t')},$$
where $\omega_{ic} = (E_i - E_c)/\hbar$ and $H'_{ci} = \langle c|H'|i\rangle$. Integrating over $t'$,
$$\dot a_i(t) = -\frac{1}{\hbar^2} \int dE_c\; |H'_{ci}|^2\, a_i(t)\, \frac{\sin(\omega_{ic} t) + i\big(1 - \cos(\omega_{ic} t)\big)}{\omega_{ic}}.$$
The fraction on the right is a nascent Dirac delta function, meaning it tends to $\pi\,\delta(\omega_{ic})$ as $t \to \infty$ (ignoring its imaginary part, which leads to a very small energy (Lamb) shift, while the real part produces decay [10]). Finally,
$$\dot a_i(t) = -\frac{\pi}{\hbar}\,\big|H'_{ci}\big|^2_{E_c = E_i}\, a_i(t) \equiv -\frac{\Gamma}{2}\, a_i(t),$$
which has the solution $a_i(t) = e^{-\Gamma t/2}$, i.e., the decay of population in the initial discrete state is
$$|a_i(t)|^2 = e^{-\Gamma t},$$
where
$$\Gamma = \frac{2\pi}{\hbar}\,\big|\langle c|H'|i\rangle\big|^2_{E_c = E_i}.$$
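The exponential decay derived here can also be reproduced without perturbation theory by exactly diagonalizing a discrete state coupled to a dense band of levels (a discretized continuum). The sketch below uses $\hbar = 1$ and illustrative parameters, and compares the exact survival probability $|a_i(t)|^2$ with $e^{-\Gamma t}$, $\Gamma = (2\pi/\hbar)V^2\rho$.

```python
# One discrete state coupled with constant strength V to a dense, flat band of
# levels.  Exact diagonalization vs. the golden-rule exponential decay.  hbar = 1.
import numpy as np

hbar = 1.0
dE = 0.01                          # level spacing of the discretized continuum
E_band = np.arange(-10.0, 10.0, dE)
N = len(E_band)
V = 0.02                           # coupling of the discrete state to every level

# Hamiltonian: index 0 is the discrete state (energy 0), indices 1..N the band.
H = np.zeros((N + 1, N + 1))
H[0, 1:] = V
H[1:, 0] = V
H[np.arange(1, N + 1), np.arange(1, N + 1)] = E_band

gamma = 2 * np.pi / hbar * V**2 / dE      # golden-rule decay rate

vals, vecs = np.linalg.eigh(H)
c0 = vecs[0, :]                           # overlaps <i|n> with the initial state
for t in (2.0, 5.0, 10.0, 15.0):
    a_i = np.sum(np.abs(c0)**2 * np.exp(-1j * vals * t / hbar))
    print(f"t = {t:5.1f}   |a_i|^2 = {abs(a_i)**2:.4f}   "
          f"exp(-Gamma t) = {np.exp(-gamma * t):.4f}")
```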
Fermi's golden rule can be used for calculating the transition probability rate for an electron that is excited by a photon from the valence band to the conduction band in a direct band-gap semiconductor, and also for when the electron recombines with the hole and emits a photon. [12] Consider a photon of frequency $\omega$ and wavevector $\mathbf{q}$, where the light dispersion relation is $|\mathbf{q}| = \omega n/c$ and $n$ is the index of refraction.
Using the Coulomb gauge, where $\nabla\cdot\mathbf{A} = 0$ and $\phi = 0$, the vector potential of the light is given by
$$\mathbf{A}(\mathbf{r}, t) = A_0\, \hat{\mathbf{e}}\, e^{i(\mathbf{q}\cdot\mathbf{r} - \omega t)} + \text{c.c.},$$
with amplitude $A_0$ and polarization unit vector $\hat{\mathbf{e}}$, where the resulting electric field is
$$\mathbf{E}(\mathbf{r}, t) = -\frac{\partial \mathbf{A}}{\partial t} = i\omega A_0\, \hat{\mathbf{e}}\, e^{i(\mathbf{q}\cdot\mathbf{r} - \omega t)} + \text{c.c.}$$
For an electron in the valence band, the Hamiltonian is
$$H = \frac{\big(\mathbf{p} + e\mathbf{A}\big)^2}{2m} + V(\mathbf{r}),$$
where $V(\mathbf{r})$ is the potential of the crystal, $e$ and $m$ are the charge and mass of an electron, and $\mathbf{p}$ is the momentum operator. Here we consider processes involving one photon and first order in $\mathbf{A}$, so the term quadratic in $\mathbf{A}$ is dropped. The resulting Hamiltonian is
$$H = H_0 + H'(t), \qquad H_0 = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r}), \qquad H'(t) = \frac{e}{m}\,\mathbf{A}\cdot\mathbf{p},$$
where $H'$ is the perturbation due to the light.
From here on we consider vertical optical dipole transitions, and thus have the transition probability, based on time-dependent perturbation theory,
$$\Gamma_{i\to f} = \frac{2\pi}{\hbar}\,\big|\langle \psi_f|H'|\psi_i\rangle\big|^2\,\delta(E_f - E_i - \hbar\omega),$$
with
$$H' = \frac{e A_0}{m}\,\hat{\mathbf{e}}\cdot\mathbf{p},$$
where $\hat{\mathbf{e}}$ is the light polarization vector, and $\psi_i$ and $\psi_f$ are the Bloch wavefunctions of the initial and final states. Here the transition probability needs to satisfy the energy conservation given by $E_f = E_i + \hbar\omega$. From perturbation theory it is evident that the heart of the calculation lies in the matrix element $\langle \psi_f|\hat{\mathbf{e}}\cdot\mathbf{p}|\psi_i\rangle$.
For the initial and final states in the valence and conduction bands, we have $|\psi_i\rangle = |v,\mathbf{k}\rangle$ and $|\psi_f\rangle = |c,\mathbf{k}\rangle$, respectively, and if the operator $\hat{\mathbf{e}}\cdot\mathbf{p}$ does not act on the spin, the electron stays in the same spin state; hence we can write the Bloch wavefunctions of the initial and final states as
$$\psi_{v,\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{N}}\, u_{v,\mathbf{k}}(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad \psi_{c,\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{N}}\, u_{c,\mathbf{k}}(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}},$$
where $N$ is the number of unit cells, each of volume $\Omega$. Calculating the matrix element with these wavefunctions, and focusing on emission (photoluminescence) rather than absorption, we are led to the transition rate
$$\Gamma_{c\to v} = \frac{2\pi}{\hbar}\left(\frac{e A_0}{m}\right)^2 \big|\hat{\mathbf{e}}\cdot\mathbf{p}_{cv}\big|^2\,\delta\big(E_c(\mathbf{k}) - E_v(\mathbf{k}) - \hbar\omega\big),$$
where $\mathbf{p}_{cv}$, defined as the optical transition dipole moment, is qualitatively the expectation value $\langle c|\mathbf{p}|v\rangle$ and in this situation takes the form of an integral over a unit cell,
$$\mathbf{p}_{cv} = \int_{\Omega} u^*_{c,\mathbf{k}}(\mathbf{r})\, \mathbf{p}\, u_{v,\mathbf{k}}(\mathbf{r})\, d^3r.$$
Finally, we want to know the total transition rate. Hence we need to sum over all possible initial and final states that can satisfy the energy conservation (i.e. an integral over the Brillouin zone in $\mathbf{k}$-space), and take into account spin degeneracy, which after calculation results in
$$R(\hbar\omega) = \frac{2\pi}{\hbar}\left(\frac{e A_0}{m}\right)^2 \big|\hat{\mathbf{e}}\cdot\mathbf{p}_{cv}\big|^2\,\rho_{cv}(\hbar\omega),$$
where $\rho_{cv}(\hbar\omega)$ is the joint valence–conduction density of states (i.e. the density of pairs of states: one occupied valence state, one empty conduction state). In 3D, this is
$$\rho_{cv}(\hbar\omega) = \frac{1}{2\pi^2}\left(\frac{2\mu}{\hbar^2}\right)^{3/2}\sqrt{\hbar\omega - E_g},$$
where $E_g$ is the band gap and $\mu$ is the reduced effective mass of the electron–hole pair, but the joint DOS is different for 2D, 1D, and 0D.
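For orientation, the sketch below evaluates the 3D joint density of states quoted above for illustrative, GaAs-like parameters; the band gap and effective masses are round numbers used only for the example, not authoritative material data.

```python
# Joint density of states rho_cv(h_nu) = (1/(2 pi^2)) (2 mu / hbar^2)^(3/2)
# * sqrt(h_nu - E_g) for a parabolic two-band model, with mu the reduced
# electron-hole effective mass.  Parameters are illustrative (GaAs-like).
import numpy as np
import scipy.constants as const

hbar, m0, eV = const.hbar, const.m_e, const.electron_volt

E_g = 1.42 * eV                  # band gap
m_e = 0.067 * m0                 # electron effective mass
m_h = 0.45 * m0                  # (heavy-)hole effective mass
mu = m_e * m_h / (m_e + m_h)     # reduced effective mass

def joint_dos(E_photon):
    """Joint DOS in states per joule per m^3; zero below the gap."""
    if E_photon <= E_g:
        return 0.0
    return (1 / (2 * np.pi**2)) * (2 * mu / hbar**2)**1.5 * np.sqrt(E_photon - E_g)

for hv in (1.40, 1.45, 1.50, 1.60):          # photon energies in eV
    rho = joint_dos(hv * eV)
    # convert to the more familiar states / (eV cm^3)
    print(f"h*nu = {hv:.2f} eV   rho_cv = {rho * eV * 1e-6:.3e} states/(eV cm^3)")
```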
We note that, in a general way, we can express Fermi's golden rule for semiconductors as [13]
$$\Gamma = \frac{2\pi}{\hbar} \sum_{\mathbf{k}} \big|\langle c,\mathbf{k}|H'|v,\mathbf{k}\rangle\big|^2\,\big(f_v(\mathbf{k}) - f_c(\mathbf{k})\big)\,\delta\big(E_c(\mathbf{k}) - E_v(\mathbf{k}) - \hbar\omega\big).$$
In the same manner, the stationary DC photocurrent, with amplitude proportional to the square of the field of the light, is
$$j \propto e\tau \sum_{\mathbf{k}} \big(v_c(\mathbf{k}) - v_v(\mathbf{k})\big)\big(f_v(\mathbf{k}) - f_c(\mathbf{k})\big)\,\big|\langle c,\mathbf{k}|H'|v,\mathbf{k}\rangle\big|^2\,\delta\big(E_c(\mathbf{k}) - E_v(\mathbf{k}) - \hbar\omega\big),$$
where $\tau$ is the relaxation time, and $v_c - v_v$ and $f_v - f_c$ are the differences of the group velocity and Fermi–Dirac distribution between the possible initial and final states. Here $\langle c,\mathbf{k}|H'|v,\mathbf{k}\rangle$ defines the optical transition dipole. Due to the commutation relation between position and the Hamiltonian, we can also rewrite the transition dipole and the photocurrent in terms of the position operator matrix element using
$$\langle c,\mathbf{k}|\mathbf{p}|v,\mathbf{k}\rangle = \frac{i m}{\hbar}\,\big(E_c(\mathbf{k}) - E_v(\mathbf{k})\big)\,\langle c,\mathbf{k}|\mathbf{r}|v,\mathbf{k}\rangle.$$
This effect can only exist in systems with broken inversion symmetry, and nonzero components of the photocurrent can be obtained by symmetry arguments.
In a scanning tunneling microscope, Fermi's golden rule is used in deriving the tunneling current. It takes the form
$$w = \frac{2\pi}{\hbar}\,|M|^2\,\delta(E_\psi - E_\chi),$$
where $M$ is the tunneling matrix element between a state $\psi$ of the sample and a state $\chi$ of the tip, with energies $E_\psi$ and $E_\chi$.
When considering energy level transitions between two discrete states, Fermi's golden rule is written as
$$\Gamma_{1\to 2} = \frac{2\pi}{\hbar}\,\big|\langle 2|H'|1\rangle\big|^2\,\rho_\nu(\hbar\omega),$$
where $\rho_\nu(\hbar\omega)$ is the density of photon states at a given energy, $\hbar\omega$ is the photon energy, and $\omega$ is the angular frequency. This alternative expression relies on the fact that there is a continuum of final (photon) states, i.e. the range of allowed photon energies is continuous. [14]
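As a small sketch of the ingredients in this expression: for emission into free space, the density of photon states per unit energy in a quantization volume $V$ takes the standard form $\rho_\nu(\hbar\omega) = V\omega^2/(\pi^2 c^3 \hbar)$ (counting both polarizations). The volume and wavelength below are illustrative choices only.

```python
# Free-space photon density of states per unit energy,
# rho(h_omega) = V * omega^2 / (pi^2 * c^3 * hbar), both polarizations.
import numpy as np
import scipy.constants as const

hbar, c = const.hbar, const.c

V = 1e-6                     # quantization volume: 1 cm^3, in m^3 (illustrative)
wavelength = 500e-9          # a visible-light photon, 500 nm (illustrative)
omega = 2 * np.pi * c / wavelength

rho = V * omega**2 / (np.pi**2 * c**3 * hbar)   # states per joule
print(f"photon density of states: {rho:.3e} states/J  "
      f"= {rho * const.electron_volt:.3e} states/eV")
```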
Fermi's golden rule predicts that the probability that an excited state will decay depends on the density of states. This can be seen experimentally by measuring the decay rate of a dipole near a mirror: as the presence of the mirror creates regions of higher and lower density of states, the measured decay rate depends on the distance between the mirror and the dipole. [15] [16]
See also

- Hamiltonian (quantum mechanics)
- Uncertainty principle
- Quantum harmonic oscillator
- Fermi gas
- Path integral formulation
- Drude model
- Propagator (quantum mechanics)
- Lamb shift
- Two-state quantum system
- Neutral particle oscillation
- Einstein coefficients
- Autler–Townes effect
- Defect states in crystals
- Free carrier absorption
- Lindhard theory
- Electron–longitudinal acoustic phonon interaction
- Kicked rotator
- Berry connection and curvature
- Matsubara frequency summation
- Heat transfer physics