In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in simulations the energy might fluctuate on a short time scale and increase or decrease on a very long time scale due to numerical integration artifacts that arise with the use of a finite time step Δt. This is somewhat similar to the flying ice cube problem, whereby numerical errors in handling equipartition of energy can change vibrational energy into translational energy.
More specifically, the energy tends to increase exponentially; the increase can be understood intuitively because each step introduces a small perturbation δv to the true velocity v_true, which, if uncorrelated with v_true (as is the case for simple integration methods), results in a second-order increase in the energy

E = Σ ½m(v_true + δv)² = Σ ½m v_true² + Σ m v_true·δv + Σ ½m δv².

(The cross term in v_true·δv averages to zero because of the lack of correlation, so each step adds a positive second-order contribution Σ ½m δv².)
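The effect can be illustrated with a quick numerical sketch (arbitrary, made-up parameters, not taken from the references above): adding a small random perturbation that is uncorrelated with the velocities raises the kinetic energy by roughly Σ ½m δv² on average.

```python
import numpy as np

# Illustrative parameters only: kinetic energy of N unit-mass particles before
# and after a small random velocity perturbation that is uncorrelated with v.
rng = np.random.default_rng(0)
N = 100_000
v = rng.normal(0.0, 1.0, N)       # "true" velocities v_true
dv = rng.normal(0.0, 0.01, N)     # small uncorrelated perturbation δv

E_before = 0.5 * np.sum(v ** 2)
E_after = 0.5 * np.sum((v + dv) ** 2)

# The cross term sum(v_true·δv) averages to zero, so the increase ≈ 0.5 * sum(δv²).
print(E_after - E_before, 0.5 * np.sum(dv ** 2))
```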
Energy drift, usually in the form of damping, is substantial for numerical integration schemes that are not symplectic, such as the Runge-Kutta family. Symplectic integrators usually used in molecular dynamics, such as the Verlet integrator family, exhibit increases in energy over very long time scales, though the error remains roughly constant. These integrators do not in fact reproduce the actual Hamiltonian mechanics of the system; instead, they reproduce a closely related "shadow" Hamiltonian whose value they conserve many orders of magnitude more closely. [1] [2] The accuracy of the energy conservation for the true Hamiltonian depends on the time step. [3] [4] The energy computed from the modified Hamiltonian of a symplectic integrator is O(Δt^p) from the true Hamiltonian, where Δt is the time step and p is the order of the integrator.
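The difference in behaviour can be seen in a minimal sketch (not drawn from the cited references): integrating a unit-mass, unit-frequency harmonic oscillator with a non-symplectic method (explicit Euler) versus a symplectic one (velocity Verlet) and comparing the final total energy.

```python
# Energy behaviour of a non-symplectic vs. a symplectic integrator for the
# harmonic oscillator with H = v^2/2 + x^2/2 (exact energy stays at 0.5).
def energy(x, v):
    return 0.5 * v ** 2 + 0.5 * x ** 2

def euler_step(x, v, dt):
    # explicit (forward) Euler: not symplectic, energy grows every step
    return x + dt * v, v - dt * x

def velocity_verlet_step(x, v, dt):
    # velocity Verlet: symplectic, energy error stays bounded
    a = -x
    x_new = x + dt * v + 0.5 * dt ** 2 * a
    v_new = v + 0.5 * dt * (a - x_new)
    return x_new, v_new

dt, n_steps = 0.05, 20_000
for name, step in [("Euler", euler_step), ("Verlet", velocity_verlet_step)]:
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        x, v = step(x, v, dt)
    print(name, energy(x, v))   # Euler drifts far above 0.5; Verlet stays near it
```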
Energy drift is similar to parametric resonance in that a finite, discrete timestepping scheme will result in nonphysical, limited sampling of motions with frequencies close to the frequency of velocity updates. Thus the restriction on the maximum step size that will be stable for a given system is proportional to the period of the fastest fundamental modes of the system's motion. For a motion with a natural frequency ω, artificial resonances are introduced when the frequency of velocity updates, 2π/Δt, is related to ω as

n ω = m (2π/Δt),
where n and m are integers describing the resonance order. For Verlet integration, resonances up to the fourth order frequently lead to numerical instability, leading to a restriction on the timestep size of

Δt < √2/ω ≈ 0.225 p,
where ω is the frequency of the fastest motion in the system and p is its period. [5] The fastest motions in most biomolecular systems involve the motions of hydrogen atoms; it is thus common to use constraint algorithms to restrict hydrogen motion and thus increase the maximum stable time step that can be used in the simulation. However, because the time scales of heavy-atom motions are not widely divergent from those of hydrogen motions, in practice this allows only about a twofold increase in time step. Common practice in all-atom biomolecular simulation is to use a time step of 1 femtosecond (fs) for unconstrained simulations and 2 fs for constrained simulations, although larger time steps may be possible for certain systems or choices of parameters.
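As a rough worked example (the 10 fs hydrogen-stretch period is an assumed round number, not a value from the text), the fourth-order resonance bound above translates into a maximum stable time step of roughly 2 fs, consistent with the common practice described here.

```python
import math

# Illustrative numbers only: stability-limited time step from the fastest
# vibrational period using the bound dt < sqrt(2)/omega ≈ 0.225 * p.
period_fs = 10.0                       # assumed hydrogen-stretch period, ~10 fs
omega = 2.0 * math.pi / period_fs      # angular frequency in rad/fs
dt_max = math.sqrt(2.0) / omega        # ≈ 0.225 * period_fs ≈ 2.3 fs

print(f"max stable dt ≈ {dt_max:.2f} fs")   # in practice 1-2 fs is used, with margin
```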
Energy drift can also result from imperfections in evaluating the energy function, usually due to simulation parameters that sacrifice accuracy for computational speed. For example, cutoff schemes for evaluating the electrostatic forces introduce systematic errors in the energy with each time step as particles move back and forth across the cutoff radius if sufficient smoothing is not used. Particle mesh Ewald summation is one solution for this effect, but introduces artifacts of its own. Errors in the system being simulated can also induce energy drifts characterized as "explosive" that are not artifacts, but are reflective of the instability of the initial conditions; this may occur when the system has not been subjected to sufficient structural minimization before beginning production dynamics. In practice, energy drift may be measured as a percent increase over time, or as a time needed to add a given amount of energy to the system.
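One way to quantify drift in practice, sketched below with assumed units and hypothetical array names, is to fit a straight line to the recorded total-energy time series and report the slope both as an absolute rate and as a percentage of the initial energy.

```python
import numpy as np

def drift_rate(times_ns, energies):
    # Least-squares line through the energy time series; the slope is the drift rate.
    slope, intercept = np.polyfit(times_ns, energies, 1)   # energy units per ns
    percent_per_ns = 100.0 * slope / energies[0]
    return slope, percent_per_ns

# Synthetic example: ~1% upward drift per ns plus noise (units are nominal).
t = np.linspace(0.0, 10.0, 1001)
e = 50_000.0 * (1.0 + 0.01 * t) + np.random.default_rng(1).normal(0.0, 50.0, t.size)
print(drift_rate(t, e))   # ≈ (500 per ns, ≈1 % per ns)
```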
The practical effects of energy drift depend on the simulation conditions, the thermodynamic ensemble being simulated, and the intended use of the simulation; for example, energy drift has much more severe consequences for simulations of the microcanonical ensemble than for the canonical ensemble, where the temperature is held constant. However, it has been shown that long microcanonical ensemble simulations can be performed with insignificant energy drift, including simulations of flexible molecules that incorporate constraints and Ewald summation. [1] [2] Energy drift is often used as a measure of the quality of the simulation, and has been proposed as one quality metric to be routinely reported in a mass repository of molecular dynamics trajectory data analogous to the Protein Data Bank. [6]
Differential rotation is seen when different parts of a rotating object move with different angular velocities at different latitudes or depths of the body, or at different times. This indicates that the object is not solid. In fluid objects, such as accretion disks, this leads to shearing. Galaxies and protostars usually show differential rotation; examples in the Solar System include the Sun, Jupiter and Saturn.
Dynamic mechanical analysis is a technique used to study and characterize materials. It is most useful for studying the viscoelastic behavior of polymers. A sinusoidal stress is applied and the strain in the material is measured, allowing one to determine the complex modulus. The temperature of the sample or the frequency of the stress is often varied, leading to variations in the complex modulus; this approach can be used to locate the glass transition temperature of the material, as well as to identify transitions corresponding to other molecular motions.
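A minimal sketch of the arithmetic involved (generic symbol names, not tied to any particular instrument's software): from the measured stress amplitude, strain amplitude and phase lag δ one obtains the storage modulus, loss modulus and tan δ.

```python
import math

def dma_moduli(stress_amplitude, strain_amplitude, delta_rad):
    # |E*| is the ratio of the stress and strain amplitudes; the phase lag
    # splits it into in-phase (storage) and out-of-phase (loss) parts.
    e_star = stress_amplitude / strain_amplitude
    e_storage = e_star * math.cos(delta_rad)   # elastic response
    e_loss = e_star * math.sin(delta_rad)      # viscous response
    return e_storage, e_loss, math.tan(delta_rad)

# Example with illustrative values (Pa and dimensionless strain).
print(dma_moduli(stress_amplitude=1.0e6, strain_amplitude=0.01, delta_rad=0.1))
```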
In physics, Liouville's theorem, named after the French mathematician Joseph Liouville, is a key theorem in classical statistical and Hamiltonian mechanics. It asserts that the phase-space distribution function is constant along the trajectories of the system—that is that the density of system points in the vicinity of a given system point traveling through phase-space is constant with time. This time-independent density is in statistical mechanics known as the classical a priori probability.
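In canonical coordinates (qᵢ, pᵢ) this statement takes the standard form of Liouville's equation for the phase-space density ρ, stated here for reference:

```latex
\frac{d\rho}{dt}
  = \frac{\partial \rho}{\partial t}
  + \sum_{i=1}^{n}\left(
      \frac{\partial \rho}{\partial q_i}\,\dot{q}_i
    + \frac{\partial \rho}{\partial p_i}\,\dot{p}_i
    \right)
  = 0
```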
In physics, the Rabi cycle is the cyclic behaviour of a two-level quantum system in the presence of an oscillatory driving field. A great variety of physical processes belonging to the areas of quantum computing, condensed matter, atomic and molecular physics, and nuclear and particle physics can be conveniently studied in terms of two-level quantum mechanical systems, and exhibit Rabi flopping when coupled to an optical driving field. The effect is important in quantum optics, magnetic resonance and quantum computing, and is named after Isidor Isaac Rabi.
Verlet integration is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 by Jean Baptiste Delambre and has been rediscovered many times since then, most recently by Loup Verlet in the 1960s for use in molecular dynamics. It was also used by P. H. Cowell and A. C. C. Crommelin in 1909 to compute the orbit of Halley's Comet, and by Carl Størmer in 1907 to study the trajectories of charged particles in a magnetic field. The Verlet integrator provides good numerical stability, as well as other properties that are important in physical systems such as time reversibility and preservation of the symplectic form on phase space, at no significant additional computational cost over the simple Euler method.
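For reference, the basic Störmer–Verlet recurrence advances the position from the two previous positions and the current acceleration a(xₙ) = F(xₙ)/m:

```latex
x_{n+1} = 2x_{n} - x_{n-1} + a(x_{n})\,\Delta t^{2},
\qquad a(x_{n}) = \frac{F(x_{n})}{m}
```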
A property of a physical system, such as the entropy of a gas, that stays approximately constant when changes occur slowly is called an adiabatic invariant. By this it is meant that if a system is varied between two end points, as the time for the variation between the end points is increased to infinity, the variation of an adiabatic invariant between the two end points goes to zero.
In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in nonlinear dynamics, molecular dynamics, discrete element methods, accelerator physics, plasma physics, quantum physics, and celestial mechanics.
The Rabi problem concerns the response of an atom to an applied harmonic electric field, with an applied frequency very close to the atom's natural frequency. It provides a simple and generally solvable example of light–atom interactions and is named after Isidor Isaac Rabi.
The Rabi frequency is the frequency at which the probability amplitudes of two atomic energy levels fluctuate in an oscillating electromagnetic field. It is proportional to the transition dipole moment of the two levels and to the amplitude of the electromagnetic field. Population transfer between the levels of such a two-level system illuminated with light exactly resonant with the difference in energy between the two levels will occur at the Rabi frequency; when the incident light is detuned from this energy difference then the population transfer occurs at the generalized Rabi frequency. The Rabi frequency is a semiclassical concept since it treats the atom as an object with quantized energy levels and the electromagnetic field as a continuous wave.
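For reference, with transition dipole moment d⃗, field amplitude E⃗₀ and detuning Δ, the Rabi frequency and the generalized Rabi frequency take the standard forms:

```latex
\Omega = \frac{\vec{d}\cdot\vec{E}_{0}}{\hbar},
\qquad
\tilde{\Omega} = \sqrt{\Omega^{2} + \Delta^{2}}
```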
The Jaynes–Cummings model is a theoretical model in quantum optics. It describes the system of a two-level atom interacting with a quantized mode of an optical cavity, with or without the presence of light. It was originally developed to study the interaction of atoms with the quantized electromagnetic field in order to investigate the phenomena of spontaneous emission and absorption of photons in a cavity.
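For reference, in the rotating-wave approximation the Jaynes–Cummings Hamiltonian for a cavity mode of frequency ω_c coupled with strength g to a two-level atom of transition frequency ω_a reads:

```latex
\hat{H}_{\mathrm{JC}} = \hbar\omega_{c}\,\hat{a}^{\dagger}\hat{a}
  + \frac{\hbar\omega_{a}}{2}\,\hat{\sigma}_{z}
  + \hbar g\left(\hat{a}\,\hat{\sigma}_{+} + \hat{a}^{\dagger}\hat{\sigma}_{-}\right)
```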
In spectroscopy, the Autler–Townes effect is a dynamical Stark effect corresponding to the case when an oscillating electric field is tuned in resonance to the transition frequency of a given spectral line, resulting in a change of the shape of the absorption/emission spectra of that spectral line. The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes.
Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian equations of motion of a rigid body that consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The general steps involved are: (i) choose novel unconstrained coordinates, (ii) introduce explicit constraint forces, (iii) minimize constraint forces implicitly by the technique of Lagrange multipliers or projection methods.
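A minimal sketch of the Lagrange-multiplier step for the simplest possible case, a single distance constraint between two particles, in the spirit of a SHAKE-style iteration (function and variable names are illustrative, not taken from any particular MD package):

```python
import numpy as np

def shake_pair(r1, r2, r1_old, r2_old, m1, m2, d, tol=1e-10, max_iter=50):
    # After an unconstrained position update (r1, r2), iteratively apply a
    # Lagrange-multiplier correction along the pre-step bond vector so that
    # |r1 - r2| is restored to the constrained distance d.
    r_old = r1_old - r2_old
    inv_mu = 1.0 / m1 + 1.0 / m2
    for _ in range(max_iter):
        r = r1 - r2
        sigma = r @ r - d * d            # constraint violation
        if abs(sigma) < tol:
            break
        lam = -sigma / (2.0 * inv_mu * (r @ r_old))
        r1 = r1 + (lam / m1) * r_old
        r2 = r2 - (lam / m2) * r_old
    return r1, r2

# Example: restore a 0.1 nm bond after a slightly-too-long unconstrained step.
r1_old, r2_old = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
r1, r2 = np.array([0.0, 0.0, 0.0]), np.array([0.103, 0.001, 0.0])
r1c, r2c = shake_pair(r1, r2, r1_old, r2_old, m1=12.0, m2=1.0, d=0.1)
print(np.linalg.norm(r1c - r2c))         # ≈ 0.1
```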
In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet (NSV), is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics. It is a symplectic integrator and hence it yields better results than the standard Euler method.
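A minimal sketch of the update, assuming a separable Hamiltonian H = p²/2m + U(q) (function names are illustrative): the only difference from the standard Euler method is that the position update uses the already updated momentum.

```python
def symplectic_euler_step(q, p, force, m, dt):
    p_new = p + dt * force(q)        # kick: update momentum with the force at the old position
    q_new = q + dt * p_new / m       # drift: update position with the new momentum
    return q_new, p_new

# Example: harmonic oscillator U(q) = q^2/2, so force(q) = -q.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_euler_step(q, p, force=lambda x: -x, m=1.0, dt=0.05)
print(q, p)   # stays on a bounded, near-circular orbit instead of spiralling outward
```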
In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form ẍ = d²x/dt² = A(x), or equivalently of the form ẋ = v, v̇ = A(x), particularly in the case of a dynamical system of classical mechanics.
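A minimal sketch of the scheme in its common kick-drift-kick form (the acceleration function A and the example below are illustrative choices):

```python
def leapfrog_step(x, v, A, dt):
    v_half = v + 0.5 * dt * A(x)           # half kick
    x_new = x + dt * v_half                # full drift
    v_new = v_half + 0.5 * dt * A(x_new)   # half kick with the new position
    return x_new, v_new

# Example: harmonic oscillator, A(x) = -x.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = leapfrog_step(x, v, A=lambda y: -y, dt=0.05)
print(x, v)
```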
The Hamiltonian Monte Carlo algorithm is a Markov chain Monte Carlo method for obtaining a sequence of random samples which converge to being distributed according to a target probability distribution for which direct sampling is difficult. This sequence can be used to estimate integrals with respect to the target distribution.
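A compact sketch of the algorithm for a one-dimensional standard normal target (the target, step size and path length are illustrative choices, not a recommended configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_prob(x):  return 0.5 * x * x     # -log pi(x) for a standard normal
def grad(x):          return x

def hmc_sample(x0, n_samples, dt=0.2, n_leapfrog=20):
    samples, x = [], x0
    for _ in range(n_samples):
        p = rng.normal()                      # resample auxiliary momentum
        x_new, p_new = x, p
        # leapfrog integration of the fictitious Hamiltonian dynamics
        p_new -= 0.5 * dt * grad(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += dt * p_new
            p_new -= dt * grad(x_new)
        x_new += dt * p_new
        p_new -= 0.5 * dt * grad(x_new)
        # Metropolis accept/reject based on the change in total energy
        dH = (neg_log_prob(x_new) + 0.5 * p_new ** 2) - (neg_log_prob(x) + 0.5 * p ** 2)
        if rng.random() < np.exp(-dH):
            x = x_new
        samples.append(x)
    return np.array(samples)

s = hmc_sample(0.0, 5000)
print(s.mean(), s.var())   # should be close to 0 and 1
```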
In mathematics, a multisymplectic integrator is a numerical method for the solution of a certain class of partial differential equations, that are said to be multisymplectic. Multisymplectic integrators are geometric integrators, meaning that they preserve the geometry of the problems; in particular, the numerical method preserves energy and momentum in some sense, similar to the partial differential equation itself. Examples of multisymplectic integrators include the Euler box scheme and the Preissman box scheme.
Ramsey interferometry, also known as the separated oscillating fields method, is a form of particle interferometry that uses the phenomenon of magnetic resonance to measure transition frequencies of particles. It was developed in 1949 by Norman Ramsey, who built upon the ideas of his mentor, Isidor Isaac Rabi, who initially developed a technique for measuring particle transition frequencies. Ramsey's method is used today in atomic clocks and in the S.I. definition of the second. Most precision atomic measurements, such as modern atom interferometers and quantum logic gates, have a Ramsey-type configuration. A more modern method, Ramsey–Bordé interferometry, uses a Ramsey configuration and was developed by French physicist Christian Bordé. Bordé's main idea was to use atomic recoil to create a beam splitter of different geometries for an atom-wave. The Ramsey–Bordé interferometer specifically uses two pairs of counter-propagating interaction waves, and another method named the "photon-echo" uses two co-propagating pairs of interaction waves.
Magnetic resonance is a quantum mechanical resonant effect that can appear when a magnetic dipole is exposed to a static magnetic field and perturbed with another, oscillating electromagnetic field. Due to the static field, the dipole can assume a number of discrete energy eigenstates, depending on the value of its angular momentum quantum number. The oscillating field can then make the dipole transit between its energy states with a certain probability and at a certain rate. The overall transition probability will depend on the field's frequency and the rate will depend on its amplitude. When the frequency of that field leads to the maximum possible transition probability between two states, a magnetic resonance has been achieved. In that case, the energy of the photons composing the oscillating field matches the energy difference between said states. If the dipole is tickled with a field oscillating far from resonance, it is unlikely to transition. That is analogous to other resonant effects, such as with the forced harmonic oscillator. The periodic transition between the different states is called Rabi cycle and the rate at which that happens is called Rabi frequency. The Rabi frequency should not be confused with the field's own frequency. Since many atomic nuclei species can behave as a magnetic dipole, this resonance technique is the basis of nuclear magnetic resonance, including nuclear magnetic resonance imaging and nuclear magnetic resonance spectroscopy.
Cavity optomechanics is a branch of physics which focuses on the interaction between light and mechanical objects on low-energy scales. It is a cross field of optics, quantum optics, solid-state physics and materials science. The motivation for research on cavity optomechanics comes from fundamental effects of quantum theory and gravity, as well as technological applications.