Crooks fluctuation theorem

The Crooks fluctuation theorem (CFT), sometimes known as the Crooks equation,[1] is an equation in statistical mechanics that relates the work done on a system during a non-equilibrium transformation to the free-energy difference between the final and initial states of the transformation. During the non-equilibrium transformation the system is held at constant volume and in contact with a heat reservoir. The CFT is named after the chemist Gavin E. Crooks (then at the University of California, Berkeley), who discovered it in 1998.

The most general statement of the CFT relates the probability of a space-time trajectory $x(t)$ to that of its time reversal $\tilde{x}(t)$. The theorem states that if the dynamics of the system satisfy microscopic reversibility, then the forward trajectory is exponentially more likely than its reverse, by a factor set by the entropy it produces:

$$\frac{P[x(t)]}{\tilde{P}[\tilde{x}(t)]} = e^{\sigma[x(t)]},$$

where $\sigma[x(t)]$ is the entropy production along the forward trajectory.
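When, as in the setting considered here, the driven system is kept at constant volume and exchanges heat with a single reservoir at inverse temperature $\beta = (k_B T)^{-1}$, the entropy produced (in units of $k_B$) by a trajectory on which work $W$ is done reduces to $\sigma = \beta(W - \Delta F)$; this is what connects the trajectory-level statement above to the work relation developed below.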

If one defines a generic reaction coordinate of the system as a function of the Cartesian coordinates of the constituent particles (e.g., a distance between two particles), one can characterize every point along the reaction-coordinate path by a parameter $\lambda$, such that $\lambda = 0$ and $\lambda = 1$ correspond to two ensembles of microstates for which the reaction coordinate is constrained to different values. A dynamical process in which $\lambda$ is externally driven from zero to one, according to an arbitrary time schedule, will be referred to as the forward transformation, while the time-reversed path will be indicated as the backward transformation. Given these definitions, the CFT sets a relation between the following five quantities:

  - $P(A \rightarrow B)$, the joint probability of taking a microstate $A$ from the canonical ensemble corresponding to $\lambda = 0$ and of performing the forward transformation to the microstate $B$ corresponding to $\lambda = 1$;
  - $P(A \leftarrow B)$, the joint probability of taking the microstate $B$ from the canonical ensemble corresponding to $\lambda = 1$ and of performing the backward transformation to the microstate $A$ corresponding to $\lambda = 0$;
  - $\beta = (k_B T)^{-1}$, where $k_B$ is the Boltzmann constant and $T$ the temperature of the reservoir;
  - $W_{A \rightarrow B}$, the work done on the system during the forward transformation (from $A$ to $B$);
  - $\Delta F = F(B) - F(A)$, the Helmholtz free-energy difference between the states $A$ and $B$, represented by the canonical ensembles corresponding to $\lambda = 0$ and $\lambda = 1$, respectively.

The CFT equation reads as follows:

$$\frac{P(A \rightarrow B)}{P(A \leftarrow B)} = \exp\left[\beta\left(W_{A \rightarrow B} - \Delta F\right)\right].$$

In the previous equation the difference $W_{A \rightarrow B} - \Delta F$ corresponds to the work dissipated in the forward transformation, $W_d$. The probabilities $P(A \rightarrow B)$ and $P(A \leftarrow B)$ become identical when the transformation is performed at infinitely slow speed, i.e. for equilibrium transformations. In such cases, $W_{A \rightarrow B} = \Delta F$ and $W_d = 0$.

Using the time-reversal relation $W_{A \rightarrow B} = -W_{A \leftarrow B}$, and grouping together all the trajectories yielding the same work (in the forward and backward transformations), i.e. determining the probability distribution (or density) $P_{A \rightarrow B}(W)$ of an amount of work $W$ being exerted by a random system trajectory from $A$ to $B$, we can write the above equation in terms of the work distribution functions as follows:

$$P_{A \rightarrow B}(W) = P_{A \leftarrow B}(-W)\,\exp\left[\beta\left(W - \Delta F\right)\right].$$
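As a concrete check of this relation, the sketch below evaluates both sides for Gaussian work distributions, the exactly solvable case in which the mean dissipated work equals $\beta\sigma^2/2$; the numerical values of $\beta$, $\Delta F$ and $\sigma$ are hypothetical choices made purely for illustration.

```python
import numpy as np

# Illustrative parameters (hypothetical, not taken from the article or any experiment).
beta = 1.0      # inverse temperature 1/(k_B T)
dF = 2.0        # free-energy difference Delta F between the two end states
sigma = 1.5     # standard deviation of the (Gaussian) work distributions

# For Gaussian work distributions the Crooks relation fixes the means:
# the mean dissipated work in either direction is beta * sigma**2 / 2.
mean_fwd = dF + beta * sigma**2 / 2    # <W> for the forward process
mean_bwd = -dF + beta * sigma**2 / 2   # <W> for the backward process

def gauss(w, mean, sd):
    """Normal probability density evaluated at w."""
    return np.exp(-(w - mean) ** 2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

# Check P_fwd(W) = P_bwd(-W) * exp(beta * (W - dF)) over a range of work values.
W = np.linspace(-4.0, 8.0, 7)
lhs = gauss(W, mean_fwd, sigma)
rhs = gauss(-W, mean_bwd, sigma) * np.exp(beta * (W - dF))
print(np.allclose(lhs, rhs))  # True: the two sides agree for every W
```

The same identity holds for the work distributions generated by any microscopically reversible dynamics; the Gaussian case is merely the one in which it can be verified in closed form.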

Note that for the backward transformation, the work distribution function must be evaluated by taking the work with the opposite sign. The two work distributions for the forward and backward processes cross at $W = \Delta F$. This phenomenon has been experimentally verified using optical tweezers for the process of unfolding and refolding of a small RNA hairpin and an RNA three-helix junction.[2]
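In practice only a finite number of measured work values is available for each direction, as in the optical-tweezers experiments cited above. The following sketch illustrates, on synthetic data, how $\Delta F$ could be read off as the crossing point of the two smoothed work histograms; the samples, sample sizes and kernel bandwidth are hypothetical and serve only to show the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic "measurements": forward and backward work samples.
# In a pulling experiment these would come from many repeated trajectories.
beta, dF_true, sigma = 1.0, 2.0, 1.5
W_fwd = rng.normal(dF_true + beta * sigma**2 / 2, sigma, size=5000)
W_bwd = rng.normal(-dF_true + beta * sigma**2 / 2, sigma, size=5000)

def kde(samples, grid, bandwidth=0.25):
    """Gaussian kernel density estimate of the samples, evaluated on the grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

# The backward distribution is evaluated at -W (work taken with opposite sign),
# so the two smoothed densities cross at W = Delta F.
grid = np.linspace(-2.0, 6.0, 801)
p_fwd = kde(W_fwd, grid)
p_bwd_reflected = kde(-W_bwd, grid)

crossing = grid[np.argmin(np.abs(np.log(p_fwd) - np.log(p_bwd_reflected)))]
print(f"estimated Delta F ~ {crossing:.2f} (value used to generate the data: {dF_true})")
```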

The CFT implies the Jarzynski equality.
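The implication can be made explicit in one step: multiplying the work-distribution form of the theorem by $e^{-\beta W}$ and integrating over all $W$ gives

$$\left\langle e^{-\beta W} \right\rangle_{A \rightarrow B} = \int P_{A \rightarrow B}(W)\, e^{-\beta W}\, \mathrm{d}W = e^{-\beta \Delta F} \int P_{A \leftarrow B}(-W)\, \mathrm{d}W = e^{-\beta \Delta F},$$

since the backward work distribution is normalized; this is the Jarzynski equality $\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}$.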

Notes

  1. Crooks, G. E. (1999). "Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences". Physical Review E. 60: 2721.
  2. Collin, D.; Ritort, F.; Jarzynski, C.; Smith, S. B.; Tinoco, I.; Bustamante, C. (8 September 2005). "Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies". Nature. 437 (7056): 231–234. arXiv:cond-mat/0512266. Bibcode:2005Natur.437..231C. doi:10.1038/nature04061. PMC 1752236. PMID 16148928.
