In statistical mechanics, the radial distribution function (or pair correlation function) $g(r)$ in a system of particles (atoms, molecules, colloids, etc.) describes how density varies as a function of distance from a reference particle.
If a given particle is taken to be at the origin O, and if $\rho = N/V$ is the average number density of particles, then the local time-averaged density at a distance $r$ from O is $\rho g(r)$. This simplified definition holds for a homogeneous and isotropic system. A more general case will be considered below.
In simplest terms it is a measure of the probability of finding a particle at a distance of $r$ away from a given reference particle, relative to that for an ideal gas. The general algorithm involves determining how many particles are within a distance of $r$ and $r + dr$ away from a particle. This general theme is depicted to the right, where the red particle is our reference particle, and the blue particles are those whose centers are within the circular shell, dotted in orange.
The radial distribution function is usually determined by calculating the distance between all particle pairs and binning them into a histogram. The histogram is then normalized with respect to an ideal gas, where particle histograms are completely uncorrelated. For three dimensions, this normalization is the number density of the system multiplied by the volume of the spherical shell, which symbolically can be expressed as $\rho \, 4\pi r^2 \, dr$.
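The binning-and-normalization procedure just described can be sketched in a few lines of NumPy. This is a minimal sketch, assuming a cubic periodic box of side `L`; the function name and arguments are illustrative, not from the text.

```python
import numpy as np

def radial_distribution(positions, L, dr, r_max):
    """Histogram estimator of g(r) for an (N, 3) array of coordinates
    in a cubic periodic box of side L (r_max should stay below L/2)."""
    N = len(positions)
    rho = N / L**3                              # average number density
    # All pair separations, folded with the minimum-image convention.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= L * np.round(diff / L)
    dist = np.sqrt((diff**2).sum(axis=-1))
    dist = dist[np.triu_indices(N, k=1)]        # each pair counted once
    # Bin the pair distances into a histogram.
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(dist, bins=edges)
    # Normalize by the ideal-gas expectation: rho times the shell volume
    # (4/3)*pi*(r_outer^3 - r_inner^3), times N/2 reference pairs.
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * N / 2.0
    r = 0.5 * (edges[1:] + edges[:-1])          # bin centers
    return r, counts / ideal
```

A convenient sanity check: for uncorrelated (ideal-gas) positions the estimator returns values close to 1 in every bin.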
Given a potential energy function, the radial distribution function can be computed either via computer simulation methods like the Monte Carlo method, or via the Ornstein–Zernike equation, using approximate closure relations like the Percus–Yevick approximation or the hypernetted-chain theory. It can also be determined experimentally, by radiation scattering techniques or by direct visualization for large enough (micrometer-sized) particles via traditional or confocal microscopy.
The radial distribution function is of fundamental importance since it can be used, via the Kirkwood–Buff solution theory, to link microscopic details to macroscopic properties. Moreover, by the reversion of the Kirkwood–Buff theory, it is possible to attain the microscopic details of the radial distribution function from the macroscopic properties. The radial distribution function may also be inverted to predict the potential energy function using the Ornstein–Zernike equation or structure-optimized potential refinement. [1]
Consider a system of $N$ particles in a volume $V$ (for an average number density $\rho = N/V$) and at a temperature $T$ (let us also define $\beta = \frac{1}{k_B T}$; $k_B$ is the Boltzmann constant). The particle coordinates are $\mathbf{r}_i$, with $i = 1, \ldots, N$. The potential energy due to the interaction between particles is $U_N(\mathbf{r}_1, \ldots, \mathbf{r}_N)$ and we do not consider the case of an externally applied field.
The appropriate averages are taken in the canonical ensemble $(N, V, T)$, with $Z_N = \int \cdots \int e^{-\beta U_N} \, d\mathbf{r}_1 \cdots d\mathbf{r}_N$ the configurational integral, taken over all possible combinations of particle positions. The probability of an elementary configuration, namely finding particle 1 in $d\mathbf{r}_1$, particle 2 in $d\mathbf{r}_2$, etc. is given by
$P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) \, d\mathbf{r}_1 \cdots d\mathbf{r}_N = \frac{e^{-\beta U_N}}{Z_N} \, d\mathbf{r}_1 \cdots d\mathbf{r}_N$. | (1) |
The total number of particles is huge, so that $P^{(N)}$ in itself is not very useful. However, one can also obtain the probability of a reduced configuration, where the positions of only $n < N$ particles are fixed, in $\mathbf{r}_1, \ldots, \mathbf{r}_n$, with no constraints on the remaining $N - n$ particles. To this end, one has to integrate ( 1 ) over the remaining coordinates $\mathbf{r}_{n+1}, \ldots, \mathbf{r}_N$:

$P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{1}{Z_N} \int \cdots \int e^{-\beta U_N} \, d\mathbf{r}_{n+1} \cdots d\mathbf{r}_N \,.$
If the particles are non-interacting, in the sense that the potential energy of each particle does not depend on any of the other particles, $U_N(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i=1}^{N} U_1(\mathbf{r}_i)$, then the partition function factorizes, and the probability of an elementary configuration decomposes with independent arguments to a product of single particle probabilities,

$P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \prod_{i=1}^{N} \frac{e^{-\beta U_1(\mathbf{r}_i)}}{Z_1} = \prod_{i=1}^{N} P^{(1)}(\mathbf{r}_i) \,.$
Note how for non-interacting particles the probability is symmetric in its arguments. This is not true in general, and the order in which the positions occupy the argument slots of $P^{(n)}$ matters. Given a set of positions, the way that the $N$ particles can occupy those positions is $N!$. The probability that those positions ARE occupied is found by summing over all configurations in which a particle is at each of those locations. This can be done by taking every permutation, $\pi$, in the symmetric group on $N$ objects, $S_N$, to write $\sum_{\pi \in S_N} P^{(N)}(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)})$. For fewer positions, we integrate over extraneous arguments, and include a correction factor to prevent overcounting,

$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{1}{(N-n)!} \sum_{\pi \in S_N} \int \cdots \int P^{(N)}(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)}) \, d\mathbf{r}_{n+1} \cdots d\mathbf{r}_N \,.$

This quantity is called the n-particle density function. For indistinguishable particles, one could permute all the particle positions, $\forall \pi \in S_N, \; P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) = P^{(N)}(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)})$, without changing the probability of an elementary configuration, so that the n-particle density function reduces to

$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{N!}{(N-n)!} \, P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) \,.$

Integrating the n-particle density gives the permutation factor ${}^{N}P_{n} = \frac{N!}{(N-n)!}$, counting the number of ways one can sequentially pick particles to place at the positions out of the total $N$ particles. Now let's turn to how we interpret this function for different values of $n$.
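The counting factor $N!/(N-n)!$ above is exactly the number of ordered placements of $n$ of the $N$ particles, which the Python standard library computes directly (a trivial sketch; the numbers are arbitrary):

```python
import math

# N!/(N-n)! counts the ordered ways to place n of the N particles
# at the n fixed positions; math.perm computes exactly this quotient.
N, n = 10, 3
assert math.perm(N, n) == math.factorial(N) // math.factorial(N - n)
print(math.perm(N, n))  # 720 ordered placements of 3 out of 10
```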
For $n = 1$, we have the one-particle density, $\rho^{(1)}(\mathbf{r}_1)$. For a crystal it is a periodic function with sharp maxima at the lattice sites. For a non-interacting gas, it is independent of the position $\mathbf{r}_1$ and equal to the overall number density, $\rho$, of the system. To see this first note that $e^{-\beta U_N} = 1$ in the volume occupied by the gas, and 0 everywhere else. The partition function in this case is

$Z_N = \int \cdots \int \prod_{i=1}^{N} d\mathbf{r}_i = V^N \,,$
from which the definition gives the desired result

$\rho^{(1)}(\mathbf{r}_1) = \frac{N!}{(N-1)!} \, \frac{1}{V^N} \int \cdots \int \prod_{i=2}^{N} d\mathbf{r}_i = N \, \frac{V^{N-1}}{V^N} = \frac{N}{V} = \rho \,.$
In fact, for this special case every n-particle density is independent of coordinates, and can be computed explicitly:

$\rho^{(n)} = \frac{N!}{(N-n)!} \, \frac{1}{V^n} \,.$

For $N \gg n$, the non-interacting n-particle density is approximately $\rho^{(n)} \approx \rho^n$. [2] With this in hand, the n-point correlation function $g^{(n)}$ is defined by factoring out the non-interacting contribution[ citation needed ],

$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \rho^n \, g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) \,.$

Explicitly, this definition reads

$g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{V^n \, N!}{N^n \, (N-n)!} \cdot \frac{1}{Z_N} \int \cdots \int e^{-\beta U_N} \, d\mathbf{r}_{n+1} \cdots d\mathbf{r}_N \,,$

where it is clear that the n-point correlation function is dimensionless.
The second-order correlation function $g^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ is of special importance, as it is directly related (via a Fourier transform) to the structure factor of the system and can thus be determined experimentally using X-ray diffraction or neutron diffraction. [3]
If the system consists of spherically symmetric particles, $g^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ depends only on the relative distance between them, $\mathbf{r}_{12} = \mathbf{r}_2 - \mathbf{r}_1$. We will drop the sub- and superscript: $g(\mathbf{r}) \equiv g^{(2)}(\mathbf{r}_{12})$. Taking particle 0 as fixed at the origin of the coordinates, $\rho g(\mathbf{r}) \, d^3 r = dn(\mathbf{r})$ is the average number of particles (among the remaining $N - 1$) to be found in the volume $d^3 r$ around the position $\mathbf{r}$.
We can formally count these particles and take the average via the expression $dn(\mathbf{r}) = \left\langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle d^3 r$, with $\langle \cdot \rangle$ the ensemble average, yielding:
$g(\mathbf{r}) = \frac{1}{\rho} \left\langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle = V \, \frac{N-1}{N} \left\langle \delta(\mathbf{r} - \mathbf{r}_1) \right\rangle$ | (5) |
where the second equality requires the equivalence of particles $1, \ldots, N-1$. The formula above is useful for relating $g(\mathbf{r})$ to the static structure factor $S(\mathbf{q})$, defined by $S(\mathbf{q}) = \frac{1}{N} \left\langle \sum_{i,j} e^{-i \mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \right\rangle$, since we have:
$S(\mathbf{q}) = 1 + \frac{1}{N} \left\langle \sum_{i \neq j} e^{-i \mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \right\rangle \,,$

and thus:

$S(\mathbf{q}) = 1 + \rho \int_V e^{-i \mathbf{q} \cdot \mathbf{r}} \, g(\mathbf{r}) \, d^3 r \,.$
This equation is only valid in the sense of distributions, since $g(\mathbf{r})$ is not normalized: $\lim_{r \to \infty} g(\mathbf{r}) = 1$, so that $\int_V g(\mathbf{r}) \, d^3 r$ diverges as the volume $V$, leading to a Dirac peak at the origin for the structure factor. Since this contribution is inaccessible experimentally we can subtract it from the equation above and redefine the structure factor as a regular function:

$S'(\mathbf{q}) = S(\mathbf{q}) - \rho \, (2\pi)^3 \delta(\mathbf{q}) = 1 + \rho \int_V e^{-i \mathbf{q} \cdot \mathbf{r}} \left[ g(\mathbf{r}) - 1 \right] d^3 r \,.$
Finally, we rename $h(\mathbf{r}) := g(\mathbf{r}) - 1$ and, if the system is a liquid, we can invoke its isotropy:
$S(q) = 1 + \rho \int_V e^{-i \mathbf{q} \cdot \mathbf{r}} \, h(r) \, d^3 r = 1 + 4\pi\rho \, \frac{1}{q} \int_0^\infty r \, h(r) \, \sin(qr) \, dr$. | (6) |
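Equation ( 6 ) is straightforward to evaluate numerically once $h(r) = g(r) - 1$ is tabulated on a uniform grid. This is a minimal sketch with an illustrative function name, assuming $h$ has decayed to zero before the grid ends:

```python
import numpy as np

def structure_factor(r, g, rho, q):
    """S(q) = 1 + (4*pi*rho/q) * int_0^inf r*(g(r)-1)*sin(q*r) dr,
    evaluated with the trapezoidal rule on a uniform r grid."""
    h = g - 1.0
    integrand = r * h * np.sin(q * r)
    delta = r[1] - r[0]
    integral = delta * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return 1.0 + 4.0 * np.pi * rho / q * integral
```

For the analytically tractable case $h(r) = e^{-r}$ the transform gives $S(q) = 1 + 8\pi\rho/(1+q^2)^2$, which the quadrature reproduces closely and which makes a useful test of an implementation.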
Evaluating ( 6 ) in $q = 0$ and using the relation between the isothermal compressibility $\chi_T$ and the structure factor at the origin yields the compressibility equation:
$\rho k_B T \, \chi_T = k_B T \left( \frac{\partial \rho}{\partial p} \right)_T = 1 + \rho \int_V h(r) \, d^3 r$. | (7) |
It can be shown [4] that the radial distribution function is related to the two-particle potential of mean force $w^{(2)}(r)$ by:
$g(r) = e^{-\beta w^{(2)}(r)}$. | (8) |
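Inverting ( 8 ) gives the potential of mean force directly from a measured $g(r)$. A one-line sketch in reduced units (names are illustrative):

```python
import numpy as np

def potential_of_mean_force(g, kT=1.0):
    """w(r) = -kT * ln g(r); diverges where g(r) = 0 (excluded regions)."""
    with np.errstate(divide="ignore"):
        return -kT * np.log(g)
```

By construction, feeding in the dilute-limit $g(r) = e^{-\beta u(r)}$ returns the bare pair potential $u(r)$.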
In the dilute limit, the potential of mean force is the exact pair potential under which the equilibrium point configuration has a given .
If the particles interact via identical pairwise potentials: $U_N = \sum_{i > j} u\!\left( \left| \mathbf{r}_i - \mathbf{r}_j \right| \right)$, the average internal energy per particle is: [5] : Section 2.5
$\frac{\langle E \rangle}{N} = \frac{3}{2} k_B T + \frac{\langle U_N \rangle}{N} = \frac{3}{2} k_B T + \frac{\rho}{2} \int_V u(r) \, g(r, \rho, T) \, d^3 r$. | (9) |
Developing the virial equation yields the pressure equation of state:
$p = \rho k_B T - \frac{\rho^2}{6} \int_V r \, \frac{\mathrm{d} u(r)}{\mathrm{d} r} \, g(r, \rho, T) \, d^3 r$. | (10) |
The radial distribution function is an important measure because several key thermodynamic properties, such as the potential energy and the pressure, can be calculated from it.
For a 3-D system where particles interact via pairwise potentials, the potential energy of the system can be calculated as follows: [6]

$PE = \frac{N}{2} \, 4\pi\rho \int_0^\infty r^2 \, u(r) \, g(r) \, dr \,,$

where $N$ is the number of particles in the system, $\rho$ is the number density, and $u(r)$ is the pair potential.
The pressure of the system can also be calculated by relating the 2nd virial coefficient to $g(r)$. The pressure can be calculated as follows: [6]

$P = \rho k_B T - \frac{2}{3} \pi \rho^2 \int_0^\infty dr \, \frac{\mathrm{d} u(r)}{\mathrm{d} r} \, r^3 \, g(r) \,.$
Note that the potential energy and pressure obtained this way will not be as accurate as a direct calculation of these properties, because of the averaging involved in the calculation of $g(r)$.
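The two quadratures above can be sketched with a simple trapezoidal sum, assuming reduced units ($k_B = 1$) and a $g(r)$ tabulated on a uniform grid; the function name and the callables `u` and `du` (the pair potential and its derivative) are illustrative:

```python
import numpy as np

def energy_and_pressure(r, g, rho, T, N, u, du):
    """PE = (N/2)*4*pi*rho * int r^2 u(r) g(r) dr      (potential energy)
    P  = rho*T - (2/3)*pi*rho^2 * int r^3 u'(r) g(r) dr  (pressure)."""
    delta = r[1] - r[0]
    pe = 0.5 * N * 4.0 * np.pi * rho * np.sum(r**2 * u(r) * g) * delta
    p = rho * T - (2.0 / 3.0) * np.pi * rho**2 * np.sum(r**3 * du(r) * g) * delta
    return pe, p
```

As a sanity check, a vanishing pair potential gives zero potential energy and the ideal-gas pressure $P = \rho k_B T$.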
For dilute systems (e.g. gases), the correlations in the positions of the particles that $g(r)$ accounts for are only due to the potential $u(r)$ engendered by the reference particle, neglecting indirect effects. In the first approximation, it is thus simply given by the Boltzmann distribution law:
$g(r) = e^{-\beta u(r)}$. | (11) |
If $u(r)$ were zero for all $r$ – i.e., if the particles did not exert any influence on each other, then $g(r) = 1$ for all $r$ and the mean local density would be equal to the mean density $\rho$: the presence of a particle at O would not influence the particle distribution around it and the gas would be ideal. For distances $r$ such that $u(r)$ is significant, the mean local density will differ from the mean density $\rho$, depending on the sign of $u(r)$ (higher for negative interaction energy and lower for positive $u(r)$).
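The low-density approximation ( 11 ) is easy to evaluate for a concrete pair potential, here a Lennard-Jones potential in reduced units (the parameters are illustrative, not from the text):

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def g_dilute(r, beta=1.0):
    """Eq. (11): g(r) = exp(-beta*u(r)) -- essentially zero inside the
    repulsive core, peaked at the potential minimum, tending to 1."""
    return np.exp(-beta * lennard_jones(r))
```

The peak sits at the potential minimum $r = 2^{1/6}\sigma \approx 1.12\,\sigma$, where $u = -\varepsilon$, so the peak value is $e^{\beta\varepsilon} > 1$: exactly the density enhancement for negative interaction energy described above.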
As the density of the gas increases, the low-density limit becomes less and less accurate since a particle situated in $\mathbf{r}$ experiences not only the interaction with the particle at O but also with the other neighbours, themselves influenced by the reference particle. This mediated interaction increases with the density, since there are more neighbours to interact with: it makes physical sense to write a density expansion of $g(r)$, which resembles the virial equation:
$g(r) = e^{-\beta u(r)} \, y(r) \,, \qquad y(r) = 1 + \sum_{n=1}^{\infty} \rho^n \, y_n(r)$. | (12) |
This similarity is not accidental; indeed, substituting ( 12 ) in the relations above for the thermodynamic parameters (Equations 7 , 9 and 10 ) yields the corresponding virial expansions. [7] The auxiliary function $y(r)$ is known as the cavity distribution function. [5] : Table 4.1 It has been shown that for classical fluids at a fixed density and a fixed positive temperature, the effective pair potential that generates a given $g(r)$ under equilibrium is unique up to an additive constant, if it exists. [8]
In recent years, some attention has been given to developing pair correlation functions for spatially-discrete data such as lattices or networks. [9]
One can determine $g(r)$ indirectly (via its relation with the structure factor $S(q)$) using neutron scattering or x-ray scattering data. The technique can be used at very short length scales (down to the atomic level [10] ) but involves significant space and time averaging (over the sample size and the acquisition time, respectively). In this way, the radial distribution function has been determined for a wide variety of systems, ranging from liquid metals [11] to charged colloids. [12] Going from the experimental $S(q)$ to $g(r)$ is not straightforward and the analysis can be quite involved. [13]
It is also possible to calculate $g(r)$ directly by extracting particle positions from traditional or confocal microscopy. [14] This technique is limited to particles large enough for optical detection (in the micrometer range), but it has the advantage of being both time-resolved (so that, aside from the statistical information, it also gives access to dynamical parameters, e.g. diffusion constants [15] ) and space-resolved (to the level of the individual particle), allowing it to reveal the morphology and dynamics of local structures in colloidal crystals, [16] glasses, [17] [18] gels, [19] [20] and hydrodynamic interactions. [21]
Direct visualization of a full (distance-dependent and angle-dependent) pair correlation function was achieved by scanning tunneling microscopy in the case of 2D molecular gases. [22]
It has been noted that radial distribution functions alone are insufficient to characterize structural information. Distinct point processes may possess identical or practically indistinguishable radial distribution functions, a situation known as the degeneracy problem. [23] [24] In such cases, higher order correlation functions are needed to further describe the structure.
Higher-order distribution functions $g^{(k)}$ with $k > 2$ have been less studied, since they are generally less important for the thermodynamics of the system; at the same time, they are not accessible by conventional scattering techniques. They can however be measured by coherent X-ray scattering and are interesting insofar as they can reveal local symmetries in disordered systems. [25]