Self-averaging

A self-averaging physical property of a disordered system is one that can be described by averaging over a sufficiently large sample. The concept was introduced by Ilya Mikhailovich Lifshitz.

Definition

Frequently in physics one comes across situations where quenched randomness plays an important role. Any physical property X of such a system requires an averaging over all disorder realisations. The system can be completely described by the average [X], where [...] denotes averaging over realisations ("averaging over samples"), provided the relative variance RX = VX/[X]^2 → 0 as N → ∞, where VX = [X^2] − [X]^2 and N denotes the size of the realisation. In such a scenario a single large system is sufficient to represent the whole ensemble; such quantities are called self-averaging. Away from criticality, when the larger lattice is built from smaller blocks, the additivity of an extensive quantity together with the central limit theorem guarantees that RX ~ N^−1, thereby ensuring self-averaging. At the critical point, on the other hand, the question of whether X is self-averaging becomes nontrivial, owing to long-range correlations.
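
To make the definition concrete, here is a minimal numerical sketch (an illustration, not part of the original article): X is taken to be an extensive sum of quenched random couplings, and RX is estimated over many disorder realisations at several sizes N. The uniform-coupling model and all parameter values are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def relative_variance(N, n_realisations=1000):
        """Estimate RX = VX/[X]^2 for X = sum of N quenched random couplings."""
        # Couplings drawn uniformly from [0.5, 1.5]; the mean of 1 keeps [X] nonzero.
        couplings = rng.uniform(0.5, 1.5, size=(n_realisations, N))
        X = couplings.sum(axis=1)        # one value of X per disorder realisation
        return X.var() / X.mean() ** 2   # RX = ([X^2] - [X]^2) / [X]^2

    for N in (10, 100, 1000, 10000):
        rx = relative_variance(N)
        print(f"N = {N:6d}   RX = {rx:.2e}   N * RX = {N * rx:.3f}")

Because X is additive, N * RX settles near the variance-to-squared-mean ratio of a single coupling, (1/12)/1 ≈ 0.083, which is exactly the RX ~ N^−1 behaviour predicted by the central limit theorem.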

Non self-averaging systems

At the pure critical point, randomness is classified as relevant if, by the standard definition of relevance, it leads to a change in the critical behaviour (i.e., the critical exponents) of the pure system. Recent renormalization group and numerical studies have shown that the self-averaging property is lost if randomness or disorder is relevant. [1] Most importantly, as N → ∞, RX at the critical point approaches a constant. Such systems are called non-self-averaging. Thus, unlike the self-averaging scenario, numerical simulations cannot lead to an improved picture at larger lattices (large N), even if the critical point is known exactly. In summary, the various types of self-averaging can be indexed by the asymptotic size dependence of a quantity like RX: if RX falls off to zero with size, the system is self-averaging, whereas if RX approaches a constant as N → ∞, it is non-self-averaging.
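
The contrast can be made concrete with a toy comparison (a sketch under assumed models, not the calculation of reference [1]): an additive quantity built from independent local contributions has RX → 0, whereas a quantity scaled by a single realisation-wide random factor, a crude stand-in for the long-range correlations at criticality, has an RX that stays constant however large N becomes.

    import numpy as np

    rng = np.random.default_rng(1)
    n_real = 1000  # disorder realisations per size

    def rx(samples):
        """Relative variance RX = VX/[X]^2 over disorder realisations."""
        return samples.var() / samples.mean() ** 2

    for N in (100, 1000, 10000):
        # Self-averaging: X is a sum of N independent local contributions.
        X_sa = rng.uniform(0.5, 1.5, size=(n_real, N)).sum(axis=1)
        # Non-self-averaging toy: one realisation-wide random factor scales the
        # whole sample, so relative fluctuations never shrink with size.
        X_nsa = rng.uniform(0.5, 1.5, size=n_real) * N
        print(f"N = {N:6d}   RX(self-avg) = {rx(X_sa):.1e}   RX(non-self-avg) = {rx(X_nsa):.3f}")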

Strong and weak self-averaging

There is a further classification of self-averaging systems as strong and weak. If RX ~ N^−1, as suggested by the central limit theorem mentioned earlier, the system is said to be strongly self-averaging. Some systems show a slower power-law decay, RX ~ N^−z with 0 < z < 1; such systems are classified as weakly self-averaging. The known critical exponents of the system determine the exponent z.
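
In a simulation, z would be read off from a log-log fit of the measured RX against N. The sketch below does this on synthetic data; the N^−0.5 decay and the classification thresholds are illustrative assumptions, not standard values.

    import numpy as np

    def classify_self_averaging(sizes, rx_values, tol=0.1):
        """Fit RX ~ N^(-z) and classify by the fitted exponent z."""
        # The slope of log(RX) versus log(N) is -z.
        z = -np.polyfit(np.log(sizes), np.log(rx_values), 1)[0]
        if z > 1 - tol:
            label = "strongly self-averaging (z close to 1)"
        elif z > tol:
            label = "weakly self-averaging (0 < z < 1)"
        else:
            label = "non-self-averaging (z close to 0)"
        return z, label

    sizes = np.array([100, 1000, 10000, 100000])
    rx_values = 0.2 * sizes ** -0.5   # synthetic RX data decaying as N^(-0.5)
    z, label = classify_self_averaging(sizes, rx_values)
    print(f"z = {z:.2f}: {label}")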

It must also be added that relevant randomness does not necessarily imply non-self-averaging, especially in a mean-field scenario. [2] The RG arguments mentioned above need to be extended to situations with a sharp limit of the Tc distribution and long-range interactions.

References

  1. Aharony, A.; Harris, A. B. (1996). "Absence of Self-Averaging and Universal Fluctuations in Random Systems near Critical Points". Phys. Rev. Lett. 77 (18): 3700–3703. Bibcode:1996PhRvL..77.3700A. doi:10.1103/PhysRevLett.77.3700. PMID 10062286.
  2. Roy, S.; Bhattacharjee, S. M. (2006). "Is small-world network disordered?". Physics Letters A. 352: 13. arXiv:cond-mat/0409012. Bibcode:2006PhLA..352...13R. doi:10.1016/j.physleta.2005.10.105.