Dynamic scaling

Dynamic scaling (sometimes known as Family–Vicsek scaling [1] [2]) is a litmus test that shows whether an evolving system exhibits self-similarity. In general a function $f(x,t)$ is said to exhibit dynamic scaling if it satisfies:

$$f(x,t) \sim t^{\theta} \varphi\!\left(\frac{x}{t^{z}}\right).$$

Here the exponent $\theta$ is fixed by the dimensional requirement $[f] = [t^{\theta}]$. The numerical value of $f/t^{\theta}$ should remain invariant even when the unit of measurement of $t$ is changed by some factor, since $\varphi(x/t^{z})$ is a dimensionless quantity.
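Written out, the scaling hypothesis says that the dimensionless combination $f/t^{\theta}$ depends on $x$ and $t$ only through the ratio $x/t^{z}$, which is equivalent to a homogeneity property:

$$f(\lambda^{z} x, \lambda t) \sim \lambda^{\theta} f(x,t) \qquad \text{for all } \lambda > 0.$$

For instance, the Gaussian solution of the diffusion equation, $f(x,t) = (4\pi D t)^{-1/2} e^{-x^{2}/4Dt}$, satisfies this with $\theta = -1/2$ and $z = 1/2$.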

Many of these systems evolve in a self-similar fashion, in the sense that data obtained from a snapshot at any fixed time are similar to the corresponding data taken from a snapshot at any earlier or later time. That is, the system is similar to itself at different times. The litmus test of such self-similarity is provided by dynamic scaling.

History

The term "dynamic scaling" as one of the essential concepts to describe the dynamics of critical phenomena seems to originate in the seminal paper of Pierre Hohenberg and Bertrand Halperin (1977), namely they suggested "[...] that the wave vector- and frequencydependent susceptibility of a ferromagnet near its Curie point may be expressed as a function independent of provided that the length and frequency scales, as well as the magnetization and magnetic field, are rescaled by appropriate powers of . [3]

Later Tamás Vicsek and Fereydoon Family proposed the idea of dynamic scaling in the context of diffusion-limited aggregation (DLA) of clusters in two dimensions. [2] The form of their proposal for dynamic scaling was:

$$f(x,t) \sim t^{-w} x^{-\tau} \varphi\!\left(\frac{x}{t^{z}}\right),$$

where the exponents satisfy the following relation:

$$w = (2 - \tau)\, z.$$
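This relation can be recovered by a short consistency check; the following is a sketch of the argument, assuming (as in cluster aggregation) that the total mass $\int x\, f(x,t)\, dx$ is conserved in time. Substituting the scaling form and changing variables to $u = x/t^{z}$ gives

$$\int_{0}^{\infty} x\, f(x,t)\, dx \sim t^{-w} \int_{0}^{\infty} x^{1-\tau}\, \varphi\!\left(\frac{x}{t^{z}}\right) dx = t^{(2-\tau)z - w} \int_{0}^{\infty} u^{1-\tau}\, \varphi(u)\, du,$$

which can be independent of time only if $w = (2-\tau)z$.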

Test

In such systems we can define a certain time-dependent stochastic variable $x$. We are interested in computing the probability distribution of $x$ at various instants of time, i.e. $f(x,t)$. The numerical value of $f$ and the typical or mean value of $x$ generally change over time. The question is: what happens to the corresponding dimensionless variables? If the numerical values of the dimensional quantities change, but the corresponding dimensionless quantities remain invariant, then we can argue that snapshots of the system at different times are similar. When this happens we say that the system is self-similar.

One way of verifying dynamic scaling is to plot the dimensionless variable $f/t^{\theta}$ as a function of $x/t^{z}$ for data extracted at various different times. If all the plots of $f/t^{\theta}$ vs $x/t^{z}$ obtained at different times collapse onto a single universal curve, then the systems at different times are said to be similar and to obey dynamic scaling. The idea of data collapse is deeply rooted in the Buckingham Pi theorem. [4] Essentially, such systems exhibit temporal self-similarity, since the same system is similar to itself at different times.
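A minimal numerical sketch of such a data-collapse test (illustrative only, not part of the original article) uses unbiased Gaussian random walkers, whose position distribution $f(x,t)$ spreads diffusively with the assumed exponents $\theta = -1/2$ and $z = 1/2$; the rescaled histograms taken at different times should fall on a single universal Gaussian curve:

```python
# Data-collapse test for diffusing random walkers (illustrative sketch):
# f(x,t) ~ t^theta * phi(x / t^z) with theta = -1/2, z = 1/2 assumed.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_walkers = 100_000
theta, z = -0.5, 0.5  # diffusive exponents assumed for this example

x = np.zeros(n_walkers)
t_prev = 0
for t in (100, 400, 1600):
    # Advance every walker by (t - t_prev) unit-variance Gaussian steps;
    # the sum of those steps is itself Gaussian with std sqrt(t - t_prev).
    x += rng.normal(0.0, np.sqrt(t - t_prev), size=n_walkers)
    t_prev = t
    f, edges = np.histogram(x, bins=60, density=True)  # estimate f(x,t)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Plot the dimensionless combinations; curves for all t should collapse.
    plt.plot(centers / t**z, f / t**theta, label=f"t = {t}")

plt.xlabel("x / t^z")
plt.ylabel("f(x,t) / t^theta")
plt.legend()
plt.show()
```

Plotting $f$ against $x$ directly would give three distinct curves of different widths and heights; the collapse onto one curve appears only in the dimensionless variables.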

Examples

Many phenomena investigated by physicists are not static but evolve probabilistically with time (i.e. as stochastic processes). The universe itself is perhaps one of the best examples: it has been expanding ever since the Big Bang. Similarly, networks like the Internet are ever-growing systems. Another example is polymer degradation, [5] which does not occur in the blink of an eye but rather over quite a long time. The spread of biological and computer viruses, too, does not happen overnight.

Many other seemingly disparate systems are found to exhibit dynamic scaling. For example:

- the kinetics of clustering and Smoluchowski coagulation [6] [7]
- condensation-driven aggregation [8] [9] [10]
- the growth of Barabási–Albert networks [11]
- the kinetic and stochastic dyadic Cantor set [12]
- growing interfaces and ballistic deposition [13] [14]
- certain fractional non-homogeneous Poisson processes [15]

References

  1. Family, F.; Vicsek, T. (1985). "Scaling of the active zone in the Eden process on percolation networks and the ballistic deposition model". Journal of Physics A: Mathematical and General. 18 (2): L75–L81. Bibcode:1985JPhA...18L..75F. doi:10.1088/0305-4470/18/2/005.
  2. Vicsek, Tamás; Family, Fereydoon (1984-05-07). "Dynamic Scaling for Aggregation of Clusters". Physical Review Letters. 52 (19): 1669–1672. Bibcode:1984PhRvL..52.1669V. doi:10.1103/physrevlett.52.1669. ISSN 0031-9007.
  3. Hohenberg, Pierre Claude; Halperin, Bertrand Israel (1 July 1977). "Theory of dynamic critical phenomena". Reviews of Modern Physics. 49 (3): 435–479. Bibcode:1977RvMP...49..435H. doi:10.1103/RevModPhys.49.435. S2CID 122636335.
  4. Barenblatt, G. I. (1996). Scaling, Self-Similarity, and Intermediate Asymptotics. Cambridge: Cambridge University Press. ISBN 978-0-521-43522-2. OCLC 33946899.
  5. Ziff, R. M.; McGrady, E. D. (1985-10-21). "The kinetics of cluster fragmentation and depolymerisation". Journal of Physics A: Mathematical and General. 18 (15): 3027–3037. Bibcode:1985JPhA...18.3027Z. doi:10.1088/0305-4470/18/15/026. hdl:2027.42/48803. ISSN 0305-4470.
  6. van Dongen, P. G. J.; Ernst, M. H. (1985-04-01). "Dynamic Scaling in the Kinetics of Clustering". Physical Review Letters. 54 (13): 1396–1399. Bibcode:1985PhRvL..54.1396V. doi:10.1103/physrevlett.54.1396. ISSN 0031-9007. PMID 10031021.
  7. Kreer, Markus; Penrose, Oliver (1994). "Proof of dynamical scaling in Smoluchowski's coagulation equation with constant kernel". Journal of Statistical Physics. 75 (3): 389–407. Bibcode:1994JSP....75..389K. doi:10.1007/BF02186868. S2CID 17392921.
  8. Hassan, M. K.; Hassan, M. Z. (2009-02-19). "Emergence of fractal behavior in condensation-driven aggregation". Physical Review E. 79 (2): 021406. arXiv:0901.2761. Bibcode:2009PhRvE..79b1406H. doi:10.1103/physreve.79.021406. ISSN 1539-3755. PMID 19391746. S2CID 26023004.
  9. Hassan, M. K.; Hassan, M. Z. (2008-06-13). "Condensation-driven aggregation in one dimension". Physical Review E. 77 (6): 061404. arXiv:0806.4872. Bibcode:2008PhRvE..77f1404H. doi:10.1103/physreve.77.061404. ISSN 1539-3755. PMID 18643263. S2CID 32261771.
  10. Hassan, Md. Kamrul; Hassan, Md. Zahedul; Islam, Nabila (2013-10-24). "Emergence of fractals in aggregation with stochastic self-replication". Physical Review E. 88 (4): 042137. arXiv:1307.7804. Bibcode:2013PhRvE..88d2137H. doi:10.1103/physreve.88.042137. ISSN 1539-3755. PMID 24229145. S2CID 30562144.
  11. Hassan, M. Kamrul; Hassan, M. Zahedul; Pavel, Neeaj I. (2011-04-04). "Dynamic scaling, data-collapse and self-similarity in Barabási–Albert networks". Journal of Physics A: Mathematical and Theoretical. 44 (17): 175101. arXiv:1101.4730. Bibcode:2011JPhA...44q5101K. doi:10.1088/1751-8113/44/17/175101. ISSN 1751-8113. S2CID 15700641.
  12. Hassan, M. K.; Pavel, N. I.; Pandit, R. K.; Kurths, J. (2014). "Dyadic Cantor set and its kinetic and stochastic counterpart". Chaos, Solitons & Fractals. 60: 31–39. arXiv:1401.0249. Bibcode:2014CSF....60...31H. doi:10.1016/j.chaos.2013.12.010. ISSN 0960-0779. S2CID 14494072.
  13. Kardar, Mehran; Parisi, Giorgio; Zhang, Yi-Cheng (3 March 1986). "Dynamic Scaling of Growing Interfaces". Physical Review Letters. 56 (9): 889–892. Bibcode:1986PhRvL..56..889K. doi:10.1103/PhysRevLett.56.889. PMID 10033312.
  14. D'souza, Raissa M. (1997). "Anomalies in Simulations of Nearest Neighbor Ballistic Deposition". International Journal of Modern Physics C. 8 (4): 941–951. Bibcode:1997IJMPC...8..941D. doi:10.1142/s0129183197000813. ISSN 0129-1831.
  15. Kreer, Markus (2022). "An elementary proof for dynamical scaling for certain fractional non-homogeneous Poisson processes". Statistics & Probability Letters. 182: 109296. arXiv:2103.07381. doi:10.1016/j.spl.2021.109296. ISSN 0167-7152. S2CID 232222701.