Kinetic Monte Carlo

The kinetic Monte Carlo (KMC) method is a Monte Carlo computer-simulation method intended to simulate the time evolution of processes occurring in nature. Typically these are processes that occur with known transition rates among states. It is important to understand that these rates are inputs to the KMC algorithm; the method itself cannot predict them.

The KMC method is essentially the same as the dynamic Monte Carlo method and the Gillespie algorithm.

Algorithms

One possible classification of KMC algorithms is into rejection KMC (rKMC) and rejection-free KMC (rfKMC).

Rejection-free KMC

[Figure: At each step, the system can jump into one of several final states; the transition rates between the initial state and all possible final states are assumed to be known.]

[Figure: Choice of the final state: a random number is drawn uniformly between 0 and Γtot; the probability that the system jumps into state i is proportional to Γi.]

An rfKMC algorithm, often called simply KMC, for simulating the time evolution of a system where some processes can occur with known rates Γ_ki, can be written for instance as follows:

  1. Set the time t = 0.
  2. Choose an initial state k.
  3. Form the list of all N possible transition rates in the system, Γ_ki, from state k into a generic state i. States that do not communicate with k will have Γ_ki = 0.
  4. Calculate the cumulative function R_i = Σ_{j=1}^{i} Γ_kj for i = 1, …, N. The total rate is Q = R_N.
  5. Get a uniform random number u ∈ (0, 1].
  6. Find the event to carry out, i, by finding the i for which R_{i−1} < uQ ≤ R_i (this can be achieved efficiently using binary search).
  7. Carry out event i (update the current state k → i).
  8. Get a new uniform random number u′ ∈ (0, 1].
  9. Update the time with t = t + Δt, where Δt = −ln(u′)/Q.
  10. Return to step 3.

(Note: because the average value of −ln(u′) is equal to unity, the same average time scale can be obtained by instead using Δt = 1/Q in step 9. In this case, however, the delay associated with transition i will not be drawn from the exponential distribution with rate Q, but will instead be the mean of that distribution.)
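The steps above can be sketched in Python. This is a minimal illustration with hypothetical function and variable names, assuming the rate list out of the current state has already been formed (step 3):

```python
import bisect
import math
import random

def rfkmc_step(rates, t):
    """One rejection-free (BKL) KMC step.

    rates: list of transition rates out of the current state (Gamma_ki)
    t: current simulation time
    Returns (chosen event index i, updated time).
    """
    # Step 4: cumulative function R_i and total rate Q.
    cumulative = []
    total = 0.0
    for r in rates:
        total += r
        cumulative.append(total)
    # Steps 5-6: pick i with probability Gamma_ki / Q via binary search.
    u = 1.0 - random.random()           # uniform in (0, 1]
    i = bisect.bisect_left(cumulative, u * total)
    # Steps 8-9: advance time by an exponentially distributed interval.
    u2 = 1.0 - random.random()          # uniform in (0, 1]
    dt = -math.log(u2) / total          # Delta t = -ln(u') / Q
    return i, t + dt
```

Carrying out the event (step 7) and rebuilding the rate list for the new state are application-specific and are left outside the sketch.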

This algorithm is known in different sources variously as the residence-time algorithm, the n-fold way, or the Bortz–Kalos–Lebowitz (BKL) algorithm. It is important to note that the timestep Δt is a function of the probability that none of the events i occurred.

Rejection KMC

Rejection KMC typically has the advantages of easier data handling and faster computation for each attempted step, since the time-consuming task of obtaining all the Γ_ki is not needed. On the other hand, the time evolved at each step is smaller than for rfKMC. The relative weight of the pros and cons varies with the case at hand and with the available resources.

An rKMC algorithm associated with the same transition rates as above can be written as follows:

  1. Set the time t = 0.
  2. Choose an initial state k.
  3. Get the number N_k of all possible transition rates from state k into a generic state i.
  4. Find the candidate event to carry out, i, by uniformly sampling from the N_k transitions above.
  5. Accept the event with probability f_ki = Γ_ki/Γ_0, where Γ_0 is a suitable upper bound for Γ_ki. It is often easy to find Γ_0 without having to compute all the Γ_ki (e.g., for Metropolis transition rate probabilities).
  6. If accepted, carry out event i (update the current state k → i).
  7. Get a new uniform random number u′ ∈ (0, 1].
  8. Update the time with t = t + Δt, where Δt = −ln(u′)/(N_k Γ_0).
  9. Return to step 3.

(Note: Γ_0 can change from one MC step to another.) This algorithm is usually called the standard algorithm.
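The rejection variant can be sketched in the same style (hypothetical names; `gamma0` is assumed to bound every rate in the list, and a rejected attempt returns `None` while still advancing the clock):

```python
import math
import random

def rkmc_step(rates, gamma0, t):
    """One rejection-KMC step.

    rates: list of transition rates out of the current state
    gamma0: an upper bound for every rate in `rates`
    t: current simulation time
    Returns (accepted event index, or None if rejected, and updated time).
    """
    n = len(rates)
    # Step 4: pick a candidate uniformly -- no cumulative sums needed.
    i = random.randrange(n)
    # Step 5: accept with probability Gamma_ki / Gamma_0.
    accepted = random.random() < rates[i] / gamma0
    # Steps 7-8: time advances whether or not the event was accepted,
    # with mean 1 / (N_k * Gamma_0) per attempt.
    u = 1.0 - random.random()           # uniform in (0, 1]
    dt = -math.log(u) / (n * gamma0)
    return (i if accepted else None), t + dt
```

Note how the per-attempt cost is constant, at the price of time steps that are on average shorter than the rfKMC step 1/Q.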

Theoretical [1] and numerical [2] [3] comparisons between the algorithms have been provided.

Time-dependent algorithms

If the rates Γ_ki(t) are time dependent, step 9 in the rfKMC algorithm must be modified so that the time step Δt satisfies [4]

∫_t^(t+Δt) Q(τ) dτ = −ln(u′).

The reaction (step 6) then has to be chosen by

R_{i−1}(t + Δt) < u Q(t + Δt) ≤ R_i(t + Δt).

Another very similar algorithm is called the First Reaction Method (FRM). It consists of choosing the first-occurring reaction, meaning choosing the smallest time Δt_i, and the corresponding reaction number i, from the formula

∫_t^(t+Δt_i) Γ_ki(τ) dτ = −ln(u_i),

where the u_i are N random numbers.
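For the special case of constant rates, the First Reaction Method reduces to drawing one exponential waiting time per reaction and carrying out the one that fires first. A minimal sketch with hypothetical names:

```python
import math
import random

def frm_step(rates, t):
    """One First Reaction Method step for constant rates.

    Draws a tentative exponential waiting time for every reaction and
    selects the one with the smallest time.
    Returns (winning reaction index, updated time).
    """
    best_i, best_dt = None, math.inf
    for i, r in enumerate(rates):
        if r <= 0.0:
            continue                    # reaction i cannot occur
        u = 1.0 - random.random()       # uniform in (0, 1]
        dt = -math.log(u) / r           # waiting time drawn from Exp(r)
        if dt < best_dt:
            best_i, best_dt = i, dt
    return best_i, t + best_dt
```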

Comments on the algorithm

The key property of the KMC algorithm (and of the FRM one) is that if the rates are correct, if the processes associated with the rates are of the Poisson process type, and if different processes are independent (i.e. not correlated) then the KMC algorithm gives the correct time scale for the evolution of the simulated system. There was some debate about the correctness of the time scale for rKMC algorithms, but this was also rigorously shown to be correct. [1]

If furthermore the transitions follow detailed balance, the KMC algorithm can be used to simulate thermodynamic equilibrium. However, KMC is widely used to simulate non-equilibrium processes, [5] in which case detailed balance need not be obeyed.

The rfKMC algorithm is efficient in the sense that every iteration is guaranteed to produce a transition. However, in the form presented above it requires O(N) operations for each transition, which is not very efficient. In many cases this can be much improved on by binning the same kinds of transitions together, and/or by forming a tree data structure of the events. A constant-time scaling algorithm of this type has been developed and tested. [6]
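The binning idea can be illustrated with a simple two-level selection: events sharing an identical rate are grouped into a bin, the first search runs over bins rather than individual events, and the choice within a bin is uniform. This sketch (hypothetical names) only illustrates the principle; it is not the constant-time algorithm of reference [6]:

```python
import bisect
import random

def binned_select(bins):
    """Pick one event from events binned by identical rate.

    bins: list of (rate, events) pairs, where every event in a bin
    shares the same rate.  Selection costs O(#bins) here instead of
    O(#events); a tree over the bins would reduce it further.
    Returns (chosen event, total rate Q).
    """
    # First level: cumulative rate of each whole bin.
    cumulative, total = [], 0.0
    for rate, events in bins:
        total += rate * len(events)
        cumulative.append(total)
    u = 1.0 - random.random()           # uniform in (0, 1]
    b = bisect.bisect_left(cumulative, u * total)
    # Second level: within a bin, all events are equally likely.
    rate, events = bins[b]
    return random.choice(events), total
```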

The major disadvantage with rfKMC is that all possible rates and reactions have to be known in advance. The method itself can do nothing about predicting them. The rates and reactions must be obtained from other methods, such as diffusion (or other) experiments, molecular dynamics or density-functional theory simulations.

Examples of use

KMC has been used in simulations of the following physical systems:

  1. Surface diffusion
  2. Surface growth [7]
  3. Vacancy diffusion in alloys (this was the original use [8] )
  4. Coarsening of domain evolution
  5. Defect mobility and clustering in ion or neutron irradiated solids including, but not limited to, damage accumulation and amorphization/recrystallization models.
  6. Viscoelasticity of physically crosslinked networks [9]

To give an idea of what the "objects" and "events" may be in practice, here is one concrete simple example, corresponding to example 2 above.

Consider a system where individual atoms are deposited on a surface one at a time (typical of physical vapor deposition), but may also migrate on the surface with some known jump rate w. In this case the "objects" of the KMC algorithm are simply the individual atoms.

If two atoms come right next to each other, they become immobile. The flux of incoming atoms determines a deposition rate r_deposit, and the system can be simulated with KMC considering all deposited mobile atoms that have not (yet) met a counterpart and become immobile. This way, the following events are possible at each KMC step:

  1. A new atom is deposited, with rate r_deposit.
  2. An already deposited mobile atom jumps to a neighbouring site, with rate w.
After an event has been selected and carried out with the KMC algorithm, one then needs to check whether the new or just-jumped atom has become immediately adjacent to some other atom. If so, the atom(s) which are now adjacent need to be removed from the list of mobile atoms, and their jump events correspondingly removed from the list of possible events.
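The whole example can be put together as a toy one-dimensional simulation. All names and parameters here are hypothetical, and periodic boundaries are assumed for simplicity:

```python
import math
import random

def deposit_and_migrate(n_sites, r_dep, w, t_end):
    """Toy 1D KMC for the deposition example.

    New atoms land on random empty sites with total rate r_dep; mobile
    atoms hop to a random neighbouring site with rate w each; two atoms
    that become adjacent freeze each other.
    Returns (occupied sites, still-mobile sites).
    """
    occupied, mobile = set(), set()
    t = 0.0
    while t < t_end:
        # Event list: one deposition channel + one jump channel per mobile atom.
        total = r_dep + w * len(mobile)
        t += -math.log(1.0 - random.random()) / total
        if random.random() * total < r_dep:        # deposition event
            empty = [s for s in range(n_sites) if s not in occupied]
            if not empty:
                break                              # surface is full
            site = random.choice(empty)
        else:                                      # jump event
            atom = random.choice(sorted(mobile))
            dest = (atom + random.choice((-1, 1))) % n_sites
            if dest in occupied:
                continue                           # blocked attempt, rejected
            occupied.discard(atom)
            mobile.discard(atom)
            site = dest
        occupied.add(site)
        mobile.add(site)
        # Remove atoms that are now adjacent from the mobile list.
        for s in (site, (site - 1) % n_sites, (site + 1) % n_sites):
            if s in mobile and any((s + d) % n_sites in occupied for d in (-1, 1)):
                mobile.discard(s)
    return occupied, mobile
```

After each event the code performs exactly the adjacency check described above, pruning the frozen atoms (and hence their jump events) from the mobile list.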

Naturally, in applying KMC to problems in physics and chemistry, one first has to consider whether the real system follows the assumptions underlying KMC well enough. Real processes do not necessarily have well-defined rates, the transition processes may be correlated, and in the case of atom or particle jumps the jumps may not occur in random directions, and so on. When simulating widely disparate time scales one also needs to consider whether new processes may be present at longer time scales. If any of these issues apply, the time scale and system evolution predicted by KMC may be skewed or even completely wrong.

History

The first publication which described the basic features of the KMC method (namely using a cumulative function to select an event and a time scale calculation of the form 1/R) was by Young and Elcock in 1966. [8] The residence-time algorithm was also published at about the same time. [10]

Apparently independently of the work of Young and Elcock, Bortz, Kalos and Lebowitz [2] developed a KMC algorithm for simulating the Ising model, which they called the n-fold way. The basics of their algorithm are the same as those of Young, [8] but they provide much greater detail on the method.

The following year Dan Gillespie published what is now known as the Gillespie algorithm to describe chemical reactions. [11] The algorithm is similar and the time advancement scheme essentially the same as in KMC.

There is, as of this writing (June 2006), no definitive treatise on the theory of KMC, but Fichthorn and Weinberg have discussed the theory of thermodynamic equilibrium KMC simulations in detail. [12] Good introductions are given by Art Voter [13] and by A.P.J. Jansen, [14] and recent reviews are (Chatterjee 2007) [15] and (Chotia 2008). [16] The justification of KMC as a coarse-graining of Langevin dynamics using the quasi-stationary distribution approach has been developed by T. Lelièvre and collaborators. [17] [18]


In March 2006, what was probably the first commercial software using kinetic Monte Carlo to simulate the diffusion and activation/deactivation of dopants in silicon and silicon-like materials was released by Synopsys, as reported by Martin-Bragado et al. [19]

Varieties of KMC

The KMC method can be subdivided according to how the objects move or how the reactions occur. At least the following subdivisions are used: lattice KMC (LKMC), in which the objects occupy sites on a rigid lattice; object KMC (OKMC), in which defects or particles move in continuous space; and event or first-passage KMC (EKMC/FPKMC), in which the next reaction or encounter between objects is sampled directly.


References

  1. Serebrinsky, Santiago A. (31 March 2011). "Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains". Physical Review E. American Physical Society (APS). 83 (3): 037701. Bibcode:2011PhRvE..83c7701S. doi:10.1103/physreve.83.037701. ISSN 1539-3755. PMID 21517635.
  2. Bortz, A.B.; Kalos, M.H.; Lebowitz, J.L. (1975). "A new algorithm for Monte Carlo simulation of Ising spin systems". Journal of Computational Physics. Elsevier BV. 17 (1): 10–18. Bibcode:1975JCoPh..17...10B. doi:10.1016/0021-9991(75)90060-1. ISSN 0021-9991.
  3. Sadiq, Abdullah (1984). "A new algorithm for the Monte Carlo simulation of spin-exchange kinetics of Ising systems". Journal of Computational Physics. Elsevier BV. 55 (3): 387–396. Bibcode:1984JCoPh..55..387S. doi:10.1016/0021-9991(84)90028-7. ISSN   0021-9991.
  4. Prados, A.; Brey, J. J.; Sánchez-Rey, B. (1997). "A dynamical monte carlo algorithm for master equations with time-dependent transition rates". Journal of Statistical Physics. Springer Science and Business Media LLC. 89 (3–4): 709–734. Bibcode:1997JSP....89..709P. doi:10.1007/bf02765541. ISSN   0022-4715. S2CID   122985615.
  5. Meng, B.; Weinberg, W. H. (1994). "Monte Carlo simulations of temperature programmed desorption spectra". The Journal of Chemical Physics. AIP Publishing. 100 (7): 5280–5289. Bibcode:1994JChPh.100.5280M. doi:10.1063/1.467192. ISSN   0021-9606.
  6. Slepoy, Alexander; Thompson, Aidan P.; Plimpton, Steven J. (28 May 2008). "A constant-time kinetic Monte Carlo algorithm for simulation of large biochemical reaction networks". The Journal of Chemical Physics. AIP Publishing. 128 (20): 205101. Bibcode:2008JChPh.128t5101S. doi:10.1063/1.2919546. ISSN   0021-9606. PMID   18513044.
  7. Meng, B.; Weinberg, W.H. (1996). "Dynamical Monte Carlo studies of molecular beam epitaxial growth models: interfacial scaling and morphology". Surface Science. Elsevier BV. 364 (2): 151–163. Bibcode:1996SurSc.364..151M. doi:10.1016/0039-6028(96)00597-3. ISSN   0039-6028.
  8. Young, W M; Elcock, E W (1966). "Monte Carlo studies of vacancy migration in binary ordered alloys: I". Proceedings of the Physical Society. IOP Publishing. 89 (3): 735–746. Bibcode:1966PPS....89..735Y. doi:10.1088/0370-1328/89/3/329. ISSN 0370-1328.
  9. Baeurle, Stephan A.; Usami, Takao; Gusev, Andrei A. (2006). "A new multiscale modeling approach for the prediction of mechanical properties of polymer-based nanomaterials". Polymer. Elsevier BV. 47 (26): 8604–8617. doi:10.1016/j.polymer.2006.10.017. ISSN   0032-3861.
  10. D.R. Cox and H.D. Miller, The Theory of Stochastic Processes (Methuen, London), 1965, pp. 6–7.
  11. Gillespie, Daniel T (1976). "A general method for numerically simulating the stochastic time evolution of coupled chemical reactions". Journal of Computational Physics. Elsevier BV. 22 (4): 403–434. Bibcode:1976JCoPh..22..403G. doi:10.1016/0021-9991(76)90041-3. ISSN   0021-9991.
  12. Fichthorn, Kristen A.; Weinberg, W. H. (15 July 1991). "Theoretical foundations of dynamical Monte Carlo simulations". The Journal of Chemical Physics. AIP Publishing. 95 (2): 1090–1096. Bibcode:1991JChPh..95.1090F. doi:10.1063/1.461138. ISSN   0021-9606.
  13. A. F. Voter, Introduction to the Kinetic Monte Carlo Method, in Radiation Effects in Solids, edited by K. E. Sickafus and E. A. Kotomin (Springer, NATO Publishing Unit, Dordrecht, The Netherlands, 2005).
  14. A.P.J. Jansen, An Introduction To Monte Carlo Simulations of Surface Reactions, Condensed Matter, abstract cond-mat/0303028.
  15. Chatterjee, Abhijit; Vlachos, Dionisios G. (28 February 2007). "An overview of spatial microscopic and accelerated kinetic Monte Carlo methods". Journal of Computer-Aided Materials Design. Springer Science and Business Media LLC. 14 (2): 253–308. Bibcode:2007JCMD...14..253C. doi:10.1007/s10820-006-9042-9. ISSN   0928-1045. S2CID   53336314.
  16. Chotia, Amodsen; Viteau, Matthieu; Vogt, Thibault; Comparat, Daniel; Pillet, Pierre (30 April 2008). "Kinetic Monte Carlo modeling of dipole blockade in Rydberg excitation experiment". New Journal of Physics. IOP Publishing. 10 (4): 045031. arXiv: 0803.4481 . Bibcode:2008NJPh...10d5031C. doi: 10.1088/1367-2630/10/4/045031 . ISSN   1367-2630.
  17. Di Gesù, Giacomo; Lelièvre, Tony; Le Peutrec, Dorian; Nectoux, Boris (2016). "Jump Markov models and transition state theory: the Quasi-Stationary Distribution approach". Faraday Discussions. 195: 469–495. arXiv: 1605.02643 . Bibcode:2016FaDi..195..469D. doi:10.1039/C6FD00120C. ISSN   1364-5498. PMID   27740662. S2CID   25564764.
  18. Lelièvre, Tony (2018). "Mathematical foundations of Accelerated Molecular Dynamics methods". In Andreoni, Wanda; Yip, Sidney (eds.). Handbook of Materials Modeling. Springer. pp. 773–803. arXiv: 1801.05347 . doi:10.1007/978-3-319-44677-6_27. ISBN   978-3-319-44677-6.
  19. Martin-Bragado, Ignacio; Tian, S.; Johnson, M.; Castrillo, P.; Pinacho, R.; Rubio, J.; Jaraiz, M. (2006). "Modeling charged defects, dopant diffusion and activation mechanisms for TCAD simulations using kinetic Monte Carlo". Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. Elsevier BV. 253 (1–2): 63–67. Bibcode:2006NIMPB.253...63M. doi:10.1016/j.nimb.2006.10.035. ISSN   0168-583X.
  20. Mason, D.R.; Hudson, T.S.; Sutton, A.P. (January 2005). "Fast recall of state-history in kinetic Monte Carlo simulations utilizing the Zobrist key". Computer Physics Communications. 165 (1): 37–48. Bibcode:2005CoPhC.165...37M. doi:10.1016/j.cpc.2004.09.007.
  21. Dalla Torre, J.; Bocquet, J.-L.; Doan, N. V.; Adam, E.; Barbu, A. (2005). "JERK, an event-based Kinetic Monte Carlo model to predict microstructure evolution of materials under irradiation". Philosophical Magazine. Informa UK Limited. 85 (4–7): 549–558. Bibcode:2005PMag...85..549D. doi:10.1080/02678370412331320134. ISSN   1478-6435. S2CID   96878847.
  22. Opplestrup, Tomas; Bulatov, Vasily V.; Gilmer, George H.; Kalos, Malvin H.; Sadigh, Babak (4 December 2006). "First-Passage Monte Carlo Algorithm: Diffusion without All the Hops". Physical Review Letters. American Physical Society (APS). 97 (23): 230602. Bibcode:2006PhRvL..97w0602O. doi:10.1103/physrevlett.97.230602. ISSN   0031-9007. PMID   17280187.