A mathematical or physical process is **time-reversible** if the dynamics of the process remain well-defined when the sequence of time-states is reversed.

A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process.

In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation:

- *U*_{−τ} = π *U*_{τ} π

Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π.

In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. **p** → −**p** (T-symmetry).
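This reversibility can be demonstrated numerically. The sketch below (illustrative, not from the source) integrates a harmonic oscillator with the time-symmetric velocity-Verlet scheme, applies the involution π by flipping the momentum, and integrates forward again: the system retraces its trajectory back to the initial state up to floating-point roundoff.

```python
# Time-reversal of classical dynamics for a harmonic oscillator (F = -k x),
# using velocity-Verlet integration, which is itself a time-symmetric scheme:
# evolving forward, applying pi (momentum reversal), and evolving forward
# again recovers the initial state.

def verlet(x, v, steps, dt=0.01, k=1.0, m=1.0):
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = -k * x / m                # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 1000)    # evolve forward
x2, v2 = verlet(x1, -v1, 1000)   # apply pi (reverse momentum), evolve again
print(abs(x2 - x0) < 1e-9, abs(v2 + v0) < 1e-9)
```

The choice of a symmetric integrator matters here: a plain forward-Euler step is not time-symmetric, so the reversal would only hold approximately.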

In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.

Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws underlying thermodynamic processes are all time-reversible (the classical laws of motion and the laws of electrodynamics),^{ [1] } which means that on the microscopic level, if one could keep track of all the particles and all the degrees of freedom, the many-body dynamics would be reversible. However, such an analysis is beyond the capability of any human being (or artificial intelligence), and the macroscopic properties (such as entropy and temperature) of a many-body system are only *defined* from the statistics of ensembles. When we talk about such macroscopic properties in thermodynamics, we can in certain cases see irreversibility in their time evolution at a statistical level. Indeed, the second law of thermodynamics states that the entropy of the entire universe must not decrease, not because the probability of a decrease is zero, but because it is so unlikely as to be a *statistical impossibility* for all practical purposes (see the Crooks fluctuation theorem).

A stochastic process is time-reversible if the joint probabilities of the forward and reverse state sequences are the same for all sets of time increments { *τ*_{s} }, for *s* = 1, ..., *k* for any *k*:^{ [2] }

- *p*(*x*_{t}, *x*_{t+τ_{1}}, ..., *x*_{t+τ_{k}}) = *p*(*x*_{t}, *x*_{t−τ_{1}}, ..., *x*_{t−τ_{k}})

A univariate stationary Gaussian process is time-reversible. Markov processes can only be reversible if their stationary distributions have the property of detailed balance:

- *π*_{i} *p*_{ij} = *π*_{j} *p*_{ji}

Kolmogorov's criterion defines the condition for a Markov chain or continuous-time Markov chain to be time-reversible.
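As a small numerical illustration (the transition matrix is invented for the example), the following checks detailed balance π_{i} p_{ij} = π_{j} p_{ji} for a birth–death-type chain, whose stationary distribution is obtained as the left eigenvector of the transition matrix for eigenvalue 1:

```python
import numpy as np

# Detailed-balance check for a discrete-time Markov chain: compute the
# stationary distribution pi of transition matrix P, then verify that the
# flux matrix F with F[i, j] = pi[i] * P[i, j] is symmetric. This chain is
# a random walk on a line, which is reversible.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()                       # normalise (also fixes the sign)

F = pi[:, None] * P                  # probability fluxes pi_i * p_ij
balanced = np.allclose(F, F.T)       # detailed balance <=> F symmetric
print(balanced)
```

For this chain the stationary distribution is (1/4, 1/2, 1/4), and every flux π_{i} p_{ij} is matched by its reverse, so the check passes.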

Time reversal of numerous classes of stochastic processes has been studied, including Lévy processes,^{ [3] } stochastic networks (Kelly's lemma),^{ [4] } birth and death processes,^{ [5] } Markov chains,^{ [6] } and piecewise deterministic Markov processes.^{ [7] }

Time-reversal methods rely on the linear reciprocity of the wave equation: the time-reversed solution of a wave equation is also a solution, because standard wave equations contain only even-order time derivatives of the unknown variables.^{ [8] } The wave equation is therefore symmetric under time reversal, and the time reversal of any valid solution is also a solution. This means that a wave's path through space is valid when travelled in either direction.
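This symmetry survives discretisation: the standard second-order leapfrog scheme for the 1-D wave equation involves only even time differences, so exchanging the two most recent snapshots and stepping "forward" retraces the wave exactly. A minimal sketch (grid size, pulse shape, and periodic boundaries are arbitrary choices for the example):

```python
import numpy as np

# Time reversal of the 1-D wave equation u_tt = c^2 u_xx, discretised with
# the second-order leapfrog scheme u^{n+1} = 2u^n - u^{n-1} + r^2 * lap(u^n).
# The scheme is exactly symmetric under t -> -t: swapping the two most
# recent time levels and stepping forward runs the wave backwards.
n, r2, steps = 100, 0.25, 200        # grid points, (c*dt/dx)^2, time steps

def step(u_prev, u_curr):
    lap = np.roll(u_curr, 1) - 2 * u_curr + np.roll(u_curr, -1)  # periodic
    return 2 * u_curr - u_prev + r2 * lap

x = np.linspace(0, 1, n, endpoint=False)
u0 = np.exp(-200 * (x - 0.5) ** 2)   # initial Gaussian pulse, at rest
u_prev, u_curr = u0.copy(), u0.copy()
for _ in range(steps):               # propagate forward
    u_prev, u_curr = u_curr, step(u_prev, u_curr)

u_prev, u_curr = u_curr, u_prev      # time reversal: swap the snapshots
for _ in range(steps):               # run the same scheme "forward"
    u_prev, u_curr = u_curr, step(u_prev, u_curr)

print(np.max(np.abs(u_curr - u0)) < 1e-9)   # pulse returns to its origin
```

The recovery is exact (to roundoff) because the reversed iteration applies the identical update rule, just as the continuous equation admits the reversed solution.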

Time-reversal signal processing^{ [9] } uses this property: a received signal is recorded, time-reversed, and re-emitted; temporal compression then occurs, and a time-reversed copy of the initial excitation waveform is focused back at the original source.

- ↑ David Albert, *Time and Chance*.
- ↑ Tong (1990), Section 4.4.
- ↑ Jacod, J.; Protter, P. (1988). "Time Reversal on Lévy Processes". *The Annals of Probability*. **16** (2): 620. doi:10.1214/aop/1176991776. JSTOR 2243828.
- ↑ Kelly, F. P. (1976). "Networks of Queues". *Advances in Applied Probability*. **8** (2): 416–432. doi:10.2307/1425912. JSTOR 1425912.
- ↑ Tanaka, H. (1989). "Time Reversal of Random Walks in One-Dimension". *Tokyo Journal of Mathematics*. **12**: 159–174. doi:10.3836/tjm/1270133555.
- ↑ Norris, J. R. (1998). *Markov Chains*. Cambridge University Press. ISBN 978-0521633963.
- ↑ Löpker, A.; Palmowski, Z. (2013). "On time reversal of piecewise deterministic Markov processes". *Electronic Journal of Probability*. **18**. arXiv:1110.3813. doi:10.1214/EJP.v18-1958.
- ↑ Parvasi, S. M.; Ho, S. C. M.; Kong, Q.; Mousavi, R.; Song, G. (2016). "Real time bolt preload monitoring using piezoceramic transducers and time reversal technique—a numerical study with experimental verification". *Smart Materials and Structures*. **25** (8): 085015. doi:10.1088/0964-1726/25/8/085015.
- ↑ Anderson, B. E.; Griffa, M.; Larmat, C.; Ulrich, T. J.; Johnson, P. A. (2008). "Time reversal". *Acoustics Today*. **4** (1): 5–16. https://acousticstoday.org/time-reversal-brian-e-anderson/

A **Markov chain** or **Markov process** is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs *now*." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.

In mathematics, a **stochastic matrix** is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a **probability matrix**, **transition matrix**, **substitution matrix**, or **Markov matrix**. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics. There are several different definitions and types of stochastic matrix: a **right stochastic matrix** has each row summing to 1, a **left stochastic matrix** has each column summing to 1, and a **doubly stochastic matrix** has both its rows and its columns summing to 1.
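A short sketch (the matrix entries are invented for the example) of a right stochastic matrix in use: each row holds the transition probabilities out of one state, and a probability row-vector evolves by right-multiplication at each step.

```python
import numpy as np

# A right stochastic matrix: nonnegative entries, each row summing to 1.
# Row i holds the transition probabilities out of state i; a distribution
# row-vector d evolves as d @ P per step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

d = np.array([1.0, 0.0])     # start surely in state 0
for _ in range(50):
    d = d @ P                # one Markov step
print(np.round(d, 3))        # converges to the stationary distribution
```

For this matrix the stationary distribution is (5/6, 1/6), which the iteration approaches geometrically fast.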

In probability theory and statistics, the term **Markov property** refers to the memoryless property of a stochastic process, which means that, given its present state, its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. The **strong Markov property** is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

In mathematics and statistics, a **stationary process** is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as the mean and variance do not change over time. Intuitively, a trend line drawn through the middle of a stationary process should be flat; the series may show 'seasonal' cycles around that line, but overall it trends neither up nor down.

In physics, chemistry, and related fields, **master equations** are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
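A minimal sketch of such a system (the rates are invented for the example): a two-state master equation d*p*/dt = *p* Q with transition-rate matrix Q, integrated with small forward-Euler steps for illustration rather than a production ODE solver.

```python
import numpy as np

# Master equation dp/dt = p @ Q for a two-state system. Q is a transition
# rate matrix: nonnegative off-diagonal rates, rows summing to 0 (so total
# probability is conserved). Forward-Euler integration, for illustration.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])    # rate 0->1 is 2, rate 1->0 is 1
p = np.array([1.0, 0.0])        # start surely in state 0
dt = 1e-3
for _ in range(20_000):         # integrate to t = 20
    p = p + dt * (p @ Q)
print(np.round(p, 3))           # near the equilibrium (1/3, 2/3)
```

At equilibrium the probability fluxes balance: 2·(1/3) into state 1 equals 1·(2/3) back into state 0.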

A **continuous-time Markov chain** (**CTMC**) is a continuous-time stochastic process in which the process remains in each state for an exponentially distributed holding time and then moves to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with their parameters determined by the current state.
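The first formulation translates directly into a simulation: draw an exponential holding time for the current state, then pick the next state from the corresponding row of the jump matrix. The rates and matrix below are invented for the example.

```python
import random

# Minimal CTMC simulation: hold in state i for an Exponential(rate[i]) time,
# then jump according to row i of the jump (stochastic) matrix J.
rate = [1.0, 2.0, 0.5]          # exit rates per state (illustrative)
J = [[0.0, 0.7, 0.3],           # jump probabilities; diagonal is 0 since a
     [0.5, 0.0, 0.5],           # jump always moves to a *different* state
     [0.9, 0.1, 0.0]]

def simulate(state, t_end, seed=0):
    rng = random.Random(seed)
    t, jumps = 0.0, 0
    while True:
        t += rng.expovariate(rate[state])       # exponential holding time
        if t > t_end:
            return state, jumps
        state = rng.choices((0, 1, 2), weights=J[state])[0]
        jumps += 1

state, jumps = simulate(0, t_end=100.0)
print(state, jumps)
```

The equivalent "race of exponentials" formulation would instead draw one exponential clock per possible destination and take the minimum; both produce the same law for the process.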

A **stochastic differential equation** (**SDE**) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
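A common way to work with an SDE numerically is the Euler–Maruyama scheme, sketched below for geometric Brownian motion, the standard stock-price model mentioned above (the drift and volatility parameters are invented for the example).

```python
import random, math

# Euler-Maruyama discretisation of the geometric Brownian motion SDE
#   dS = mu * S dt + sigma * S dW,
# where the Brownian increment dW is replaced by a Normal(0, sqrt(dt)) draw.
def euler_maruyama(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n=1000, seed=42):
    rng = random.Random(seed)
    dt = t / n
    s = s0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))       # Brownian increment
        s += mu * s * dt + sigma * s * dw        # one Euler-Maruyama step
    return s

s = euler_maruyama()
print(s)   # one simulated price after one year
```

Each run of the scheme produces one sample path; expectations of functionals of the solution are then estimated by averaging over many seeds (Monte Carlo).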

The principle of **detailed balance** can be used in kinetic systems which are decomposed into elementary processes. It states that at equilibrium, each elementary process is in equilibrium with its reverse process.

A **cyclostationary process** is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year.

In probability, a **discrete-time Markov chain** (**DTMC**) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not any variables in the past. For instance, a machine may have two states, *A* and *E*. When it is in state *A*, there is a 40% chance of it moving to state *E* and a 60% chance of it remaining in state *A*. When it is in state *E*, there is a 70% chance of it moving to *A* and a 30% chance of it staying in *E*. The sequence of states of the machine is a Markov chain. If we denote the chain by (*X*_{n}), then *X*_{0} is the state in which the machine starts and *X*_{10} is the random variable describing its state after 10 transitions. The process continues forever, indexed by the natural numbers.
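The two-state machine described above can be simulated directly; the long-run fraction of time spent in *A* approaches the stationary probability 7/11 ≈ 0.64, obtained by balancing the flows 0.4·π_A = 0.7·π_E.

```python
import random

# Simulating the two-state machine: from A, move to E with probability 0.4
# (else stay); from E, move to A with probability 0.7 (else stay). Track the
# empirical fraction of steps spent in state A.
def fraction_in_a(steps, seed=0):
    rng = random.Random(seed)
    state, in_a = "A", 0
    for _ in range(steps):
        if state == "A":
            state = "E" if rng.random() < 0.4 else "A"
        else:
            state = "A" if rng.random() < 0.7 else "E"
        in_a += state == "A"
    return in_a / steps

print(fraction_in_a(100_000))   # close to 7/11
```

The empirical fraction fluctuates around 7/11 with an error shrinking roughly like the inverse square root of the number of steps.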

In probability theory, the **Gillespie algorithm** generates a statistically correct trajectory of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others, presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he uses it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power. As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology.
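A minimal sketch of the algorithm's direct method, for a hypothetical reversible isomerisation A ⇌ B (the rate constants and counts are invented for the example): the waiting time to the next reaction is exponential in the total propensity, and which reaction fires is chosen in proportion to its propensity.

```python
import random

# Gillespie direct method for A <-> B with rate constants k1 (A -> B) and
# k2 (B -> A). Propensities are k1*nA and k2*nB; the time to the next event
# is Exponential(total propensity).
def gillespie(nA, nB, k1=1.0, k2=0.5, t_end=50.0, seed=1):
    rng = random.Random(seed)
    t = 0.0
    while True:
        a1, a2 = k1 * nA, k2 * nB        # reaction propensities
        a0 = a1 + a2
        if a0 == 0:                      # no molecules left to react
            break
        t += rng.expovariate(a0)         # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a1:       # pick a reaction by propensity
            nA, nB = nA - 1, nB + 1      # A -> B fires
        else:
            nA, nB = nA + 1, nB - 1      # B -> A fires
    return nA, nB

nA, nB = gillespie(100, 0)
print(nA, nB)   # total count nA + nB stays at 100
```

Because every individual reaction event is sampled, the method captures the fluctuations that matter at low molecule counts, which a deterministic rate equation averages away.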

The **Gittins index** is a measure of the reward that can be achieved through a given stochastic process with certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.

In mathematics – specifically, in stochastic analysis – an **Itô diffusion** is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô.

In queueing theory, a discipline within the mathematical theory of probability, **quasireversibility** is a property of some queues. The concept was first identified by Richard R. Muntz and further developed by Frank Kelly. Quasireversibility differs from reversibility in that a stronger condition is imposed on arrival rates and a weaker condition is applied on probability fluxes. For example, an M/M/1 queue with state-dependent arrival rates and state-dependent service times is reversible, but not quasireversible.

In probability theory, a **piecewise-deterministic Markov process (PDMP)** is a process whose behaviour is governed by random jumps at points in time, but whose evolution is deterministically governed by an ordinary differential equation between those times. The class of models is "wide enough to include as special cases virtually all the non-diffusion models of applied probability." The process is defined by three quantities: the flow, the jump rate, and the transition measure.

In queueing theory, a discipline within the mathematical theory of probability, an **M/D/1 queue** represents the queue length in a system having a single server, where arrivals are determined by a Poisson process and job service times are fixed (deterministic). The model name is written in Kendall's notation. Agner Krarup Erlang first published on this model in 1909, starting the subject of queueing theory. An extension of this model with more than one server is the M/D/c queue.
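The M/D/1 waiting time can be simulated with the Lindley recursion W_{n+1} = max(0, W_n + s − A_n), where s is the fixed service time and A_n the exponential interarrival time; the long-run mean wait should approach the Pollaczek–Khinchine value λs²/(2(1 − λs)). The parameters below are invented for the example.

```python
import random

# M/D/1 queue via the Lindley recursion: W_{n+1} = max(0, W_n + s - A_n),
# with Poisson arrivals (exponential interarrival times, rate lam) and a
# deterministic service time s. Returns the empirical mean waiting time.
def mean_wait(lam=0.5, s=1.0, n=200_000, seed=7):
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w                               # wait of the nth customer
        w = max(0.0, w + s - rng.expovariate(lam))
    return total / n

print(mean_wait())   # near lam*s**2 / (2*(1 - lam*s)) = 0.5 here
```

With λ = 0.5 and s = 1 the utilisation is ρ = 0.5, and the simulated mean wait settles near the theoretical 0.5.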

In probability theory, **Kelly's lemma** states that for a stationary continuous-time Markov chain, a process defined as the time-reversed process has the same stationary distribution as the forward-time process. The theorem is named after Frank Kelly.
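This can be checked numerically (the rate matrix is invented for the example): given a CTMC with rate matrix Q and stationary distribution π, the reversed chain has rates q′(i, j) = π_j q(j, i)/π_i, and π is stationary for the reversed chain as well.

```python
import numpy as np

# Numerical illustration of Kelly's lemma for a 3-state CTMC. Q is a rate
# matrix (rows sum to 0). The time-reversed chain has rates
#   q'(i, j) = pi[j] * q(j, i) / pi[i],
# and the stationary distribution pi is unchanged.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

# stationary distribution: solve pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

Qrev = (pi[None, :] * Q.T) / pi[:, None]    # reversed-chain rates
print(np.allclose(pi @ Qrev, 0))            # pi is stationary for Qrev too
```

The reversed matrix is again a valid rate matrix (its rows sum to zero), and for a reversible chain one would find Qrev equal to Q.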

In mathematics, a **continuous-time random walk** (**CTRW**) is a generalization of a random walk where the wandering particle waits for a random time between jumps. It is a stochastic jump process with arbitrary distributions of jump lengths and waiting times. More generally it can be seen to be a special case of a Markov renewal process.
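A minimal CTRW sketch (the waiting-time and jump distributions are arbitrary choices for the example): the particle alternates random waits and random jumps, and its position at a given time is the sum of the jumps made so far.

```python
import random

# Continuous-time random walk: Pareto-distributed waiting times between
# jumps (heavier-tailed than exponential) and Gaussian jump lengths.
def ctrw(t_end, seed=3):
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    while True:
        t += rng.paretovariate(1.5)     # random wait before the next jump
        if t > t_end:
            return x                    # position at time t_end
        x += rng.gauss(0.0, 1.0)        # random jump length

x = ctrw(1000.0)
print(x)
```

Replacing the exponential waiting times of an ordinary Markov jump process with heavy-tailed ones is what makes the CTRW non-Markovian and a standard model for anomalous diffusion.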

In stochastic processes, the **Kramers–Moyal expansion** refers to a Taylor series expansion of the master equation, named after Hans Kramers and José Enrique Moyal. In many textbooks, the expansion is used only to derive the Fokker–Planck equation and is never used again. In general, continuous stochastic processes are essentially Markovian, and so Fokker–Planck equations are sufficient for studying them. The higher-order terms of the Kramers–Moyal expansion come into play only when the process is jumpy, which usually means it is a Poisson-like process.

The **separation principle** is one of the fundamental principles of stochastic control theory. It states that, under certain conditions, the problems of optimal control and state estimation can be decoupled. In its most basic formulation it deals with a linear stochastic system, for which the optimal policy combines an optimal state estimator (a Kalman filter) with an optimal deterministic controller.

- Isham, V. (1991). "Modelling stochastic phenomena". In *Stochastic Theory and Modelling*, Hinkley, D. V., Reid, N., Snell, E. J. (eds). Chapman and Hall. ISBN 978-0-412-30590-0.
- Tong, H. (1990). *Non-linear Time Series: A Dynamical System Approach*. Oxford University Press. ISBN 0-19-852300-9.


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
