Markovian

Markovian is an adjective that may describe anything related to Markov chains, Markov processes, the Markov property, Markov decision processes, or other concepts named after the Russian mathematician Andrey Markov; see the related articles below.

Related Research Articles

Stochastic process: collection of random variables

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, cryptography and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.

Markov chain: random process whose future is independent of its past given its present state

A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
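To make the definition concrete, here is a minimal Python sketch of a discrete-time Markov chain; the two states and the transition matrix are illustrative assumptions, not taken from the article.

    import numpy as np

    # Hypothetical two-state chain; states and probabilities are made up
    # purely for illustration.
    states = ["sunny", "rainy"]
    P = np.array([[0.9, 0.1],   # P[i, j] = probability of moving from i to j
                  [0.5, 0.5]])

    def simulate_dtmc(P, start, n_steps, rng):
        """Sample a DTMC trajectory: each step depends only on the current state."""
        path = [start]
        for _ in range(n_steps):
            path.append(rng.choice(len(P), p=P[path[-1]]))
        return path

    path = simulate_dtmc(P, start=0, n_steps=10, rng=np.random.default_rng(0))
    print([states[i] for i in path])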

A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process, call it X, with unobservable ("hidden") states. As part of the definition, HMM requires that there be an observable process Y whose outcomes are "influenced" by the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about X by observing Y. HMM has an additional requirement that the outcome of Y at time t = t₀ must be "influenced" exclusively by the outcome of X at t = t₀ and that the outcomes of X and Y at t < t₀ must not affect the outcome of Y at t = t₀.
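These two dependence requirements (Y at time t₀ depends only on X at t₀, and X is itself Markov) are exactly what the classical forward algorithm exploits to compute the likelihood of an observation sequence. A minimal sketch, with all parameter values assumed for illustration:

    import numpy as np

    # Illustrative two-state HMM; A, B and pi are assumptions, not from any source.
    A = np.array([[0.7, 0.3],    # A[i, j] = P(X_{t+1} = j | X_t = i)
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],    # B[i, k] = P(Y_t = k | X_t = i)
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])    # initial distribution of X

    def likelihood(observations):
        """Forward algorithm: P(Y_1..Y_T) summed over all hidden paths."""
        alpha = pi * B[:, observations[0]]
        for y in observations[1:]:
            alpha = (alpha @ A) * B[:, y]   # predict X, then weight by emission
        return alpha.sum()

    print(likelihood([0, 0, 1, 0]))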

Gauss–Markov stochastic processes are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. A stationary Gauss–Markov process is unique up to rescaling; such a process is also known as an Ornstein–Uhlenbeck process.
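Such a process can be simulated by discretizing the Ornstein–Uhlenbeck SDE dX_t = θ(μ − X_t) dt + σ dW_t with the Euler–Maruyama scheme; a minimal sketch with assumed parameter values:

    import numpy as np

    theta, mu, sigma = 1.0, 0.0, 0.3   # illustrative parameters
    dt, n_steps = 0.01, 1000

    rng = np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = 1.0
    for t in range(n_steps):
        # The update uses only x[t] plus independent Gaussian noise, so the
        # discretized process is both Gaussian and Markov.
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()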

Markov property: memoryless property of a stochastic process

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

In physics, chemistry and related fields, master equations are used to describe the time evolution of a system that can be modelled as being in a probabilistic combination of states at any given time, with the switching between states determined by a transition rate matrix. The equations are a set of differential equations, over time, for the probabilities that the system occupies each of the different states.
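One common generic form, where P_i(t) is the probability of occupying state i and W_{ij} denotes the rate of the transition j → i (the index convention here is an assumption; conventions vary):

    \frac{dP_i(t)}{dt} = \sum_{j \neq i} \left( W_{ij} P_j(t) - W_{ji} P_i(t) \right)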

In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
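As a sketch of the dynamic-programming connection, value iteration repeatedly applies the Bellman optimality update until convergence; the array layout P[a, s, s'] and R[s, a] is a convention assumed here for illustration.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """Solve a finite MDP by value iteration.

        P[a, s, s'] -- transition probabilities under action a
        R[s, a]     -- expected immediate reward
        """
        V = np.zeros(P.shape[1])
        while True:
            # Bellman optimality update: best one-step lookahead in each state.
            Q = R + gamma * np.einsum("ast,t->sa", P, V)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)   # optimal values and policy
            V = V_new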

A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed.

In applied probability, a Markov additive process (MAP) is a bivariate Markov process where the future states depend only on one of the variables.

In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes like Markov chains, Poisson processes and renewal processes can be derived as special cases of MRPs.

The Gittins index is a measure of the reward that can be achieved through a given stochastic process with certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.

In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process where the time between each arrival is exponentially distributed.
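Because exponential interarrival times are independent and identically distributed, a Poisson arrival stream is straightforward to sample; a minimal sketch with an assumed rate:

    import numpy as np

    lam, n_jobs = 2.0, 10   # illustrative arrival rate and number of jobs
    rng = np.random.default_rng(0)
    interarrivals = rng.exponential(scale=1.0 / lam, size=n_jobs)
    arrival_times = np.cumsum(interarrivals)   # epochs of the Poisson process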

In queueing theory, a discipline within the mathematical theory of probability, Burke's theorem is a theorem asserting that, for the M/M/1 queue, M/M/c queue or M/M/∞ queue in the steady state whose arrivals form a Poisson process with rate parameter λ, both of the following hold (a toy simulation check follows the list):

  1. The departure process is a Poisson process with rate parameter λ.
  2. At time t the number of customers in the queue is independent of the departure process prior to time t.
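As a toy empirical check of the first claim (rates and sample size assumed for illustration), simulating an M/M/1 queue and inspecting the interdeparture times should show them exponentially distributed with rate λ, i.e. with mean and standard deviation both close to 1/λ:

    import numpy as np

    lam, mu, n = 1.0, 2.0, 100_000   # illustrative rates, lam < mu for stability
    rng = np.random.default_rng(0)

    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=n))
    departures = np.empty(n)
    free_at = 0.0                        # time the single server next becomes free
    for i, a in enumerate(arrivals):
        start = max(a, free_at)          # job waits if the server is busy
        free_at = start + rng.exponential(1.0 / mu)
        departures[i] = free_at

    gaps = np.diff(departures[n // 2:])  # discard the first half as warm-up
    print(gaps.mean(), gaps.std())       # both ≈ 1/lam = 1.0 under Burke's theorem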

In probability theory, the uniformization method is a method to compute transient solutions of finite-state continuous-time Markov chains, by approximating the process by a discrete-time Markov chain. The original chain is scaled by the fastest transition rate γ, so that transitions occur at the same rate in every state, hence the name. The method is simple to program and efficiently calculates an approximation to the transient distribution at a single point in time. The method was first introduced by Winfried Grassmann in 1977.
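A minimal sketch of the method, assuming SciPy is available; truncating the Poisson series by its tail probability, as done here, is one common choice rather than necessarily Grassmann's original formulation:

    import numpy as np
    from scipy.stats import poisson

    def transient_distribution(Q, p0, t, eps=1e-10):
        """Distribution of a finite-state CTMC at time t via uniformization.

        Q  -- generator matrix (rows sum to zero)
        p0 -- initial probability distribution
        """
        gamma = np.max(-np.diag(Q))           # fastest transition rate
        P = np.eye(len(Q)) + Q / gamma        # the approximating discrete-time chain
        K = int(poisson.isf(eps, gamma * t))  # truncation point of the series
        weights = poisson.pmf(np.arange(K + 1), gamma * t)
        pk, out = np.asarray(p0, float), np.zeros(len(Q))
        for w in weights:
            out += w * pk                     # Poisson-weighted mixture of DTMC steps
            pk = pk @ P
        return out

    Q = np.array([[-2.0,  2.0],
                  [ 1.0, -1.0]])              # illustrative two-state generator
    print(transient_distribution(Q, np.array([1.0, 0.0]), t=0.5))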

Kinetic scheme

In physics, chemistry and related fields, a kinetic scheme is a network of states and connections between them representing the scheme of a dynamical process. Usually a kinetic scheme represents a Markovian process, while for non-Markovian processes generalized kinetic schemes are used.

In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process. The equation was derived independently by both the British mathematician Sydney Chapman and the Russian mathematician Andrey Kolmogorov.
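For a Markov process with transition densities and times t₁ < t₂ < t₃, the identity reads as follows (the discrete-state analogue replaces the integral with a sum):

    p(x_3, t_3 \mid x_1, t_1) = \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, dx_2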

Jacobus (Jaap) Wessels was a Dutch mathematician and Professor of Stochastic Operations Research at the Eindhoven University of Technology, known for his contributions in the field of Markov decision processes.

Éric Moulines: French researcher in statistical learning

Éric Moulines is a French researcher in statistical learning and signal processing. He received the silver medal from the CNRS in 2010 and the France Télécom prize, awarded in collaboration with the French Academy of Sciences, in 2011. He was appointed a Fellow of the European Association for Signal Processing in 2012 and of the Institute of Mathematical Statistics in 2016. He is General Engineer of the Corps des Mines (X81).

In quantum mechanics, a quantum Markov semigroup describes the dynamics in a Markovian open quantum system. The axiomatic definition of the prototype of quantum Markov semigroups was first introduced by A. M. Kossakowski in 1972, and then developed by V. Gorini, A. M. Kossakowski, E. C. G. Sudarshan and Göran Lindblad in 1976.
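The generator of such a semigroup takes the GKSL (Lindblad) form; written for a density matrix ρ, Hamiltonian H and jump operators L_k, it reads:

    \frac{d\rho}{dt} = -\frac{i}{\hbar}[H, \rho]
      + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\left\{ L_k^\dagger L_k, \rho \right\} \right)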