Killed process

In probability theory, specifically in stochastic analysis, a killed process is a stochastic process that is forced to assume an undefined or "killed" state at some (possibly random) time.

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
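Stated concretely (a standard textbook formulation, not spelled out in this article), the probability measure P on the sample space Ω satisfies Kolmogorov's axioms:

\[
P(A) \ge 0 \text{ for every event } A \in \Sigma, \qquad
P(\Omega) = 1, \qquad
P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
\text{ for pairwise disjoint } A_1, A_2, \ldots \in \Sigma.
\]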

Definition

Let X : T × Ω → S be a stochastic process defined for "times" t in some ordered index set T, on a probability space (Ω, Σ, P), and taking values in a measurable space S. Let ζ : Ω → T be a random time, referred to as the killing time. Then the killed process Y associated to X is defined by

Y_t = X_t for t < ζ,

and Y_t is left undefined for t ≥ ζ. Alternatively, one may set Y_t = c for t ≥ ζ, where c is a "coffin state" not in S.

In mathematics, an index set is a set whose members label members of another set. For instance, if the elements of a set A may be indexed or labeled by means of the elements of a set J, then J is an index set. The indexing consists of a surjective function from J onto A, and the indexed collection is typically called an (indexed) family, often written as {A_j}_{j ∈ J}.

In probability theory, a probability space or a probability triple is a mathematical construct that models a real-world process consisting of states that occur randomly. A probability space is constructed with a specific kind of situation or experiment in mind. One proposes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same.

In mathematics, a measurable space or Borel space is a basic object in measure theory. It consists of a set and a σ-algebra on this set and provides information about the sets that will be measured.
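To make the definition above concrete, the following Python sketch simulates a symmetric random walk and kills it at the first exit from an interval. The choice of killing time (an exit time), the barrier, and the name simulate_killed_walk are illustrative assumptions, not taken from the source.

import random

COFFIN = None  # stands in for the undefined value ("coffin state") after killing

def simulate_killed_walk(n_steps, barrier=5, seed=0):
    """Simulate a symmetric random walk killed on first leaving (-barrier, barrier).

    Returns a list y with y[t] equal to the walk's position for t < zeta (the
    killing time, here the exit time of the interval) and COFFIN for t >= zeta.
    """
    rng = random.Random(seed)
    x, killed = 0, False
    path = []
    for _ in range(n_steps):
        if killed:
            path.append(COFFIN)      # the process is "dead": value undefined
            continue
        path.append(x)               # Y_t = X_t while t < zeta
        x += rng.choice((-1, 1))     # one step of the underlying walk X
        if abs(x) >= barrier:        # the killing time zeta has been reached
            killed = True
    return path

if __name__ == "__main__":
    y = simulate_killed_walk(200)
    zeta = next((t for t, v in enumerate(y) if v is COFFIN), len(y))
    print("killing time zeta =", zeta)
    print("values around the killing time:", y[max(0, zeta - 3):zeta + 3])

Here the exit time of the interval plays the role of ζ; any other random time could be used in its place.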

See also

Related Research Articles

Random variable: variable whose possible values are numerical outcomes of a random phenomenon

In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable whose possible values are outcomes of a random phenomenon. More specifically, a random variable is defined as a function that maps the outcomes of an unpredictable process to numerical quantities, typically real numbers. It is a variable, in the sense that it depends on the outcome of an underlying process providing the input to this function, and it is random in the sense that the underlying process is assumed to be random.
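As a minimal illustration (an example added here, not taken from the source), a single fair coin toss gives:

\[
\Omega = \{\text{heads}, \text{tails}\}, \qquad X(\text{heads}) = 1, \quad X(\text{tails}) = 0, \qquad P(X = 1) = P(X = 0) = \tfrac{1}{2}.
\]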

In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.

Stochastic process: mathematical object usually defined as a collection of random variables

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a collection of random variables. Historically, the random variables were associated with or indexed by a set of numbers, usually viewed as points in time, giving the interpretation of a stochastic process representing numerical values of some system randomly changing over time, such as the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. They have applications in many disciplines including sciences such as biology, chemistry, ecology, neuroscience, and physics as well as technology and engineering fields such as image processing, signal processing, information theory, computer science, cryptography and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.

In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory and in its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence, and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.
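Two of these notions can be written explicitly (standard definitions, added here for reference): convergence in probability and almost sure convergence of a sequence X_1, X_2, … to X.

\[
X_n \xrightarrow{P} X \iff \lim_{n \to \infty} P\bigl(|X_n - X| > \varepsilon\bigr) = 0 \text{ for every } \varepsilon > 0,
\qquad
X_n \xrightarrow{\text{a.s.}} X \iff P\Bigl(\lim_{n \to \infty} X_n = X\Bigr) = 1.
\]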

Martingale (probability theory): model in probability theory

In probability theory, a martingale is a sequence of random variables for which, at a particular time, the conditional expectation of the next value in the sequence, given all prior values, is equal to the present value.
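In symbols (a standard discrete-time formulation, not written out in this article):

\[
\mathbb{E}\bigl[X_{n+1} \mid X_1, \ldots, X_n\bigr] = X_n .
\]

For example, a symmetric random walk S_n = ξ_1 + … + ξ_n with P(ξ_i = ±1) = 1/2 is a martingale, since E[S_{n+1} | S_1, …, S_n] = S_n + E[ξ_{n+1}] = S_n.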

Markov property: memoryless property of a stochastic process

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov.
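For a discrete-time process the memoryless property reads (a standard formulation, added here): the conditional distribution of the next state depends only on the present state, not on the earlier history,

\[
P\bigl(X_{n+1} \in A \mid X_0, X_1, \ldots, X_n\bigr) = P\bigl(X_{n+1} \in A \mid X_n\bigr).
\]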

A random field is another term for stochastic process in modern mathematics with some restriction on its index set. The modern definition of a random field or a stochastic process is a generalization of the classic naive definition of "stochastic process" such that the underlying parameter need no longer be a simple real or integer valued "time", but can instead take values that are multidimensional vectors, or points on some manifold.

Stopping time: a specific type of “random time”, a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest

In probability theory, in particular in the study of stochastic processes, a stopping time is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.
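Formally (a standard definition, added here for reference), a random time τ is a stopping time with respect to a filtration (F_t) if whether it has occurred by time t is determined by the information available at time t:

\[
\{\, \tau \le t \,\} \in \mathcal{F}_t \quad \text{for all } t.
\]

In many constructions the killing time ζ of the definition above is taken to be a stopping time of this kind.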

In probability theory, random element is a generalization of the concept of random variable to more complicated spaces than the simple real line. The concept was introduced by Maurice Fréchet (1948) who commented that the “development of probability theory and expansion of area of its applications have led to necessity to pass from schemes where (random) outcomes of experiments can be described by number or a finite set of numbers, to schemes where outcomes of experiments represent, for example, vectors, functions, processes, fields, series, transformations, and also sets or collections of sets.”

In mathematics, a sample-continuous process is a stochastic process whose sample paths are almost surely continuous functions.

In mathematics, the law of a stochastic process is the measure that the process induces on the collection of functions from the index set into the state space. The law encodes a lot of information about the process; in the case of a random walk, for example, the law is the probability distribution of the possible trajectories of the walk.
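Equivalently (a standard restatement, added here), the law is the pushforward of the underlying probability measure under the map sending each sample point to its trajectory:

\[
\operatorname{Law}(X) = P \circ X^{-1}, \qquad X : \Omega \to S^{T}, \quad \omega \mapsto \bigl(t \mapsto X_t(\omega)\bigr).
\]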

In mathematics, the Kolmogorov extension theorem is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. It is credited to the Russian mathematician Andrey Nikolaevich Kolmogorov.

In the study of stochastic processes in mathematics, a hitting time is the first time at which a given process "hits" a given subset of the state space. Exit times and return times are also examples of hitting times.
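In symbols (standard notation, added here), the hitting time of a set A by a process X is

\[
\tau_A = \inf\{\, t \ge 0 : X_t \in A \,\},
\]

and hitting times are a common choice of killing time, as in the random-walk sketch above, where the process is killed at the exit time of an interval.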

In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space S, a set of maps from S into itself that can be thought of as the set of all possible equations of motion, and a probability distribution Q on the set that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state evolving according to a succession of maps randomly chosen according to the distribution Q.

In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time.
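The contrast with the killed process defined above can be written as follows (a standard comparison, not stated in the source): stopping freezes the process at the random time τ, whereas killing discards it.

\[
X^{\tau}_t = X_{\min(t,\,\tau)} \quad \text{(stopped process)}, \qquad
Y_t = X_t \text{ for } t < \zeta, \; Y_t \text{ undefined for } t \ge \zeta \quad \text{(killed process)}.
\]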

In probability theory, a random measure is a measure-valued random element. Random measures are for example used in the theory of random processes, where they form many important point processes such as Poisson point processes and Cox processes.

In mathematics – specifically, in the theory of stochastic processes – Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph L. Doob.

In the theory of stochastic processes, the filtering problem is a mathematical model for a number of state estimation problems in signal processing and related fields. The general idea is to establish a "best estimate" for the true value of some system from an incomplete, potentially noisy set of observations on that system. The problem of optimal non-linear filtering was solved by Ruslan L. Stratonovich; see also Harold J. Kushner's work and that of Moshe Zakai, who introduced a simplified dynamics for the unnormalized conditional law of the filter, known as the Zakai equation. The solution, however, is infinite-dimensional in the general case. Certain approximations and special cases are well understood: for example, the linear filters are optimal for Gaussian random variables and are known as the Wiener filter and the Kalman-Bucy filter. More generally, since the solution is infinite-dimensional, it requires finite-dimensional approximations to be implemented in a computer with finite memory. A finite-dimensional approximate nonlinear filter may be based more on heuristics, such as the extended Kalman filter or the assumed density filters, or be more methodologically oriented, such as the projection filters, some sub-families of which are shown to coincide with the assumed density filters.
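As a minimal illustration of the linear-Gaussian case mentioned above, the following Python sketch implements a scalar Kalman filter for a random walk observed in noise. The model, the parameter values, and the name kalman_filter_1d are illustrative assumptions, not taken from the source.

import random

def kalman_filter_1d(observations, process_var=1.0, obs_var=4.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, process_var)   (hidden state: a random walk)
        z_t = x_t + v_t,      v_t ~ N(0, obs_var)       (noisy observation)
    Returns the posterior mean estimates of x_t given z_1, ..., z_t.
    """
    x, p = x0, p0                    # current state estimate and its variance
    estimates = []
    for z in observations:
        p = p + process_var          # predict: uncertainty grows by the process noise
        k = p / (p + obs_var)        # Kalman gain
        x = x + k * (z - x)          # update: blend prediction and observation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    rng = random.Random(0)
    truth, obs, x = [], [], 0.0
    for _ in range(50):
        x += rng.gauss(0.0, 1.0)             # hidden random walk
        truth.append(x)
        obs.append(x + rng.gauss(0.0, 2.0))  # observation noise with variance 4
    est = kalman_filter_1d(obs)
    print("last true value:", round(truth[-1], 2), "final estimate:", round(est[-1], 2))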

In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for a process to have, since it implies that the process is well-behaved in some sense and therefore much easier to analyze. It is implicit here that the index of the stochastic process is a continuous variable. Some authors define a "continuous (stochastic) process" as only requiring that the index variable be continuous, without continuity of sample paths: in some terminology, this would be a continuous-time stochastic process, in parallel to a "discrete-time process". Given the possible confusion, caution is needed.

In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times. Harris chains are regenerative processes and are named after Theodore Harris. The theory of Harris chains and Harris recurrence is useful for treating Markov chains on general state spaces.
