Intensity measure

In probability theory, an intensity measure is a measure that is derived from a random measure. The intensity measure is itself non-random and is defined as the expected value of the random measure of a set; it therefore corresponds to the average volume that the random measure assigns to a set. The intensity measure contains important information about the properties of the random measure. For example, a Poisson point process, interpreted as a random measure, is uniquely determined by its intensity measure. [1]
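
As a concrete illustration (a standard example; the symbols $\eta$ for the process, $\lambda$ for the rate, and $\lambda_d$ for Lebesgue measure are notational choices made here, not taken from the cited sources), a homogeneous Poisson point process $\eta$ on $\mathbb{R}^d$ with rate $\lambda > 0$ places a Poisson-distributed number of points in each bounded Borel set $A$, with mean proportional to the volume of $A$, so its intensity measure is a multiple of Lebesgue measure:

$$ \operatorname{E}\eta(A) = \operatorname{E}[\eta(A)] = \lambda \, \lambda_d(A), \qquad A \in \mathcal{B}(\mathbb{R}^d). $$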

Definition

Let $\mu$ be a random measure on the measurable space $(S, \mathcal{A})$ and denote the expected value of a random element $Y$ by $\operatorname{E}[Y]$.

The intensity measure $\operatorname{E}\mu$ of $\mu$ is defined by

$$ \operatorname{E}\mu(A) := \operatorname{E}[\mu(A)] $$

for all $A \in \mathcal{A}$. [2] [3]

Note the difference in notation between the expectation value of a random element $Y$, denoted by $\operatorname{E}[Y]$, and the intensity measure of the random measure $\mu$, denoted by $\operatorname{E}\mu$.
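
To make the definition concrete, here is a minimal Monte Carlo sketch in Python (the homogeneous Poisson process on $[0,1]$, the test set $A = [0.2, 0.5)$, and names such as `sample_poisson_process` are illustrative assumptions, not taken from the cited sources). It estimates $\operatorname{E}\mu(A)$ by averaging $\mu(A)$ over independent realizations, which is exactly the defining identity $\operatorname{E}\mu(A) = \operatorname{E}[\mu(A)]$ applied empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_process(rate, rng):
    """One realization of a homogeneous Poisson point process on [0, 1]
    with the given rate, returned as an array of point locations."""
    n = rng.poisson(rate)                  # number of points is Poisson(rate * |[0, 1]|)
    return rng.uniform(0.0, 1.0, size=n)   # given n, the points are i.i.d. uniform

# Estimate the intensity measure E[mu(A)] of the set A = [0.2, 0.5)
# by averaging mu(A) over many independent realizations of mu.
rate = 10.0
a, b = 0.2, 0.5
n_samples = 100_000
counts = [
    np.count_nonzero((pts >= a) & (pts < b))
    for pts in (sample_poisson_process(rate, rng) for _ in range(n_samples))
]
print(np.mean(counts))   # Monte Carlo estimate, close to 3.0
print(rate * (b - a))    # exact value: E[mu([0.2, 0.5))] = 10 * 0.3 = 3.0
```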

Properties

The intensity measure $\operatorname{E}\mu$ is always s-finite (that is, a countable sum of finite measures, a condition weaker than σ-finiteness) and satisfies

$$ \operatorname{E}\!\left[ \int f \, \mathrm{d}\mu \right] = \int f \, \mathrm{d}(\operatorname{E}\mu) $$

for every positive measurable function $f$ on $S$. [3]
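
This identity can likewise be checked numerically. The following sketch reuses the illustrative Poisson setup from above (again an assumption made here, not an example from the cited sources) and compares a Monte Carlo estimate of $\operatorname{E}\left[\int f \, \mathrm{d}\mu\right]$ with the exact value of $\int f \, \mathrm{d}(\operatorname{E}\mu)$ for $f(x) = x^2$; since the intensity measure is $10$ times Lebesgue measure on $[0,1]$, the exact value is $10 \int_0^1 x^2 \, \mathrm{d}x = 10/3$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_poisson_process(rate, rng):
    """One realization of a homogeneous Poisson point process on [0, 1]."""
    return rng.uniform(0.0, 1.0, size=rng.poisson(rate))

# For a point process, the integral of f with respect to mu is the sum of f
# over the points of the realization.
rate = 10.0
f = lambda x: x ** 2
n_samples = 100_000
lhs = np.mean([f(sample_poisson_process(rate, rng)).sum() for _ in range(n_samples)])
print(lhs)          # Monte Carlo estimate of E[ integral of f dmu ], close to 3.333
print(rate / 3.0)   # exact value of integral of f d(E mu) = 10/3
```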


References

  1. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 528. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6.
  2. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 526. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6.
  3. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. p. 53. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3.