Intensity measure

In probability theory, an intensity measure is a measure that is derived from a random measure. The intensity measure is a non-random measure and is defined as the expected value of the random measure of a set; hence it corresponds to the average volume the random measure assigns to a set. The intensity measure contains important information about the properties of the random measure. For example, a Poisson point process, interpreted as a random measure, is uniquely determined by its intensity measure. [1]
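The Poisson case can be made concrete with a small simulation (an illustration of my own, not from the source; the rate λ = 4 and the interval A = [0.2, 0.7] are arbitrary choices): for a homogeneous Poisson point process on [0, 1] with rate λ, the intensity measure is λ times Lebesgue measure, so the average number of points falling in A should approach λ · |A|.

```python
import math
import random

random.seed(0)

lam = 4.0          # rate of the homogeneous Poisson point process on [0, 1]
a, b = 0.2, 0.7    # interval A = [0.2, 0.7], so Lebesgue(A) = 0.5
trials = 20000

def poisson_sample(mean):
    # Knuth's method for sampling a Poisson random variable
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

total = 0
for _ in range(trials):
    n = poisson_sample(lam)                        # total number of points on [0, 1]
    pts = [random.random() for _ in range(n)]      # given n, points are i.i.d. uniform
    total += sum(1 for x in pts if a <= x <= b)    # count of points landing in A

empirical = total / trials
expected = lam * (b - a)   # intensity measure of A: lambda * Lebesgue(A) = 2
print(empirical, expected)
```

The empirical mean should settle near 2, matching the intensity measure of A.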

Definition

Let μ be a random measure on the measurable space (S, 𝒜) and denote the expected value of a random element X with E[X].

The intensity measure E_μ of μ is defined as

E_μ(A) := E[μ(A)]

for all A ∈ 𝒜. [2] [3]

Note the difference in notation between the expected value of a random element X, denoted by E[X], and the intensity measure of the random measure μ, denoted by E_μ.
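A minimal numerical sketch of the definition (my own illustration, not from the source; the choice of random measure is arbitrary): take μ = D · λ restricted to [0, 1], where λ is Lebesgue measure and D is an exponential random variable with mean 2. Then E_μ(A) = E[D] · λ(A), and a Monte Carlo average of the realizations μ(A) recovers this value.

```python
import random

random.seed(1)

def leb(a, b):
    """Lebesgue measure of the interval [a, b]."""
    return b - a

# Random measure mu = D * Lebesgue on [0, 1], with D ~ Exp(rate 0.5), so E[D] = 2.
trials = 100000
a, b = 0.25, 0.75

total = 0.0
for _ in range(trials):
    d = random.expovariate(0.5)   # one realization of the random density D
    total += d * leb(a, b)        # mu(A) for this realization

empirical = total / trials
intensity = 2.0 * leb(a, b)       # E_mu(A) = E[D] * Lebesgue(A) = 1.0
print(empirical, intensity)
```

Averaging the (random) values μ(A) over many realizations approaches the (non-random) value E_μ(A), which is exactly what the definition says.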

Properties

The intensity measure E_μ is always s-finite and satisfies

E[∫ f(x) μ(dx)] = ∫ f(x) E_μ(dx)

for every positive measurable function f on S. [3]
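This property, a Campbell-type formula E[∫ f(x) μ(dx)] = ∫ f(x) E_μ(dx), can be checked numerically; the sketch below is my own illustration with arbitrary choices (rate λ = 6, f(x) = x²). For a homogeneous Poisson point process on [0, 1], the left side is the average of Σᵢ f(xᵢ) over realizations, and the right side is λ ∫₀¹ x² dx = λ/3.

```python
import math
import random

random.seed(2)

lam = 6.0      # rate of the homogeneous Poisson point process on [0, 1]
trials = 20000

def poisson_sample(mean):
    # Knuth's method for sampling a Poisson random variable
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def f(x):
    return x * x

lhs = 0.0
for _ in range(trials):
    n = poisson_sample(lam)
    pts = [random.random() for _ in range(n)]   # points of one realization
    lhs += sum(f(x) for x in pts)               # integral of f against mu
lhs /= trials

rhs = lam / 3.0   # integral of f against the intensity measure lam * Lebesgue
print(lhs, rhs)
```

Both sides come out near 2, illustrating that integrating f against the random measure and then averaging gives the same result as integrating f against the intensity measure directly.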

References

  1. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 528. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6.
  2. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 526. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6.
  3. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. p. 53. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3.