Choquet integral

A Choquet integral is a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. [1] It was initially used in statistical mechanics and potential theory, [2] but found its way into decision theory in the 1980s, [3] where it is used as a way of measuring the expected utility of an uncertain event. It is applied specifically to membership functions and capacities. In imprecise probability theory, the Choquet integral is also used to calculate the lower expectation induced by a 2-monotone lower probability, or the upper expectation induced by a 2-alternating upper probability.

Using the Choquet integral to denote the expected utility of belief functions measured with capacities is a way to reconcile the Ellsberg paradox and the Allais paradox. [4] [5]

Definition

The following notation is used:

  - $S$: a set.
  - $\mathcal{F}$: a collection of subsets of $S$.
  - $f : S \to \mathbb{R}$: a function.
  - $\nu : \mathcal{F} \to [0, +\infty)$: a monotone set function (a capacity).

Assume that $f$ is measurable with respect to $\nu$, that is, for every $x \in \mathbb{R}$:

$$\{ s \in S \mid f(s) \geq x \} \in \mathcal{F}.$$

Then the Choquet integral of $f$ with respect to $\nu$ is defined by:

$$(C)\int f \, d\nu := \int_{-\infty}^{0} \big( \nu(\{ s \mid f(s) \geq x \}) - \nu(S) \big) \, dx + \int_{0}^{\infty} \nu(\{ s \mid f(s) \geq x \}) \, dx,$$

where the integrals on the right-hand side are ordinary Riemann integrals (the integrands are integrable because they are monotone in $x$).
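
On a finite set the pair of Riemann integrals above collapses to a finite sum over the decreasing rearrangement of $f$. The sketch below is an illustration rather than a reference implementation: the name choquet_integral, the dict-of-frozensets encoding of the capacity $\nu$, and the example capacity are choices made here, and $f \geq 0$ is assumed.

    # A minimal sketch of the Choquet integral on a finite set S, assuming f >= 0.
    # The capacity nu is encoded as a dict keyed by frozensets of elements of S,
    # with nu[frozenset()] == 0 and nu monotone; this encoding is illustrative,
    # not part of the definition above.

    def choquet_integral(f, nu):
        """Discrete Choquet integral of f (dict: element -> non-negative value)
        with respect to the capacity nu (dict: frozenset of elements -> value).

        Uses the layer form: sum_i (f(x_(i)) - f(x_(i+1))) * nu({x_(1), ..., x_(i)}),
        where f(x_(1)) >= ... >= f(x_(n)) and f(x_(n+1)) := 0.
        """
        elems = sorted(f, key=lambda x: f[x], reverse=True)   # x_(1), ..., x_(n)
        levels = [f[x] for x in elems] + [0.0]                 # append f(x_(n+1)) := 0
        total = 0.0
        for i in range(len(elems)):
            upper_set = frozenset(elems[: i + 1])              # {x_(1), ..., x_(i)}
            total += (levels[i] - levels[i + 1]) * nu[upper_set]
        return total

    # Example capacity on S = {"a", "b"} (monotone but not additive).
    nu = {frozenset(): 0.0,
          frozenset({"a"}): 0.4,
          frozenset({"b"}): 0.4,
          frozenset({"a", "b"}): 1.0}

    print(choquet_integral({"a": 2.0, "b": 1.0}, nu))   # 1.0 * nu(S) + 1.0 * nu({a}) = 1.4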

Properties

In general the Choquet integral does not satisfy additivity. More specifically, if $\nu$ is not a probability measure, it may hold that

$$(C)\int f \, d\nu + (C)\int g \, d\nu \neq (C)\int (f + g) \, d\nu$$

for some functions $f$ and $g$.
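
A concrete two-point illustration (constructed here, not taken from the original text), using the same capacity as in the sketch above: let $S = \{a, b\}$ with $\nu(\varnothing) = 0$, $\nu(\{a\}) = \nu(\{b\}) = 0.4$, $\nu(S) = 1$, and take $f = 1_{\{a\}}$, $g = 1_{\{b\}}$. Then $(C)\int f \, d\nu = (C)\int g \, d\nu = 0.4$, while $f + g \equiv 1$ on $S$, so $(C)\int (f+g) \, d\nu = \nu(S) = 1 \neq 0.8$.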

The Choquet integral does satisfy the following properties.

Monotonicity

If $f \leq g$, then $(C)\int f \, d\nu \leq (C)\int g \, d\nu$.

Positive homogeneity

For all $\lambda \geq 0$ it holds that $(C)\int \lambda f \, d\nu = \lambda \, (C)\int f \, d\nu$.

Comonotone additivity

If $f, g : S \to \mathbb{R}$ are comonotone functions, that is, if for all $s, s' \in S$ it holds that

$$(f(s) - f(s'))\,(g(s) - g(s')) \geq 0,$$

which can be thought of as $f$ and $g$ rising and falling together, then

$$(C)\int f \, d\nu + (C)\int g \, d\nu = (C)\int (f + g) \, d\nu.$$
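
A small worked check (again constructed here, with the same two-point capacity as above): $f = (f(a), f(b)) = (2, 1)$ and $g = (3, 0)$ are comonotone, since both order $a$ above $b$. Indeed $(C)\int f \, d\nu = 1 \cdot \nu(S) + (2 - 1)\, \nu(\{a\}) = 1.4$, $(C)\int g \, d\nu = 3\, \nu(\{a\}) = 1.2$, and $f + g = (5, 1)$ gives $(C)\int (f+g) \, d\nu = 1 \cdot \nu(S) + 4\, \nu(\{a\}) = 2.6 = 1.4 + 1.2$.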

Subadditivity

If $\nu$ is 2-alternating, that is, if $\nu(A \cup B) + \nu(A \cap B) \leq \nu(A) + \nu(B)$ for all $A, B \in \mathcal{F}$, then

$$(C)\int (f + g) \, d\nu \leq (C)\int f \, d\nu + (C)\int g \, d\nu.$$

Superadditivity

If $\nu$ is 2-monotone, that is, if $\nu(A \cup B) + \nu(A \cap B) \geq \nu(A) + \nu(B)$ for all $A, B \in \mathcal{F}$, then

$$(C)\int (f + g) \, d\nu \geq (C)\int f \, d\nu + (C)\int g \, d\nu.$$

Alternative representation

Let $G$ denote a cumulative distribution function and let $H$ be a non-decreasing function on $[0, 1]$ with $H(0) = 0$ such that $G^{-1}$ is $dH$-integrable. Then the following formula is often referred to as a Choquet integral:

$$\int_{0}^{1} G^{-1}(\alpha) \, dH(\alpha) = -\int_{-\infty}^{0} H(G(x)) \, dx + \int_{0}^{\infty} \hat{H}(1 - G(x)) \, dx,$$

where $\hat{H}(x) = H(1) - H(1 - x)$.
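
As a sanity check on this representation (an illustration added here, not from the original text): choosing $H(\alpha) = \alpha$ gives $\hat{H} = H$, and the right-hand side becomes $-\int_{-\infty}^{0} G(x) \, dx + \int_{0}^{\infty} (1 - G(x)) \, dx$, so $\int_{0}^{1} G^{-1}(\alpha) \, d\alpha = \mathbb{E}[X]$ for a random variable $X$ with distribution function $G$, the usual quantile representation of the mean.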

Applications

The Choquet integral has been applied in image processing, video processing, and computer vision. In behavioral decision theory, Amos Tversky and Daniel Kahneman used the Choquet integral and related methods in their formulation of cumulative prospect theory. [6]
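
To make the connection concrete, here is a rough sketch (constructed for this article, not taken from [6]) of how the gain part of a cumulative-prospect-style valuation can be computed as a Choquet integral whose capacity is a probability weighting function $w$ applied to the probabilities of upper level sets. The functional form of $w$ and the parameter 0.61 are the ones commonly associated with Tversky and Kahneman's estimates for gains; the function names are illustrative, and value-function curvature and losses are ignored.

    # A minimal sketch: rank-dependent evaluation of non-negative outcomes as a
    # Choquet integral with capacity w(P(.)). Assumptions: gains only, no value
    # function curvature; names and the example prospect are illustrative.

    def weight(p, gamma=0.61):
        # Inverse-S probability weighting function often used in cumulative
        # prospect theory; gamma = 0.61 is a commonly quoted estimate for gains.
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def cpt_gain_value(outcomes, probs, gamma=0.61):
        """Choquet integral of the outcomes with respect to w(P(.)):
        each outcome, ranked from best to worst, receives the decision weight
        w(P(outcome >= x)) - w(P(outcome > x))."""
        ranked = sorted(zip(outcomes, probs), key=lambda t: t[0], reverse=True)
        value, cum_p, prev_w = 0.0, 0.0, 0.0
        for x, p in ranked:
            cum_p += p                      # P(outcome >= x) after including this outcome
            w_cum = weight(cum_p, gamma)
            value += x * (w_cum - prev_w)   # decision weight for this outcome
            prev_w = w_cum
        return value

    # Example: a 50/50 chance of 100 or 0.
    print(cpt_gain_value([100.0, 0.0], [0.5, 0.5]))   # roughly 42, below the expected value of 50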

Notes

  1. Choquet, G. (1953). "Theory of capacities". Annales de l'Institut Fourier. 5: 131–295. doi:10.5802/aif.53.
  2. Denneberg, D. (1994). Non-additive Measure and Integral. Kluwer Academic. ISBN 0-7923-2840-X.
  3. Grabisch, M. (1996). "The application of fuzzy integrals in multicriteria decision making". European Journal of Operational Research. 89 (3): 445–456. doi:10.1016/0377-2217(95)00176-X.
  4. Chateauneuf, A.; Cohen, M. D. (2010). "Cardinal Extensions of the EU Model Based on the Choquet Integral". In Bouyssou, Denis; Dubois, Didier; Pirlot, Marc; Prade, Henri (eds.). Decision-making Process: Concepts and Methods. pp. 401–433. doi:10.1002/9780470611876.ch10. ISBN 9780470611876.
  5. Sriboonchita, S.; Wong, W. K.; Dhompongsa, S.; Nguyen, H. T. (2010). Stochastic Dominance and Applications to Finance, Risk and Economics. CRC Press. ISBN 978-1-4200-8266-1.
  6. Tversky, A.; Kahneman, D. (1992). "Advances in Prospect Theory: Cumulative Representation of Uncertainty". Journal of Risk and Uncertainty. 5 (4): 297–323. doi:10.1007/bf00122574. S2CID 8456150.
