Nonlinear expectation

In probability theory, a nonlinear expectation is a nonlinear generalization of the expectation. Nonlinear expectations are useful in utility theory, as they more closely match human behavior than traditional expectations. [1] They are commonly used to assess risk under uncertainty. Nonlinear expectations are generally categorized as sublinear or superlinear, depending on how they behave under addition. Much of the theory of nonlinear expectation has been developed by mathematicians within the past two decades.

Definition

A functional E : H → R (where H is a vector lattice of real-valued functions on a given set Ω) is a nonlinear expectation if it satisfies: [2] [3] [4]

  1. Monotonicity: if X, Y ∈ H are such that X ≤ Y, then E[X] ≤ E[Y]
  2. Preserving of constants: if c ∈ R, then E[c] = c

The triple consisting of the given set Ω, the linear space H of functions on that set, and the nonlinear expectation E is called a nonlinear expectation space.
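As a concrete illustration, the entropic (exponential) functional is a standard example of a nonlinear expectation. The sketch below checks the two defining properties on a finite sample space; the probabilities, outcome values, and the parameter theta are illustrative assumptions, not taken from the cited works.

```python
import math

# Finite sample space: a random variable is a list of outcomes x[i],
# weighted by reference probabilities p[i] (an illustrative toy setup).
p = [0.25, 0.25, 0.5]

def entropic_expectation(x, theta=1.0):
    """Entropic expectation: E[X] = -(1/theta) * log( sum_i p_i * exp(-theta * x_i) ).

    Monotone (exp and log are increasing) and constant-preserving,
    but not additive, so it is a nonlinear expectation."""
    return -math.log(sum(pi * math.exp(-theta * xi) for pi, xi in zip(p, x))) / theta

x, y = [1.0, 2.0, 3.0], [1.5, 2.0, 4.0]

# Monotonicity: X <= Y pointwise implies E[X] <= E[Y]
assert entropic_expectation(x) <= entropic_expectation(y)

# Preserving of constants: E[c] = c
assert abs(entropic_expectation([7.0, 7.0, 7.0]) - 7.0) < 1e-12

# Nonlinearity: E[X + Y] differs from E[X] + E[Y] for these inputs
z = [xi + yi for xi, yi in zip(x, y)]
assert abs(entropic_expectation(z)
           - (entropic_expectation(x) + entropic_expectation(y))) > 1e-6
```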

Often other properties are also desirable, for instance convexity, subadditivity, positive homogeneity, and translation invariance with respect to constants. [2] For a nonlinear expectation to be further classified as a sublinear expectation, the following two conditions must also be met:

  1. Subadditivity: for X, Y ∈ H, E[X + Y] ≤ E[X] + E[Y]
  2. Positive homogeneity: for X ∈ H and λ ≥ 0, E[λX] = λE[X]
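A canonical way to build a sublinear expectation is to take the supremum of ordinary linear expectations over a family of priors. The sketch below numerically checks the two extra conditions; the probability vectors and test points are illustrative assumptions, not from the cited works.

```python
# Upper expectation over a finite family of priors on a 3-point sample space.
priors = [
    [0.5, 0.3, 0.2],
    [0.2, 0.3, 0.5],
    [1/3, 1/3, 1/3],
]

def upper_expectation(x):
    """E[X] = sup_P E_P[X]: sublinear, since max(a+b) <= max(a) + max(b)."""
    return max(sum(pi * xi for pi, xi in zip(p, x)) for p in priors)

x, y = [1.0, -2.0, 3.0], [0.5, 4.0, -1.0]

# Subadditivity: E[X + Y] <= E[X] + E[Y]
z = [xi + yi for xi, yi in zip(x, y)]
assert upper_expectation(z) <= upper_expectation(x) + upper_expectation(y)

# Positive homogeneity: E[lam * X] = lam * E[X] for lam >= 0
lam = 2.5
assert abs(upper_expectation([lam * xi for xi in x])
           - lam * upper_expectation(x)) < 1e-12
```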

For a nonlinear expectation to be classified instead as a superlinear expectation, the subadditivity condition above is replaced by: [5]

  1. Superadditivity: for X, Y ∈ H, E[X + Y] ≥ E[X] + E[Y]
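Dually to the sublinear case, taking the infimum of linear expectations over a family of priors yields a superlinear expectation. A minimal sketch with illustrative numbers (the priors and test points are assumptions for demonstration only):

```python
def lower_expectation(x, priors):
    """Lower expectation E[X] = inf_P E_P[X]: superadditive,
    since min(a+b) >= min(a) + min(b)."""
    return min(sum(pi * xi for pi, xi in zip(p, x)) for p in priors)

priors = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
x, y = [1.0, -2.0, 3.0], [0.5, 4.0, -1.0]
z = [xi + yi for xi, yi in zip(x, y)]

# Superadditivity: E[X + Y] >= E[X] + E[Y]
assert lower_expectation(z, priors) >= (
    lower_expectation(x, priors) + lower_expectation(y, priors))
```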

Examples

A canonical sublinear example is the upper expectation E[X] = sup_P E_P[X], the supremum of linear expectations over a family of probability measures; taking the infimum instead gives a superlinear (lower) expectation. The g-expectation, defined through a backward stochastic differential equation, is another widely studied nonlinear expectation.

References

  1. Peng, Shige (2017). "Theory, methods and meaning of nonlinear expectation theory". Scientia Sinica Mathematica. 47 (10): 1223–1254. doi:10.1360/N012016-00209. S2CID 125094517.
  2. Peng, Shige (2006). "G–Expectation, G–Brownian Motion and Related Stochastic Calculus of Itô Type". Abel Symposia. 2. Springer-Verlag. arXiv:math/0601035. Bibcode:2006math......1035P.
  3. Peng, Shige (2004). "Nonlinear Expectations, Nonlinear Evaluations and Risk Measures". Stochastic Methods in Finance (PDF). Lecture Notes in Mathematics. Vol. 1856. pp. 165–253. doi:10.1007/978-3-540-44644-6_4. ISBN 978-3-540-22953-7. Archived from the original (PDF) on March 3, 2016. Retrieved August 9, 2012.
  4. Peng, Shige (2019). Nonlinear Expectations and Stochastic Calculus under Uncertainty. Berlin, Heidelberg: Springer. doi:10.1007/978-3-662-59903-7. ISBN 978-3-662-59902-0.
  5. Molchanov, Ilya; Mühlemann, Anja (2021). "Nonlinear expectations of random sets". Finance and Stochastics. 25 (1): 5–41. arXiv:1903.04901. doi:10.1007/s00780-020-00442-3. ISSN 1432-1122. S2CID 254080636.
  6. Chen, Zengjing; Epstein, Larry (2002). "Ambiguity, Risk, and Asset Returns in Continuous Time". Econometrica. 70 (4): 1403–1443. doi:10.1111/1468-0262.00337. ISSN 0012-9682. JSTOR 3082003.
  7. Nendel, Max (2021). "Markov chains under nonlinear expectation". Mathematical Finance. 31 (1): 474–507. arXiv:1803.03695. doi:10.1111/mafi.12289. ISSN 1467-9965. S2CID 52064327.