In actuarial science and applied probability, ruin theory (sometimes risk theory [1] or collective risk theory) uses mathematical models to describe an insurer's vulnerability to insolvency/ruin. In such models the key quantities of interest are the probability of ruin, the distribution of the surplus immediately prior to ruin, and the deficit at the time of ruin.
The theoretical foundation of ruin theory, known as the Cramér–Lundberg model (or classical compound-Poisson risk model, classical risk process [2] or Poisson risk process) was introduced in 1903 by the Swedish actuary Filip Lundberg. [3] Lundberg's work was republished in the 1930s by Harald Cramér. [4]
The model describes an insurance company that experiences two opposing cash flows: incoming premiums and outgoing claims. Premiums arrive at a constant rate c > 0 from customers, and claims arrive according to a Poisson process (N_t)_{t \ge 0} with intensity \lambda; the claim sizes \xi_i are independent and identically distributed non-negative random variables with distribution F and mean \mu (together they form a compound Poisson process). So, for an insurer who starts with initial surplus x, the aggregate assets are given by [5]

X_t = x + ct - \sum_{i=1}^{N_t} \xi_i.
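The surplus process above is straightforward to simulate; the following is a minimal sketch (the function name and parameters are illustrative, and claim sizes are drawn from a caller-supplied sampler). Because the surplus only increases between claims, it suffices to inspect it at claim epochs:

```python
import random

def min_surplus(x, c, lam, claim_sampler, horizon, rng):
    """Simulate one path of the surplus process X_t = x + c*t - S_t,
    where claims arrive as a Poisson process with rate lam, and return
    the minimum surplus observed up to `horizon`.  Between claims the
    surplus only increases, so checking at claim epochs is enough."""
    t, total_claims, minimum = 0.0, x, x
    t, total_claims = 0.0, 0.0
    minimum = x
    while True:
        t += rng.expovariate(lam)           # exponential inter-arrival gap
        if t > horizon:
            break
        total_claims += claim_sampler(rng)  # i.i.d. claim size
        minimum = min(minimum, x + c * t - total_claims)
    return minimum

rng = random.Random(42)
# Ruin occurred on [0, horizon] iff the running minimum went below zero.
ruined = min_surplus(x=10.0, c=2.0, lam=1.0,
                     claim_sampler=lambda r: r.expovariate(1.0),
                     horizon=100.0, rng=rng) < 0
```

Repeating this over many paths gives a Monte Carlo estimate of the finite-horizon ruin probability.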
The central object of the model is to investigate the probability that the insurer's surplus level eventually falls below zero (making the firm insolvent). This quantity, called the probability of ultimate ruin, is defined as

\psi(x) = \mathbb{P}^x\{\tau < \infty\},
where the time of ruin is \tau = \inf\{t > 0 : X_t < 0\}, with the convention that \inf\emptyset = \infty. This can be computed exactly using the Pollaczek–Khinchine formula as [6] (the ruin function here is equivalent to the tail function of the stationary distribution of waiting time in an M/G/1 queue [7])

\psi(x) = \left(1 - \frac{\lambda\mu}{c}\right)\sum_{n=0}^{\infty}\left(\frac{\lambda\mu}{c}\right)^{n}\left(1 - F_l^{*n}(x)\right),
where F_l is the transform of the tail distribution of F,

F_l(x) = \frac{1}{\mu}\int_0^x \left(1 - F(u)\right)\,du,
and F_l^{*n} denotes the n-fold convolution of F_l with itself. In the case where the claim sizes are exponentially distributed (with mean \mu), this simplifies to [7]

\psi(x) = \frac{\lambda\mu}{c}\, e^{-\left(\frac{1}{\mu} - \frac{\lambda}{c}\right)x}.
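The exponential-claims formula is easy to evaluate numerically; a small sketch (the function name is illustrative, and the guard for \lambda\mu/c \ge 1 reflects the fact that ruin is certain without a positive safety loading):

```python
import math

def psi_exponential(x, c, lam, mu):
    """Ruin probability psi(x) = (lam*mu/c) * exp(-(1/mu - lam/c) * x)
    for the Cramér–Lundberg model with exponential claim sizes of mean mu,
    valid under the net profit condition c > lam*mu."""
    rho = lam * mu / c
    if rho >= 1.0:
        return 1.0  # no safety loading: ultimate ruin is certain
    return rho * math.exp(-(1.0 / mu - lam / c) * x)

# Example: lam = 1 claim per unit time, mean claim mu = 1, premium rate c = 2.
# Then psi(0) = lam*mu/c = 0.5, and psi decreases in the initial surplus x.
```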
E. Sparre Andersen extended the classical model in 1957 [8] by allowing claim inter-arrival times to have arbitrary distribution functions: [9]

X_t = x + ct - \sum_{i=1}^{N_t} \xi_i,

where the claim number process (N_t)_{t \ge 0} is a renewal process and the \xi_i are independent and identically distributed random variables. The model furthermore assumes that \xi_i > 0 almost surely and that (N_t)_{t \ge 0} and (\xi_i)_{i \in \mathbb{N}} are independent. The model is also known as the renewal risk model.
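In simulation terms, the only change relative to the classical model is that inter-arrival gaps come from an arbitrary distribution rather than an exponential one. A sketch (names and distributions are illustrative):

```python
import random

def ruined_by(x, c, gap_sampler, claim_sampler, horizon, rng):
    """Return True if the Sparre Andersen surplus x + c*t - S_t drops
    below zero before `horizon`.  Inter-arrival gaps are drawn from an
    arbitrary distribution (a renewal process) via gap_sampler."""
    t, total = 0.0, 0.0
    while True:
        t += gap_sampler(rng)
        if t > horizon:
            return False
        total += claim_sampler(rng)
        if x + c * t - total < 0:
            return True

rng = random.Random(7)
# Uniform(0, 2) inter-arrival times (mean 1) with exponential claims (mean 1):
hit = ruined_by(x=5.0, c=1.5, gap_sampler=lambda r: r.uniform(0.0, 2.0),
                claim_sampler=lambda r: r.expovariate(1.0),
                horizon=50.0, rng=rng)
```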
Michael R. Powers [10] and Gerber and Shiu [11] analyzed the behavior of the insurer's surplus through the expected discounted penalty function, which is commonly referred to as the Gerber–Shiu function in the ruin literature, named after the actuarial scientists Hans-Ulrich Gerber and Elias S.W. Shiu. It is arguable whether the function should instead have been called the Powers–Gerber–Shiu function, due to the contribution of Powers. [10]
In Powers' notation, this is defined as

m(x) = \mathbb{E}^x\left[e^{-\delta\tau} K_\tau\right],

where \delta is the discounting force of interest, K_\tau is a general penalty function reflecting the economic costs to the insurer at the time of ruin \tau, and the expectation corresponds to the probability measure \mathbb{P}^x. The function is called the expected discounted cost of insolvency by Powers. [10]
In Gerber and Shiu's notation, it is given as

m(x) = \mathbb{E}^x\left[e^{-\delta\tau}\, w\!\left(X_{\tau^-}, \lvert X_\tau\rvert\right) \mathbb{1}(\tau < \infty)\right],

where \delta is the discounting force of interest and w(X_{\tau^-}, \lvert X_\tau\rvert) is a penalty function capturing the economic costs to the insurer at the time of ruin (assumed to depend on the surplus prior to ruin X_{\tau^-} and the deficit at ruin \lvert X_\tau\rvert), and the expectation corresponds to the probability measure \mathbb{P}^x. Here the indicator function \mathbb{1}(\tau < \infty) emphasizes that the penalty is exercised only when ruin occurs.
The expected discounted penalty function has an intuitive interpretation. Since it measures the actuarial present value of the penalty incurred at the time of ruin \tau, the penalty function w is multiplied by the discounting factor e^{-\delta\tau} and then averaged over the probability distribution of the waiting time until \tau. While Gerber and Shiu [11] applied this function to the classical compound-Poisson model, Powers [10] argued that an insurer's surplus is better modeled by a family of diffusion processes.
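This interpretation translates directly into a Monte Carlo estimator for the classical model; the sketch below uses hypothetical names, and with delta = 0 and w identically 1 it reduces to a finite-horizon estimate of the ruin probability:

```python
import math
import random

def gerber_shiu_mc(x, c, lam, claim_sampler, w, delta, horizon, n_paths, seed):
    """Estimate E[exp(-delta*tau) * w(surplus before ruin, deficit)] over
    paths that are ruined before `horizon`, by simulating n_paths of the
    compound-Poisson surplus process x + c*t - S_t."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)
            if t > horizon:
                break                       # no ruin observed on this path
            before = x + c * t - claims     # surplus just before the claim
            claims += claim_sampler(rng)
            after = x + c * t - claims
            if after < 0:
                total += math.exp(-delta * t) * w(before, -after)
                break
    return total / n_paths

est = gerber_shiu_mc(x=2.0, c=2.0, lam=1.0,
                     claim_sampler=lambda r: r.expovariate(1.0),
                     w=lambda s, d: 1.0, delta=0.0,
                     horizon=200.0, n_paths=500, seed=1)
```

Increasing delta can only shrink each path's discounted contribution, so for a fixed seed the estimate is non-increasing in delta.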
There are a great variety of ruin-related quantities that fall into the category of the expected discounted penalty function.
Special case | Mathematical representation | Choice of penalty function |
---|---|---|
Probability of ultimate ruin | \mathbb{P}^x\{\tau < \infty\} | \delta = 0, \; w(x_1, x_2) = 1 |
Joint (defective) distribution of surplus and deficit | \mathbb{P}^x\{X_{\tau^-} \le u, \lvert X_\tau\rvert \le v, \tau < \infty\} | \delta = 0, \; w(x_1, x_2) = \mathbb{1}(x_1 \le u, x_2 \le v) |
Defective distribution of claim causing ruin | \mathbb{P}^x\{X_{\tau^-} + \lvert X_\tau\rvert \le z, \tau < \infty\} | \delta = 0, \; w(x_1, x_2) = \mathbb{1}(x_1 + x_2 \le z) |
Trivariate Laplace transform of time, surplus and deficit | \mathbb{E}^x\left[e^{-\delta\tau - s X_{\tau^-} - z\lvert X_\tau\rvert}\,\mathbb{1}(\tau < \infty)\right] | w(x_1, x_2) = e^{-s x_1 - z x_2} |
Joint moments of surplus and deficit | \mathbb{E}^x\left[X_{\tau^-}^{\,j} \lvert X_\tau\rvert^{k}\,\mathbb{1}(\tau < \infty)\right] | \delta = 0, \; w(x_1, x_2) = x_1^{\,j} x_2^{k} |
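The penalty choices in the table can be written down directly as functions of the surplus prior to ruin s1 and the deficit s2; a sketch (the parameters u, v, z, s, and the moment orders j, k are illustrative defaults):

```python
import math

# Each row of the table corresponds to a choice of (delta, w):
penalty_ruin_prob     = lambda s1, s2: 1.0                                        # delta = 0
penalty_joint_dist    = lambda s1, s2, u=5.0, v=3.0: float(s1 <= u and s2 <= v)   # delta = 0
penalty_claim_at_ruin = lambda s1, s2, z=6.0: float(s1 + s2 <= z)                 # delta = 0
penalty_laplace       = lambda s1, s2, s=0.1, z=0.2: math.exp(-s * s1 - z * s2)   # delta transforms time
penalty_joint_moments = lambda s1, s2, j=1, k=2: s1 ** j * s2 ** k                # delta = 0
```

Plugging any of these into an expected discounted penalty computation recovers the corresponding quantity in the table.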
Other finance-related quantities belonging to the class of the expected discounted penalty function include the perpetual American put option, [12] the contingent claim at optimal exercise time, and more.