In decision theory, economics, and finance, a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value, which is the first moment about zero—and the variance, which is the second moment about the mean (or the standard deviation, which is the square root of the variance).
The best-known two-moment decision model is that of modern portfolio theory, which gives rise to the decision portion of the Capital Asset Pricing Model; these models employ mean-variance analysis and focus on the mean and variance of a portfolio's final value.
Suppose that all relevant random variables are in the same location-scale family, meaning that the distribution of every random variable is the same as the distribution of some linear transformation of any other random variable. Then for any von Neumann–Morgenstern utility function, using a mean-variance decision framework is consistent with expected utility maximization,[1][2] as illustrated in Example 1:
Example 1:[3][4][5][6][7][8][9][10] Let there be one risky asset with random return $r$, and one riskfree asset with known return $r_f$, and let an investor's initial wealth be $w_0$. If the amount $q$, the choice variable, is to be invested in the risky asset and the amount $w_0 - q$ is to be invested in the safe asset, then, contingent on $q$, the investor's random final wealth will be $w = (w_0 - q)(1 + r_f) + q(1 + r)$. Then for any choice of $q$, $w$ is distributed as a location-scale transformation of $r$. If we define the random variable $x$ as equal in distribution to $(w - \mu_w)/\sigma_w$, then $w$ is equal in distribution to $\mu_w + \sigma_w x$, where $\mu$ represents an expected value and $\sigma$ represents a random variable's standard deviation (the square root of its second central moment). Thus we can write expected utility in terms of two moments of $w$:

$$\operatorname{E}[u(w)] = \int_{-\infty}^{\infty} u(\mu_w + \sigma_w x)\, f(x)\, dx \equiv v(\sigma_w, \mu_w),$$
where $u(\cdot)$ is the von Neumann–Morgenstern utility function, $f(x)$ is the density function of $x$, and $v(\cdot,\cdot)$ is the derived mean-standard deviation choice function, which depends in form on the density function $f$. The von Neumann–Morgenstern utility function is assumed to be increasing, implying that more wealth is preferred to less, and it is assumed to be concave, which is the same as assuming that the individual is risk averse.
It can be shown that the partial derivative of $v$ with respect to $\mu_w$ is positive and the partial derivative of $v$ with respect to $\sigma_w$ is negative; thus more expected wealth is always liked, and more risk (as measured by the standard deviation of wealth) is always disliked. A mean-standard deviation indifference curve is defined as the locus of points $(\sigma_w, \mu_w)$, with $\sigma_w$ plotted horizontally, at which $\operatorname{E}[u(w)]$ takes the same value. The signs of the derivatives of $v$ then imply that every indifference curve is upward sloping: along any indifference curve, $d\mu_w / d\sigma_w > 0$. Moreover, it can be shown[3] that all such indifference curves are convex: along any indifference curve, $d^2\mu_w / d\sigma_w^2 > 0$.
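The signs of these partial derivatives follow from a short standard computation, sketched here under the assumption that differentiation under the integral sign is valid:

$$\frac{\partial v}{\partial \mu_w} = \int_{-\infty}^{\infty} u'(\mu_w + \sigma_w x)\, f(x)\, dx > 0, \qquad \frac{\partial v}{\partial \sigma_w} = \int_{-\infty}^{\infty} x\, u'(\mu_w + \sigma_w x)\, f(x)\, dx = \operatorname{Cov}\!\left(x,\, u'(\mu_w + \sigma_w x)\right) < 0.$$

The first integral is positive because $u' > 0$. In the second, $\operatorname{E}[x] = 0$ by construction, so the integral equals the covariance shown; since $u$ is concave, $u'$ is decreasing, making $u'(\mu_w + \sigma_w x)$ a decreasing function of $x$, and the covariance of a random variable with a decreasing function of itself is negative.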
Example 2: The portfolio analysis in Example 1 can be generalized. If there are n risky assets instead of just one, and if their returns are jointly elliptically distributed, then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return—and all possible portfolios have return distributions that are location-scale-related to each other.[11][12] Thus portfolio optimization can be implemented using a two-moment decision model.
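As a concrete numerical sketch (not drawn from the cited sources; the expected returns and covariances below are invented for illustration), the following Python code finds the minimum-variance portfolio achieving a target expected return, which is the core computation of such a two-moment model:

import numpy as np

# Hypothetical expected returns and covariance matrix for n = 3 risky assets.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.0400, 0.0060, 0.0100],
                  [0.0060, 0.0900, 0.0120],
                  [0.0100, 0.0120, 0.0625]])

target = 0.10  # required expected portfolio return

# Minimize w' Sigma w subject to w' mu = target and w' 1 = 1.
# The Lagrangian first-order conditions form a linear system in the
# weights w and the two multipliers.
ones = np.ones(3)
A = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(3), [target, 1.0]])
w = np.linalg.solve(A, b)[:3]

print("weights:", w.round(4))
print("portfolio mean:", round(float(w @ mu), 4),
      "portfolio stdev:", round(float(np.sqrt(w @ Sigma @ w)), 4))

Because returns are assumed jointly elliptical, any portfolio with this mean and a larger variance is dominated, so the frontier traced out by varying the target fully characterizes the investor's choice set.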
Example 3: Suppose that a price-taking, risk-averse firm must commit to producing a quantity of output $q$ before observing the market realization $p$ of the product's price.[13] Its decision problem is to choose $q$ so as to maximize the expected utility of profit:

$$\max_q \; \operatorname{E}\!\left[u(pq - c(q) - g)\right],$$
where $\operatorname{E}$ is the expected value operator, $u$ is the firm's utility function, $c$ is its variable cost function, and $g$ is its fixed cost. All possible distributions of the firm's random revenue $pq$, based on all possible choices of $q$, are location-scale related, so the decision problem can be framed in terms of the expected value and variance of revenue.
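A minimal numerical sketch of this problem follows; the price distribution, cost function, and utility function are invented assumptions, not taken from the cited source:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primitives: price p ~ N(5, 1), variable cost c(q) = 0.5 q^2,
# fixed cost g, and CARA utility u(x) = -exp(-a x).
a, g = 0.02, 10.0
prices = rng.normal(5.0, 1.0, size=100_000)  # simulated price realizations

def expected_utility(q):
    profit = prices * q - 0.5 * q ** 2 - g
    return np.mean(-np.exp(-a * profit))

# Grid search over committed output levels: the optimum trades higher
# expected revenue against the variance of revenue p*q, which grows with q.
qs = np.linspace(0.0, 10.0, 1001)
q_star = max(qs, key=expected_utility)
print("optimal output:", round(float(q_star), 2))

With these numbers the optimum is close to the value $q \approx 4.9$ implied by the equivalent mean-variance objective $\operatorname{E}[\text{profit}] - \tfrac{a}{2}\operatorname{Var}(\text{profit})$, since revenue here is normally distributed.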
If the decision-maker is not an expected utility maximizer, decision-making can still be framed in terms of the mean and variance of a random variable if all alternative distributions for an unpredictable outcome are location-scale transformations of each other.[14]
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments (computed in terms of deviations from the mean) are used in preference to ordinary moments (computed in terms of deviations from zero) because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location.
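In symbols, the $n$-th central moment of a random variable $X$ with mean $\mu = \operatorname{E}[X]$ is

$$\mu_n = \operatorname{E}\!\left[(X - \mu)^n\right],$$

so that the second central moment $\mu_2$ is the variance, the quantity used alongside the mean in a two-moment model.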
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2},$$

where the parameter $\mu$ is the mean of the distribution and $\sigma$ is its standard deviation.
In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
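In symbols, for a random variable $X$ with mean $\mu$ and standard deviation $\sigma$, the skewness is the third standardized moment:

$$\gamma_1 = \operatorname{E}\!\left[\left(\frac{X - \mu}{\sigma}\right)^{3}\right].$$

A two-moment model, by construction, ignores this and all higher moments.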
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma^2$, $s^2$, $\operatorname{Var}(X)$, $V(X)$, or $\mathbb{V}(X)$.
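In symbols, for a random variable $X$ with mean $\mu = \operatorname{E}[X]$,

$$\operatorname{Var}(X) = \operatorname{E}\!\left[(X - \mu)^2\right] = \operatorname{E}[X^2] - (\operatorname{E}[X])^2,$$

the second identity being the standard computational shortcut.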
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
In probability and statistics, Student's t-distribution is any member of a family of continuous probability distributions that arise when estimating the mean of a normally distributed population in situations where the sample size is small and the population's standard deviation is unknown. It was developed by English statistician William Sealy Gosset under the pseudonym "Student".
In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. MSE is almost always strictly positive, either because of randomness or because the estimator does not account for information that could produce a more accurate estimate. In machine learning, specifically empirical risk minimization, MSE may refer to the empirical risk, as an estimate of the true MSE.
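In symbols, for an estimator $\hat{\theta}$ of a parameter $\theta$, the standard bias-variance decomposition is

$$\operatorname{MSE}(\hat{\theta}) = \operatorname{E}\!\left[(\hat{\theta} - \theta)^2\right] = \operatorname{Var}(\hat{\theta}) + \left(\operatorname{Bias}(\hat{\theta})\right)^2,$$

so the MSE, like a two-moment model, depends only on the first two moments of the estimator's sampling distribution.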
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value". The error of an observation is the deviation of the observed value from the true value of a quantity of interest. The residual is the difference between the observed value and the estimated value of the quantity of interest. The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.
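For example, for observations $x_1, \dots, x_n$ drawn from a population with mean $\mu$, the error and the residual of the $i$-th observation are

$$e_i = x_i - \mu \qquad \text{and} \qquad r_i = x_i - \bar{x},$$

where $\bar{x}$ is the sample mean: the error depends on the unobservable population mean, while the residual can be computed from the data.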
In probability and statistics, a mixture distribution is the probability distribution of a random variable that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may be random vectors, in which case the mixture distribution is a multivariate distribution.
Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. It uses the variance of asset prices as a proxy for risk.
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class, then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.
In probability theory, especially in mathematical statistics, a location–scale family is a family of probability distributions parametrized by a location parameter and a non-negative scale parameter. For any random variable $X$ whose probability distribution function belongs to such a family, the distribution function of $Y = a + bX$ (where $a$ is a location shift and $b > 0$ a scale factor) also belongs to the family.
In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function. Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation.
In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk is present, in which case expected utility is maximized. Formally, exponential utility is given by:

$$u(c) = \begin{cases} \dfrac{1 - e^{-ac}}{a}, & a \neq 0 \\ c, & a = 0, \end{cases}$$

where $c$ is a quantity the decision-maker prefers more of, such as consumption or wealth, and $a$ is a constant representing the degree of risk preference ($a > 0$ for risk aversion).
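A standard computation shows the connection to two-moment analysis: if wealth $w$ is normally distributed with mean $\mu_w$ and variance $\sigma_w^2$, then by the moment generating function of the normal distribution,

$$\operatorname{E}\!\left[-e^{-aw}\right] = -e^{-a\mu_w + \frac{1}{2}a^2\sigma_w^2},$$

so for $a > 0$ maximizing expected exponential utility is equivalent to maximizing the mean-variance objective $\mu_w - \tfrac{a}{2}\sigma_w^2$.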
Goals-Based Investing (GBI) or Goal-Driven Investing is the use of financial markets to fund goals within a specified period of time. Traditional portfolio construction balances expected portfolio variance with return and uses a risk aversion metric to select the optimal mix of investments. By contrast, GBI optimizes an investment mix to minimize the probability of failing to achieve a minimum wealth level within a set period of time.
In probability and statistics, an elliptical distribution is any member of a broad family of probability distributions that generalize the multivariate normal distribution. Intuitively, in the simplified two and three dimensional case, the joint distribution forms an ellipse and an ellipsoid, respectively, in iso-density plots.
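When a density exists, it has the form

$$f(x) = k \, g\!\left((x - \mu)^{\mathsf{T}} \Sigma^{-1} (x - \mu)\right)$$

for a location vector $\mu$, a positive-definite matrix $\Sigma$, a suitable nonnegative function $g$, and a normalizing constant $k$; the multivariate normal distribution is the special case $g(t) = e^{-t/2}$.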
In portfolio theory, a mutual fund separation theorem, mutual fund theorem, or separation theorem is a theorem stating that, under certain conditions, any investor's optimal portfolio can be constructed by holding each of certain mutual funds in appropriate ratios, where the number of mutual funds is smaller than the number of individual assets in the portfolio. Here a mutual fund refers to any specified benchmark portfolio of the available assets. There are two advantages of having a mutual fund theorem. First, if the relevant conditions are met, it may be easier for an investor to purchase a smaller number of mutual funds than to purchase a larger number of assets individually. Second, from a theoretical and empirical standpoint, if it can be assumed that the relevant conditions are indeed satisfied, then implications for the functioning of asset markets can be derived and tested.