Chance-constrained portfolio selection

Chance-constrained portfolio selection is an approach to portfolio selection under loss aversion. The formulation assumes that (i) the investor's preferences are representable by the expected utility of final wealth, and (ii) the probability that final wealth falls below a survival or safety level must be acceptably low. The chance-constrained portfolio problem is then to find:

max Σⱼ wⱼE(Xⱼ), subject to Pr(Σⱼ wⱼXⱼ < s) ≤ α, Σⱼ wⱼ = 1, wⱼ ≥ 0 for all j,

where s is the survival level, α is the admissible probability of ruin, wⱼ is the weight of the jth asset in the portfolio, and Xⱼ is that asset's (random) value.
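Under the common additional assumption that asset values are jointly normal, the chance constraint Pr(Σⱼ wⱼXⱼ < s) ≤ α has a deterministic equivalent, μᵀw + Φ⁻¹(α)·√(wᵀΣw) ≥ s, where Φ⁻¹ is the standard normal quantile function. The sketch below solves the resulting problem numerically with SciPy's SLSQP solver; the helper name and the three-asset data are illustrative assumptions, not from the source:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def chance_constrained_portfolio(mu, Sigma, s, alpha):
    """Maximize expected final wealth mu @ w subject to
    Pr(w @ X < s) <= alpha, sum(w) = 1, w >= 0, assuming
    X ~ N(mu, Sigma), so the chance constraint becomes
    mu @ w + norm.ppf(alpha) * sqrt(w @ Sigma @ w) >= s."""
    n = len(mu)
    z = norm.ppf(alpha)  # negative for alpha < 0.5

    constraints = [
        # budget constraint: weights sum to one
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        # deterministic equivalent of the chance constraint (must be >= 0)
        {"type": "ineq",
         "fun": lambda w: mu @ w + z * np.sqrt(w @ Sigma @ w) - s},
    ]
    bounds = [(0.0, 1.0)] * n          # no short sales
    w0 = np.full(n, 1.0 / n)           # feasible equal-weight start
    return minimize(lambda w: -(mu @ w), w0, bounds=bounds,
                    constraints=constraints, method="SLSQP")

# Illustrative (hypothetical) data: one unit of wealth, three assets.
mu = np.array([1.05, 1.10, 1.02])              # expected gross returns
Sigma = np.diag([0.02**2, 0.08**2, 0.001**2])  # return covariance
res = chance_constrained_portfolio(mu, Sigma, s=0.98, alpha=0.05)
w = res.x  # optimal weights
```

With s = 0.98 the constraint binds: holding only the highest-mean asset would give Pr(ruin) > 5%, so the optimizer shifts some weight toward lower-variance assets, trading expected wealth for safety.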

The formulation builds on the seminal 1959 work of Abraham Charnes and William W. Cooper on chance-constrained programming,[1] which was first applied to finance by Bertil Naslund and Andrew B. Whinston in 1962[2] and by N. H. Agnew et al. in 1969.[3]

For fixed α the chance-constrained portfolio problem represents lexicographic preferences and is an implementation of capital asset pricing under loss aversion. It has been observed,[4] however, that no utility function can represent the preference ordering of chance-constrained programming, because a fixed α does not admit compensation for a small increase in α by any increase in expected wealth.

For a comparison with mean-variance and safety-first portfolio problems, see [5]; for a survey of solution methods, see [6]; for a discussion of the risk-aversion properties of chance-constrained portfolio selection, see [7].

Related Research Articles

Capital asset pricing model

In finance, the capital asset pricing model (CAPM) is a model used to determine a theoretically appropriate required rate of return of an asset, to make decisions about adding assets to a well-diversified portfolio.

Risk aversion

In economics and finance, risk aversion is the tendency of people to prefer outcomes with low uncertainty to those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome.

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision which both optimizes some criteria chosen by the decision maker, and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in a broad range of areas ranging from finance to transportation to energy optimization.

Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities, but other, more sophisticated methods are available.

The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour.

In microeconomics, a consumer's Marshallian demand function is the quantity they demand of a particular good as a function of its price, their income, and the prices of other goods, a more technical exposition of the standard demand function. It is a solution to the utility maximization problem of how the consumer can maximize their utility for given income and prices. A synonymous term is uncompensated demand function, because when the price rises the consumer is not compensated with higher nominal income for the fall in their real income, unlike in the Hicksian demand function. Thus the change in quantity demanded is a combination of a substitution effect and a wealth effect. Although Marshallian demand is in the context of partial equilibrium theory, it is sometimes called Walrasian demand as used in general equilibrium theory.

The equity premium puzzle refers to the inability of an important class of economic models to explain the average equity risk premium (ERP) provided by a diversified portfolio of equities over that of government bonds, which has been observed for more than 100 years. There is a significant disparity between returns produced by stocks compared to returns produced by government treasury bills. The equity premium puzzle addresses the difficulty in understanding and explaining this disparity, which is measured by the equity risk premium: the excess of equity returns over the risk-free return.

In microeconomics, a consumer's Hicksian demand function or compensated demand function for a good is their quantity demanded as part of the solution to minimizing their expenditure on all goods while delivering a fixed level of utility. Essentially, a Hicksian demand function shows how an economic agent would react to the change in the price of a good, if the agent's income was compensated to guarantee the agent the same utility previous to the change in the price of the good—the agent will remain on the same indifference curve before and after the change in the price of the good. The function is named after John Hicks.

Stochastic dominance is a partial order between random variables. It is a form of stochastic ordering. The concept arises in decision theory and decision analysis in situations where one gamble can be ranked as superior to another gamble for a broad class of decision-makers. It is based on shared preferences regarding sets of possible outcomes and their associated probabilities. Only limited knowledge of preferences is required for determining dominance. Risk aversion is a factor only in second order stochastic dominance.

Merton's portfolio problem is a problem in continuous-time finance and in particular intertemporal portfolio choice. An investor must choose how much to consume and must allocate their wealth between stocks and a risk-free asset so as to maximize expected utility. The problem was formulated and solved by Robert C. Merton in 1969 both for finite lifetimes and for the infinite case. Research has continued to extend and generalize the model to include factors like transaction costs and bankruptcy.

Exponential utility

In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk is present, in which case expected utility is maximized. Formally, exponential utility is given by u(c) = (1 − e^(−ac))/a for a ≠ 0, and u(c) = c for a = 0, where c is consumption or wealth and the constant a measures risk aversion.

Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

Goals-Based Investing (GBI) or Goal-Driven Investing is the use of financial markets to fund goals within a specified period of time. Traditional portfolio construction balances expected portfolio variance with return and uses a risk aversion metric to select the optimal mix of investments. By contrast, GBI optimizes an investment mix to minimize the probability of failing to achieve a minimum wealth level within a set period of time.

Portfolio optimization

Portfolio optimization is the process of selecting an optimal portfolio, out of a set of considered portfolios, according to some objective. The objective typically maximizes factors such as expected return, and minimizes costs like financial risk, resulting in a multi-objective optimization problem. Factors being considered may range from tangible to intangible.

In decision theory, economics, and finance, a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value, which is the first moment about zero—and the variance, which is the second moment about the mean.

In finance, economics, and decision theory, hyperbolic absolute risk aversion (HARA) refers to a type of risk aversion that is particularly convenient to model mathematically and to obtain empirical predictions from. It refers specifically to a property of von Neumann–Morgenstern utility functions, which are typically functions of final wealth, and which describe a decision-maker's degree of satisfaction with the outcome for wealth. The final outcome for wealth is affected both by random variables and by decisions. Decision-makers are assumed to make their decisions so as to maximize the expected value of the utility function.

In portfolio theory, a mutual fund separation theorem, mutual fund theorem, or separation theorem is a theorem stating that, under certain conditions, any investor's optimal portfolio can be constructed by holding each of certain mutual funds in appropriate ratios, where the number of mutual funds is smaller than the number of individual assets in the portfolio. Here a mutual fund refers to any specified benchmark portfolio of the available assets. There are two advantages of having a mutual fund theorem. First, if the relevant conditions are met, it may be easier for an investor to purchase a smaller number of mutual funds than to purchase a larger number of assets individually. Second, from a theoretical and empirical standpoint, if it can be assumed that the relevant conditions are indeed satisfied, then implications for the functioning of asset markets can be derived and tested.

In computer science, optimal computing budget allocation (OCBA) is an approach to maximize the overall simulation efficiency for finding an optimal decision. It was introduced in the mid-1990s by Chun-Hung Chen.

Intertemporal portfolio choice is the process of allocating one's investable wealth to various assets, especially financial assets, repeatedly over time, in such a way as to optimize some criterion. The set of asset proportions at any time defines a portfolio. Since the returns on almost all assets are not fully predictable, the criterion has to take financial risk into account. Typically the criterion is the expected value of some concave function of the value of the portfolio after a certain number of time periods—that is, the expected utility of final wealth. Alternatively, it may be a function of the various levels of goods and services consumption that are attained by withdrawing some funds from the portfolio after each time period.

Chance Constrained Programming (CCP) is a mathematical optimization approach used to handle problems under uncertainty. It was first introduced by Charnes and Cooper in 1959 and further developed by Miller and Wagner in 1965. CCP is widely used in various fields, including finance, engineering, and operations research, to optimize decision-making processes where certain constraints need to be satisfied with a specified probability.

References

  1. Charnes, A. and W. W. Cooper (1959), "Chance-Constrained Programming," Management Science, 6, No. 1, 73-79.
  2. Naslund, B. and A. Whinston (1962), "A Model of Multi-Period Investment under Uncertainty," Management Science, 8, No. 2, 184-200.
  3. Agnew, N. H., R. A. Agnew, J. Rasmussen and K. R. Smith (1969), "An Application of Chance-Constrained Programming to Portfolio Selection in a Casualty Insurance Firm," Management Science, 15, No. 10, 512-520.
  4. Borch, K. H. (1968), The Economics of Uncertainty, Princeton University Press, Princeton.
  5. Seppälä, J. (1994), "The diversification of currency loans: A comparison between safety-first and mean-variance criteria," European Journal of Operational Research, 74, 325-343.
  6. Bai, X., X. Zheng and X. Sun (2012), "A survey on probabilistic constrained optimization problems," Numerical Algebra, Control and Optimization, 2, No. 4, 767-778.
  7. Pyle, D. H. and S. J. Turnovsky (1971), "Risk Aversion in Chance Constrained Portfolio Selection," Management Science, 18, No. 3, 218-225.