Upside potential ratio

The upside-potential ratio is a measure of the return of an investment asset relative to a minimal acceptable return. It allows a firm or individual to choose investments that have had relatively good upside performance per unit of downside risk.

The upside-potential ratio $U$ is defined as

$$U = \frac{\sum_{r=\min}^{+\infty} (R_r - R_{\min})\, P_r}{\sqrt{\sum_{r=-\infty}^{\min} (R_{\min} - R_r)^2\, P_r}} = \frac{\mathbb{E}[(R_r - R_{\min})_+]}{\sqrt{\mathbb{E}[(R_{\min} - R_r)_+^2]}}$$

where the returns $R_r$ have been put into increasing order. Here $P_r$ is the probability of the return $R_r$, and $R_{\min}$, which occurs at $r = \min$, is the minimal acceptable return. In the secondary formula, $(X)_+ = X$ if $X \geq 0$ and $0$ otherwise, and $X_- = (-X)_+$. [1]

The upside-potential ratio may also be expressed as a ratio of partial moments, since $\mathbb{E}[(R_r - R_{\min})_+]$ is the first upper partial moment and $\mathbb{E}[(R_{\min} - R_r)_+^2]$ is the second lower partial moment.
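As an illustrative sketch (the function name, sample returns, and the equal-weighting assumption are ours, not from the source), the discrete formula can be evaluated directly from a sample of returns by treating each observation as equally probable:

```python
import numpy as np

def upside_potential_ratio(returns, mar=0.0):
    """Upside-potential ratio: the first upper partial moment about the
    minimal acceptable return (MAR), divided by the square root of the
    second lower partial moment. Each observed return is treated as
    equally probable (P_r = 1/n)."""
    r = np.asarray(returns, dtype=float)
    upside = np.mean(np.maximum(r - mar, 0.0))                  # E[(R - MAR)_+]
    downside = np.sqrt(np.mean(np.maximum(mar - r, 0.0) ** 2))  # sqrt(E[(MAR - R)_+^2])
    if downside == 0.0:
        raise ZeroDivisionError("no returns below the MAR; downside risk is zero")
    return upside / downside

# Example: six monthly returns against a 1% minimal acceptable return
ratio = upside_potential_ratio([0.04, -0.02, 0.03, 0.01, -0.05, 0.06], mar=0.01)
```

A ratio above 1 would indicate that average performance above the MAR exceeded the downside deviation below it.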

The measure was developed by Frank A. Sortino.

Discussion

The upside-potential ratio is a measure of risk-adjusted returns. All such measures depend on some measure of risk. In practice, standard deviation is often used, perhaps because it is mathematically easy to manipulate. However, standard deviation treats deviations above the mean (which are desirable from the investor's perspective) exactly the same as deviations below the mean (which are, at the very least, less desirable). In practice, rational investors have a preference for good returns (i.e., deviations above the mean) and an aversion to bad returns (i.e., deviations below the mean).

Sortino further found that investors are (or, at least, should be) averse not to deviations below the mean, but to deviations below some "minimal acceptable return" (MAR), which is meaningful to them specifically. Thus, this measure uses deviations above the MAR in the numerator, rewarding performance above the MAR. In the denominator, it has deviations below the MAR, thus penalizing performance below the MAR.

Thus, by rewarding desirable results in the numerator and penalizing undesirable results in the denominator, this measure attempts to serve as a pragmatic measure of the goodness of an investment portfolio's returns in a sense that is not just mathematically simple (a primary reason to use standard deviation as a risk measure), but one that considers the realities of investor psychology and behavior.

Related Research Articles

In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.

Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. It uses the past variance of asset prices as a proxy for future risk.

In finance, the Sharpe ratio measures the performance of an investment such as a security or portfolio compared to a risk-free asset, after adjusting for its risk. It is defined as the difference between the returns of the investment and the risk-free return, divided by the standard deviation of the investment returns. It represents the additional amount of return that an investor receives per unit of increase in risk.

The Sortino ratio measures the risk-adjusted return of an investment asset, portfolio, or strategy. It is a modification of the Sharpe ratio but penalizes only those returns falling below a user-specified target or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally. Though both ratios measure an investment's risk-adjusted return, they do so in significantly different ways that will frequently lead to differing conclusions as to the true nature of the investment's return-generating efficiency.
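As a minimal sketch of the contrast (function names and sample data are ours; annualization is ignored and the population standard deviation is used), the two ratios differ only in their denominators:

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return over the risk-free rate, per unit of total
    volatility (standard deviation of all returns)."""
    r = np.asarray(returns, dtype=float)
    return (r.mean() - risk_free) / r.std()

def sortino_ratio(returns, target=0.0):
    """Mean excess return over the target, per unit of downside deviation:
    only returns below the target contribute to the denominator."""
    r = np.asarray(returns, dtype=float)
    downside = np.sqrt(np.mean(np.minimum(r - target, 0.0) ** 2))
    return (r.mean() - target) / downside

# A series with large upside swings but only small downside ones
rets = [0.10, -0.01, 0.08, -0.02, 0.12]
```

For this series the Sortino ratio comes out far larger than the Sharpe ratio, because the big positive returns inflate only the Sharpe denominator.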

In finance, Jensen's alpha is used to determine the abnormal return of a security or portfolio of securities over the theoretical expected return. It is a version of the standard alpha based on a theoretical performance instead of a market index.

The Treynor reward to volatility model, named after Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no diversifiable risk, per unit of market risk assumed.

The drawdown is the measure of the decline from a historical peak in some variable.
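As an illustrative sketch (the name and sample equity curve are ours; strictly positive values are assumed so the fraction is well defined), the drawdown at each point is the fractional decline from the running peak:

```python
def drawdowns(values):
    """Fractional decline of each observation from its running historical peak.
    Assumes strictly positive values (e.g., an equity curve or price series)."""
    out, peak = [], float("-inf")
    for v in values:
        peak = max(peak, v)          # historical peak so far
        out.append((peak - v) / peak)
    return out

# Example: an equity curve that peaks at 120, then falls to 90
dd = drawdowns([100, 120, 90, 110, 130])  # maximum drawdown is (120 - 90) / 120 = 0.25
```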

The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula

$$\text{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|$$

where $A_t$ is the actual value and $F_t$ is the forecast value.
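A minimal sketch of the definition (function name and data are ours; note that MAPE is undefined whenever an actual value is zero):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, expressed in percent.
    Undefined when any actual value is zero."""
    errors = [abs((a - f) / a) for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

# A 10% error on each of two observations gives a MAPE of 10%
err = mape([100.0, 200.0], [110.0, 180.0])
```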

The information ratio measures the active return of an investment relative to a benchmark index, per unit of the volatility of that active return. It is defined as the active return divided by the tracking error, with both components measured relative to the performance of the agreed-on benchmark. It represents the additional return that an investor receives per unit of additional risk taken.

In finance, tracking error or active risk is a measure of the risk in an investment portfolio that is due to active management decisions made by the portfolio manager; it indicates how closely a portfolio follows the index to which it is benchmarked. The best measure is the standard deviation of the difference between the portfolio and index returns.
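As a hedged sketch of these two definitions (names and sample data are ours; the sample standard deviation is used), the information ratio is the mean active return divided by the tracking error:

```python
import numpy as np

def tracking_error(portfolio, benchmark):
    """Sample standard deviation of the active (portfolio minus benchmark) returns."""
    active = np.asarray(portfolio, dtype=float) - np.asarray(benchmark, dtype=float)
    return active.std(ddof=1)

def information_ratio(portfolio, benchmark):
    """Mean active return per unit of tracking error."""
    active = np.asarray(portfolio, dtype=float) - np.asarray(benchmark, dtype=float)
    return active.mean() / tracking_error(portfolio, benchmark)

# Example: a portfolio tracked against its benchmark over three periods
p = [0.02, 0.03, 0.01]
b = [0.01, 0.01, 0.02]
```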

The ulcer index is a stock market risk measure or technical analysis indicator devised by Peter Martin in 1987, and published by him and Byron McCann in their 1989 book The Investor's Guide to Fidelity Funds. It is a measure of downward volatility: the amount of drawdown or retracement over a period.

Roy's safety-first criterion is a risk management technique, devised by A. D. Roy, that allows an investor to select one portfolio rather than another based on the criterion that the probability of the portfolio's return falling below a minimum desired threshold is minimized.

Simply stated, Post-Modern Portfolio Theory (PMPT) is an extension of the traditional Modern Portfolio Theory (MPT) of Markowitz and Sharpe. Both theories provide analytical methods for rational investors to use diversification to optimize their investment portfolios. The essential difference between PMPT and MPT is that PMPT emphasizes the return that must be earned on an investment in order to meet future, specified obligations, while MPT is concerned only with the absolute return vis-à-vis the risk-free rate.

In financial mathematics, a risk measure is used to determine the amount of an asset or set of assets to be kept in reserve. The purpose of this reserve is to make the risks taken by financial institutions, such as banks and insurance companies, acceptable to the regulator. In recent years attention has turned towards convex and coherent risk measurement.

The bias ratio is an indicator used in finance to analyze the returns of investment portfolios, and in performing due diligence.

The Omega ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Con Keating and William F. Shadwick in 2002 and is defined as the probability weighted ratio of gains versus losses for some threshold return target. The ratio is an alternative for the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
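As an illustrative discrete-sample sketch (name and data are ours), the Omega ratio compares probability-weighted gains above a threshold with probability-weighted losses below it:

```python
def omega_ratio(returns, threshold=0.0):
    """Sum of gains above the threshold divided by the sum of losses
    below it; with equal sample weights the probabilities cancel."""
    gains = sum(max(r - threshold, 0.0) for r in returns)
    losses = sum(max(threshold - r, 0.0) for r in returns)
    if losses == 0.0:
        raise ZeroDivisionError("no returns below the threshold")
    return gains / losses

# Gains of 0.03 + 0.02 against losses of 0.01 + 0.02 give a ratio of 5/3
result = omega_ratio([0.03, -0.01, 0.02, -0.02])
```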

Modigliani risk-adjusted performance (also known as M², the Modigliani–Modigliani measure or RAP) is a measure of the risk-adjusted returns of some investment portfolio. It measures the returns of the portfolio, adjusted for the risk of the portfolio relative to that of some benchmark (e.g., the market). We can interpret the measure as the difference between the scaled excess return of our portfolio P and that of the market, where the scaled portfolio has the same volatility as the market. It is derived from the widely used Sharpe ratio, but it has the significant advantage of being in units of percent return (as opposed to the Sharpe ratio – an abstract, dimensionless ratio of limited utility to most investors), which makes it dramatically more intuitive to interpret.

Downside risk is the financial risk associated with losses. That is, it is the risk of the actual return being below the expected return, or the uncertainty about the magnitude of that difference.

In mathematics, the set of positive real numbers, $\mathbb{R}_{>0}$, is the subset of those real numbers that are greater than zero. The non-negative real numbers, $\mathbb{R}_{\geq 0}$, also include zero. Although the symbols $\mathbb{R}_+$ and $\mathbb{R}^+$ are ambiguously used for either of these, the notation $\mathbb{R}_+$ or $\mathbb{R}^+$ for $\mathbb{R}_{\geq 0}$ and $\mathbb{R}_+^*$ or $\mathbb{R}_*^+$ for $\mathbb{R}_{>0}$ has also been widely employed, is aligned with the practice in algebra of denoting the exclusion of the zero element with a star, and should be understandable to most practicing mathematicians.

The Rachev Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Dr. Svetlozar Rachev and has been extensively studied in quantitative finance. Unlike the reward-to-variability ratios, such as Sharpe ratio and Sortino ratio, the Rachev ratio is a reward-to-risk ratio, which is designed to measure the right tail reward potential relative to the left tail risk in a non-Gaussian setting. Intuitively, it represents the potential for extreme positive returns compared to the risk of extreme losses, at a rarity frequency q defined by the user.

References

  1. Chen, L.; He, S.; Zhang, S. (2011). "When all risk-adjusted performance measures are the same: In praise of the Sharpe ratio". Quantitative Finance. 11 (10): 1439. CiteSeerX 10.1.1.701.141. doi:10.1080/14697680903081881. S2CID 15825491.