Omega ratio

The Omega ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Con Keating and William F. Shadwick in 2002 and is defined as the probability-weighted ratio of gains versus losses for some threshold return target.[1] The ratio is an alternative to the widely used Sharpe ratio and is based on information the Sharpe ratio discards.

Omega is calculated by partitioning the cumulative return distribution into an area of losses and an area of gains relative to the chosen threshold.

The ratio is calculated as:

$$\Omega(\theta) = \frac{\int_{\theta}^{\infty} \left[1 - F(r)\right] dr}{\int_{-\infty}^{\theta} F(r)\, dr},$$

where $F$ is the cumulative probability distribution function of the returns and $\theta$ is the target return threshold defining what is considered a gain versus a loss. A larger ratio indicates that the asset provides more gains relative to losses for the threshold $\theta$ and so would be preferred by an investor. When $\theta$ is set to zero, the gain–loss ratio of Bernardo and Ledoit arises as a special case.[2]
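
Because $\int_{\theta}^{\infty}\left[1 - F(r)\right] dr = \mathbb{E}\left[(r - \theta)_{+}\right]$ and $\int_{-\infty}^{\theta} F(r)\, dr = \mathbb{E}\left[(\theta - r)_{+}\right]$, the ratio can be estimated from a finite sample of returns by averaging gains and losses relative to the threshold. The Python sketch below illustrates this; the function name and the example return series are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

def omega_ratio(returns, theta=0.0):
    """Estimate the Omega ratio of a return series for threshold theta.

    Uses the identity Omega(theta) = E[(r - theta)_+] / E[(theta - r)_+],
    i.e. the probability-weighted gains over losses relative to theta.
    """
    returns = np.asarray(returns, dtype=float)
    gains = np.maximum(returns - theta, 0.0).mean()   # average gain above theta
    losses = np.maximum(theta - returns, 0.0).mean()  # average shortfall below theta
    if losses == 0.0:
        return np.inf  # no observations fall below the threshold
    return gains / losses

# Example with illustrative monthly returns and a 1% monthly target
sample_returns = [0.03, -0.02, 0.05, 0.01, -0.04, 0.02, 0.00, 0.04]
print(omega_ratio(sample_returns, theta=0.01))
```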

Comparisons can be made with the commonly used Sharpe ratio, which considers the ratio of return versus volatility.[3] The Sharpe ratio considers only the first two moments of the return distribution, whereas the Omega ratio, by construction, considers all moments.
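
To illustrate the difference, the following sketch (again with purely hypothetical data) rescales two return samples, one roughly symmetric and one left-skewed, to share the same mean and standard deviation. Their Sharpe ratios then coincide for any fixed risk-free rate, while their Omega ratios at $\theta = 0$ will generally differ because of the higher moments.

```python
import numpy as np

def omega_ratio(returns, theta=0.0):
    """Empirical Omega ratio: mean gain over mean loss relative to theta."""
    r = np.asarray(returns, dtype=float)
    gains = np.maximum(r - theta, 0.0).mean()
    losses = np.maximum(theta - r, 0.0).mean()
    return np.inf if losses == 0.0 else gains / losses

def rescale(r, target_mean, target_std):
    """Rescale a sample to a given mean and standard deviation."""
    r = np.asarray(r, dtype=float)
    return target_mean + target_std * (r - r.mean()) / r.std()

rng = np.random.default_rng(0)
symmetric = rescale(rng.normal(size=10_000), 0.01, 0.05)                  # roughly symmetric returns
skewed = rescale(-rng.lognormal(sigma=1.0, size=10_000), 0.01, 0.05)      # left-skewed returns

# Identical first two moments => identical Sharpe ratios,
# yet the Omega ratios at theta = 0 generally differ.
print(omega_ratio(symmetric, theta=0.0), omega_ratio(skewed, theta=0.0))
```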

Optimization of the Omega ratio

The standard form of the Omega ratio is a non-convex function, but it is possible to optimize a transformed version using linear programming.[4] To begin with, Kapsos et al. show that the Omega ratio of a portfolio is:

$$\Omega(\theta) = \frac{\mathbb{E}\left[w^{T} r\right] - \theta}{\mathbb{E}\left[\left(\theta - w^{T} r\right)_{+}\right]} + 1,$$

where $w$ is the vector of portfolio weights, $r$ is the random vector of asset returns, and $(x)_{+} = \max(x, 0)$.

If we are interested in maximizing the Omega ratio, then the relevant optimization problem to solve is:

$$\max_{w} \; \frac{\mathbb{E}\left[w^{T} r\right] - \theta}{\mathbb{E}\left[\left(\theta - w^{T} r\right)_{+}\right]} \quad \text{subject to} \quad w^{T} \mathbf{1} = 1, \quad L \le w \le U.$$

The objective function is still non-convex, so we have to make several more modifications. First, note that the discrete analogue of the objective function is:

$$\frac{\sum_{j=1}^{m} p_{j}\, w^{T} r_{j} - \theta}{\sum_{j=1}^{m} p_{j} \left(\theta - w^{T} r_{j}\right)_{+}},$$

where $r_{1}, \dots, r_{m}$ are the sampled asset return vectors and $p_{j}$ is the probability of sample $j$.

For the sampled asset class returns, introduce auxiliary variables $u_{j}$ with $u_{j} \ge \theta - w^{T} r_{j}$ and $u_{j} \ge 0$, so that $u_{j} = \left(\theta - w^{T} r_{j}\right)_{+}$ at the optimum. Then the discrete objective function becomes:

$$\frac{\sum_{j=1}^{m} p_{j}\, w^{T} r_{j} - \theta}{\sum_{j=1}^{m} p_{j}\, u_{j}}.$$

With these substitutions, we have transformed the non-convex optimization problem into an instance of linear-fractional programming. Assuming that the feasible region is non-empty and bounded, it is possible to transform a linear-fractional program into a linear program. Conversion from a linear-fractional program to a linear program (via the substitution $y = t w$, $q_{j} = t u_{j}$, with $t > 0$ chosen so that the denominator equals one) gives us the final form of the Omega ratio optimization problem:

$$\begin{aligned}
\max_{y,\, q,\, t} \quad & \sum_{j=1}^{m} p_{j}\, y^{T} r_{j} - \theta t \\
\text{subject to} \quad & \sum_{j=1}^{m} p_{j}\, q_{j} = 1, \\
& q_{j} \ge \theta t - y^{T} r_{j}, \qquad q_{j} \ge 0, \qquad j = 1, \dots, m, \\
& y^{T} \mathbf{1} = t, \qquad L t \le y \le U t, \qquad t \ge 0,
\end{aligned}$$

where $L$ and $U$ are the respective lower and upper bounds for the portfolio weights. To recover the portfolio weights, normalize the values of $y$ so that their sum is equal to 1, which is equivalent to setting $w = y / t$.
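
A minimal numerical sketch of this linear program is given below, using scipy.optimize.linprog; the scenario data, default bounds, and function name are illustrative assumptions rather than details taken from Kapsos et al. The decision vector stacks $y$, $q$, and $t$, and the portfolio weights are recovered at the end by normalizing $y$.

```python
import numpy as np
from scipy.optimize import linprog

def max_omega_weights(R, theta=0.0, lower=0.0, upper=1.0):
    """Maximize the Omega ratio over portfolio weights via the LP reformulation.

    R     : (m, n) array of sampled asset returns (m scenarios, n assets),
            each scenario weighted p_j = 1/m.
    theta : target return threshold.
    lower, upper : per-asset weight bounds L and U.
    Decision vector x = [y (n), q (m), t (1)].
    """
    m, n = R.shape
    p = np.full(m, 1.0 / m)

    # Objective: maximize sum_j p_j * y^T r_j - theta * t  (linprog minimizes).
    c = np.concatenate([-(p @ R), np.zeros(m), [theta]])

    # Equality constraints: sum_j p_j q_j = 1  and  y^T 1 - t = 0.
    A_eq = np.zeros((2, n + m + 1))
    A_eq[0, n:n + m] = p
    A_eq[1, :n] = 1.0
    A_eq[1, -1] = -1.0
    b_eq = np.array([1.0, 0.0])

    # Inequality constraints, written as A_ub x <= 0:
    #   theta*t - y^T r_j - q_j <= 0        (q_j >= theta*t - y^T r_j)
    #   lower*t - y_i <= 0,  y_i - upper*t <= 0   (L*t <= y <= U*t)
    A_ub = np.zeros((m + 2 * n, n + m + 1))
    A_ub[:m, :n] = -R
    A_ub[:m, n:n + m] = -np.eye(m)
    A_ub[:m, -1] = theta
    A_ub[m:m + n, :n] = -np.eye(n)
    A_ub[m:m + n, -1] = lower
    A_ub[m + n:, :n] = np.eye(n)
    A_ub[m + n:, -1] = -upper
    b_ub = np.zeros(m + 2 * n)

    bounds = [(None, None)] * n + [(0, None)] * m + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if not res.success:
        raise ValueError(res.message)
    y = res.x[:n]
    return y / y.sum()  # normalize y so the weights sum to 1

# Example with illustrative scenario returns for three assets
rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.05, size=(250, 3))
print(max_omega_weights(R, theta=0.0))
```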

References

  1. Keating, Con; Shadwick, William F. (2002). "A Universal Performance Measure" (PDF). The Finance Development Centre Limited, UK. S2CID 16222368. Archived from the original (PDF) on 2019-08-04.
  2. Bernardo, Antonio E.; Ledoit, Olivier (2000-02-01). "Gain, Loss, and Asset Pricing". Journal of Political Economy. 108 (1): 144–172. CiteSeerX 10.1.1.39.2638. doi:10.1086/262114. ISSN 0022-3808. S2CID 16854983.
  3. "Assessing CTA Quality with the Omega Performance Measure" (PDF). Winton Capital Management, UK.
  4. Kapsos, Michalis; Zymler, Steve; Christofides, Nicos; Rustem, Berç (Summer 2014). "Optimizing the Omega Ratio using Linear Programming" (PDF). Journal of Computational Finance. 17 (4): 49–57. doi:10.21314/JCF.2014.283.