RiskMetrics


The RiskMetrics variance model (also known as the exponential smoother) originated in 1989, when Sir Dennis Weatherstone, the new chairman of J.P. Morgan, asked for a daily report measuring and explaining the risks of his firm. Three years later, in 1992, J.P. Morgan launched the RiskMetrics methodology to the marketplace, making the substantive research and analysis that satisfied Sir Dennis Weatherstone's request freely available to all market participants.
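The exponential smoother is an exponentially weighted moving average (EWMA) of squared returns. A minimal sketch, using the decay factor of 0.94 that the 1996 technical document recommends for daily data:

```python
# Sketch of the RiskMetrics exponential smoother (EWMA) for daily variance.
# The recursion is sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t^2,
# with decay factor lam = 0.94 for daily returns.

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted variance estimate from a return series."""
    sigma2 = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2
```

Recent observations dominate the estimate, which is what lets the model adapt quickly to changing volatility.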


In 1996, the RiskMetrics technical document was revised. In 1998, as client demand for the group's risk management expertise exceeded the firm's internal risk management resources, the Corporate Risk Management Department was spun off from J.P. Morgan as RiskMetrics Group with 23 founding employees. In 2001, the methodology was revised again in Return to RiskMetrics, and in 2006 a new method for modeling risk factor returns was introduced (RM2006). On 25 January 2008, RiskMetrics Group listed on the New York Stock Exchange (NYSE: RISK). In June 2010, RiskMetrics was acquired by MSCI for $1.55 billion. [1]

Risk measurement process

Portfolio risk measurement can be broken down into steps. The first is modeling the market that drives changes in the portfolio's value. The market model must be sufficiently specified that the portfolio can be revalued using information from it. The risk measurements are then extracted from the probability distribution of the changes in portfolio value. The change in value of the portfolio is typically referred to by portfolio managers as profit and loss, or P&L.

Risk factors

Risk management systems are based on models that describe potential changes in the factors affecting portfolio value. These risk factors are the building blocks for all pricing functions. In general, the factors driving the prices of financial securities are equity prices, foreign exchange rates, commodity prices, interest rates, correlation and volatility. By generating future scenarios for each risk factor, we can infer changes in portfolio value and reprice the portfolio for different "states of the world".

Portfolio risk measures

Standard deviation

The first widely used portfolio risk measure was the standard deviation of portfolio value, as described by Harry Markowitz. While comparatively easy to calculate, standard deviation is not an ideal risk measure since it penalizes profits as well as losses.
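For a portfolio with fixed weights and a known covariance matrix of asset returns, the standard deviation follows directly from the Markowitz formula sigma_p = sqrt(w' Sigma w). A minimal sketch with illustrative numbers:

```python
import math

# Portfolio standard deviation from weights and a covariance matrix,
# sigma_p = sqrt(w' * Sigma * w). The 2x2 covariance below is illustrative.

def portfolio_std(weights, cov):
    n = len(weights)
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(n) for j in range(n))
    return math.sqrt(var)

w = [0.5, 0.5]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
```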

Value at risk

The 1994 technical document popularized VaR as the risk measure of choice among investment banks that needed to measure their portfolio risk for the benefit of banking regulators. VaR is a downside risk measure, meaning that it typically focuses on losses.
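As a sketch, VaR at a given confidence level can be estimated as an empirical quantile of a sample of P&L scenarios; the simple indexing scheme below is one convention among several:

```python
# Hypothetical sketch: VaR as an empirical quantile of sampled P&L.

def value_at_risk(pnl, confidence=0.95):
    """VaR at `confidence`: the loss exceeded in roughly (1-confidence) of scenarios."""
    losses = sorted(-p for p in pnl)        # losses as positive numbers, ascending
    idx = int(confidence * len(losses))     # quantile index (simple scheme)
    return losses[min(idx, len(losses) - 1)]
```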

Expected shortfall

A third commonly used risk measure is expected shortfall, also known variously as expected tail loss, XLoss, conditional VaR, or CVaR.
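A minimal sketch: expected shortfall can be estimated as the average loss over the worst (1 - confidence) fraction of a P&L sample. This is one simple estimator, not the RiskMetrics production algorithm:

```python
# Expected shortfall as the average loss in the worst tail of a P&L sample.

def expected_shortfall(pnl, confidence=0.95):
    losses = sorted((-p for p in pnl), reverse=True)  # largest losses first
    tail = losses[:max(1, int((1 - confidence) * len(losses)))]
    return sum(tail) / len(tail)
```

Because it averages over the whole tail rather than reading off a single quantile, expected shortfall reflects the severity of losses beyond VaR.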

Marginal VaR

The Marginal VaR of a position with respect to a portfolio can be thought of as the amount of risk that the position is adding to the portfolio. It can be formally defined as the difference between the VaR of the total portfolio and the VaR of the portfolio without the position.
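The difference definition above can be sketched directly, given per-position P&L scenario vectors and any VaR estimator. The `var` argument and the dictionary layout are illustrative assumptions:

```python
# Marginal VaR per the definition above: VaR of the full portfolio minus
# VaR of the portfolio with the position removed. `var` is any VaR
# estimator operating on a per-scenario P&L vector (hypothetical helper).

def marginal_var(var, pnl_by_position, position):
    total = [sum(col) for col in zip(*pnl_by_position.values())]
    without = [sum(col) for col in zip(*(v for k, v in pnl_by_position.items()
                                         if k != position))]
    return var(total) - var(without)
```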

As Philippe Jorion (2007) puts it: "To measure the effect of changing positions on portfolio risk, individual VaRs are insufficient. Volatility measures the uncertainty in the return of an asset, taken in isolation. When this asset belongs to a portfolio, however, what matters is the contribution to portfolio risk."

Incremental risk

Incremental risk statistics provide information regarding the sensitivity of portfolio risk to changes in the position holding sizes in the portfolio.

An important property of incremental risk is that it is additive: the sum of the incremental risks of the positions in a portfolio equals the total risk of the portfolio. This property has important applications in the allocation of risk to different units, where the goal is to keep the sum of the risks equal to the total risk.
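The property that incremental risks sum to the total risk can be verified explicitly for standard deviation using the usual Euler allocation, ISD_i = w_i (Sigma w)_i / sigma_p (an illustrative definition here, not necessarily RiskMetrics' exact formulation):

```python
import math

# Euler allocation of portfolio standard deviation: each position's
# incremental risk is w_i * (Sigma w)_i / sigma_p, and these contributions
# sum exactly to the portfolio standard deviation.

def incremental_std(weights, cov):
    n = len(weights)
    sigma_w = [sum(cov[i][j] * weights[j] for j in range(n)) for i in range(n)]
    sigma_p = math.sqrt(sum(weights[i] * sigma_w[i] for i in range(n)))
    return [weights[i] * sigma_w[i] / sigma_p for i in range(n)]
```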

Since there are three risk measures covered by RiskMetrics, there are three incremental risk measures: Incremental VaR (IVaR), Incremental Expected Shortfall (IES), and Incremental Standard Deviation (ISD).

Incremental statistics also have applications to portfolio optimization. A portfolio with minimum risk will have incremental risk equal to zero for all positions. Conversely, if the incremental risk is zero for all positions, the portfolio is guaranteed to have minimum risk only if the risk measure is subadditive.

Coherent risk measures

A coherent risk measure satisfies the following four properties:

1. Subadditivity

A risk measure is subadditive if for any portfolios A and B, the risk of A+B is never greater than the risk of A plus the risk of B. In other words, the risk of the sum of subportfolios is smaller than or equal to the sum of their individual risks.

Standard deviation and expected shortfall are subadditive, while VaR is not.
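The failure of subadditivity for VaR can be seen in a classic two-bond example: each bond defaults independently with 4% probability, so each bond alone has zero 95% VaR, yet the combined portfolio's 95% VaR is positive. A sketch using probability-weighted outcomes:

```python
# VaR on a discrete loss distribution: the smallest loss L with
# P(loss <= L) >= confidence. Outcomes are (loss, probability) pairs.

def var_discrete(outcomes, confidence):
    cum = 0.0
    for loss, prob in sorted(outcomes):
        cum += prob
        if cum >= confidence:
            return loss
    return outcomes[-1][0]

# One bond: defaults (loss 100) with probability 0.04, so the default sits
# inside the 5% tail and the 95% VaR is 0.
one_bond = [(0, 0.96), (100, 0.04)]

# Two independent such bonds: P(at least one default) = 1 - 0.96^2 = 7.84%,
# which exceeds 5%, so the 95% VaR of the combined portfolio is 100.
two_bonds = [(0, 0.9216), (100, 0.0768), (200, 0.0016)]
```

Here VaR(A+B) = 100 > 0 = VaR(A) + VaR(B), violating subadditivity.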

Subadditivity is required in connection with aggregation of risks across desks, business units, accounts, or subsidiary companies. This property is important when different business units calculate their risks independently and we want to get an idea of the total risk involved. Lack of subadditivity could also be a matter of concern for regulators, where firms might be motivated to break up into affiliates to satisfy capital requirements.

2. Translation invariance

Adding an amount of cash to a portfolio decreases its risk by that same amount.

3. Positive homogeneity of degree 1

If we double the size of every position in a portfolio, the risk of the portfolio will be twice as large.

4. Monotonicity

If losses in portfolio A are larger than losses in portfolio B for all possible risk factor return scenarios, then the risk of portfolio A is higher than the risk of portfolio B.

Assessing risk measures

The estimation process of any risk measure can be wrong by a considerable margin. If an imprecise estimate gives us no good idea of what the true value could be, then the estimate is virtually worthless. Good practice is therefore to supplement any estimated risk measure with an indicator of its precision, or of the size of its error.

There are various ways to quantify the error of some estimates. One approach is to estimate a confidence interval of the risk measurement.
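One such approach can be sketched with a bootstrap: resample the P&L scenarios many times, recompute the risk measure on each resample, and take empirical quantiles of the resulting estimates. The VaR estimator and interval scheme below are illustrative assumptions:

```python
import random

# Bootstrap a rough 95% confidence interval for a VaR estimate by
# resampling the P&L scenarios with replacement.

def bootstrap_var_interval(pnl, confidence=0.95, n_boot=1000, seed=0):
    rng = random.Random(seed)

    def var(sample):
        # simple empirical-quantile VaR on a resample
        return sorted(-p for p in sample)[int(confidence * len(sample)) - 1]

    estimates = sorted(var([rng.choice(pnl) for _ in pnl]) for _ in range(n_boot))
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]
```

A wide interval signals that the point estimate should not be taken at face value.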

Market models

RiskMetrics describes three models of the risk factors that define financial markets.

Covariance approach

The first is very similar to the mean-covariance approach of Markowitz. Markowitz assumed that the asset covariance matrix can be observed and used it to compute portfolio variance. RiskMetrics likewise assumes that the market is driven by risk factors with observable covariance. The risk factors are represented by time series of prices or levels of stocks, currencies, commodities, and interest rates. Instruments are valued from these risk factors via various pricing models, and the portfolio itself is assumed to be some linear combination of these instruments.
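Under this approach, a portfolio that is linear in the risk factors has P&L variance e' Sigma e for an exposure vector e, and a Gaussian 95% VaR of roughly 1.645 times the resulting standard deviation. A minimal sketch with illustrative numbers:

```python
import math

# Parametric (covariance) VaR for a linear portfolio: P&L variance is
# e' * Sigma * e, and the 95% Gaussian quantile is about 1.645.

def parametric_var(exposures, cov, z=1.645):
    n = len(exposures)
    var_pnl = sum(exposures[i] * cov[i][j] * exposures[j]
                  for i in range(n) for j in range(n))
    return z * math.sqrt(var_pnl)
```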

Historical simulation

The second market model assumes that the market only has finitely many possible changes, drawn from a risk factor return sample of a defined historical period. Typically one performs a historical simulation by sampling from past day-on-day risk factor changes, and applying them to the current level of the risk factors to obtain risk factor price scenarios. These perturbed risk factor price scenarios are used to generate a profit (loss) distribution for the portfolio.
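The procedure just described can be sketched as follows for a single risk factor; `price` stands in for a hypothetical pricing function:

```python
# Historical simulation sketch: apply each past day-on-day relative change
# to today's risk factor level, reprice, and collect the P&L scenarios.

def historical_pnl(levels, price):
    """levels: historical time series of a risk factor; returns P&L scenarios."""
    today = levels[-1]
    base = price(today)
    scenarios = []
    for prev, nxt in zip(levels, levels[1:]):
        shocked = today * (nxt / prev)   # past relative change applied to today
        scenarios.append(price(shocked) - base)
    return scenarios
```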

This method has the advantage of simplicity, but as a model, it is slow to adapt to changing market conditions. It also suffers from simulation error, as the number of simulations is limited by the historical period (typically between 250 and 500 business days).

Monte Carlo simulation

The third market model assumes that the logarithmic return (log-return) of any risk factor follows a normal distribution; collectively, the log-returns of the risk factors are multivariate normal. Monte Carlo simulation generates random market scenarios drawn from that multivariate normal distribution. For each scenario, the profit (loss) of the portfolio is computed. This collection of profit (loss) scenarios provides a sampling of the profit (loss) distribution from which one can compute the risk measures of choice.
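A minimal sketch for two risk factors, using a Cholesky factor of the covariance matrix to draw correlated normal log-returns (all numbers and the `pnl_of_levels` function are illustrative):

```python
import math
import random

# Monte Carlo sketch: draw correlated normal log-returns via a 2x2
# Cholesky factor, shock the current factor levels, and collect P&L.

def simulate_pnl(n_scenarios, levels, cov, pnl_of_levels, seed=0):
    rng = random.Random(seed)
    # Cholesky factor of the 2x2 covariance matrix
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    pnl = []
    for _ in range(n_scenarios):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        r1 = l11 * z1                # correlated log-returns
        r2 = l21 * z1 + l22 * z2
        shocked = [levels[0] * math.exp(r1), levels[1] * math.exp(r2)]
        pnl.append(pnl_of_levels(shocked) - pnl_of_levels(levels))
    return pnl
```

From the returned sample one can compute VaR, expected shortfall, or any other measure of choice.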

Criticism

Nassim Taleb in his book The Black Swan (2007) wrote:

Banks are now more vulnerable to the Black Swan than ever before with "scientists" among their staff taking care of exposures. The giant firm J. P. Morgan put the entire world at risk by introducing in the nineties RiskMetrics, a phony method aiming at managing people’s risks. A related method called “Value-at-Risk,” which relies on the quantitative measurement of risk, has been spreading. [2]


References

  1. "MSCI to buy RiskMetrics for $1.55 billion". Reuters. 1 March 2010. Retrieved 1 November 2018.
  2. Nassim Taleb (2007). The Black Swan: The Impact of the Highly Improbable. ISBN 9781400063512. Cited in Nassim Taleb (10 September 2009). "Report on the Risks of Financial Modeling, VaR and the Economic Breakdown" (PDF). U.S. House of Representatives. Archived from the original (PDF) on 4 November 2009.