Financial correlations measure the relationship between the changes of two or more financial variables over time. For example, the prices of equity stocks and fixed interest bonds often move in opposite directions: when investors sell stocks, they often use the proceeds to buy bonds and vice versa. In this case, stock and bond prices are negatively correlated.
Financial correlations play a key role in modern finance. Under the capital asset pricing model (CAPM; a model recognised by a Nobel prize), an increase in diversification increases the return/risk ratio. Measures of risk include value at risk, expected shortfall, and portfolio return variance. [1]
There are several statistical measures of the degree of financial correlation. The Pearson product-moment correlation coefficient is sometimes applied to financial correlations. However, the limitations of the Pearson correlation approach in finance are evident. First, linear dependencies as assessed by the Pearson correlation coefficient do not appear often in finance. Second, linear correlation measures are natural dependence measures only if the joint distribution of the variables is elliptical. However, only a few financial distributions, such as the multivariate normal distribution and the multivariate Student-t distribution, are special cases of elliptical distributions, for which the linear correlation measure can be meaningfully interpreted. Third, a zero Pearson product-moment correlation coefficient does not necessarily imply independence, because only the first two moments are considered. For example, $y = x^2$ with $x$ symmetric around zero leads to a Pearson correlation coefficient of zero, which is arguably misleading, since $y$ is fully determined by $x$. [2] Since the Pearson approach is unsatisfactory for modeling financial correlations, quantitative analysts have developed specific financial correlation measures. Accurately estimating correlations requires the modeling process of the marginals to incorporate characteristics such as skewness and kurtosis. Not accounting for these attributes can lead to severe estimation error in the correlations and covariances, with negative biases of as much as 70% of the true values. [3] In a practical application in portfolio optimization, accurate estimation of the variance-covariance matrix is paramount. Thus, forecasting with Monte Carlo simulation with the Gaussian copula and well-specified marginal distributions is effective. [4]
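The third point can be illustrated with a short numerical sketch (not from the source): a symmetric random variable and its square are perfectly dependent, yet their sample Pearson correlation is close to zero.

```python
# Minimal illustration: y is fully determined by x, but the Pearson correlation
# of x and y is approximately zero, because x is symmetric around zero and only
# the first two moments enter the correlation coefficient.
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)   # symmetric around zero
y = x ** 2                           # perfectly dependent on x

print(f"Pearson correlation of x and x^2: {np.corrcoef(x, y)[0, 1]:.4f}")
```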
Steven Heston applied a correlation approach [5] to negatively correlate stochastic stock returns and stochastic volatility. The core equations of the original Heston model are the two stochastic differential equations (SDEs)

$dS(t) = \mu\,S(t)\,dt + \sqrt{\sigma^2(t)}\,S(t)\,dz_1(t)$   (1)

and

$d\sigma^2(t) = g\left(\sigma^2_{LT} - \sigma^2(t)\right)dt + \xi\,\sigma(t)\,dz_2(t)$   (2)

where $S(t)$ is the underlying stock price, $\mu$ is the expected growth rate of $S$, and $\sigma(t)$ is the stochastic volatility of $S$ at time $t$. In equation (2), $g$ is the mean reversion rate (gravity), which pulls the variance $\sigma^2(t)$ to its long-term mean $\sigma^2_{LT}$, and $\xi$ is the volatility of the volatility $\sigma(t)$. $dz(t)$ is the standard Brownian motion, i.e. $dz(t) = \varepsilon(t)\sqrt{dt}$, where $\varepsilon(t)$ is i.i.d., in particular a random drawing from a standardized normal distribution $n\sim(0,1)$. In equation (1), the underlying $S(t)$ follows the standard geometric Brownian motion, which is also applied in the Black–Scholes–Merton model, which however assumes constant volatility. The correlation between the stochastic processes (1) and (2) is introduced by correlating the two Brownian motions $dz_1$ and $dz_2$. The instantaneous correlation $\rho$ between the Brownian motions is

$dz_1(t)\,dz_2(t) = \rho\,dt.$   (3)

The definition (3) can be conveniently modeled with the identity

$dz_2(t) = \rho\,dz_1(t) + \sqrt{1-\rho^2}\,dz_3(t)$   (4)

where $dz_1(t)$ and $dz_3(t)$ are independent, and $dz(t)$ and $dz(t')$ are independent, $t \ne t'$.
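As an illustration, the identity (4) can be used directly in a simple Euler discretization of the Heston dynamics (1)–(2). The sketch below is not from the source; all parameter values are assumptions chosen only to show how the correlated Brownian increments are generated.

```python
# Euler discretization of the Heston SDEs (1)-(2) with correlated Brownian
# increments built from identity (4). Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)

S0, v0 = 100.0, 0.04                       # initial stock price and variance
mu, g, v_lt, xi = 0.05, 2.0, 0.04, 0.3     # drift, mean reversion, long-term variance, vol of vol
rho = -0.7                                 # negative stock/volatility correlation
T, steps = 1.0, 252
dt = T / steps

S, v = S0, v0
for _ in range(steps):
    dz1 = np.sqrt(dt) * rng.standard_normal()
    dz3 = np.sqrt(dt) * rng.standard_normal()
    dz2 = rho * dz1 + np.sqrt(1.0 - rho ** 2) * dz3              # identity (4)
    S += mu * S * dt + np.sqrt(max(v, 0.0)) * S * dz1            # equation (1)
    v += g * (v_lt - v) * dt + xi * np.sqrt(max(v, 0.0)) * dz2   # equation (2), variance floored at 0

print(f"Simulated terminal price: {S:.2f}, terminal variance: {v:.4f}")
```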
The Cointelation SDE [6] connects the SDEs above to the concepts of mean reversion and drift, which are often misunderstood [7] by practitioners.
A further financial correlation measure, mainly applied to default correlation, is the binomial correlation approach of Lucas (1995). [8] We define the binomial events $1_A = 1_{\{\tau_A \le T\}}$ and $1_B = 1_{\{\tau_B \le T\}}$, where $\tau_A$ is the default time of entity $A$ and $\tau_B$ is the default time of entity $B$. Hence if entity $A$ defaults before or at time $T$, the random indicator variable $1_A$ takes the value 1, and 0 otherwise. The same applies to entity $B$. Furthermore, $P(A)$ and $P(B)$ are the default probabilities of $A$ and $B$ respectively, and $P(AB)$ is their joint probability of default. The standard deviation of a one-trial binomial event is $\sqrt{P(X)(1-P(X))}$, where $P(X)$ is the probability of outcome $X$. Hence, we derive the joint default dependence coefficient of the binomial events $1_A$ and $1_B$ as

$\rho(1_A, 1_B) = \dfrac{P(AB) - P(A)\,P(B)}{\sqrt{P(A)\,(1-P(A))}\,\sqrt{P(B)\,(1-P(B))}}.$   (5)
By construction, equation (5) can only model binomial events, for example default and no default. The binomial correlation approach of equation (5) is a limiting case of the Pearson correlation approach discussed above. As a consequence, the significant shortcomings of the Pearson correlation approach for financial modeling also apply to the binomial correlation model.
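Equation (5) translates directly into code. The following sketch (an illustration, not from the source) computes the binomial default correlation from given individual and joint default probabilities.

```python
# Direct transcription of equation (5): binomial correlation of the default
# indicators 1_A and 1_B, given individual and joint default probabilities.
import math

def binomial_default_correlation(p_a: float, p_b: float, p_ab: float) -> float:
    """Lucas-style binomial default correlation coefficient."""
    return (p_ab - p_a * p_b) / math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

# Example: 5% and 7% individual default probabilities, 1% joint default probability.
print(binomial_default_correlation(0.05, 0.07, 0.01))
```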
A fairly recent, famous as well as infamous correlation approach applied in finance is the copula approach. Copulas go back to Sklar (1959). [9] Copulas were introduced to finance by Vasicek (1987) [10] and Li (2000). [11]
Copulas simplify statistical problems. They allow the joining of multiple univariate distributions to a single multivariate distribution. Formally, a copula function $C$ transforms an $n$-dimensional function on the interval [0,1] into a unit-dimensional one:

$C: [0,1]^n \to [0,1].$   (6)
More explicitly, let $u = (u_1, \ldots, u_n)$ be a uniform random vector with $u_i \in [0,1]$ and $i = 1, \ldots, n$. Then there exists a copula function $C$ such that

$C(u_1, \ldots, u_n) = F\big(F_1^{-1}(u_1), \ldots, F_n^{-1}(u_n)\big)$   (7)
where $F$ is the joint cumulative distribution function and $F_i$, $i = 1, \ldots, n$, are the univariate marginal distributions. $F_i^{-1}$ is the inverse of $F_i$. If the marginal distributions $F_i$ are continuous, it follows that $C$ is unique. For properties and proofs of equation (7), see Sklar (1959) and Nelsen (2006). [12] Numerous types of copula functions exist. They can be broadly categorized into one-parameter copulas, such as the Gaussian copula and the Archimedean copulas, which comprise the Gumbel, Clayton and Frank copulas. Often cited two-parameter copulas are the Student-t, Fréchet, and Marshall–Olkin copulas. For an overview of these copulas, see Nelsen (2006). In finance, copulas are typically applied to derive correlated default probabilities in a portfolio, for example in a collateralized debt obligation (CDO). This was first done by Li (2000). He defined the uniform margins $u_i$ as cumulative default probabilities $Q$ for entity $i$ at a fixed time $t$:

$u_i = Q_i(t).$   (8)
Hence, from equations (7) and (8) we derive the Gaussian default time copula $C_{GD}$,

$C_{GD}\big(Q_1(t), \ldots, Q_n(t)\big) = M_n\big(N^{-1}(Q_1(t)), \ldots, N^{-1}(Q_n(t));\, R\big).$   (9)
In equation (9) the terms $N^{-1}(Q_i(t))$ map the cumulative default probabilities $Q$ of asset $i$ for time $t$, percentile to percentile, to standard normal. The mapped standard normal marginal distributions are then joined to a single $n$-variate distribution $M_n$ by applying the correlation structure of the multivariate normal distribution with correlation matrix $R$. The probability of $n$ correlated defaults at time $t$ is given by $C_{GD}\big(Q_1(t), \ldots, Q_n(t)\big)$.
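A minimal simulation sketch of the Gaussian default time copula of equation (9) is shown below. The exponential default-time marginals with assumed hazard rates are an illustrative choice, not part of the article; the essential steps are drawing correlated standard normals with correlation matrix R, mapping them to uniform margins, and inverting the marginal default distributions Q_i.

```python
# Sampling correlated default times with a Gaussian copula (illustrative sketch).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

lam = np.array([0.02, 0.03, 0.05])              # assumed hazard rates of three entities
R = np.array([[1.0, 0.4, 0.4],
              [0.4, 1.0, 0.4],
              [0.4, 0.4, 1.0]])                  # correlation matrix R of the Gaussian copula

n_sims, horizon = 100_000, 5.0
x = rng.multivariate_normal(np.zeros(3), R, size=n_sims)   # correlated standard normals
u = norm.cdf(x)                                            # uniform margins via the copula
tau = -np.log(1.0 - u) / lam                               # invert Q_i(t) = 1 - exp(-lam_i * t)

joint = np.mean(np.all(tau <= horizon, axis=1))
print(f"Estimated probability that all three entities default within {horizon} years: {joint:.4f}")
```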
Numerous non-academic articles have been written demonizing the copula approach and blaming it for the 2007/2008 global financial crisis, see for example Salmon 2009, [13] Jones 2009, [14] and Lohr 2009. [15] There are three main criticisms of the copula approach: (a) tail dependence, (b) calibration, (c) risk management.
(a) Tail dependence
In a crisis, financial correlations typically increase; see the studies by Das, Duffie, Kapadia, and Saita (2007) [16] and Duffie, Eckner, Horel and Saita (2009) [17] and references therein. Hence it would be desirable to apply a correlation model with high co-movements in the lower tail of the joint distribution. It can be shown mathematically that the Gaussian copula has relatively low tail dependence, as seen in the following scatter plots.
Figure 1: Scatter plots of different copula models
As seen in Figure 1(b), the Student-t copula exhibits higher tail dependence and might be better suited to model financial correlations. Also, as seen in Figure 1(c), the Gumbel copula exhibits high tail dependence especially for negative co-movements. Assuming that correlations increase when asset prices decrease, the Gumbel copula might also be a good correlation approach for financial modeling. [18]
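The difference in tail behaviour can be checked empirically. The sketch below (not from the article) samples a Gaussian and a Student-t copula with the same correlation parameter and compares the estimated lower-tail co-movement probability P(U2 ≤ q | U1 ≤ q) for a small threshold q; the degrees of freedom and all other values are assumptions for illustration.

```python
# Empirical lower-tail co-movement for a Gaussian vs. a Student-t copula.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(7)
rho, nu, n = 0.5, 3, 500_000
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(np.zeros(2), cov, size=n)

u_gauss = norm.cdf(z)                 # Gaussian copula: map correlated normals to uniforms

w = rng.chisquare(nu, size=n)         # chi-square mixing variable for the t copula
x_t = z / np.sqrt(w / nu)[:, None]    # bivariate Student-t with nu degrees of freedom
u_t = t.cdf(x_t, df=nu)               # Student-t copula margins

def lower_tail(u, q=0.01):
    """Empirical estimate of P(U2 <= q | U1 <= q)."""
    mask = u[:, 0] <= q
    return np.mean(u[mask, 1] <= q)

print(f"Gaussian copula lower-tail co-movement:  {lower_tail(u_gauss):.3f}")
print(f"Student-t copula lower-tail co-movement: {lower_tail(u_t):.3f}")
```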
(b) Calibration
A further criticism of the Gaussian copula is the difficulty of calibrating it to market prices. In practice, a single correlation parameter (not a correlation matrix) is typically used to model the default correlation between any two entities in a collateralized debt obligation (CDO). Conceptually this correlation parameter should be the same for the entire CDO portfolio. However, traders arbitrarily alter the correlation parameter for different tranches in order to derive desired tranche spreads. Traders increase the correlation for 'extreme' tranches such as the equity tranche or senior tranches, a pattern referred to as the correlation smile. This is similar to the often cited implied volatility smile in the Black–Scholes–Merton model, where traders increase the implied volatility especially for out-of-the-money puts, but also for out-of-the-money calls, to increase the option price.
In a mean-variance optimization framework, accurate estimation of the variance-covariance matrix is paramount. Thus, forecasting with Monte Carlo simulation with the Gaussian copula and well-specified marginal distributions is effective. [19] It is important that the modeling process allows for empirical characteristics in stock returns such as auto-regression, asymmetric volatility, skewness, and kurtosis. Not accounting for these attributes leads to severe estimation error in the correlations and variances, with negative biases of as much as 70% of the true values. [20]
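As an illustration of such a Monte Carlo procedure (a sketch under assumed parameters, not the procedure of the cited papers), heavy-tailed Student-t marginals can be joined with a Gaussian copula and the covariance matrix estimated from the simulated returns:

```python
# Gaussian copula with assumed heavy-tailed marginals; covariance estimated
# from the simulated returns. All parameter values are illustrative.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(3)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])         # copula correlation (assumed)

z = rng.multivariate_normal(np.zeros(2), cov, size=200_000)
u = norm.cdf(z)                                   # Gaussian copula: uniform margins

# Assumed heavy-tailed Student-t marginals for two daily asset returns
returns = np.column_stack([
    t.ppf(u[:, 0], df=4, loc=0.0005, scale=0.010),
    t.ppf(u[:, 1], df=6, loc=0.0003, scale=0.015),
])

print("Estimated covariance matrix of the simulated returns:")
print(np.cov(returns, rowvar=False))
```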
(c) Risk management
A further criticism of the copula approach is that the copula model is static and consequently allows only limited risk management; see Finger (2009) [21] or Donnelly and Embrechts (2010). [22] The original copula models of Vasicek (1987) and Li (2000) and several extensions of the model, such as Hull and White (2004) [23] or Gregory and Laurent (2004), [24] have a one-period time horizon, i.e. they are static. In particular, there is no stochastic process for the critical underlying variables default intensity and default correlation. However, even in these early copula formulations, back testing and stress testing the variables for different time horizons can give valuable sensitivities; see Whetten and Adelson (2004) [25] and Meissner, Hector, and Rasmussen (2008). [26] In addition, the copula variables can be made a function of time, as in Hull, Predescu, and White (2005). [27] This still does not create a fully dynamic stochastic process with drift and noise, which would allow flexible hedging and risk management. The best solutions are truly dynamic copula frameworks; see the section 'Dynamic copulas' below.
Before the 2007–08 global financial crisis, numerous market participants trusted the copula model uncritically and naively. However, the 2007–08 crisis was less a matter of a particular correlation model and rather an issue of "irrational complacency". In the extremely benign period from 2003 to 2006, proper hedging, proper risk management and stress test results were largely ignored. The prime example is AIG's London subsidiary, which had sold credit default swaps and collateralized debt obligations in an amount of close to $500 billion without conducting any major hedging. For an insightful paper on inadequate risk management leading up to the crisis, see "A personal view of the crisis – Confessions of a Risk Manager" (The Economist 2008). [28] In particular, if any credit correlation model is fed with benign input data such as low default intensities and low default correlation, the risk output figures will be benign: 'garbage in, garbage out' in modeling terminology.
A core enhancement of copula models is dynamic copulas, introduced by Albanese et al. (2005) [29] and (2007). [30] The "dynamic conditioning" approach models the evolution of multi-factor super-lattices, which correlate the return processes of each entity at each time step. Binomial dynamic copulas apply combinatorial methods to avoid Monte Carlo simulation. Richer dynamic Gaussian copulas apply Monte Carlo simulation and come at the cost of requiring powerful computer technology.
In order to avoid specifying the default correlation between each entity pair in a portfolio, a factorization is often applied. This leads to conditionally independent default (CID) modeling. The most widely applied CID model is the one-factor Gaussian copula (OFGC) model. It was the de facto market model for pricing CDOs before the 2007/2008 global financial crisis. The core equation of the OFGC model is

$x_i = \sqrt{\rho}\,M + \sqrt{1-\rho}\,Z_i$   (10)
where $M$ and $Z_i$ are random drawings from $n\sim(0,1)$. As a result, the latent variable $x_i$, sometimes interpreted as the asset value of entity $i$, see Turc, Very, Benhamou and Alvarez (2005), [31] is also $n\sim(0,1)$. The common factor $M$ can be interpreted as the economic environment, possibly represented by the return of the S&P 500. $Z_i$ is the idiosyncratic component, the 'strength' of entity $i$, possibly measured by entity $i$'s stock price return. From equation (10) we see that the correlation between the entities is modeled indirectly, by conditioning the latent variables $x_i$ on the common factor $M$. For example, for $\rho = 1$, the latent variables of all entities are $x_i = M$, so the $x_i$ are identical in each simulation. For $\rho = 0$, we have $x_i = Z_i$ for all entities, hence the $x_i$ are independent. Importantly, once we fix the value of $M$, the defaults of the $n$ entities are (conditionally on $M$) mutually independent.
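Equation (10) lends itself to a very compact simulation. The sketch below is illustrative only; the 2% default probability per entity and all other values are assumptions. It draws the common factor M and the idiosyncratic components Z_i, forms the latent variables x_i, and counts defaults below the barrier N^{-1}(PD):

```python
# One-factor Gaussian copula (OFGC) portfolio default simulation, equation (10).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_entities, n_sims, rho, pd = 100, 50_000, 0.3, 0.02
barrier = norm.ppf(pd)                          # entity i defaults if x_i <= barrier

M = rng.standard_normal((n_sims, 1))            # common (market) factor, one draw per scenario
Z = rng.standard_normal((n_sims, n_entities))   # idiosyncratic components
x = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * Z   # latent variables, equation (10)

defaults = (x <= barrier).sum(axis=1)           # number of defaults per simulated scenario
print(f"Mean number of defaults: {defaults.mean():.2f}")
print(f"P(more than 10 of {n_entities} entities default): {np.mean(defaults > 10):.4f}")
```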
As of 2010, the OFGC model is the basis for credit risk management in Basel II. The benefits of the model are simplicity and intuition. One of its main shortcomings is that traders, when pricing CDOs, alter the correlation parameter for different CDO tranches to achieve desired tranche spreads, although conceptually the correlation parameter should be identical for the whole portfolio.
Contagion default modeling can be viewed as a variation of CID modeling. As discussed above, in the CID framework correlation is modeled by conditioning on a common market factor $M$, which impacts all entities to the same degree. The lower the random drawing for $M$, the higher is the default intensity of all entities (unless $\rho = 0$). Hence CID modeling can elucidate default clustering. In contrast, contagion approaches model the default intensity of an entity as a function of the default of another entity. Hence contagion default modeling incorporates counterparty risk, i.e. the direct impact of a defaulting entity on the default intensity of another entity. In particular, after the default of a particular entity, the default intensity of all assets in the portfolio increases. This default contagion then typically fades exponentially to non-contagious default intensity levels. See the papers of Davis and Lo (2001) [32] and Jarrow and Yu (2001), [33] who pioneered contagion default modeling.
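The following sketch (not from the cited papers) illustrates the basic contagion mechanism for two entities: entity B's default intensity jumps after entity A defaults and then decays exponentially back to its base level; B's default time is drawn by thinning. All parameter values are assumptions for illustration.

```python
# Two-entity contagion default model: B's intensity jumps when A defaults and
# then fades exponentially. Default times of B are drawn by Poisson thinning.
import numpy as np

rng = np.random.default_rng(11)
lam_a, lam_b, jump, decay, horizon = 0.03, 0.02, 0.10, 2.0, 10.0

def intensity_b(time, tau_a):
    """Entity B's default intensity, with exponentially fading contagion from A."""
    base = lam_b
    if time >= tau_a:
        base += jump * np.exp(-decay * (time - tau_a))
    return base

def sample_default_b(tau_a):
    """First arrival of an inhomogeneous Poisson process via thinning."""
    t, lam_max = 0.0, lam_b + jump               # lam_max bounds the intensity from above
    while t < horizon:
        t += rng.exponential(1.0 / lam_max)      # candidate arrival at the upper-bound rate
        if rng.uniform() <= intensity_b(t, tau_a) / lam_max:
            return t                             # accepted: B defaults at time t
    return np.inf

n_sims, both_default = 20_000, 0
for _ in range(n_sims):
    tau_a = rng.exponential(1.0 / lam_a)         # entity A: constant default intensity
    tau_b = sample_default_b(tau_a)
    both_default += (tau_a <= horizon) and (tau_b <= horizon)

print(f"Estimated joint default probability over {horizon} years: {both_default / n_sims:.4f}")
```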
Within the credit correlation modeling framework, a fairly new correlation approach is top-down modeling. Here the evolution of the portfolio intensity distribution is derived directly, i.e. abstracting from the default intensities of individual entities. Top-down models are typically applied in practice when the default intensities of individual entities are unavailable or not required, for example for large or fairly homogeneous portfolios.
Top-down models are typically more parsimonious and computationally efficient, and can often be calibrated better to market prices than bottom-up models. Although seemingly important information such as the default intensities of individual entities is disregarded, a top-down model can typically better capture properties of the portfolio such as volatility or correlation smiles. In addition, the default information of individual entities can often be inferred by random thinning techniques; see Giesecke, Goldberg and Ding (2007) [34] for details.
Within the top-down framework, Schönbucher (2006) [35] creates a time-inhomogeneous Markov chain of transition rates. Default correlation is introduced by changes in the volatility of the transition rates. For certain parameter constellations, higher volatility means faster transition to lower states such as default, and as a consequence implies higher default correlation, and vice versa. Similarly, Hurd and Kuznetsov (2006a) [36] and (2006b) [37] induce correlation by a random change in the speed of time. A faster speed of time means faster transition to a lower state, possibly default, and as a result increases default correlation, and vice versa. For a comparative analysis of correlation approaches in finance, see Albanese, Li, Lobachevskiy, and Meissner (2010). [38]