In mathematical finance, multiple factor models are asset pricing models that can be used to estimate the discount rate for the valuation of financial assets. They are generally extensions of the single-factor capital asset pricing model (CAPM).
The multifactor equity risk model was first developed by Barr Rosenberg and Vinay Marathe. [1] Initially they proposed a linear model of beta

$$ r(i,t) - r(0,t) = \beta(i,t)\,\big(m(t) - r(0,t)\big) + e(i,t), \qquad \beta(i,t) = \sum_j X(i,j,t)\,f(j,t) + \varepsilon(i,t) $$

where $r(i,t)$ is the return to equity asset $i$ in period $t$, $r(0,t)$ is the risk-free return, $m(t)$ is the market index return, $e(i,t)$ is a market residual return and $\beta(i,t)$ is a parameter fit by a time series regression over history prior to time $t$. Here the $X(i,j,t)$ are risk exposure values calculated from fundamental and technical data, the $f(j,t)$ are factor returns determined by a cross-sectional regression for each time period and the $\varepsilon(i,t)$ are the regression residuals. This model was reformulated by Rosenberg et al. into a direct model of asset return,

$$ r(i,t) - r(0,t) = \sum_j X(i,j,t)\,f(j,t) + \varepsilon(i,t). $$
Here the factor returns $f(j,t)$ and specific returns $\varepsilon(i,t)$ are fit by a weighted regression over each time period for a representative asset universe. For instance, the model might be fit over the 3,000 highest-capitalization US common stocks. The primary application of the model is to estimate the asset-by-asset covariance matrix $C$ of asset returns by the equation

$$ C = X F X^\top + D $$

where $F$ is the covariance matrix of factor returns and $D$ is a block diagonal matrix of specific return covariances. The matrix $C$ is then used for Markowitz portfolio construction, which involves maximizing the quadratic utility function

$$ u(h) = a^\top h - k\, h^\top C h $$

subject to linear constraints on the vector of asset holdings $h$. Here $a$ is a vector of expected returns and $k$ is a scalar parameter termed the risk aversion.
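To make the mechanics concrete, here is a minimal numerical sketch with made-up dimensions and randomly generated exposures, factor covariances and expected returns (none of this is the actual Rosenberg/Barra data). It builds $C = XFX^\top + D$ and, for the unconstrained case, maximizes $u(h)$ in closed form via $h^* = (2kC)^{-1}a$:

```python
import numpy as np

# Hypothetical dimensions: 50 assets, 5 factors (illustrative only)
n_assets, n_factors = 50, 5
rng = np.random.default_rng(0)

X = rng.normal(size=(n_assets, n_factors))      # factor exposures
F = np.cov(rng.normal(size=(n_factors, 250)))   # factor return covariance
D = np.diag(rng.uniform(0.01, 0.04, n_assets))  # specific return variances

# Asset-by-asset covariance matrix: C = X F X' + D
C = X @ F @ X.T + D

# Unconstrained maximum of u(h) = a'h - k h'Ch is at h* = (2k C)^-1 a
a = rng.normal(0.05, 0.02, n_assets)            # expected returns (made up)
k = 2.0                                         # risk aversion
h_star = np.linalg.solve(2.0 * k * C, a)        # optimal holdings

print("optimal utility:", a @ h_star - k * h_star @ C @ h_star)
```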
Nicolo G. Torre made a number of improvements to this framework which importantly sharpened the risk control achievable by these means. [2] In Rosenberg's model the exposure matrix $X$ consisted of industry weights and risk indices. Each asset would be given an exposure to one or more industries, e.g. based on breakdowns of the firm's balance sheet or earnings statement into industry segments. These industry exposures would sum to 1 for each asset. Thus the model had no explicit market factor; rather, the market return was projected onto the industry returns. Torre modified this scheme by introducing an explicit market factor (with unit exposure for each asset). To keep the model identified, he imposed the condition that the industry factor returns sum to zero in each time period. Thus the model is estimated as

$$ r(i,t) - r(0,t) = m(t) + \sum_j X(i,j,t)\,f(j,t) + \varepsilon(i,t) $$

subject to

$$ \sum_{j \in \text{industries}} f(j,t) = 0 \quad \text{for all } t $$

where the sum is over industry factors. Here $m(t)$ is the market return.
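As an illustration of how a single period's cross-section can be estimated under this constraint, the sketch below uses synthetic exposures and returns and a simple reparameterization (the last industry's factor return is defined as minus the sum of the others). This is one standard way to impose the restriction, not necessarily the estimator Torre used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_ind = 200, 10

# Synthetic exposures: each asset assigned to one industry (rows sum to 1)
X = np.zeros((n_assets, n_ind))
X[np.arange(n_assets), rng.integers(0, n_ind, n_assets)] = 1.0
r = rng.normal(0.0, 0.05, n_assets)        # one period of excess returns

# Impose sum(f) = 0 over industries by substituting f[-1] = -sum(f[:-1]):
# design = unit market exposure plus adjusted industry exposures.
Z = np.column_stack([np.ones(n_assets), X[:, :-1] - X[:, [-1]]])
coef, *_ = np.linalg.lstsq(Z, r, rcond=None)

m_t = coef[0]                              # market factor return m(t)
f = np.append(coef[1:], -coef[1:].sum())   # industry factor returns, sum to 0
print(m_t, f.sum())
```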
Explicitly identifying the market factor then permitted Torre to estimate the variance of this factor using a leveraged GARCH(1,1) model due to Robert Engle and Tim Bollerslev:

$$ s^2(t) = w + a\, s^2(t-1) + b_1 f_p(m(t-1))^2 + b_2 f_m(m(t-1))^2 $$

Here

$$ f_p(x) = \begin{cases} x & \text{for } x > 0 \\ 0 & \text{for } x \le 0 \end{cases} \qquad\qquad f_m(x) = \begin{cases} 0 & \text{for } x \ge 0 \\ x & \text{for } x < 0 \end{cases} $$
and $w$, $a$, $b_1$ and $b_2$ are parameters fit from long time series estimation using maximum likelihood methods. This model provides a rapid update of market variance which is incorporated into the update of $F$, resulting in a more dynamic model of risk. In particular, it accounts for the convergence of asset returns, and the consequent loss of diversification, that occurs in portfolios during periods of market turbulence.
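A sketch of the variance recursion follows, with illustrative parameter values rather than fitted ones; the asymmetry $b_2 > b_1$ is what makes the model "leveraged", letting down-market moves raise predicted variance more than up-market moves of the same size:

```python
def leveraged_garch_update(s2_prev, m_prev, w=1e-6, a=0.90, b1=0.03, b2=0.07):
    """One step of the leveraged GARCH(1,1) recursion:
    s^2(t) = w + a*s^2(t-1) + b1*fp(m(t-1))^2 + b2*fm(m(t-1))^2,
    where fp keeps positive returns and fm keeps negative ones, so that
    down-market shocks (b2 > b1) raise predicted variance more.
    Parameter values are illustrative, not fitted."""
    fp = max(m_prev, 0.0)
    fm = min(m_prev, 0.0)
    return w + a * s2_prev + b1 * fp**2 + b2 * fm**2

# Variance reacts more to a -3% market day than to a +3% day
s2 = 0.0001
print(leveraged_garch_update(s2, +0.03), leveraged_garch_update(s2, -0.03))
```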
In the risk model, industry factors carry about half the explanatory power after the market effect is accounted for. However, Rosenberg had left unsolved how the industry groupings should be defined, choosing to rely simply on a conventional set of industries. Defining industry sets is a problem in taxonomy: an industry is defined by the members assigned to it, but which industry an individual equity should be assigned to is often unclear. Difficulties can be reduced by introducing a large number of narrowly defined industries, but this approach is in tension with the demands of risk estimation. Robust risk estimates favor a moderate number of industries, with each industry representing a few percentage points of market capitalization and not exclusively dominated by the largest company in the industry. Torre resolved this problem by introducing several hundred narrowly defined mini-industries and then applying guided clustering techniques to combine the mini-industries into industry groupings suitable for risk estimation.
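The details of Torre's guided clustering are not given here; as a rough sketch of the general idea, one can hierarchically cluster mini-industries on the correlation of their returns, as below with synthetic data and an assumed target of 60 groupings:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
# Synthetic stand-in: two years of weekly returns for 300 mini-industries
R = rng.normal(size=(300, 104))

# Distance = 1 - correlation, condensed to the upper triangle (pdist order)
corr = np.corrcoef(R)
dist = 1.0 - corr[np.triu_indices(300, k=1)]

# Average-linkage hierarchical clustering, cut into 60 industry groupings
labels = fcluster(linkage(dist, method="average"), t=60, criterion="maxclust")
print("groupings formed:", labels.max())
```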
In the initial Rosenberg approach, factor and specific returns are assumed to be normally distributed. However, experience turns up a number of outlying observations that are both too large and too frequent to be fit by a normal distribution. Although introduction of a GARCH market factor partly reduces this difficulty, it does not eliminate it. Torre showed that return distributions can be modeled as a mixture of a normal distribution and a jump distribution. In the case of a single factor the mixing model is easily stated: in each time period $t$ there is a binary mixing variable $b(t)$. If $b(t) = 0$ then the factor return in that period is drawn from the normal distribution, and if $b(t) = 1$ it is drawn from the jump distribution. Torre found that simultaneous jumps occur in factors. Accordingly, in the multivariate case it is necessary to introduce a multivariate shock vector $w(i,t)$, where $w(i,t) = 0$ if the multivariate mixing variable $b(i,t) = 0$ and $w(i,t)$ is drawn from the $i$th jump distribution if $b(i,t) = 1$. A transmission matrix $T$ then maps $w$ from the shock space into the factor space. Torre found that the market, factor and specific returns could all be described by a mixture of normal returns and power-law distributed shocks occurring at a low frequency. This modeling refinement substantially improves the performance of the model with regard to extreme events. As such, it makes possible the construction of portfolios which behave in more expected manners during periods of market turbulence.
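The single-factor mixing model is easy to simulate. The sketch below uses a symmetric Pareto-tailed draw as a stand-in for the power-law jump distribution, with an illustrative jump frequency; the resulting returns exhibit the heavy tails that a pure normal model misses:

```python
import numpy as np

rng = np.random.default_rng(3)
T, p_jump = 5000, 0.02                    # periods; illustrative jump frequency

b = rng.random(T) < p_jump                # binary mixing variable b(t)
normal_part = rng.normal(0.0, 0.01, T)    # ordinary normal factor returns
# Pareto-tailed shocks with a random sign stand in for the jump distribution
jumps = rng.choice([-1.0, 1.0], T) * 0.02 * (1.0 + rng.pareto(3.0, T))

returns = np.where(b, jumps, normal_part)
excess = returns - returns.mean()
print("kurtosis:", (excess**4).mean() / returns.var()**2)  # well above 3
```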
Although originally developed for the US equity market, the multifactor risk model was rapidly extended to other equity markets and to other types of securities such as bonds and equity options. The problem of how to construct a multi-asset class risk model then arises. A first approach was made by Beckers, Rudd and Stefek for the global equity market. They estimated a model involving currency, country, global industries and global risk indices. This model worked well for portfolios constructed by the top-down process of first selecting countries and then selecting assets within countries. It was less successful on portfolios constructed by a bottom-up process in which portfolios within countries were first selected by country specialists and then a global overlay was applied. In addition, the global model applied to a single country portfolio would often be at odds with the local market model. Torre resolved these difficulties by introducing a two-stage factor analysis. The first stage consists of fitting a series of local factor models of the familiar form, resulting in a set of factor returns $f(i,j,t)$, where $f(i,j,t)$ is the return to factor $i$ in the $j$th local model at time $t$. The factor returns are then fit to a second stage model of the form

$$ f(i,j,t) = \sum_k Y(i,j,k)\, g(k,t) + h(i,j,t) $$
Here $Y$ gives the exposure of local factor $(i,j)$ to the $k$th global factor, whose return is $g(k,t)$, and $h(i,j,t)$ is the local specific factor return. The covariance matrix of factor returns is estimated as

$$ F = Y G Y^\top + H $$

where $G$ is the covariance matrix of global factors and $H$ is the block diagonal matrix of covariances of local specific factor returns. This modeling approach permits gluing any number of local models together to provide a comprehensive multi-asset class analysis. This is particularly relevant for global equity portfolios and for enterprise-wide risk management.
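A sketch of the second-stage assembly follows, with hypothetical dimensions, synthetic local factor returns, the exposure matrix $Y$ estimated by ordinary least squares, and $H$ taken as diagonal for simplicity (block diagonal in general):

```python
import numpy as np

rng = np.random.default_rng(4)
n_local, n_global, T = 40, 6, 260         # hypothetical model dimensions

g = rng.normal(size=(T, n_global))        # global factor returns g(k,t)
Y_true = rng.normal(size=(n_local, n_global))
f = g @ Y_true.T + rng.normal(0.0, 0.3, (T, n_local))  # local factor returns

# Second stage: regress each local factor on the global factors to get Y
Y = np.linalg.lstsq(g, f, rcond=None)[0].T             # (n_local, n_global)
H = np.diag((f - g @ Y.T).var(axis=0))    # specific variances (diagonal here)
G = np.cov(g, rowvar=False)               # global factor covariance

F = Y @ G @ Y.T + H                       # local factor covariance: Y G Y' + H
print(F.shape)                            # (40, 40)
```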
The multifactor risk model with the refinements discussed above is the dominant method for controlling risk in professionally managed portfolios. It is estimated that more than half of world capital is managed using such models.
Many academics have attempted to construct factor models with a fairly small number of parameters. These include the Fama–French three-factor model and the Carhart four-factor model discussed below.
However, there is as yet no general agreement on how many factors there are. [3] There are numerous commercial models available, including those from MSCI and the Goldman Sachs asset management factor model. [4]
In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.
Covariance in probability theory and statistics is a measure of the joint variability of two random variables.
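As a brief illustration of both notions, the sketch below estimates a covariance matrix from synthetic data and extracts its leading principal component by eigendecomposition, one standard way to compute PCA:

```python
import numpy as np

rng = np.random.default_rng(5)
R = rng.normal(size=(500, 8))             # 500 observations of 8 variables

Sigma = np.cov(R, rowvar=False)           # pairwise covariances of the columns

# PCA: eigendecomposition of the covariance matrix, largest eigenvalue first
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]
pc1 = eigvecs[:, order[0]]                # first principal component direction
scores = (R - R.mean(axis=0)) @ pc1       # data projected onto that direction
print("share of variance explained:", eigvals[order[0]] / eigvals.sum())
```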
In finance, the capital asset pricing model (CAPM) is a model used to determine a theoretically appropriate required rate of return of an asset, to make decisions about adding assets to a well-diversified portfolio.
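Concretely, the CAPM's required return is $E[r_i] = r_f + \beta_i (E[r_m] - r_f)$; a tiny worked example with made-up inputs:

```python
# CAPM: required return = risk-free rate + beta * market risk premium
r_f, beta_i, e_r_m = 0.03, 1.2, 0.08      # illustrative inputs, not estimates
required_return = r_f + beta_i * (e_r_m - r_f)
print(required_return)                    # 0.03 + 1.2 * 0.05 = 0.09
```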
Market risk is the risk of losses in positions arising from movements in market variables like prices and volatility. There is no unique classification, as each classification may refer to different aspects of market risk. Nevertheless, the most commonly used types of market risk are equity risk, interest rate risk, currency risk, and commodity risk.
Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities, but other, more sophisticated methods are available.
In finance, arbitrage pricing theory (APT) is a multi-factor model for asset pricing which relates various macro-economic (systematic) risk variables to the pricing of financial assets. Proposed by economist Stephen Ross in 1976, it is widely believed to be an improved alternative to its predecessor, the capital asset pricing model (CAPM). APT is founded upon the law of one price, which suggests that within an equilibrium market, rational investors will implement arbitrage such that the equilibrium price is eventually realised. As such, APT argues that when opportunities for arbitrage are exhausted in a given period, the expected return of an asset is a linear function of various factors or theoretical market indices, where the sensitivity to each factor is represented by a factor-specific beta coefficient, or factor loading. Consequently, it provides traders with an indication of 'true' asset value and enables exploitation of market discrepancies via arbitrage. The linear factor model structure of the APT is used as the basis for evaluating asset allocation and the performance of managed funds, as well as for the calculation of cost of capital. Furthermore, the newer APT model is more dynamic than the preceding CAPM model, being utilised in a wider range of theoretical applications. A 1986 article by Gregory Connor and Robert Korajczyk applied the APT framework to portfolio performance measurement, suggesting that the Jensen coefficient is an acceptable measure of portfolio performance.
In finance, the beta is a statistic that measures the expected increase or decrease of an individual stock price in proportion to movements of the stock market as a whole. Beta can be used to indicate the contribution of an individual asset to the market risk of a portfolio when it is added in small quantity. It refers to an asset's non-diversifiable risk, systematic risk, or market risk. Beta is not a measure of idiosyncratic risk.
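Beta is conventionally estimated as $\operatorname{cov}(r_i, r_m)/\operatorname{var}(r_m)$ from historical returns; a minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
r_m = rng.normal(0.0, 0.01, 1000)                  # market returns (synthetic)
r_i = 1.3 * r_m + rng.normal(0.0, 0.008, 1000)     # stock with true beta 1.3

beta = np.cov(r_i, r_m)[0, 1] / np.var(r_m, ddof=1)
print(beta)                                        # approximately 1.3
```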
Weighted least squares (WLS), also known as weighted linear regression, is a generalization of ordinary least squares and linear regression in which knowledge of the unequal variance of observations (heteroscedasticity) is incorporated into the regression. WLS is also a specialization of generalized least squares, when all the off-diagonal entries of the covariance matrix of the errors are null.
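The WLS estimator is $\hat\beta = (X^\top W X)^{-1} X^\top W y$; a sketch with weights set to inverse error variances, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sigma = rng.uniform(0.5, 2.0, n)          # unequal error standard deviations
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma**2)               # weights = inverse error variances
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_hat)                           # close to [1.0, 2.0]
```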
In probability theory, the Kelly criterion is a formula for sizing a bet. The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. Assuming that the expected returns are known, the Kelly criterion leads to higher wealth than any other strategy in the long run. John Larry Kelly Jr., a researcher at Bell Labs, described the criterion in 1956.
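For the simplest case, a binary bet paying net odds of $b$-to-1 with win probability $p$, the Kelly fraction is $f^* = p - (1-p)/b$:

```python
def kelly_fraction(p, b):
    """Kelly bet size for win probability p and net odds of b-to-1."""
    return p - (1.0 - p) / b

# A 60% chance of winning at even odds suggests betting 20% of wealth
print(kelly_fraction(0.60, 1.0))          # 0.2
```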
In finance, tracking error or active risk is a measure of the risk in an investment portfolio that is due to active management decisions made by the portfolio manager; it indicates how closely a portfolio follows the index to which it is benchmarked. The best measure is the standard deviation of the difference between the portfolio and index returns.
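In practice tracking error is computed as the standard deviation of active returns (portfolio minus benchmark), often annualized; a sketch on synthetic daily returns:

```python
import numpy as np

rng = np.random.default_rng(8)
bench = rng.normal(0.0004, 0.010, 252)      # one year of daily benchmark returns
port = bench + rng.normal(0.0, 0.002, 252)  # a portfolio that tracks closely

active = port - bench                       # active returns vs. the benchmark
te = active.std(ddof=1) * np.sqrt(252)      # annualized tracking error
print(te)
```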
In finance, diversification is the process of allocating capital in a way that reduces the exposure to any one particular asset or risk. A common path towards diversification is to reduce risk or volatility by investing in a variety of assets. If asset prices do not change in perfect synchrony, a diversified portfolio will have less variance than the weighted average variance of its constituent assets, and often less volatility than the least volatile of its constituents.
The single-index model (SIM) is a simple asset pricing model to measure both the risk and the return of a stock. The model was developed by William Sharpe in 1963 and is commonly used in the finance industry. Mathematically the SIM is expressed as

$$ r_{i,t} - r_f = \alpha_i + \beta_i (r_{m,t} - r_f) + \varepsilon_{i,t} $$

where $r_{i,t}$ is the return on stock $i$ in period $t$, $r_f$ is the risk-free rate, $r_{m,t}$ is the market return, and $\varepsilon_{i,t}$ is a zero-mean residual return.
Within mathematical finance, the intertemporal capital asset pricing model, or ICAPM, is an alternative to the CAPM provided by Robert Merton. It is a linear factor model with wealth as a state variable that forecasts changes in the distribution of future returns or income.
The RiskMetrics variance model was first established in 1989, when Sir Dennis Weatherstone, the new chairman of J.P. Morgan, asked for a daily report measuring and explaining the risks of his firm. Nearly four years later in 1992, J.P. Morgan launched the RiskMetrics methodology to the marketplace, making the substantive research and analysis that satisfied Sir Dennis Weatherstone's request freely available to all market participants.
In portfolio theory, a mutual fund separation theorem, mutual fund theorem, or separation theorem is a theorem stating that, under certain conditions, any investor's optimal portfolio can be constructed by holding each of certain mutual funds in appropriate ratios, where the number of mutual funds is smaller than the number of individual assets in the portfolio. Here a mutual fund refers to any specified benchmark portfolio of the available assets. There are two advantages of having a mutual fund theorem. First, if the relevant conditions are met, it may be easier for an investor to purchase a smaller number of mutual funds than to purchase a larger number of assets individually. Second, from a theoretical and empirical standpoint, if it can be assumed that the relevant conditions are indeed satisfied, then implications for the functioning of asset markets can be derived and tested.
Financial correlations measure the relationship between the changes of two or more financial variables over time. For example, the prices of equity stocks and fixed interest bonds often move in opposite directions: when investors sell stocks, they often use the proceeds to buy bonds and vice versa. In this case, stock and bond prices are negatively correlated.
Returns-based style analysis (RBSA) is a statistical technique used in finance to deconstruct the returns of investment strategies using a variety of explanatory variables. The model results in a strategy's exposures to asset classes or other factors, interpreted as a measure of a fund or portfolio manager's investment style. While the model is most frequently used to show an equity mutual fund’s style with reference to common style axes, recent applications have extended the model’s utility to model more complex strategies, such as those employed by hedge funds.
In portfolio management, the Carhart four-factor model extends the Fama–French three-factor model with an additional factor, as proposed by Mark Carhart. The Fama–French model, developed in the 1990s, argued most stock market returns are explained by three factors: risk, price and company size. Carhart added a momentum factor for asset pricing of stocks. The four-factor model is also known in the industry as the Monthly Momentum Factor (MOM). Momentum is the speed or velocity of price changes in a stock, security, or tradable instrument.
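A sketch of how a fund's excess returns might be regressed on the four factors, using synthetic series as stand-ins for the published market, size, value and momentum factors:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 120                                   # ten years of monthly synthetic data
MKT, SMB, HML, MOM = rng.normal(0.0, 0.04, (4, T))  # stand-in factor returns
fund = (0.001 + 1.0 * MKT + 0.3 * SMB - 0.1 * HML + 0.2 * MOM
        + rng.normal(0.0, 0.01, T))       # synthetic fund excess returns

X = np.column_stack([np.ones(T), MKT, SMB, HML, MOM])
alpha, b_mkt, b_smb, b_hml, b_mom = np.linalg.lstsq(X, fund, rcond=None)[0]
print(alpha, b_mkt, b_smb, b_hml, b_mom)  # loadings near the true values
```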
Stochastic portfolio theory (SPT) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT.