where:"}},"i":0}}]}" id="mwOg">Four equivalent formulations,^{ [6] } where:
Financial economics studies how rational investors would apply decision theory to investment management. The subject is thus built on the foundations of microeconomics and derives several key results for the application of decision making under uncertainty to the financial markets. The underlying economic logic yields the fundamental theorem of asset pricing, which gives the conditions for arbitrage-free asset pricing.^{ [6] }^{ [5] } The various "fundamental" valuation formulae result directly.
Underlying all of financial economics are the concepts of present value and expectation.^{ [6] }
Calculating their present value allows the decision maker to aggregate the cashflows (or other returns) to be produced by the asset in the future to a single value at the date in question, and to thus more readily compare two opportunities; this concept is then the starting point for financial decision making. ^{ [note 1] } (Note that here, "r" represents a generic (or arbitrary) discount rate applied to the cash flows, whereas in the valuation formulae, the risk-free rate is applied once these have been "adjusted" for their riskiness; see below.)
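As a minimal numeric sketch of this aggregation (the cashflows and the 5% discount rate below are hypothetical, chosen for illustration only):

```python
def present_value(cashflows, r):
    """Aggregate per-period future cashflows to a single value today,
    discounting at rate r (the first flow arrives one period out)."""
    return sum(c / (1 + r) ** t for t, c in enumerate(cashflows, start=1))

# Two opportunities become directly comparable once collapsed to today's value:
a = present_value([105.0], 0.05)        # 105 one year out
b = present_value([0.0, 110.25], 0.05)  # 110.25 two years out
```

Both opportunities discount to the same value today, even though their raw cashflows differ in size and timing.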
An immediate extension is to combine probabilities with present value, leading to the expected value criterion which sets asset value as a function of the sizes of the expected payouts and the probabilities of their occurrence, X_s and p_s respectively. ^{ [note 2] }
This decision method, however, fails to consider risk aversion ("as any student of finance knows"^{ [6] }). In other words, since individuals receive greater utility from an extra dollar when they are poor and less utility when comparatively rich, the approach is therefore to "adjust" the weight assigned to the various outcomes - i.e. "states" - correspondingly, Y_s. See indifference price. (Some investors may in fact be risk seeking as opposed to risk averse, but the same logic would apply.)
Choice under uncertainty here may then be characterized as the maximization of expected utility. More formally, the resulting expected utility hypothesis states that, if certain axioms are satisfied, the subjective value associated with a gamble by an individual is that individual's statistical expectation of the valuations of the outcomes of that gamble.
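A sketch of the distinction between expected value and expected utility, assuming (for illustration only) logarithmic utility and a hypothetical 50/50 gamble:

```python
import math

def expected_utility(payouts, probs, u=math.log):
    """Probability-weighted utility of a gamble's outcomes."""
    return sum(p * u(x) for p, x in zip(probs, payouts))

def certainty_equivalent(payouts, probs):
    """The sure amount carrying the same log-utility as the gamble."""
    return math.exp(expected_utility(payouts, probs))

ce = certainty_equivalent([150.0, 50.0], [0.5, 0.5])
ev = 0.5 * 150.0 + 0.5 * 50.0  # expected value of the same gamble
```

For a concave (risk-averse) utility the certainty equivalent falls below the expected value; the gap is the premium the individual would give up to avoid the risk.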
The impetus for these ideas arises from various inconsistencies observed under the expected value framework, such as the St. Petersburg paradox and the Ellsberg paradox. ^{ [note 3] }
JEL classification codes |
In the Journal of Economic Literature classification codes, Financial Economics is one of the 19 primary classifications, at JEL: G. It follows Monetary and International Economics and precedes Public Economics. For detailed subclassifications see JEL classification codes § G. Financial Economics. The New Palgrave Dictionary of Economics (2008, 2nd ed.) also uses the JEL codes to classify its entries in v. 8, Subject Index, including Financial Economics at pp. 863–64. |
The concepts of arbitrage-free, "rational", pricing and equilibrium are then coupled ^{ [11] } with the above to derive various of the "classical"^{ [12] } (or "neo-classical" ^{ [13] }) financial economics models.
Rational pricing is the assumption that asset prices (and hence asset pricing models) will reflect the arbitrage-free price of the asset, as any deviation from this price will be "arbitraged away". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments.
Economic equilibrium is, in general, a state in which economic forces such as supply and demand are balanced and, in the absence of external influences, these equilibrium values of economic variables will not change. General equilibrium deals with the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium. (This is in contrast to partial equilibrium, which only analyzes single markets.)
The two concepts are linked as follows: where market prices do not allow for profitable arbitrage, i.e. they comprise an arbitrage-free market, then these prices are also said to constitute an "arbitrage equilibrium". Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, then prices can be expected to change, and are therefore not in equilibrium.^{ [14] } An arbitrage equilibrium is thus a precondition for a general economic equilibrium.
The immediate, and formal, extension of this idea, the fundamental theorem of asset pricing, shows that where markets are as described – and are additionally (implicitly and correspondingly) complete – one may then make financial decisions by constructing a risk neutral probability measure corresponding to the market.
"Complete" here means that there is a price for every asset in every possible state of the world, , and that the complete set of possible bets on future states-of-the-world can therefore be constructed with existing assets (assuming no friction): essentially solving simultaneously for n (risk-neutral) probabilities, , given n prices. For a simplified example see Rational pricing § Risk neutral valuation, where the economy has only two possible states – up and down – and where and (=) are the two corresponding probabilities, and in turn, the derived distribution, or "measure".
The formal derivation will proceed by arbitrage arguments. ^{ [6] }^{ [14] }^{ [11] } The analysis here is often undertaken assuming a representative agent, ^{ [15] } essentially treating all market-participants, "agents", as identical (or, at least, that they act in such a way that the sum of their choices is equivalent to the decision of one individual) with the effect that the problems are then mathematically tractable.
With this measure in place, the expected, i.e. required, return of any security (or portfolio) will then equal the riskless return, plus an "adjustment for risk",^{ [6] } i.e. a security-specific risk premium, compensating for the extent to which its cashflows are unpredictable. All pricing models are then essentially variants of this, given specific assumptions or conditions.^{ [6] }^{ [5] }^{ [16] } This approach is consistent with the above, but with the expectation based on "the market" (i.e. arbitrage-free, and, per the theorem, therefore in equilibrium) as opposed to individual preferences.
Thus, continuing the example, in pricing a derivative instrument its forecasted cashflows in the up- and down-states, X_up and X_down, are multiplied through by q_up and q_down, and are then discounted at the risk-free interest rate; per the second equation above. In pricing a "fundamental", underlying, instrument (in equilibrium), on the other hand, a risk-appropriate premium over risk-free is required in the discounting, essentially employing the first equation with Y and r combined. In general, this premium may be derived by the CAPM (or extensions) as will be seen under § Uncertainty.
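A sketch of this risk-neutral pricing of a claim; the two-state numbers are hypothetical (with underlying at 100 moving to 120 or 90 and r = 5%, q_up = 0.5 follows):

```python
def price_claim(x_up, x_down, q_up, r):
    """Risk-neutral expectation of the payoff, discounted risk-free."""
    return (q_up * x_up + (1 - q_up) * x_down) / (1 + r)

# A call struck at 100 on the hypothetical underlying pays 20 up, 0 down:
call = price_claim(max(120.0 - 100.0, 0.0), max(90.0 - 100.0, 0.0), 0.5, 0.05)
```

No risk premium appears anywhere: the riskiness of the payoff is carried entirely by the measure q, not by the discount rate.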
The difference is explained as follows: By construction, the value of the derivative will (must) grow at the risk free rate, and, by arbitrage arguments, its value must then be discounted correspondingly; in the case of an option, this is achieved by "manufacturing" the instrument as a combination of the underlying and a risk free "bond"; see Rational pricing § Delta hedging (and § Uncertainty below). Where the underlying is itself being priced, such "manufacturing" is of course not possible – the instrument being "fundamental", i.e. as opposed to "derivative" – and a premium is then required for risk.
(Correspondingly, mathematical finance separates into two analytic regimes: risk and portfolio management (generally) use physical (or actual or actuarial) probability, denoted by "P"; while derivatives pricing uses risk-neutral probability (or arbitrage-pricing probability), denoted by "Q". In specific applications the lower case is used, as in the above equations.)
With the above relationship established, the further specialized Arrow–Debreu model may be derived. ^{ [note 4] } This result suggests that, under certain economic conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy. The Arrow–Debreu model applies to economies with maximally complete markets, in which there exists a market for every time period and forward prices for every commodity at all time periods.
A direct extension, then, is the concept of a state price security (also called an Arrow–Debreu security), a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs ("up" and "down" in the simplified example above) at a particular time in the future and pays zero numeraire in all the other states. The price of this security is the state price of this particular state of the world; also referred to as a "Risk Neutral Density".^{ [20] }
In the above example, the state prices, π_up and π_down, would equate to the present values of q_up and q_down: i.e. what one would pay today, respectively, for the up- and down-state securities; the state price vector is the vector of state prices for all states. Applied to derivative valuation, the price today would simply be [π_up × X_up + π_down × X_down]: the fourth formula (see above regarding the absence of a risk premium here). For a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price "density". These concepts are extended to martingale pricing and the related risk-neutral measure.
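A sketch with hypothetical values (a 5% rate and risk-neutral probabilities of 0.5 each), showing that state prices both recover the riskless bond and price an example claim:

```python
r, q_up = 0.05, 0.5
pi_up = q_up / (1 + r)          # state price of the up-state security
pi_down = (1 - q_up) / (1 + r)  # state price of the down-state security

def price_from_state_prices(x_up, x_down):
    """Any payoff is a linear combination of the two state prices."""
    return pi_up * x_up + pi_down * x_down

riskless = price_from_state_prices(1.0, 1.0)  # a sure unit payoff
call = price_from_state_prices(20.0, 0.0)     # pays 20 up, 0 down
```

The state prices necessarily sum to the riskless discount factor, since a unit payoff in every state is exactly a riskless bond.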
State prices find immediate application as a conceptual tool ("contingent claim analysis");^{ [6] } but can also be applied to valuation problems.^{ [21] } Given the pricing mechanism described, one can decompose the derivative value – true in fact for "every security"^{ [2] } – as a linear combination of its state-prices; i.e. back-solve for the state-prices corresponding to observed derivative prices.^{ [22] }^{ [21] }^{ [20] } These recovered state-prices can then be used for valuation of other instruments with exposure to the underlyer, or for other decision making relating to the underlyer itself.
Using the related stochastic discount factor - also called the pricing kernel - the asset price is computed by "discounting" the future cash flow by the stochastic factor m̃, and then taking the expectation;^{ [16] } the third equation above. Essentially, this factor divides expected utility at the relevant future period - a function of the possible asset values realized under each state - by the utility due to today's wealth, and is then also referred to as "the intertemporal marginal rate of substitution".
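A sketch of pricing via the stochastic discount factor, using the hypothetical two-state state prices above and assumed physical probabilities; note that the physical probabilities p and the state prices π need not agree state-by-state:

```python
p = {"up": 0.6, "down": 0.4}                 # assumed physical probabilities
pi = {"up": 0.5 / 1.05, "down": 0.5 / 1.05}  # hypothetical state prices
m = {s: pi[s] / p[s] for s in p}             # SDF: state price per unit probability

def price(x):
    """Asset price as the physical-measure expectation of m times the payoff."""
    return sum(p[s] * m[s] * x[s] for s in p)

riskless = price({"up": 1.0, "down": 1.0})   # reprices the riskless bond
stock = price({"up": 120.0, "down": 90.0})   # reprices the underlying
```

Because p·m collapses back to the state prices, this formulation is numerically identical to state-price and risk-neutral pricing; it differs only in which objects are taken as primitive.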
Bond valuation formula: P = Σ_{t=1}^{T} C/(1 + i)^t + F/(1 + i)^T, where Coupons C and Face value F are discounted at the appropriate rate "i": typically a spread over the (per period) risk free rate as a function of credit risk; often quoted as a "yield to maturity". See body for discussion re the relationship with the above pricing formulae. |
DCF valuation formula: V = Σ_t FCF_t/(1 + WACC)^t, where the value of the firm V is its forecasted free cash flows FCF_t discounted to the present using the weighted average cost of capital, i.e. cost of equity and cost of debt, with the former (often) derived using the below CAPM. For share valuation investors use the related dividend discount model. |
The capital asset pricing model (CAPM): E(R_i) = R_f + β_i [E(R_m) − R_f]. The expected return used when discounting cashflows on an asset i is the risk-free rate plus the market premium multiplied by beta (β_i), the asset's correlated volatility relative to the overall market. |
The Black–Scholes equation: ∂V/∂t + ½σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0 |
The Black–Scholes formula for the value of a call option: C(S, t) = N(d₁)S − N(d₂)Ke^{−r(T−t)}, with d₁ = [ln(S/K) + (r + σ²/2)(T − t)] / (σ√(T − t)) and d₂ = d₁ − σ√(T − t), where N(·) is the standard normal CDF, K the strike and T − t the time to maturity. |
Applying the above economic concepts, we may then derive various economic- and financial models and principles. As above, the two usual areas of focus are Asset Pricing and Corporate Finance, the first being the perspective of providers of capital, the second of users of capital. Here, and for (almost) all other financial economics models, the questions addressed are typically framed in terms of "time, uncertainty, options, and information",^{ [1] }^{ [15] } as will be seen below.
Applying this framework, with the above concepts, leads to the required models. This derivation begins with the assumption of "no uncertainty" and is then expanded to incorporate the other considerations.^{ [4] } (This division is sometimes denoted "deterministic" and "random",^{ [23] } or "stochastic".)
The starting point here is "Investment under certainty", usually framed in the context of a corporation. The Fisher separation theorem asserts that the objective of the corporation will be the maximization of its present value, regardless of the preferences of its shareholders. Related is the Modigliani–Miller theorem, which shows that, under certain conditions, the value of a firm is unaffected by how that firm is financed, and depends neither on its dividend policy nor its decision to raise capital by issuing stock or selling debt. The proof here proceeds using arbitrage arguments, and acts as a benchmark ^{ [11] } for evaluating the effects of factors outside the model that do affect value. ^{ [note 5] }
The mechanism for determining (corporate) value is provided by ^{ [26] }^{ [27] } John Burr Williams' The Theory of Investment Value , which proposes that the value of an asset should be calculated using "evaluation by the rule of present worth". Thus, for a common stock, the "intrinsic", long-term worth is the present value of its future net cashflows, in the form of dividends. What remains to be determined is the appropriate discount rate. Later developments show that, "rationally", i.e. in the formal sense, the appropriate discount rate here will (should) depend on the asset's riskiness relative to the overall market, as opposed to its owners' preferences; see below. Net present value (NPV) is the direct extension of these ideas typically applied to Corporate Finance decisioning. For other results, as well as specific models developed here, see the list of "Equity valuation" topics under Outline of finance § Discounted cash flow valuation. ^{ [note 6] }
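Williams' "evaluation by the rule of present worth", and its NPV extension, can be sketched numerically; the dividend, growth rate, discount rate, and project cashflows below are all hypothetical (the constant-growth form shown is the Gordon variant of the dividend discount model):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is today's (typically negative) outlay."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def gordon_value(next_dividend, r, g):
    """Constant-growth 'present worth' of a dividend stream: D1 / (r - g)."""
    return next_dividend / (r - g)

stock = gordon_value(5.0, 0.08, 0.03)      # intrinsic value of a share
project = npv(0.10, [-100.0, 60.0, 60.0])  # accept the project if positive
```

The open question flagged in the text, which discount rate r to use, is what the CAPM below answers.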
Bond valuation, in that cashflows (coupons and return of principal, or "Face value") are deterministic, may proceed in the same fashion.^{ [23] } An immediate extension, Arbitrage-free bond pricing, discounts each cashflow at the market derived rate – i.e. at each coupon's corresponding zero rate, and of equivalent credit worthiness – as opposed to an overall rate. In many treatments bond valuation precedes equity valuation, under which cashflows (dividends) are not "known" per se. Williams and onward allow for forecasting as to these – based on historic ratios or published dividend policy – and cashflows are then treated as essentially deterministic; see below under § Corporate finance theory.
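The contrast between an overall rate and per-cashflow zero rates can be sketched as follows (annual pay assumed; the coupon and zero rates are hypothetical):

```python
def price_flat(face, coupon, ytm, n):
    """All cashflows discounted at a single overall rate (yield to maturity)."""
    pv = sum(face * coupon / (1 + ytm) ** t for t in range(1, n + 1))
    return pv + face / (1 + ytm) ** n

def price_zero_curve(face, coupon, zeros):
    """Arbitrage-free: each cashflow discounted at its own maturity's zero rate."""
    n = len(zeros)
    pv = sum(face * coupon / (1 + z) ** t for t, z in enumerate(zeros, 1))
    return pv + face / (1 + zeros[-1]) ** n

par = price_flat(100.0, 0.05, 0.05, 3)                  # coupon = yield: par
p = price_zero_curve(100.0, 0.05, [0.02, 0.03, 0.04])   # upward-sloping curve
```

When the zero curve is not flat, the single-rate and curve-based prices differ; the curve-based price is the one consistent with no-arbitrage across the coupon strips.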
For both, "under certainty, with the focus on cash flows from securities over time," valuation based on a term structure of interest rates is in fact consistent with arbitrage-free pricing.^{ [28] } Indeed, a corollary of the above is that "the law of one price implies the existence of a discount factor" ^{ [29] } (and hence, as formulated, the stochastic discount factor m̃ above).
Whereas these "certainty" results are all commonly employed under corporate finance, uncertainty is the focus of "asset pricing models" as follows. Fisher's formulation of the theory here - developing an intertemporal equilibrium model - also underpins ^{ [26] } the below applications to uncertainty; ^{ [note 7] } see ^{ [30] } for the development.
For "choice under uncertainty" the twin assumptions of rationality and market efficiency, as more closely defined, lead to modern portfolio theory (MPT) with its capital asset pricing model (CAPM) – an equilibrium-based result – and to the Black–Scholes–Merton theory (BSM; often, simply Black–Scholes) for option pricing – an arbitrage-free result. As above, the (intuitive) link between these, is that the latter derivative prices are calculated such that they are arbitrage-free with respect to the more fundamental, equilibrium determined, securities prices; see Asset pricing § Interrelationship.
Briefly, and intuitively – and consistent with § Arbitrage-free pricing and equilibrium above – the relationship between rationality and efficiency is as follows.^{ [31] } Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more "correct", i.e. efficient, prices: the efficient-market hypothesis, or EMH. Thus, if prices of financial assets are (broadly) efficient, then deviations from these (equilibrium) values could not last for long. (See earnings response coefficient.) The EMH (implicitly) assumes that average expectations constitute an "optimal forecast", i.e. prices using all available information are identical to the best guess of the future: the assumption of rational expectations. The EMH does allow that when faced with new information, some investors may overreact and some may underreact, but what is required is that investors' reactions follow a normal distribution – so that the net effect on market prices cannot be reliably exploited to make an abnormal profit. In the competitive limit, then, market prices will reflect all available information and prices can only move in response to news:^{ [32] } the random walk hypothesis. This news, of course, could be "good" or "bad", minor or, less commonly, major; and these moves are then, correspondingly, normally distributed; with the price therefore following a log-normal distribution. ^{ [note 8] }
Under these conditions, investors can then be assumed to act rationally: their investment decision must be calculated or a loss is sure to follow; correspondingly, where an arbitrage opportunity presents itself, then arbitrageurs will exploit it, reinforcing this equilibrium. Here, as under the certainty-case above, the specific assumption as to pricing is that prices are calculated as the present value of expected future dividends, ^{ [5] }^{ [32] }^{ [15] } as based on currently available information. What is required though, is a theory for determining the appropriate discount rate, i.e. "required return", given this uncertainty: this is provided by the MPT and its CAPM. Relatedly, rationality – in the sense of arbitrage-exploitation – gives rise to Black–Scholes; option values here ultimately consistent with the CAPM.
In general, then, while portfolio theory studies how investors should balance risk and return when investing in many assets or securities, the CAPM is more focused, describing how, in equilibrium, markets set the prices of assets in relation to how risky they are. ^{ [note 9] } This result will be independent of the investor's level of risk aversion and assumed utility function, thus providing a readily determined discount rate for corporate finance decision makers as above,^{ [35] } and for other investors. The argument proceeds as follows: ^{ [36] } If one can construct an efficient frontier – i.e. each combination of assets offering the best possible expected level of return for its level of risk, see diagram – then mean-variance efficient portfolios can be formed simply as a combination of holdings of the risk-free asset and the "market portfolio" (the Mutual fund separation theorem), with the combinations here plotting as the capital market line, or CML. Then, given this CML, the required return on a risky security will be independent of the investor's utility function, and solely determined by its covariance ("beta") with aggregate, i.e. market, risk. This is because investors here can then maximize utility through leverage as opposed to pricing; see Separation property (finance), Markowitz model § Choosing the best portfolio and CML diagram aside. As can be seen in the formula aside, this result is consistent with the preceding, equaling the riskless return plus an adjustment for risk.^{ [5] } A more modern, direct, derivation is as described at the bottom of this section; which can be generalized to derive other equilibrium-pricing models.
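The resulting required-return relation can be sketched directly; the risk-free rate, beta, and market return below are hypothetical:

```python
def capm_return(rf, beta, market_return):
    """Required return: the riskless rate plus beta times the market premium."""
    return rf + beta * (market_return - rf)

er = capm_return(rf=0.03, beta=1.2, market_return=0.08)
zero_beta = capm_return(rf=0.03, beta=0.0, market_return=0.08)
```

Consistent with the argument above, only covariance with the market matters: an asset with zero beta earns exactly the riskless return regardless of its total volatility.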
Black–Scholes provides a mathematical model of a financial market containing derivative instruments, and the resultant formula for the price of European-styled options. ^{ [note 10] } The model is expressed as the Black–Scholes equation, a partial differential equation describing the changing price of the option over time; it is derived assuming log-normal, geometric Brownian motion (see Brownian model of financial markets). The key financial insight behind the model is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk", absenting the risk adjustment from the pricing (V, the value, or price, of the option, grows at r, the risk-free rate).^{ [6] }^{ [5] } This hedge, in turn, implies that there is only one right price – in an arbitrage-free sense – for the option. And this price is returned by the Black–Scholes option pricing formula. (The formula, and hence the price, is consistent with the equation, as the formula is the solution to the equation.) Since the formula is without reference to the share's expected return, Black–Scholes inheres risk neutrality; intuitively consistent with the "elimination of risk" here, and mathematically consistent with § Arbitrage-free pricing and equilibrium above. Relatedly, therefore, the pricing formula may also be derived directly via risk neutral expectation. Itô's lemma provides the underlying mathematics, and, with Itô calculus more generally, remains fundamental in quantitative finance. ^{ [note 11] }
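A minimal implementation of the closed-form call price (the spot, strike, rate, and volatility are hypothetical; the standard normal CDF is built from the error function):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black–Scholes price of a European call; note no expected-return input."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

c = bs_call(s=100.0, k=100.0, t=1.0, r=0.05, sigma=0.20)
```

As the text notes, the share's expected return appears nowhere in the signature: only the riskless rate and volatility enter, the hallmark of risk-neutral pricing.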
As implied by the Fundamental Theorem, it can be shown that the two models are consistent; then, as is to be expected, "classical" financial economics is thus unified. Here, the Black–Scholes equation can alternatively be derived from the CAPM, and the price obtained from the Black–Scholes model is thus consistent with the assumptions of the CAPM.^{ [44] }^{ [13] } The Black–Scholes theory, although built on Arbitrage-free pricing, is therefore consistent with the equilibrium based capital asset pricing. Both models, in turn, are ultimately consistent with the Arrow–Debreu theory, and can be derived via state-pricing – essentially, by expanding the fundamental result above – further explaining, and if required demonstrating, this unity.^{ [6] } Here, the CAPM is derived by linking Y, risk aversion, to overall market return, and setting the return on security j as X_j / Price_j; see Stochastic discount factor § Properties. The Black–Scholes formula is found, in the limit, by attaching a binomial probability ^{ [11] } to each of numerous possible spot-prices (i.e. states) and then rearranging for the terms corresponding to N(d₁) and N(d₂), per the boxed description; see Binomial options pricing model § Relationship with Black–Scholes.
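The limiting argument can be illustrated with a Cox–Ross–Rubinstein binomial lattice, whose price approaches the Black–Scholes value (about 10.45 for these hypothetical parameters) as the number of steps grows:

```python
from math import exp, sqrt

def crr_call(s, k, t, r, sigma, n):
    """European call on a Cox–Ross–Rubinstein lattice of n steps."""
    dt = t / n
    u = exp(sigma * sqrt(dt))        # up factor per step
    d = 1.0 / u                      # down factor per step
    q = (exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = exp(-r * dt)
    # Terminal payoffs (j up-moves), then backward induction by
    # discounted risk-neutral expectation at each node:
    values = [max(s * u ** j * d ** (n - j) - k, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

approx = crr_call(100.0, 100.0, 1.0, 0.05, 0.2, 500)
```

Each lattice node is, in effect, a state with an attached binomial probability, making this a direct numerical instance of the state-pricing unification described above.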
More recent work further generalizes and extends these models. As regards asset pricing, developments in equilibrium-based pricing are discussed under "Portfolio theory" below, while "Derivative pricing" relates to risk-neutral, i.e. arbitrage-free, pricing. As regards the use of capital, "Corporate finance theory" relates, mainly, to the application of these models.
The majority of developments here relate to required return, i.e. pricing, extending the basic CAPM. Multi-factor models such as the Fama–French three-factor model and the Carhart four-factor model propose factors other than market return as relevant in pricing. The intertemporal CAPM and consumption-based CAPM similarly extend the model. With intertemporal portfolio choice, the investor now repeatedly optimizes her portfolio; while the inclusion of consumption (in the economic sense) then incorporates all sources of wealth, and not just market-based investments, into the investor's calculation of required return.
Whereas the above extend the CAPM, the single-index model is a simpler model. It assumes only a correlation between security and market returns, without (numerous) other economic assumptions. It is useful in that it simplifies the estimation of correlation between securities, significantly reducing the inputs for building the correlation matrix required for portfolio optimization. The arbitrage pricing theory (APT) similarly differs as regards its assumptions. APT "gives up the notion that there is one right portfolio for everyone in the world, and ...replaces it with an explanatory model of what drives asset returns."^{ [45] } It returns the required (expected) return of a financial asset as a linear function of various macro-economic factors, and assumes that arbitrage should bring incorrectly priced assets back into line.^{ [note 12] }
As regards portfolio optimization, the Black–Litterman model ^{ [48] } departs from the original Markowitz model – i.e. of constructing portfolios via an efficient frontier. Black–Litterman instead starts with an equilibrium assumption, and is then modified to take into account the 'views' (i.e., the specific opinions about asset returns) of the investor in question to arrive at a bespoke ^{ [49] } asset allocation. Where factors additional to volatility are considered (kurtosis, skew...) then multiple-criteria decision analysis can be applied; here deriving a Pareto efficient portfolio. The universal portfolio algorithm applies machine learning to asset selection, learning adaptively from historical data. Behavioral portfolio theory recognizes that investors have varied aims and create an investment portfolio that meets a broad range of goals. Copulas have lately been applied here, as, more recently, have genetic algorithms and machine learning more generally. (Tail) risk parity focuses on allocation of risk, rather than allocation of capital. ^{ [note 13] } See Portfolio optimization § Improving portfolio optimization for other techniques and objectives, and Financial risk management § Investment management for discussion.
PDE for a zero-coupon bond: ∂P/∂t + ½σ² ∂²P/∂r² + (μ − λσ) ∂P/∂r − rP = 0. Interpretation: Analogous to Black–Scholes, ^{ [50] } arbitrage arguments describe the instantaneous change in the bond price P for changes in the (risk-free) short rate r; the analyst selects the specific short-rate model (drift μ, volatility σ, market price of risk λ) to be employed. |
In pricing derivatives, the binomial options pricing model provides a discretized version of Black–Scholes, useful for the valuation of American styled options. Discretized models of this type are built – at least implicitly – using state-prices (as above); relatedly, a large number of researchers have used options to extract state-prices for a variety of other applications in financial economics.^{ [6] }^{ [44] }^{ [22] } For path dependent derivatives, Monte Carlo methods for option pricing are employed; here the modelling is in continuous time, but similarly uses risk neutral expected value. Various other numeric techniques have also been developed. The theoretical framework too has been extended such that martingale pricing is now the standard approach. ^{ [note 14] }
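A minimal Monte Carlo sketch of risk-neutral expected value for a European call (the parameters are hypothetical and a fixed seed is assumed for reproducibility; a single terminal draw per path suffices here, though the same mechanics extend to full paths for path-dependent payoffs):

```python
import random
from math import exp, sqrt

def mc_call(s, k, t, r, sigma, n_paths, seed=42):
    """Discounted average of terminal call payoffs under the
    risk-neutral measure (log-normal terminal price, drift r)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        st = s * exp(drift + vol * rng.gauss(0.0, 1.0))  # terminal price
        total += max(st - k, 0.0)
    return exp(-r * t) * total / n_paths

est = mc_call(100.0, 100.0, 1.0, 0.05, 0.2, 200_000)
```

The estimate converges (with sampling error shrinking as 1/√n) to the same value the lattice and closed-form approaches produce, since all three compute the same discounted risk-neutral expectation.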
Drawing on these techniques, models for various other underlyings and applications have also been developed, all based on the same logic (using "contingent claim analysis"). Real options valuation allows that option holders can influence the option's underlying; models for employee stock option valuation explicitly assume non-rationality on the part of option holders; Credit derivatives allow that payment obligations or delivery requirements might not be honored. Exotic derivatives are now routinely valued. Multi-asset underlyers are handled via simulation or copula based analysis.
Similarly, the various short-rate models allow for an extension of these techniques to fixed income- and interest rate derivatives. (The Vasicek and CIR models are equilibrium-based, while Ho–Lee and subsequent models are based on arbitrage-free pricing.) The more general HJM Framework describes the dynamics of the full forward-rate curve – as opposed to working with short rates – and is then more widely applied. The valuation of the underlying instrument – additional to its derivatives – is relatedly extended, particularly for hybrid securities, where credit risk is combined with uncertainty re future rates; see Bond valuation § Stochastic calculus approach and Lattice model (finance) § Hybrid securities. ^{ [note 15] }
Following the Crash of 1987, equity options traded in American markets began to exhibit what is known as a "volatility smile"; that is, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices, and thus implied volatilities, than what is suggested by BSM. (The pattern differs across various markets.) Modelling the volatility smile is an active area of research, and developments here – as well as implications re the standard theory – are discussed in the next section.
After the financial crisis of 2007–2008, a further development:^{ [59] } as outlined, (over the counter) derivative pricing had relied on the BSM risk neutral pricing framework, under the assumptions of funding at the risk free rate and the ability to perfectly replicate cashflows so as to fully hedge. This, in turn, is built on the assumption of a credit-risk-free environment – called into question during the crisis. Addressing this, therefore, issues such as counterparty credit risk, funding costs and costs of capital are now additionally considered when pricing,^{ [60] } and a credit valuation adjustment, or CVA – and potentially other valuation adjustments, collectively xVA – is generally added to the risk-neutral derivative value. The standard economic arguments can be extended to incorporate these various adjustments.^{ [61] }
A related, and perhaps more fundamental, change is that discounting is now on the Overnight Index Swap (OIS) curve, as opposed to LIBOR as used previously.^{ [59] } This is because post-crisis, the overnight rate is considered a better proxy for the "risk-free rate".^{ [62] } (Also, practically, the interest paid on cash collateral is usually the overnight rate; OIS discounting is then, sometimes, referred to as "CSA discounting".) Swap pricing – and, therefore, yield curve construction – is further modified: previously, swaps were valued off a single "self discounting" interest rate curve; whereas post crisis, to accommodate OIS discounting, valuation is now under a "multi-curve framework" where "forecast curves" are constructed for each floating-leg LIBOR tenor, with discounting on the common OIS curve.
Mirroring the above developments, asset-valuation and decisioning no longer need assume "certainty". Monte Carlo methods in finance allow financial analysts to construct "stochastic" or probabilistic corporate finance models, as opposed to the traditional static and deterministic models;^{ [63] } see Corporate finance § Quantifying uncertainty. Relatedly, Real Options theory allows for owner – i.e. managerial – actions that impact underlying value: by incorporating option pricing logic, these actions are then applied to a distribution of future outcomes, changing with time, which then determine the "project's" valuation today.^{ [64] } More traditionally, decision trees – which are complementary – have been used to evaluate projects, by incorporating in the valuation (all) possible events (or states) and consequent management decisions;^{ [65] }^{ [63] } the correct discount rate here reflecting each decision-point's "non-diversifiable risk looking forward."^{ [63] }^{ [note 16] }
Related to this is the treatment of forecasted cashflows in equity valuation. In many cases, following Williams above, the average (or most likely) cash-flows were discounted,^{ [67] } as opposed to a theoretically correct state-by-state treatment under uncertainty; see comments under Financial modeling § Accounting. In more modern treatments, then, it is the expected cashflows (in the mathematical sense: Σ_s p_s X_sj) combined into an overall value per forecast period which are discounted. ^{ [68] }^{ [69] }^{ [70] }^{ [63] } And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows ^{ [63] } (essentially, Y and r combined).
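A sketch of this state-by-state treatment; the probabilities, per-state cashflows, risk-free rate, and risk premium below are all hypothetical:

```python
def value_cashflows(state_probs, cashflows_by_period, rf, premium):
    """Combine per-state cashflows into an expected value per period
    (sum of p_s * X_s), then discount at the risk-free rate plus a
    risk premium - essentially Y and r combined into one rate."""
    rate = rf + premium
    value = 0.0
    for t, states in enumerate(cashflows_by_period, start=1):
        expected = sum(p * x for p, x in zip(state_probs, states))
        value += expected / (1 + rate) ** t
    return value

# Two periods, two equally likely states per period:
v = value_cashflows([0.5, 0.5], [(120.0, 80.0), (130.0, 70.0)], 0.03, 0.05)
```

Here each period's expectation is 100, discounted at 8%; widening the spread between the states would leave the expectation, and hence this value, unchanged, which is why the premium (rather than the cashflow forecast) must carry the risk adjustment.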
Other developments here include^{ [71] } agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues relating to capital structure. ^{ [72] } Clean surplus accounting and the related residual income valuation provide a model that returns price as a function of earnings, expected returns, and change in book value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value per Modigliani and Miller's "Irrelevance principle"; see Dividend policy § Relevance of dividend policy.
"Corporate finance" as a discipline more generally, per Fisher above, relates to the long-term objective of maximizing the value of the firm - and its return to shareholders - and thus also incorporates the areas of capital structure and dividend policy. ^{ [73] } Extensions of the theory here then also consider these latter, as follows: (i) optimization re capitalization structure, and theories here as to corporate choices and behavior: Capital structure substitution theory, Pecking order theory, Market timing hypothesis, Trade-off theory; (ii) considerations and analysis re dividend policy, additional to - and sometimes contrasting with - Modigliani–Miller, include: the Walter model, Lintner model, Residuals theory and signaling hypothesis, as well as discussion re the observed clientele effect and dividend puzzle.
As described, the typical application of real options is to capital budgeting type problems. However, here, they are also applied to problems of capital structure and dividend policy, and to the related design of corporate securities; ^{ [74] } and since stockholders and bondholders have different objective functions, in the analysis of the related agency problems. ^{ [64] } In all of these cases, state-prices can provide the market-implied information relating to the corporate, as above, which is then applied to the analysis. For example, convertible bonds can (must) be priced consistent with the (recovered) state-prices of the corporate's equity.^{ [21] }^{ [68] }
The discipline, as outlined, also includes a formal study of financial markets. Of interest especially are market regulation and market microstructure, and their relationship to price efficiency.
Regulatory economics studies, in general, the economics of regulation. In the context of finance, it will address the impact of financial regulation on the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence and financial stability. Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance, annual reports), insider trading, and short-selling will impact price efficiency, the cost of equity, and market liquidity.^{ [75] }
Market microstructure is concerned with the details of how exchange occurs in markets (with Walrasian-, matching-, Fisher-, and Arrow–Debreu markets as prototypes), and "analyzes how specific trading mechanisms affect the price formation process",^{ [76] } examining the ways in which the processes of a market affect determinants of transaction costs, prices, quotes, volume, and trading behavior. It has been used, for example, in providing explanations for long-standing exchange rate puzzles,^{ [77] } and for the equity premium puzzle.^{ [78] } In contrast to the above classical approach, models here explicitly allow for (testing the impact of) market frictions and other imperfections; see also market design.
For both regulation ^{ [79] } and microstructure,^{ [80] } and generally,^{ [81] } agent-based models can be developed ^{ [82] } to examine any impact due to a change in structure or policy - or to make inferences re market dynamics - by testing these in an artificial financial market, or AFM. ^{ [note 17] } This approach, essentially simulated trade between numerous agents, "typically uses artificial intelligence technologies [often genetic algorithms and neural nets] to represent the adaptive behaviour of market participants".^{ [82] }
These 'bottom-up' models "start from first principles of agent behavior",^{ [83] } with participants modifying their trading strategies having learned over time, and "are able to describe macro features [i.e. stylized facts] emerging from a soup of individual interacting strategies".^{ [83] } Agent-based models depart further from the classical approach – the representative agent, as outlined – in that they introduce heterogeneity into the environment (thereby addressing, also, the aggregation problem).
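A toy sketch of such a bottom-up model (all parameters invented for illustration): heterogeneous "fundamentalist" and "chartist" agents submit demands each period, and the price path emerges from their aggregate order imbalance rather than from a representative agent.

```python
import random

random.seed(1)
fundamental_value = 100.0
price = 100.0
prices = [price]

for step in range(500):
    demand = 0.0
    for _ in range(50):                      # 50 heterogeneous agents
        if random.random() < 0.5:
            # Fundamentalist: buys below fundamental value, sells above it
            demand += 0.05 * (fundamental_value - price)
        else:
            # Chartist: extrapolates the most recent price change
            trend = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
            demand += 0.5 * trend
        demand += random.gauss(0.0, 0.1)     # idiosyncratic noise
    price += 0.01 * demand                   # price impact of net demand
    prices.append(price)
```

Even in this crude sketch, the interaction of mean-reverting and trend-following strategies generates price dynamics no single agent intends, which is the sense in which price is an "emergent" phenomenon.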
As above, there is a very close link between (i) the random walk hypothesis, with the associated belief that price changes should follow a normal distribution, on the one hand, and (ii) market efficiency and rational expectations, on the other. Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges.
As discussed, the assumptions that market prices follow a random walk and that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analysts and risk managers frequently modify the "standard models" (see Kurtosis risk, Skewness risk, Long tail, Model risk). In fact, Benoit Mandelbrot had discovered already in the 1960s ^{ [84] } that changes in financial prices do not follow a normal distribution – the distribution underlying much option pricing theory – although this observation was slow to find its way into mainstream financial economics. ^{ [85] }
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of the above "classical" financial models; while jump diffusion models allow for (option) pricing incorporating "jumps" in the spot price.^{ [86] } Risk managers, similarly, complement (or substitute) the standard value at risk models with historical simulations, mixture models, principal component analysis, extreme value theory, as well as models for volatility clustering.^{ [87] } For further discussion see Fat-tailed distribution § Applications in economics, and Value at risk § Criticism. Portfolio managers, likewise, have modified their optimization criteria and algorithms; see § Portfolio theory above.
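For instance, a historical-simulation VaR – one of the standard complements to the parametric model mentioned above – reads the loss quantile directly off the empirical return distribution, with no normality assumption. Here the "historical" returns are generated synthetically, with a rare jump component to fatten the tails:

```python
import random

random.seed(0)
# Synthetic daily returns with occasional "jumps" to fatten the tails
returns = []
for _ in range(1000):
    r = random.gauss(0.0, 0.01)
    if random.random() < 0.02:          # rare jump component
        r += random.gauss(0.0, 0.05)
    returns.append(r)

def historical_var(returns, confidence=0.99):
    """Loss threshold exceeded on roughly (1 - confidence) of historical days."""
    losses = sorted(-r for r in returns)            # losses, ascending
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

var_99 = historical_var(returns)
```

Because the quantile comes from the observed data, the jump days feed directly into the risk estimate, whereas a Gaussian VaR fitted to the same sample would understate them.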
Closely related is the volatility smile, where, as above, implied volatility – the volatility corresponding to the BSM price – is observed to differ as a function of strike price (i.e. moneyness); this is possible only if the price-change distribution is non-normal, unlike that assumed by BSM. The term structure of volatility describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is then a three-dimensional surface plot of volatility smile and term structure. These empirical phenomena negate the assumption of constant volatility – and log-normality – upon which Black–Scholes is built.^{ [39] }^{ [86] } Within institutions, the function of Black–Scholes is now, largely, to communicate prices via implied volatilities, much like bond prices are communicated via YTM; see Black–Scholes model § The volatility smile.
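The quoting convention can be illustrated by backing an implied volatility out of a price. In this sketch (inputs illustrative), a Black–Scholes call price is generated at a known volatility and then recovered by bisection, since the BS price is monotone increasing in volatility:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: find sigma such that bs_call(...) matches the quoted price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at sigma = 0.20, then recover that sigma
quote = bs_call(100, 105, 0.5, 0.03, 0.20)
iv = implied_vol(quote, 100, 105, 0.5, 0.03)
```

Repeating this inversion across quoted strikes and maturities is exactly how the smile and surface above are constructed from market prices.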
In consequence, traders (and risk managers) now use "smile-consistent" models: firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other – i.e. non-quoted – strike/maturity combinations or of non-European derivatives; and generally for hedging purposes. The two main approaches are local volatility and stochastic volatility. The first returns the volatility which is "local" to each spot-time point of the finite difference- or simulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are first calibrated to observed prices, and are then applied to the valuation or hedging in question; the most common are Heston, SABR and CEV. This approach addresses certain problems identified with hedging under local volatility.^{ [88] }
Related to local volatility are the lattice-based implied-binomial and -trinomial trees – essentially a discretization of the approach – which are similarly, but less commonly,^{ [20] } used for pricing; these are built on state-prices recovered from the surface. Edgeworth binomial trees allow for a specified (i.e. non-Gaussian) skew and kurtosis in the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required.^{ [89] } Similarly purposed (and derived) closed-form models were also developed. ^{ [90] }
As discussed, additional to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free rate. Therefore, post-crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value for counterparty- and funding-related risk. These xVA are additional to any smile or surface effect. This is valid as the surface is built on price data relating to fully collateralized positions, and there is therefore no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...)
As mentioned at top, mathematical finance (and particularly financial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, James Rickards, amongst other critics ^{ [85] } of financial economics, suggests that, instead, the theory needs revisiting almost entirely.
As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. The assumption is also challenged theoretically, by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents. ^{ [note 18] } For related criticisms re corporate finance theory vs its practice see:^{ [92] }
Consistent with, and complementary to these findings, various persistent market anomalies have been documented, these being price or return distortions – e.g. size premiums – which appear to contradict the efficient-market hypothesis; calendar effects are the best known group here. Related to these are various of the economic puzzles, concerning phenomena similarly contradicting the theory. The equity premium puzzle, as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than the risk premium rational equity investors should demand, an "abnormal return". For further context see Random walk hypothesis § A non-random walk hypothesis, and sidebar for specific instances.
More generally, and particularly following the financial crisis of 2007–2008, financial economics and mathematical finance have been subjected to deeper criticism; notable here is Nassim Nicholas Taleb, who claims that the prices of financial assets cannot be characterized by the simple models currently in use, rendering much of current practice at best irrelevant, and, at worst, dangerously misleading; see Black swan theory, Taleb distribution. A topic of general interest has thus been financial crises, ^{ [93] } and the failure of (financial) economics to model (and predict) these.
A related problem is systemic risk: where companies hold securities in each other, this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all, a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality.
Areas of research attempting to explain (or at least model) these phenomena, and crises, include^{ [15] } noise trading, market microstructure (as above), and Heterogeneous agent models. The latter is extended to agent-based computational models, as mentioned; here ^{ [81] } price is treated as an emergent phenomenon, resulting from the interaction of the various market participants (agents). The noisy market hypothesis argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value; see Noise (economic). The adaptive market hypothesis is an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles of evolution to financial interactions. An information cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information. Copula-based modelling has similarly been applied. See also Hyman Minsky's "financial instability hypothesis", as well as George Soros' application of "reflexivity".
On the obverse, however, various studies have shown that despite these departures from efficiency, asset prices do typically exhibit a random walk and that one cannot therefore consistently outperform market averages, i.e. attain "alpha".^{ [94] } The practical implication, therefore, is that passive investing (e.g. via low-cost index funds) should, on average, serve better than any other active strategy.^{ [95] }^{ [note 19] } Relatedly, institutionally inherent limits to arbitrage – as opposed to factors directly contradictory to the theory – are sometimes proposed as an explanation for these departures from efficiency.