An event study is a statistical method to assess the impact of an event (also referred to as a "treatment"). [1]
Early prominent uses of event studies occurred in the field of finance. [1] For example, the announcement of a merger between two business entities can be analyzed to see whether investors believe the merger will create or destroy value. The basic idea is to find the abnormal return attributable to the event being studied by adjusting for the return that stems from the price fluctuation of the market as a whole. [2] The event study was invented by Ball and Brown (1968). [3]
As the event study methodology can be used to elicit the effects of any type of event on the direction and magnitude of stock price changes, it is very versatile. Event studies are thus common in various research areas, such as accounting and finance, management, economics, marketing, information technology, law, political science, operations and supply chain management. [4]
One aspect often used to structure the overall body of event studies is the breadth of the studied event types. On the one hand, there is research investigating the stock market responses to economy-wide events (i.e., market shocks, such as regulatory changes, or catastrophic events like war). On the other hand, event studies are used to investigate the stock market responses to corporate events, such as mergers and acquisitions, earnings announcements, debt or equity issues, corporate reorganisations, investment decisions and corporate social responsibility (MacKinlay 1997; [5] McWilliams & Siegel, 1997 [6] ).
The general event study methodology is explained in, for example, MacKinlay (1997) [5] or Mitchell and Netter (1994). [7] In MacKinlay (1997), this is done "using financial market data" to "measure the impact of a specific event on the value of a firm". He argues that "given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event's economic impact can be constructed using security prices observed over a relatively short time period". Short-horizon event studies are more reliable than long-horizon event studies, [8] as the latter have many limitations. However, Kothari and Warner (2005) were able to refine long-horizon methodologies in order to improve the design and reliability of studies over longer periods. [9]
Methodologically, event studies proceed as follows: based on an estimation window prior to the analyzed event, the method estimates what the normal stock returns of the affected firm(s) should be on the day of the event and on several days before and after it (i.e., during the event window). The method then subtracts these 'normal returns' from the 'actual returns' to obtain the 'abnormal returns' attributed to the event.
Event studies, however, may differ in their specification of normal returns. The most common model for normal returns is the 'market model' (MacKinlay 1997). Under this model, the analysis uses an estimation window (typically 120 trading days) prior to the event to derive the typical relationship between the firm's stock and a reference index through a regression analysis. Based on the regression coefficients, the normal returns are then projected and used to calculate the abnormal returns. Alternative models for the normal returns include the CAPM, or simpler approaches such as mean returns (see MacKinlay 1997 for an overview).
Depending on the model chosen for the 'normal return', conducting event studies requires the researcher to implement a distinct sequence of steps. For the most common model, the 'market model', the steps are as follows:
1. Define the event and the event window (the event day plus several days before and after it).
2. Choose an estimation window prior to the event window.
3. Over the estimation window, regress the firm's stock returns on the returns of a reference index.
4. Use the estimated regression coefficients to project the normal returns during the event window.
5. Subtract the projected normal returns from the actual returns to obtain the abnormal returns (AR), and cumulate them over the event window (CAR).
6. Test the abnormal returns for statistical significance.
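A minimal sketch of the market-model procedure in Python follows. The return series are synthetic, and the window lengths, the regression coefficients, and the +3% event-day jump are all illustrative assumptions, not data from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns: a 120-day estimation window followed by an
# 11-day event window (event day +/- 5 days).
est_len, event_len = 120, 11
market = rng.normal(0.0005, 0.01, est_len + event_len)  # reference index
true_alpha, true_beta = 0.0002, 1.1                      # assumed relationship
firm = true_alpha + true_beta * market + rng.normal(0, 0.005, est_len + event_len)
firm[est_len + 5] += 0.03                                # hypothetical +3% event-day jump

# Split into estimation and event windows.
m_est, f_est = market[:est_len], firm[:est_len]
m_evt, f_evt = market[est_len:], firm[est_len:]

# Market model: OLS regression of firm returns on market returns.
beta, alpha = np.polyfit(m_est, f_est, 1)

# Project the normal returns over the event window.
normal = alpha + beta * m_evt

# Abnormal returns (AR) and their cumulative sum (CAR).
ar = f_evt - normal
car = ar.cumsum()

print(f"alpha={alpha:.5f}  beta={beta:.3f}")
print(f"event-day AR={ar[5]:.4f}  CAR over window={car[-1]:.4f}")
```

With a sufficiently long estimation window, the fitted coefficients recover the firm-index relationship, so the abnormal return on the event day isolates the jump that the market movement does not explain.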
To determine whether individual abnormal returns differ from zero with statistical validity, test statistics need to be applied. Various test statistics exist for this purpose at the different levels of analysis (i.e., the AR, CAR, AAR and CAAR levels). The most common test, the t-test, divides the abnormal returns by the root mean square error of the regression. The resulting t-values are then compared with the critical values of the Student's t-distribution. There is some evidence that during times of high volatility (e.g., the financial crisis of 2007–2008), too many companies show significantly abnormal returns under the t-test, which makes it more difficult to determine which returns are truly "abnormal". [8] [10]
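The t-test described above can be sketched as follows. The estimation-window data are synthetic, and the event-day observation (market up 0.2%, firm up 2.5%) is a hypothetical example chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic estimation-window returns for one firm under the market model.
n = 120
market = rng.normal(0.0005, 0.01, n)
firm = 0.0002 + 1.0 * market + rng.normal(0, 0.005, n)

# Fit the market model and compute the regression's root mean square error
# (residual variance estimated with n - 2 degrees of freedom).
beta, alpha = np.polyfit(market, firm, 1)
resid = firm - (alpha + beta * market)
rmse = np.sqrt(resid @ resid / (n - 2))

# Hypothetical event-day observation: market +0.2%, firm +2.5%.
m_evt, f_evt = 0.002, 0.025
ar = f_evt - (alpha + beta * m_evt)

# t-statistic: abnormal return divided by the regression RMSE, to be
# compared with Student's t critical values (about 1.98 at the 5% level
# for roughly 118 degrees of freedom).
t = ar / rmse
print(f"AR={ar:.4f}  t={t:.2f}")
```

A t-value above the critical value leads to rejecting the null hypothesis of a zero abnormal return; note that in high-volatility periods the estimation-window RMSE may understate event-window variance, which is the over-rejection problem mentioned above.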
Event studies can be implemented with various tools. Single event studies can easily be implemented with MS Excel, whereas event studies covering multiple events need to be built using statistical software packages (e.g., Stata, MATLAB). Besides these multi-purpose tools, there are solutions tailored to conducting event study analyses (e.g., Eventus, EventStudyTools).
The logic behind the event study methodology (within the specific context of mergers) is explained in Warren-Boulton and Dalkir (2001). [11]
Warren-Boulton and Dalkir (2001) [11] apply their event-probability methodology to the proposed merger between Staples, Inc. and Office Depot (1996), which was challenged by the Federal Trade Commission and eventually withdrawn.
Warren-Boulton and Dalkir (2001) [11] find highly significant returns to the only rival firm in the relevant market. Based on these returns, they are able to estimate the price effect of the merger in the product market, which is highly consistent with estimates of the likely price increase from other independent sources.
The results of event studies have been accepted as evidence in litigation in the US, in the quantification of damages in cases relating to securities fraud. [12]