Superforecaster

A superforecaster is a person who makes forecasts that can be shown by statistical means to have been consistently more accurate than the general public or experts. Superforecasters sometimes use modern analytical and statistical methodologies to augment estimates of base rates of events; research finds that such forecasters are typically more accurate than experts in the field who do not use analytical and statistical techniques. [1] The term "superforecaster" is a trademark of Good Judgment Inc. [2]

Etymology

The term is a combination of the prefix super, meaning "over and above" [3] or "of high grade or quality", [3] and forecaster, meaning one who predicts an outcome that might occur in the future.

History

The term is attributed to Philip E. Tetlock, following results from The Good Judgment Project and his subsequent book with Dan Gardner, Superforecasting: The Art and Science of Prediction. [4]

In December 2019, a Central Intelligence Agency analyst writing under the pseudonym "Bobby W." suggested that the intelligence community should study superforecaster research into why individuals with "particular traits" forecast better, and how such individuals could be leveraged. [5]

In February 2020, Dominic Cummings agreed with Tetlock and others that studying superforecasting was more effective than listening to political pundits. [6]

Superforecasters

Science

Superforecasters estimate the probability of an occurrence and revise that estimate when the circumstances underlying it change. Estimates are based on personal impressions and public data, and incorporate input from other superforecasters, while attempting to remove bias. [7] In The Good Judgment Project, one set of forecasters was given training on how to translate their understanding into a probabilistic forecast, summarised in the acronym "CHAMP": Comparisons, Historical trends, Average opinions, Mathematical models, and Predictable biases. [8]
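
As a concrete illustration of estimating and then revising a probability, the sketch below applies a simple Bayesian odds update; the numbers and the update rule are illustrative assumptions, not a procedure prescribed by the project.

```python
def bayes_update(prior, likelihood_ratio):
    """Revise a probability estimate in light of new evidence.

    likelihood_ratio = P(evidence | event) / P(evidence | no event)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical example: start from a 20% base rate, then observe
# evidence judged three times likelier if the event will occur.
estimate = 0.20
estimate = bayes_update(estimate, likelihood_ratio=3.0)
print(f"revised estimate: {estimate:.2f}")  # 0.43
```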

A study published in 2021 used a Bias, Information, Noise (BIN) model to study the underlying processes enabling accuracy among superforecasters. The conclusion was that superforecasters' ability to filter out "noise" played a more significant role in improving accuracy than bias reduction or the efficient extraction of information. [9]
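
The published BIN model is a statistical model fitted to tournament data; the toy simulation below is only an illustration of the underlying intuition, showing how a systematic bias and random noise each inflate a forecaster's mean Brier score (the squared-error measure standard in forecasting tournaments). All numbers are made up.

```python
import random

random.seed(42)

def brier(forecast, outcome):
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (forecast - outcome) ** 2

def mean_brier(bias, noise_sd, n=100_000):
    """Mean Brier score of a forecaster whose forecasts equal the true
    probability plus a systematic bias and random (idiosyncratic) noise."""
    total = 0.0
    for _ in range(n):
        p_true = random.random()                       # true event probability
        outcome = 1 if random.random() < p_true else 0
        forecast = min(max(p_true + bias + random.gauss(0, noise_sd), 0.0), 1.0)
        total += brier(forecast, outcome)
    return total / n

print(mean_brier(bias=0.0, noise_sd=0.0))  # ideal forecaster (lowest score)
print(mean_brier(bias=0.1, noise_sd=0.0))  # systematically biased
print(mean_brier(bias=0.0, noise_sd=0.1))  # unbiased but noisy
```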

Effectiveness

In the Good Judgment Project, "the top forecasters... performed about 30 percent better than the average for intelligence community analysts who could read intercepts and other secret data". [10] [11]

Training forecasters in specialised techniques may increase their accuracy: in the Good Judgment Project, the group trained in the "CHAMP" methodology appeared to forecast more accurately. [8]

Superforecasters sometimes assign events a probability below 50% and the events nonetheless happen: Bloomberg notes that in the month of the June 2016 Brexit referendum, they put the probability of a Leave vote at 23%. On the other hand, the BBC notes that they accurately predicted Donald Trump's success in the 2016 Republican Party primaries. [12]
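
A 23% forecast that resolves "yes" is not by itself a miss: probabilistic forecasts are judged across many questions, for instance by checking calibration, i.e. whether events assigned roughly 23% happen roughly 23% of the time. A minimal sketch, assuming a hypothetical (probability, outcome) track-record format:

```python
from collections import defaultdict

def calibration_table(forecasts, bins=10):
    """Group (probability, outcome) pairs into bins and compare the
    mean forecast in each bin with the observed event frequency."""
    grouped = defaultdict(list)
    for p, outcome in forecasts:
        grouped[min(int(p * bins), bins - 1)].append((p, outcome))
    for b in sorted(grouped):
        pairs = grouped[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"bin {b}: mean forecast {mean_p:.2f}, observed {freq:.2f}, n={len(pairs)}")

# Hypothetical track record: (forecast probability, outcome) pairs.
history = [(0.23, 1), (0.23, 0), (0.23, 0), (0.90, 1), (0.10, 0)]
calibration_table(history)
```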

Superforecasters also made a number of accurate and important forecasts about the coronavirus pandemic, which "businesses, governments and other institutions" have drawn upon. In addition, they have made "accurate predictions about world events like the approval of the United Kingdom’s Brexit vote in 2020, Saudi Arabia’s decision to partially take its national gas company public in 2019, and the status of Russia’s food embargo against some European countries also in 2019". [13]

Aid agencies are also using superforecasting to determine the probability of droughts becoming famines, [1] while the Center for a New American Security has described how superforecasters aided them in predicting future Colombian government policy. [14] Goldman Sachs drew upon superforecasters' vaccine forecasts during the coronavirus pandemic to inform their analyses. [15]

The Economist notes that in October 2021, Superforecasters accurately predicted events that occurred in 2022, including "election results in France and Brazil; the lack of a Winter Olympics boycott; the outcome of America's midterm elections, and that global Covid-19 vaccinations would reach 12bn doses in mid-2022". However, they did not forecast the emergence of the Omicron variant. [16] The following year, The Economist wrote that all eight of the Superforecasters’ predictions for 2023 were correct, including on global GDP growth, Chinese GDP growth, and election results in Nigeria and Turkey. [17]

In February 2023, Superforecasters made better forecasts than readers of the Financial Times on eight out of nine questions that were resolved at the end of the year. [18]

Traits

One of Tetlock's findings from the Good Judgment Project was that cognitive and personality traits mattered more than specialised knowledge for predicting the outcomes of world events, typically more accurately than intelligence agencies did. [19] In particular, a 2015 study found that the key predictors of forecasting accuracy were "cognitive ability [IQ], political knowledge, and open-mindedness". [20] Superforecasters "were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness". In the Good Judgment Project, the superforecasters "scored higher on both intelligence and political knowledge than the already well-above-average group of forecasters" taking part in the tournament. [21]

People

Related Research Articles

Prediction (statement about a future event)

A prediction or forecast is a statement about a future event or about future data. Predictions are often, but not always, based upon experience or knowledge of forecasters. There is no universal agreement about the exact difference between "prediction" and "estimation"; different authors and disciplines ascribe different connotations.

Forecasting is the process of making predictions based on past and present data. Later these can be compared (resolved) against what happens. For example, a company might estimate its revenue in the next year, then compare it against the actual results, creating a variance analysis. Prediction is a similar but more general term. Forecasting might refer to specific formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods or the process of prediction and resolution itself. Usage can vary between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.

Prediction markets, also known as betting markets, information markets, decision markets, idea futures or event derivatives, are open markets that enable the prediction of specific outcomes using financial incentives. They are exchange-traded markets established for trading bets on the outcome of various events. The market prices can indicate what the crowd thinks the probability of the event is. A typical prediction market contract is set up to trade between 0 and 100%. The most common form of a prediction market is a binary option market, which will expire at the price of 0 or 100%. Prediction markets can be thought of as belonging to the more general concept of crowdsourcing, which is specially designed to aggregate information on particular topics of interest. The main purposes of prediction markets are eliciting and aggregating beliefs over an unknown future outcome. Traders with different beliefs trade on contracts whose payoffs are related to the unknown future outcome, and the market prices of the contracts are considered the aggregated belief.
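
For example, reading a binary contract's price as the crowd's probability (a simplification that ignores fees and risk premia), a trader's expected profit can be computed directly; the numbers below are hypothetical:

```python
def expected_profit(price, believed_probability, payout=100):
    """Expected profit per contract for the buyer of a binary contract
    that pays `payout` if the event occurs and 0 otherwise."""
    return believed_probability * payout - price

# A contract trading at 23 implies a ~23% market probability; a trader
# who believes the true probability is 30% expects to profit on average.
print(expected_profit(price=23, believed_probability=0.30))  # 7.0
```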

There are two main uses of the term calibration in statistics that denote special types of statistical inference problems. Calibration can mean a reverse process to regression, where a known observation of the dependent variables is used to predict a corresponding explanatory variable, or procedures in statistical classification to determine class membership probabilities, which assess the uncertainty of a new observation belonging to each of the already established classes.

In the psychology of affective forecasting, the impact bias, a form of which is the durability bias, is the tendency for people to overestimate the length or the intensity of future emotional states.

Planning fallacy (cognitive bias of underestimating time needed)

The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed. This phenomenon sometimes occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias affects predictions only about one's own tasks. On the other hand, when outside observers predict task completion times, they tend to exhibit a pessimistic bias, overestimating the time needed. The planning fallacy involves estimates of task completion times more optimistic than those encountered in similar projects in the past.

The Wisdom of Crowds (2004 book by James Surowiecki)

The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, published in 2004, is a book written by James Surowiecki about the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies and anecdotes to illustrate its argument, and touches on several fields, primarily economics and psychology.

Affective forecasting, also known as hedonic forecasting or the hedonic forecasting mechanism, is the prediction of one's affect in the future. As a process that influences preferences, decisions, and behavior, affective forecasting is studied by both psychologists and economists, with broad applications.

The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.

In statistics, a forecast error is the difference between the actual or real and the predicted or forecast value of a time series or any other phenomenon of interest. Since the forecast error is derived from the same scale of data, comparisons between the forecast errors of different series can only be made when the series are on the same scale.

The wisdom of the crowd is the collective opinion of a diverse and independent group of individuals rather than that of a single expert. This process, while not new to the Information Age, has been pushed into the mainstream spotlight by social information sites such as Quora, Reddit, Stack Exchange, Wikipedia, Yahoo! Answers, and other web resources which rely on collective human knowledge. An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise.
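
A small simulation of that explanation (all values hypothetical): individual judgments scatter widely around the truth, while their average lands much closer.

```python
import random

random.seed(0)
true_value = 100.0

# Each judgment is the true value plus independent idiosyncratic noise.
judgments = [true_value + random.gauss(0, 20) for _ in range(1_000)]

crowd_average = sum(judgments) / len(judgments)
print(f"typical individual error: ~20, crowd error: {abs(crowd_average - true_value):.2f}")
```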

Reference class forecasting or comparison class forecasting is a method of predicting the future by looking at similar past situations and their outcomes. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky. The theoretical work helped Kahneman win the Nobel Prize in Economics.
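
A minimal sketch of the outside view, with hypothetical data: rather than reasoning from the specifics of a new project, read forecasts off the distribution of outcomes in a reference class of similar past projects.

```python
import statistics

# Hypothetical reference class: cost-overrun ratios of similar past projects.
past_overruns = [1.0, 1.1, 1.2, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.4]

# Outside view: forecast the new project from the class distribution.
print("median overrun:", statistics.median(past_overruns))  # 1.4
deciles = statistics.quantiles(past_overruns, n=10)
print("90th-percentile overrun:", deciles[-1])  # 2.36
```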

Philip E. Tetlock (Canadian-American political scientist)

Philip E. Tetlock is a Canadian-American political science writer, and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences. He was elected a Member of the American Philosophical Society in 2019.

Illusion of validity is a cognitive bias in which a person overestimates their ability to interpret and predict accurately the outcome when analyzing a set of data, in particular when the data analyzed show a very consistent pattern—that is, when the data "tell" a coherent story.

The Good Judgment Project (GJP) is an organization dedicated to "harnessing the wisdom of the crowd to forecast world events". It was co-created by Philip E. Tetlock, decision scientist Barbara Mellers, and Don Moore, all professors at the University of Pennsylvania.

Aggregative Contingent Estimation (ACE) was a program of the Office of Incisive Analysis (OIA) at the Intelligence Advanced Research Projects Activity (IARPA). The program ran from June 2010 until June 2015.

Superforecasting: The Art and Science of Prediction (2015 book by Philip E. Tetlock and Dan Gardner)

Superforecasting: The Art and Science of Prediction is a 2015 book by Philip E. Tetlock and Dan Gardner. It details findings from The Good Judgment Project.

Debiasing is the reduction of bias, particularly with respect to judgment and decision making. Biased judgment and decision making is that which systematically deviates from the prescriptions of objective standards such as facts, logic, and rational behavior or prescriptive norms. Biased judgment and decision making exists in consequential domains such as medicine, law, policy, and business, as well as in everyday life. Investors, for example, tend to hold onto falling stocks too long and sell rising stocks too quickly. Employers exhibit considerable discrimination in hiring and employment practices, and some parents continue to believe that vaccinations cause autism despite knowing that this link is based on falsified evidence. At an individual level, people who exhibit less decision bias have more intact social environments, reduced risk of alcohol and drug use, lower childhood delinquency rates, and superior planning and problem solving abilities.

Unanimous A.I. (American technology company specializing in artificial swarm intelligence)

Unanimous AI is an American technology company that provides artificial swarm intelligence (ASI) technology. Its "human swarming" platform, swarm.ai, allows distributed groups of users to collectively predict answers to questions. This process has resulted in successful predictions of major events such as the Kentucky Derby, the Oscars, the Stanley Cup, presidential elections, and the World Series.

Barbara Ann Mellers is the I. George Heyman University Professor of Psychology at the University of Pennsylvania. Her research focuses on decision processes.

References

  1. Adonis (2020).
  2. "Trademark Electronic Search System (TESS)". tmsearch.uspto.gov. Retrieved 5 January 2023.
  3. "Super Definition & Meaning". Merriam-Webster. Archived from the original on 1 November 2023. Retrieved 1 November 2023.
  4. Tetlock & Gardner (2015).
  5. Bobby W. (2019), p. 14.
  6. BBC News (2020).
  7. BBC News (2020), What is the science behind it?.
  8. Harford (2014), How to be a superforecaster.
  9. Satopää, Ville A.; Salikhov, Marat; Tetlock, Philip E.; Mellers, Barbara (2021). "Bias, Information, Noise: The BIN Model of Forecasting". Management Science. 67 (12): 7599–7618. doi:10.1287/mnsc.2020.3882.
  10. David Ignatius. "More chatter than needed". The Washington Post. 1 November 2013.
  11. Horowitz MC, Ciocca J, Kahn L, Ruhl C. "Keeping Score: A New Approach to Geopolitical Forecasting" (PDF). Perry World House, University of Pennsylvania. 2021, p. 9.
  12. BBC News (2020), How successful is it?.
  13. Tara Law. "'Superforecasters' Are Making Eerily Accurate Predictions About COVID-19. Our Leaders Could Learn From Their Approach." TIME. 11 June 2020.
  14. Cochran KM, Tozzi G. "Getting it Righter, Faster: The Role of Prediction in Agile Government Decisionmaking". Center for a New American Security. 2017.
  15. Hatzius J, Struyven D, Bhushan S, Milo D. "V(accine)-Shaped Recovery". Goldman Sachs Economics Research. 7 November 2020.
  16. "What the “superforecasters” predict for major events in 2023". The Economist. 18 November 2022.
  17. "What the “superforecasters” predict for major events in 2024". The Economist. 13 November 2023.
  18. "The art of superforecasting: how FT readers fared against the experts in 2023". Financial Times. 26 December 2023.
  19. Burton (2015).
  20. Mellers B, Stone E, Atanasov P, Rohrbaugh N, Metz SE, Ungar L, et al. "The psychology of intelligence analysis: drivers of prediction accuracy in world politics" (PDF). Journal of Experimental Psychology: Applied. 2015;21(1):1–14.
  21. Mellers B, Stone E, Atanasov P, Rohrbaugh N, Metz SE, Ungar L, et al. "The psychology of intelligence analysis: drivers of prediction accuracy in world politics" (PDF). Journal of Experimental Psychology: Applied. 2015;21(1):1–14.
  22. Nilaya (2015), Guests.
  23. "Superforecasting: The Future's Chequered Past and Present". whynow. 8 February 2021. Retrieved 17 July 2021.
  24. "Superforecaster Profiles". Good Judgment. Retrieved 17 July 2021.

Further reading