Wisdom of the crowd

The wisdom of the crowd is the collective opinion of a diverse and independent group of individuals rather than that of a single expert. This process, while not new to the Information Age, has been pushed into the mainstream spotlight by social information sites such as Quora, Reddit, Stack Exchange, Wikipedia, Yahoo! Answers, and other web resources which rely on collective human knowledge. [1] An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise. [2]

Trial by jury can be understood as at least partly relying on wisdom of the crowd, in contrast to a bench trial, which relies on one or a few experts. In politics, sortition is sometimes held up as an example of what wisdom of the crowd would look like: decision-making by a diverse group instead of by a fairly homogeneous political group or party. Research within cognitive science has sought to model the relationship between wisdom-of-the-crowd effects and individual cognition.

A large group's aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning have generally been found to be as good as, and often superior to, the answer given by any of the individuals within the group.

Jury theorems from social choice theory provide formal arguments for wisdom of the crowd given a variety of more or less plausible assumptions. Both the assumptions and the conclusions remain controversial, even though the theorems themselves are not. The oldest and simplest is Condorcet's jury theorem (1785).
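
The theorem's mechanics can be illustrated directly: if each of n independent voters is correct with probability p, the chance that a majority is correct follows from the binomial distribution. The sketch below (illustrative code, not drawn from any cited source) shows how rapidly that chance approaches certainty once p exceeds 0.5:

```python
from math import comb

def majority_correct_probability(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters is correct,
    given each voter is right with probability p (n odd, so there are no ties)."""
    k_min = n // 2 + 1  # smallest winning number of correct votes
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Condorcet's conclusion in miniature: with p = 0.6, larger juries do better.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.6), 4))
```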

Examples

Aristotle is credited as the first person to write about the "wisdom of the crowd" in his work Politics. [3] [4] According to Aristotle, "it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively, than those who are so, just as public dinners to which many contribute are better than those supplied at one man's cost". [5]

Sir Francis Galton by Charles Wellington Furse, given to the National Portrait Gallery, London in 1954

The classic wisdom-of-the-crowds finding involves point estimation of a continuous quantity. At a 1906 country fair in Plymouth, 800 people participated in a contest to estimate the weight of a slaughtered and dressed ox. Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds. [6] This has contributed to the insight in cognitive science that a crowd's individual judgments can be modeled as a probability distribution of responses with the median centered near the true value of the quantity to be estimated. [7]
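
A minimal simulation of this distributional model (with a purely illustrative assumption of 10% independent noise per guess) shows why the median of 800 such guesses reliably lands within about 1% of the truth:

```python
import random
import statistics

random.seed(0)
TRUE_WEIGHT = 1198  # pounds; the ox's actual dressed weight in Galton's account

# Hypothetical noise model: each of 800 fairgoers errs independently by ~10%.
guesses = [random.gauss(TRUE_WEIGHT, 0.10 * TRUE_WEIGHT) for _ in range(800)]

median_guess = statistics.median(guesses)
error_pct = 100 * abs(median_guess - TRUE_WEIGHT) / TRUE_WEIGHT
print(f"median guess: {median_guess:.0f} lb ({error_pct:.2f}% off)")
# Individual guesses are off by ~10% on average, yet the median
# typically falls well within 1% of the true weight.
```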

In recent years, the "wisdom of the crowd" phenomenon has been leveraged in business strategy, advertising, and political research. Marketing firms aggregate consumer feedback and brand impressions for clients; companies such as Trada invoke crowds to design advertisements based on clients' requirements; [8] and political preferences are aggregated to predict or nowcast elections. [9] [10]

Non-human examples are prevalent. For example, the golden shiner is a fish that prefers shady areas. A single shiner has great difficulty finding shady regions in a body of water, whereas a large group is much more efficient at finding the shade. [11]

Higher-dimensional problems and modeling

Although classic wisdom-of-the-crowds findings center on point estimates of single continuous quantities, the phenomenon also scales up to higher-dimensional problems that do not lend themselves to aggregation methods such as taking the mean, and more complex models have been developed for these purposes. Examples of higher-dimensional problems that exhibit wisdom-of-the-crowds effects include minimum spanning tree problems, [12] ranking and ordering tasks, [13] [14] [15] [16] and multi-armed bandit problems. [17]

In further exploring ways to improve the results, a technique called the "surprisingly popular" answer was developed by scientists at MIT's Sloan Neuroeconomics Lab in collaboration with Princeton University. For a given question, people are asked to give two responses: what they think the right answer is, and what they think popular opinion will be. The answer that proves more popular than the crowd predicted (the "surprisingly popular" answer) is selected as correct. The algorithm was found to reduce errors by 21.3 percent compared to simple majority votes, by 24.2 percent compared to basic confidence-weighted votes (where people express how confident they are in their answers), and by 22.2 percent compared to advanced confidence-weighted votes (which use only the answers with the highest average confidence). [18]
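
In outline, the method selects the answer whose actual popularity most exceeds its predicted popularity. The sketch below is a simplified reading of that idea, not Prelec and colleagues' exact estimator, and the example data are invented:

```python
from collections import Counter

def surprisingly_popular(votes, predicted_shares):
    """Return the answer whose actual vote share most exceeds the crowd's
    average predicted share for it (a simplified 'surprisingly popular' rule).

    votes            -- each respondent's own answer, e.g. ["yes", "no", ...]
    predicted_shares -- one dict per respondent mapping each option to the
                        fraction of the crowd they expect to choose it
    """
    n = len(votes)
    actual = {option: count / n for option, count in Counter(votes).items()}
    predicted = {option: sum(p.get(option, 0.0) for p in predicted_shares) / n
                 for option in actual}
    return max(actual, key=lambda option: actual[option] - predicted[option])

# "Is Philadelphia the capital of Pennsylvania?" Most answer yes, but those
# who know it is Harrisburg also expect most others to answer yes:
votes = ["yes"] * 65 + ["no"] * 35
predicted_shares = [{"yes": 0.7, "no": 0.3}] * 65 + [{"yes": 0.9, "no": 0.1}] * 35
print(surprisingly_popular(votes, predicted_shares))  # -> "no"
```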

Definition of crowd

In the context of wisdom of the crowd, the term crowd takes on a broad meaning. One definition characterizes a crowd as a group of people amassed by an open call for participation. [19]

In the digital age, the potential for collective intelligence has expanded with the advent of information technologies and social media platforms such as Google, Facebook, Twitter, and others. These platforms enable the aggregation of opinions and knowledge on a massive scale, creating what some have defined as "intelligent communities." [20] However, the effectiveness of these digital crowds can be compromised by issues such as demographic biases, the influence of highly active users, and the presence of bots, which can skew the diversity and independence necessary for a crowd to be truly wise. To mitigate these issues, researchers have suggested using a multi-media approach to aggregate intelligence from various platforms or employing factor analysis to filter out biases and noise. [21]

While crowds are often leveraged in online applications, they can also be utilized in offline contexts. [19] In some cases, members of a crowd may be offered monetary incentives for participation. [22] Certain applications of "wisdom of the crowd", such as jury duty in the United States, mandate crowd participation. [23]

Analogues with individual cognition: the "crowd within"

The insight that crowd responses to an estimation task can be modeled as a sample from a probability distribution invites comparisons with individual cognition. In particular, it is possible that individual cognition is probabilistic in the sense that individual estimates are drawn from an "internal probability distribution." If this is the case, then two or more estimates of the same quantity from the same person should average to a value closer to ground truth than either of the individual judgments, since the effect of statistical noise within each of these judgments is reduced. This of course rests on the assumption that the noise associated with each judgment is (at least somewhat) statistically independent. Thus, the crowd needs to be not only independent but also diverse, so that a variety of answers is represented: errors at opposite ends of the spectrum cancel each other out, allowing the wisdom-of-the-crowd phenomenon to emerge. Another caveat is that individual probability judgments are often biased toward extreme values (e.g., 0 or 1). Thus any beneficial effect of multiple judgments from the same person is likely to be limited to samples from an unbiased distribution. [24]

Vul and Pashler (2008) asked participants for point estimates of continuous quantities associated with general world knowledge, such as "What percentage of the world's airports are in the United States?" Without being alerted to the procedure in advance, half of the participants were immediately asked to make a second, different guess in response to the same question, and the other half were asked to do this three weeks later. The average of a participant's two guesses was more accurate than either individual guess. Furthermore, the averages of guesses made in the three-week delay condition were more accurate than guesses made in immediate succession. One explanation of this effect is that guesses in the immediate condition were less independent of each other (an anchoring effect) and were thus subject to (some of) the same kind of noise. In general, these results suggest that individual cognition may indeed be subject to an internal probability distribution characterized by stochastic noise, rather than consistently producing the best answer based on all the knowledge a person has. [24] These results were mostly confirmed in a high-powered pre-registered replication. [25] The only result that was not fully replicated was that a delay in the second guess generates a better estimate.
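
This account can be made concrete with a simulation. The model below is hypothetical: each guess equals the truth plus a stable personal bias plus fresh independent noise. Averaging two guesses from one person then cancels part of the noise but none of the bias, which is why the "crowd within" helps less than a crowd of two:

```python
import random

random.seed(1)
TRUTH, N = 30.0, 10_000  # hypothetical true value and number of simulated people

def rmse(estimates):
    return (sum((e - TRUTH) ** 2 for e in estimates) / len(estimates)) ** 0.5

first, second = [], []
for _ in range(N):
    bias = random.gauss(0, 5)  # stable error shared by one person's guesses
    first.append(TRUTH + bias + random.gauss(0, 5))   # fresh noise per guess
    second.append(TRUTH + bias + random.gauss(0, 5))

within = [(a + b) / 2 for a, b in zip(first, second)]            # same person twice
between = [(a + b) / 2 for a, b in zip(first, reversed(second))] # two different people

print("single guess RMSE:  ", round(rmse(first), 2))    # ~7.1
print("within-person RMSE: ", round(rmse(within), 2))   # ~6.1: noise halved, bias intact
print("between-person RMSE:", round(rmse(between), 2))  # ~5.0: bias averages out too
```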

Hourihan and Benjamin (2010) tested the hypothesis that the estimate improvements observed by Vul and Pashler in the delayed responding condition were the result of increased independence of the estimates. To do this, Hourihan and Benjamin capitalized on variations in memory span among their participants. In support of this, they found that averaging the repeated estimates of participants with lower memory spans yielded greater improvements than averaging the repeated estimates of those with larger memory spans. [26]

Rauhut and Lorenz (2011) expanded on this research by again asking participants to make estimates of continuous quantities related to real world knowledge. In this case participants were informed that they would make five consecutive estimates. This approach allowed the researchers to determine, first, how many times one needs to ask oneself in order to match the accuracy of asking others, and second, the rate at which repeated self-estimates improve accuracy compared to asking others. The authors concluded that asking oneself an infinite number of times does not surpass the accuracy of asking just one other individual. Overall, they found little support for a so-called "mental distribution" from which individuals draw their estimates; in fact, they found that in some cases asking oneself multiple times actually reduces accuracy. Ultimately, they argue that the results of Vul and Pashler (2008) overestimate the wisdom of the "crowd within", as their results show that asking oneself more than three times actually reduces accuracy to levels below that reported by Vul and Pashler (who only asked participants to make two estimates). [27]

Müller-Trede (2011) attempted to investigate the types of questions in which utilizing the "crowd within" is most effective. He found that while accuracy gains were smaller than would be expected from averaging one's estimates with another individual, repeated judgments led to increases in accuracy for both year estimation questions (e.g., when was the thermometer invented?) and questions about estimated percentages (e.g., what percentage of internet users connect from China?). General numerical questions (e.g., what is the speed of sound, in kilometers per hour?) did not improve with repeated judgments, while averaging individual judgments with those of a random other did improve accuracy. This, Müller-Trede argues, is the result of the bounds implied by year and percentage questions. [28]

Van Dolder and Van den Assem (2018) studied the "crowd within" using a large database from three estimation competitions organised by Holland Casino. For each of these competitions, they find that within-person aggregation indeed improves accuracy of estimates. Furthermore, they also confirm that this method works better if there is a time delay between subsequent judgments. Even with considerable delay between estimates, between-person aggregation is more beneficial. The average of a large number of judgements from the same person is barely better than the average of two judgements from different people. [29]

Dialectical bootstrapping: improving the estimates of the "crowd within"

Herzog and Hertwig (2009) attempted to improve on the "wisdom of many in one mind" (i.e., the "crowd within") by asking participants to use dialectical bootstrapping. Dialectical bootstrapping involves the use of dialectic (reasoned discussion between two or more parties with opposing views, in an attempt to determine the best answer) and bootstrapping (advancing oneself without the assistance of external forces). They posited that people should be able to make greater improvements on their original estimates by basing the second estimate on antithetical information. These second estimates, based on different assumptions and knowledge than those used to generate the first estimate, would then carry a different error (both systematic and random) than the first estimate, increasing the accuracy of the average judgment. From an analytical perspective, dialectical bootstrapping should increase accuracy so long as the dialectical estimate is not too far off and the errors of the first and dialectical estimates are different. To test this, Herzog and Hertwig asked participants to make a series of date estimations regarding historical events (e.g., when electricity was discovered), without knowledge that they would be asked to provide a second estimate. Next, half of the participants were simply asked to make a second estimate. The other half were asked to use a consider-the-opposite strategy to make dialectical estimates (using their initial estimates as a reference point). Specifically, these participants were asked to imagine that their initial estimate was off, to consider what information might have been wrong, what that alternative information would suggest, whether it would have made their estimate an overestimate or an underestimate, and, finally, what new estimate this perspective would imply. Results of this study revealed that while dialectical bootstrapping did not outperform the wisdom of the crowd (averaging each participant's first estimate with that of a random other participant), it did render better estimates than simply asking individuals to make two estimates. [30]

Hirt and Markman (1995) found that participants need not be limited to a consider-the-opposite strategy in order to improve judgments. Researchers asked participants to consider-an-alternative – operationalized as any plausible alternative (rather than simply focusing on the "opposite" alternative) – finding that simply considering an alternative improved judgments. [31]

Not all studies have shown support for the "crowd within" improving judgments. Ariely and colleagues asked participants to provide responses based on their answers to true-false items and their confidence in those answers. They found that while averaging judgment estimates between individuals significantly improved estimates, averaging repeated judgment estimates made by the same individuals did not significantly improve estimates. [32]

Challenges and solution approaches

Wisdom-of-the-crowds research routinely attributes the superiority of crowd averages over individual judgments to the elimination of individual noise, [33] an explanation that assumes independence of the individual judgments from each other. [7] [24] Thus the crowd tends to make its best decisions if it is made up of diverse opinions and ideologies.

Averaging can eliminate random errors that affect each person's answer in a different way, but not systematic errors that affect the opinions of the entire crowd in the same way. For instance, a wisdom-of-the-crowd technique would not be expected to compensate for cognitive biases. [34] [35]

Scott E. Page introduced the diversity prediction theorem: "The squared error of the collective prediction equals the average squared error minus the predictive diversity". Therefore, when the diversity in a group is large, the error of the crowd is small. [36]
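
Written out in standard notation (rather than Page's own symbols), with individual predictions x_i, their mean x̄ serving as the collective prediction, and true value θ, the theorem is the algebraic identity:

```latex
\underbrace{(\bar{x} - \theta)^2}_{\text{collective error}}
  = \underbrace{\frac{1}{n}\sum_{i=1}^{n} (x_i - \theta)^2}_{\text{average individual error}}
  - \underbrace{\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2}_{\text{predictive diversity}}
```

Since the diversity term is never negative, the collective prediction is never worse than the average individual prediction, and it improves exactly to the extent that predictions disagree.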

Miller and Steyvers reduced the independence of individual responses in a wisdom-of-the-crowds experiment by allowing limited communication between participants. Participants were asked to answer ordering questions about general knowledge, such as the order of U.S. presidents. For half of the questions, each participant started with the ordering submitted by another participant (and was alerted to this fact), and for the other half, they started with a random ordering, and in both cases were asked to rearrange them (if necessary) to the correct order. Answers where participants started with another participant's ranking were on average more accurate than those from the random starting condition. Miller and Steyvers conclude that different item-level knowledge among participants is responsible for this phenomenon, and that participants integrated and augmented previous participants' knowledge with their own knowledge. [37]

Crowds tend to work best when there is a correct answer to the question being posed, such as a question about geography or mathematics. [38] When there is not a precise answer crowds can come to arbitrary conclusions. [39] Wisdom-of-the-crowd algorithms thrive when individual responses exhibit proximity and a symmetrical distribution around the correct, albeit unknown, answer. This symmetry allows errors in responses to cancel each other out during the averaging process. Conversely, these algorithms may falter when the subset of correct answers is limited, failing to counteract random biases. This challenge is particularly pronounced in online settings where individuals, often with varying levels of expertise, respond anonymously. Some "wisdom-of-the-crowd" algorithms tackle this issue using expectation–maximization voting techniques. The Wisdom-IN-the-crowd (WICRO) algorithm [35] offers a one-pass classification solution. It gauges the expertise level of individuals by assessing the relative "distance" between them. Specifically, the algorithm identifies experts by presuming that their responses will be relatively "closer" to each other when addressing questions within their field of expertise. This approach enhances the algorithm's ability to discern expertise levels in scenarios where only a small subset of participants possess proficiency in a given domain, mitigating the impact of potential biases that may arise during anonymous online interactions. [35] [40]
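
The published WICRO procedure is more elaborate than can be reproduced here, but a toy sketch (hypothetical data and function name, not the authors' algorithm) conveys the distance intuition: respondents whose answer vectors lie close to other respondents' are treated as the likelier experts, and only that tight core is averaged:

```python
import numpy as np

def distance_based_consensus(answers: np.ndarray, keep: float = 0.5) -> np.ndarray:
    """Toy illustration of distance-based expertise detection (not the
    published WICRO algorithm): keep the fraction of respondents whose
    answers sit closest to everyone else's, and average only those.

    answers -- shape (n_respondents, n_questions)
    """
    diffs = answers[:, None, :] - answers[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))   # pairwise respondent distances
    np.fill_diagonal(dists, np.nan)
    closeness = np.nanmean(dists, axis=1)        # mean distance to all others
    core = np.argsort(closeness)[: max(1, int(keep * len(answers)))]
    return answers[core].mean(axis=0)

# Hypothetical crowd: 5 experts clustered near the true answers (all 10),
# 15 novices scattered uniformly; the expert cluster is found by distance.
rng = np.random.default_rng(0)
experts = rng.normal(10.0, 0.5, size=(5, 8))
novices = rng.uniform(0.0, 40.0, size=(15, 8))
print(distance_based_consensus(np.vstack([experts, novices]), keep=0.25).round(1))
```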

The wisdom of the crowd effect is easily undermined. Social influence can cause the average of the crowd's answers to be inaccurate, while the geometric mean and the median are more robust. [41] Weighting answers effectively relies on knowing each individual's uncertainty about, and confidence in, their estimate: the average answer of individuals who are knowledgeable about a topic will differ from the average of individuals who know nothing of it, and a simple average of knowledgeable and inexperienced opinions will be less accurate than one in which answers are weighted by the uncertainty and confidence behind them.
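
A small numerical example with made-up estimates illustrates the robustness claim: a minority of answers herding toward a loud outlier drags the arithmetic mean far more than the median or the geometric mean:

```python
import statistics

# Hypothetical estimates of a quantity whose true value is 100: twenty
# independent answers near the truth, then ten that herd toward an early,
# loudly announced answer of 500 (social influence).
independent = [92, 95, 96, 97, 98, 98, 99, 100, 100, 101,
               101, 102, 102, 103, 103, 104, 105, 106, 108, 110]
influenced = [500, 480, 450, 420, 400, 380, 350, 320, 300, 280]
estimates = independent + influenced

print("mean:          ", round(statistics.mean(estimates), 1))            # ~196.7
print("geometric mean:", round(statistics.geometric_mean(estimates), 1))  # ~157
print("median:        ", round(statistics.median(estimates), 1))          # 103.5
```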

Experiments run by the Swiss Federal Institute of Technology found that when a group of people were asked to answer a question together, they would attempt to come to a consensus, which frequently caused the accuracy of the answer to decrease. One suggestion to counter this effect is to ensure that the group contains a population with diverse backgrounds. [39]

Research from the Good Judgment Project showed that teams organized in prediction polls can avoid premature consensus and produce aggregate probability estimates that are more accurate than those produced in prediction markets. [42]

Related Research Articles

Cognitive bias

A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, and irrationality.

Prediction markets, also known as betting markets, information markets, decision markets, idea futures or event derivatives, are open markets that enable the prediction of specific outcomes using financial incentives. They are exchange-traded markets established for trading bets on the outcome of various events. The market prices can indicate what the crowd thinks the probability of the event is. A typical prediction market contract is set up to trade between 0 and 100%. The most common form of a prediction market is a binary option market, which will expire at the price of 0 or 100%. Prediction markets can be thought of as belonging to the more general concept of crowdsourcing, which is specially designed to aggregate information on particular topics of interest. The main purposes of prediction markets are eliciting and aggregating beliefs over an unknown future outcome. Traders with different beliefs trade on contracts whose payoffs are related to the unknown future outcome, and the market prices of the contracts are considered the aggregated belief.

The availability heuristic, also known as availability bias, is a mental shortcut that relies on immediate examples that come to a given person's mind when evaluating a specific topic, concept, method, or decision. This heuristic, operating on the notion that, if something can be recalled, it must be important, or at least more important than alternative solutions not as readily recalled, is inherently biased toward recently acquired information.

The representativeness heuristic is used when making judgments about the probability of an event being representative in character and essence of a known prototypical event. It is one of a group of heuristics proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated". The representativeness heuristic works by comparing an event to a prototype or stereotype that we already have in mind. For example, if we see a person who is dressed in eccentric clothes and reading a poetry book, we might be more likely to think that they are a poet than an accountant. This is because the person's appearance and behavior are more representative of the stereotype of a poet than an accountant.

The conjunction fallacy is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy.

The anchoring effect is a psychological phenomenon in which an individual's judgments or decisions are influenced by a reference point or "anchor" which can be completely irrelevant. Both numeric and non-numeric anchoring have been reported in research. In numeric anchoring, once the value of the anchor is set, subsequent arguments, estimates, etc. made by an individual may change from what they would have otherwise been without the anchor. For example, an individual may be more likely to purchase a car if it is placed alongside a more expensive model. Prices discussed in negotiations that are lower than the anchor may seem reasonable, perhaps even cheap to the buyer, even if said prices are still relatively higher than the actual market value of the car. Another example may be when estimating the orbit of Mars, one might start with the Earth's orbit and then adjust upward until they reach a value that seems reasonable.

The Wisdom of Crowds

The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, published in 2004, is a book written by James Surowiecki about the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies and anecdotes to illustrate its argument, and touches on several fields, primarily economics and psychology.

The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.

Lie detection is an assessment of a verbal statement with the goal of revealing a possible intentional deceit. Lie detection may refer to a cognitive process of detecting deception by evaluating message content as well as non-verbal cues. It may also refer to questioning techniques used along with technology that records physiological functions to ascertain truth and falsehood in response. The latter is commonly used by law enforcement in the United States, but rarely in other countries, because it is based on pseudoscience.

In computer science, an anytime algorithm is an algorithm that can return a valid solution to a problem even if it is interrupted before it ends. The algorithm is expected to find better and better solutions the longer it keeps running.

Philip E. Tetlock

Philip E. Tetlock is a Canadian-American political science writer, and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences. He was elected a Member of the American Philosophical Society in 2019.

Convergent thinking is a term coined by Joy Paul Guilford as the opposite of divergent thinking. It generally means the ability to give the "correct" answer to questions that do not require novel ideas, for instance on standardized multiple-choice tests for intelligence.

Metamemory or Socratic awareness, a type of metacognition, is both the introspective knowledge of one's own memory capabilities and the processes involved in memory self-monitoring. This self-awareness of memory has important implications for how people learn and use memories. When studying, for example, students make judgments of whether they have successfully learned the assigned material and use these decisions, known as "judgments of learning", to allocate study time.

Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.

Heuristics is the process by which humans use mental shortcuts to arrive at decisions. Heuristics are simple strategies that humans, animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. In that sense they can differ from answers given by logic and probability.

Cultural consensus theory is an approach to information pooling which supports a framework for the measurement and evaluation of beliefs as cultural, i.e., shared to some extent by a group of individuals. Cultural consensus models guide the aggregation of responses from individuals to estimate (1) the culturally appropriate answers to a series of related questions and (2) individual competence in answering those questions. The theory is applicable when there is sufficient agreement across people to assume that a single set of answers exists. The agreement between pairs of individuals is used to estimate individual cultural competence. Answers are estimated by weighting responses of individuals by their competence and then combining responses.

The hard–easy effect is a cognitive bias that manifests itself as a tendency to overestimate the probability of one's success at a task perceived as hard, and to underestimate the likelihood of one's success at a task perceived as easy. The hard-easy effect takes place, for example, when individuals exhibit a degree of underconfidence in answering relatively easy questions and a degree of overconfidence in answering relatively difficult questions. "Hard tasks tend to produce overconfidence but worse-than-average perceptions," reported Katherine A. Burson, Richard P. Larrick, and Jack B. Soll in a 2005 study, "whereas easy tasks tend to produce underconfidence and better-than-average effects."

The Good Judgment Project (GJP) is an organization dedicated to "harnessing the wisdom of the crowd to forecast world events". It was co-created by Philip E. Tetlock, decision scientist Barbara Mellers, and Don Moore, all professors at the University of Pennsylvania.

Hal Pashler is Distinguished Professor of Psychology at the University of California, San Diego. An experimental psychologist and cognitive scientist, Pashler is best known for his studies of human attentional limitations and for his work on visual attention. He has also developed and tested new methods for enhancing learning and reducing forgetting, focusing on the temporal spacing of learning and retrieval practice.

The surprisingly popular answer is a wisdom of the crowd technique that taps into the expert minority opinion within a crowd. For a given question, a group is asked two questions: what they believe the right answer is, and what they think popular opinion will be.

References

  1. Baase, Sara (2007). A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet. 3rd edition. Prentice Hall. pp. 351–357. ISBN   0-13-600848-8.
  2. Yi, Sheng Kung Michael; Steyvers, Mark; Lee, Michael D.; Dry, Matthew J. (April 2012). "The Wisdom of the Crowd in Combinatorial Problems". Cognitive Science. 36 (3): 452–470. doi:10.1111/j.1551-6709.2011.01223.x. PMID 22268680.
  3. Ober, Josiah (September 2009). "An Aristotelian middle way between deliberation and independent-guess aggregation" (PDF). Princeton/Stanford Working Papers in Classics. Stanford, California: Stanford University.
  4. Landemore, Hélène (2012). "Collective Wisdom—Old and New" (PDF). In Landemore, Hélène; Elster, Jon (eds.). Collective wisdom: principles and mechanisms. Cambridge, England: Cambridge University Press. ISBN   9781107010338. OCLC   752249923.
  5. Aristotle (1967) [4th century BC]. "III". Politics . Translated by Rackham, H. Cambridge, Massachusetts: Loeb Classical Library. p. 1281b. ASIN   B00JD13IJW.
  6. Galton, Francis (1907). "Vox populi". Nature . 75 (1949): 450–451. Bibcode:1907Natur..75..450G. doi: 10.1038/075450a0 .
  7. Surowiecki, James (2004). The Wisdom of Crowds. Doubleday. p. 10. ISBN 978-0-385-50386-0.
  8. Rich, Laura (August 4, 2010). "Tapping the Wisdom of the Crowd". The New York Times . ISSN   0362-4331 . Retrieved April 3, 2017.
  9. Sjöberg, Lennart (September 2008). "Are all crowds equally wise? A comparison of political election forecasts by experts and the public". Journal of Forecasting. 28 (1): 1–18. doi:10.1002/for.1083. S2CID 153631270.
  10. Murr, Andreas E. (September 2015). "The wisdom of crowds: Applying Condorcet's jury theorem to forecasting US presidential elections". International Journal of Forecasting. 31 (3): 916–929. doi:10.1016/j.ijforecast.2014.12.002.
  11. Yong, Ed (January 31, 2013). "The Real Wisdom of the Crowds". Phenomena. Archived from the original on February 3, 2013. Retrieved April 2, 2017.
  12. Yi, S.K.M., Steyvers, M., Lee, M.D., and Dry, M. (2010). Wisdom of Crowds in Minimum Spanning Tree Problems. Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum.
  13. Lee, Michael D.; Steyvers, Mark; de Young, Mindy; Miller, Brent J. (January 2012). "Inferring Expertise in Knowledge and Prediction Ranking Tasks". Topics in Cognitive Science. 4 (1): 151–163. CiteSeerX   10.1.1.303.822 . doi:10.1111/j.1756-8765.2011.01175.x. PMID   22253187.
  14. Lee, Michael D.; Steyvers, Mark; de Young, Mindy; Miller, Brent J. "A model-based approach to measuring expertise in ranking tasks". In Carlson, L.; Hölscher, C.; Shipley, T. F. (eds.). Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, Texas: Cognitive Science Society.
  15. Steyvers, Mark; Lee, Michael D.; Miller, Brent J.; Hemmer, Pernille (December 2009). "The Wisdom of Crowds in the Recollection of Order Information". Advances in Neural Information Processing Systems (22). Cambridge, Massachusetts: MIT Press: 1785–1793.
  16. Miller, Brent J.; Hemmer, Pernille; Steyvers, Michael D.; Lee, Michael D. (July 2009). "The Wisdom of Crowds in Ordering Problems". Proceedings of the Ninth International Conference on Cognitive Modeling. Manchester, England: International Conference on Cognitive Modeling.
  17. Zhang, S., and Lee, M.D., (2010). "Cognitive models and the wisdom of crowds: A case study using the bandit problem". In R. Catrambone, and S. Ohlsson (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society, pp. 1118–1123. Austin, TX: Cognitive Science Society.
  18. Prelec, Dražen; Seung, H. Sebastian; McCoy, John (2017). "A solution to the single-question crowd wisdom problem". Nature. 541 (7638): 532–535. Bibcode:2017Natur.541..532P. doi:10.1038/nature21054. PMID   28128245. S2CID   4452604.
  19. Prpić, John; Shukla, Prashant P.; Kietzmann, Jan H.; McCarthy, Ian P. (2015). "How to work a crowd: Developing crowd capital through crowdsourcing". Business Horizons. 58 (1): 77–85. arXiv:1702.04214. doi:10.1016/j.bushor.2014.09.005. S2CID 10374568.
  20. Levy, Pierre (1999). Collective intelligence: Mankind's emerging world in cyberspace. Perseus Publishing. ISBN   0738202614.
  21. Franch, Fabio (4 October 2022). "Wisdom of crowds". In Ceron, Andrea (ed.). Elgar Encyclopedia of Technology and Politics. Edward Elgar Publishing. pp. 262–267. doi:10.4337/9781800374263.wisdom.crowds.
  22. "Wisdom of the crowd". Nature. 438 (7066): 281. 2005. Bibcode:2005Natur.438..281.. doi: 10.1038/438281a . PMID   16292279.
  23. O'Donnell, Michael H. "Judge extols wisdom of juries". Idaho State Journal. Retrieved 2017-04-03.
  24. Vul, E.; Pashler, H. (2008). "Measuring the Crowd Within: Probabilistic Representations Within Individuals". Psychological Science. 19 (7): 645–647. CiteSeerX 10.1.1.408.4760. doi:10.1111/j.1467-9280.2008.02136.x. PMID 18727777. S2CID 44718192.
  25. Steegen, S; Dewitte, L; Tuerlinckx, F; Vanpaemel, W (2014). "Measuring the crowd within again: A pre-registered replication study". Frontiers in Psychology. 5: 786. doi: 10.3389/fpsyg.2014.00786 . PMC   4112915 . PMID   25120505.
  26. Hourihan, K. L.; Benjamin, A. S. (2010). "Smaller is better (when sampling from the crowd within): Low memory-span individuals benefit more from multiple opportunities for estimation". Journal of Experimental Psychology: Learning, Memory, and Cognition. 36 (4): 1068–1074. doi:10.1037/a0019694. PMC   2891554 . PMID   20565223.
  27. Rauhut, H; Lorenz (2011). "The wisdom of the crowds in one mind: How individuals can simulate the knowledge of diverse societies to reach better decisions". Journal of Mathematical Psychology. 55 (2): 191–197. doi:10.1016/j.jmp.2010.10.002.
  28. Müller-Trede, J. (2011). "Repeated judgment sampling: Boundaries". Judgment and Decision Making. 6 (4): 283–294. doi: 10.1017/S1930297500001893 . S2CID   18966323.
  29. van Dolder, Dennie; Assem, Martijn J. van den (2018). "The wisdom of the inner crowd in three large natural experiments". Nature Human Behaviour. 2 (1): 21–26. doi:10.1038/s41562-017-0247-6. hdl: 1871.1/e9dc3564-2c08-4de7-8a3a-e8e74a8d9fac . ISSN   2397-3374. PMID   30980050. S2CID   21708295. SSRN   3099179.
  30. Herzog, S. M.; Hertwig, R. (2009). "The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping". Psychological Science. 20 (2): 231–237. doi:10.1111/j.1467-9280.2009.02271.x. hdl: 11858/00-001M-0000-002E-575D-B . PMID   19170937. S2CID   23695566.
  31. Hirt, E. R.; Markman, K. D. (1995). "Multiple explanation: A consider-an-alternative strategy for debiasing judgments". Journal of Personality and Social Psychology. 69 (6): 1069–1086. doi:10.1037/0022-3514.69.6.1069. S2CID   145016943.
  32. Ariely, D.; Au, W. T.; Bender, R. H.; Budescu, D. V.; Dietz, C. B.; Gu, H.; Zauberman, G. (2000). "The effects of averaging subjective probability estimates between and within judges". Journal of Experimental Psychology: Applied. 6 (2): 130–147. CiteSeerX   10.1.1.153.9813 . doi:10.1037/1076-898x.6.2.130. PMID   10937317.
  33. Benhenda, Mostapha (2011). "A Model of Deliberation Based on Rawls's Political Liberalism". Social Choice and Welfare. 36: 121–178. doi:10.1007/s00355-010-0469-2. S2CID   9423855. SSRN   1616519.
  34. Buckingham, Marcus; Goodall, Ashley (March–April 2019). "The Feedback Fallacy". Harvard Business Review.
  35. Ratner, N.; Kagan, E.; Kumar, P.; Ben-Gal, I. (2023). "Unsupervised classification for uncertain varying responses: The wisdom-in-the-crowd (WICRO) algorithm" (PDF). Knowledge-Based Systems. 272: 110551.
  36. Page, Scott E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies . Princeton, NJ: Princeton University Press. ISBN   978-0-691-13854-1.
  37. Miller, B., and Steyvers, M. (in press). "The Wisdom of Crowds with Communication". In L. Carlson, C. Hölscher, & T.F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
  38. "The Wisdom of Crowds". randomhouse.com.
  39. Ball, Philip. "'Wisdom of the crowd': The myths and realities". Retrieved 2017-04-02.
  40. Ghanaiem, A.; Kagan, E.; Kumar, P.; Raviv, T.; Glynn, P.; Ben-Gal, I. (2023). "Unsupervised Classification under Uncertainty: The Distance-Based Algorithm" (PDF). Mathematics. 11 (23): 4784.
  41. "How Social Influence can Undermine the Wisdom of Crowd Effect". Proc. Natl. Acad. Sci., 2011.
  42. Atanasov, Pavel; Rescober, Phillip; Stone, Eric; Swift, Samuel A.; Servan-Schreiber, Emile; Tetlock, Philip; Ungar, Lyle; Mellers, Barbara (2016-04-22). "Distilling the Wisdom of Crowds: Prediction Markets vs. Prediction Polls". Management Science. 63 (3): 691–706. doi:10.1287/mnsc.2015.2374. ISSN   0025-1909.