Intuitive statistics

Intuitive statistics, or folk statistics, is the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.

Because this capacity can accommodate a broad range of informational domains, the subject matter is similarly broad and overlaps substantially with other cognitive phenomena. Indeed, some have argued that "cognition as an intuitive statistician" is an apt companion metaphor to the computer metaphor of cognition. [1] Others appeal to a variety of statistical and probabilistic mechanisms behind theory construction [2] [3] and category structuring. [4] [5] Research in this domain commonly focuses on generalizations relating to number, relative frequency, risk, and any systematic signatures in inferential capacity that an organism (e.g., humans, or non-human primates) might have. [1] [6]

Background and theory

Intuitive inferences can involve generating hypotheses from incoming sense data, such as categorization and concept structuring. Data are typically probabilistic and uncertainty is the rule, rather than the exception, in learning, perception, language, and thought. [7] [8] Recently, researchers have drawn from ideas in probability theory, philosophy of mind, computer science, and psychology to model cognition as a predictive and generative system of probabilistic representations, allowing information structures to support multiple inferences in a variety of contexts and combinations. [8] This approach has been called a probabilistic language of thought because it constructs representations probabilistically, from pre-existing concepts to predict a possible and likely state of the world. [5]

Probability

Statisticians and probability theorists have long debated the use of various tools, assumptions, and problems relating to inductive inference in particular. [1] David Hume famously considered the problem of induction, questioning the logical foundations of how and why people can arrive at conclusions that extend beyond past experiences, both spatiotemporally and epistemologically. [9] More recently, theorists have considered the problem by emphasizing techniques for moving from data to hypothesis using formal, content-independent procedures, or in contrast, by considering informal, content-dependent tools for inductive inference. [10] [11] Searches for formal procedures have led to different developments in statistical inference and probability theory with different assumptions, including Fisherian frequentist statistics, Bayesian inference, and Neyman-Pearson statistics. [1]

Gerd Gigerenzer and David Murray argue that twentieth century psychology as a discipline adopted probabilistic inference as a unified set of ideas and ignored the controversies among probability theorists. They claim that a normative but incorrect view of how humans "ought to think rationally" follows from this acceptance. They also maintain, however, that the intuitive statistician metaphor of cognition is promising, and should consider different formal tools or heuristics as specialized for different problem domains, rather than a content- or context-free toolkit. Signal detection theorists and object detection models, for example, often use a Neyman-Pearson approach, whereas Fisherian frequentist statistics might aid cause-effect inferences. [1]

Frequentist inference

Frequentist inference focuses on the relative proportions or frequencies of occurrences to draw probabilistic conclusions. It is defined by its closely related concept, frequentist probability. This entails a view that "probability" is nonsensical in the absence of pre-existing data, because it is understood as a relative frequency that long-run samples would approach given large amounts of data. [12] Leda Cosmides and John Tooby have argued that it is not possible to derive a probability without reference to some frequency of previous outcomes, and this likely has evolutionary origins: Single-event probabilities, they claim, are not observable because organisms evolved to intuitively understand and make statistical inferences from frequencies of prior events, rather than to "see" probability as an intrinsic property of an event. [13]
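The frequentist notion of probability as a long-run relative frequency can be illustrated with a short simulation (a minimal sketch; the event rate of 0.3, the seed, and the function name are all illustrative):

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def relative_frequency(p, n):
    """Relative frequency of 'success' across n simulated trials."""
    return sum(random.random() < p for _ in range(n)) / n

# Under the frequentist view, "probability 0.3" just means that the
# relative frequency of the event approaches 0.3 as samples accumulate.
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(0.3, n))
```

With small samples the observed frequency can deviate noticeably from 0.3; over many trials it settles near the underlying rate, which is the sense in which frequentists hold that probability is only meaningful given long-run data.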

Bayesian inference

Bayesian inference generally emphasizes the subjective probability of a hypothesis, which is computed as a posterior probability using Bayes' Theorem. It requires a "starting point" called a prior probability, which has been contentious for some frequentists who claim that frequency data are required to develop a prior probability, in contrast to taking a probability as an a priori assumption. [1] [12]
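The computation of a posterior from a prior can be sketched in a few lines (variable names are illustrative; this is the standard two-hypothesis form of Bayes' theorem, not any particular model from the literature):

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Posterior P(H|D) via Bayes' theorem for a hypothesis H:
    P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|~H)P(~H)]."""
    evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / evidence

# A neutral prior of 0.5, combined with data twice as likely under H
# as under not-H, yields a posterior of 2/3.
print(posterior(0.5, 0.8, 0.4))  # 0.666...
```

Iterating this update, feeding each posterior back in as the next prior, is the sense in which Bayesian models emulate incremental learning from new observations while giving weight to previous ones.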

Bayesian models have been quite popular among psychologists, particularly learning theorists, because they appear to emulate the iterative, predictive process by which people learn and develop expectations from new observations, while giving appropriate weight to previous observations. [14] Andy Clark, a cognitive scientist and philosopher, recently wrote a detailed argument in support of understanding the brain as a constructive Bayesian engine that is fundamentally action-oriented and predictive, rather than passive or reactive. [15] More classic lines of evidence cited among supporters of Bayesian inference include conservatism, or the phenomenon where people modify previous beliefs toward, but not all the way to, a conclusion implied by previous observations. [6] This pattern of behavior is similar to the pattern of posterior probability distributions when a Bayesian model is conditioned on data, though critics argued that this evidence had been overstated and lacked mathematical rigor. [16]

Alison Gopnik more recently tackled the problem by advocating the use of Bayesian networks, or directed graph representations of conditional dependencies. In a Bayesian network, edge weights are conditional dependency strengths that are updated in light of new data, and nodes are observed variables. The graphical representation itself constitutes a model, or hypothesis, about the world and is subject to change, given new data. [2]
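The updating of an edge weight in light of new data can be sketched with a single hypothetical Rain → WetGrass edge (an illustrative toy, not Gopnik's formalism; a full Bayesian network would maintain conditional probability tables over many nodes):

```python
from collections import Counter

# Toy edge in a causal graph: Rain -> WetGrass. The edge weight is the
# conditional probability P(wet | rain), re-estimated as new
# (rain, wet) observations arrive.
counts = Counter()

def observe(rain, wet):
    counts[(rain, wet)] += 1

def p_wet_given_rain():
    rain_total = counts[(True, True)] + counts[(True, False)]
    return counts[(True, True)] / rain_total if rain_total else None

for rain, wet in [(True, True), (True, True), (True, False), (False, False)]:
    observe(rain, wet)

print(p_wet_given_rain())  # 2/3 after these observations
```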

Error management theory

Error management theory (EMT) is an application of Neyman-Pearson statistics to cognitive and evolutionary psychology. It maintains that the possible fitness costs and benefits of type I (false positive) and type II (false negative) errors are relevant to adaptively rational inferences, toward which an organism is expected to be biased due to natural selection. EMT was originally developed by Martie Haselton and David Buss, with initial research focusing on its possible role in sexual overperception bias in men and sexual underperception bias in women. [17]

This is closely related to a concept called the "smoke detector principle" in evolutionary theory. It is defined by the tendency for immune, affective, and behavioral defenses to be hypersensitive and overreactive, rather than insensitive or weakly expressed. Randolph Nesse maintains that this is a consequence of a typical payoff structure in signal detection: In a system that is invariantly structured with a relatively low cost of false positives and high cost of false negatives, naturally selected defenses are expected to err on the side of hyperactivity in response to potential threat cues. [18] This general idea has been applied to hypotheses about the apparent tendency for humans to attribute agency to non-agents based on uncertain or agent-like cues. [19] In particular, some claim that it is adaptive for potential prey to assume agency by default if it is even slightly suspected, because potential predator threats typically involve cheap false positives and lethal false negatives. [20]
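The payoff asymmetry behind the smoke detector principle reduces to a comparison of expected costs (a minimal sketch; the cost numbers are made up for illustration):

```python
def best_response(p_threat, cost_false_alarm, cost_miss):
    """Choose the action minimizing expected cost under uncertainty.

    Responding to a non-threat costs `cost_false_alarm`;
    ignoring a real threat costs `cost_miss`.
    """
    expected_cost_respond = (1 - p_threat) * cost_false_alarm
    expected_cost_ignore = p_threat * cost_miss
    return "respond" if expected_cost_respond < expected_cost_ignore else "ignore"

# With cheap false alarms and a lethal miss, even a 1% threat
# probability favors responding - the "smoke detector" bias.
print(best_response(0.01, cost_false_alarm=1, cost_miss=1000))  # respond
```

Because the miss is a thousand times costlier than a false alarm here, responding minimizes expected cost even at a 1% threat probability, which is the sense in which hypersensitive defenses can be adaptively rational.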

Heuristics and biases

Heuristics are efficient rules, or computational shortcuts, for producing a judgment or decision. The intuitive statistician metaphor of cognition [1] led to a shift in focus for many psychologists, away from emotional or motivational principles and toward computational or inferential principles. [21] Empirical studies investigating these principles have led some to conclude that human cognition, for example, has built-in and systematic errors in inference, or cognitive biases. As a result, cognitive psychologists have largely adopted the view that intuitive judgments, generalizations, and numerical or probabilistic calculations are systematically biased. The result is commonly an error in judgment, including (but not limited to) recurrent logical fallacies (e.g., the conjunction fallacy), innumeracy, and emotionally motivated shortcuts in reasoning. [22] [23] [24] [25] Social and cognitive psychologists have thus considered it "paradoxical" that humans can outperform powerful computers at complex tasks, yet be deeply flawed and error-prone in simple, everyday judgments. [26]

Much of this research was carried out by Amos Tversky and Daniel Kahneman as an expansion of work by Herbert Simon on bounded rationality and satisficing. [27] Tversky and Kahneman argue that people are regularly biased in their judgments under uncertainty, because in a speed-accuracy tradeoff they often rely on fast and intuitive heuristics with wide margins of error rather than slow calculations from statistical principles. [28] These errors are called "cognitive illusions" because they involve systematic divergences between judgments and accepted, normative rules in statistical prediction. [29]

Gigerenzer has been critical of this view, arguing that it builds from a flawed assumption that a unified "normative theory" of statistical prediction and probability exists. His contention is that cognitive psychologists neglect the diversity of ideas and assumptions in probability theory, and in some cases, their mutual incompatibility. [30] [13] Consequently, Gigerenzer argues that many cognitive illusions are not violations of probability theory per se, but involve some kind of experimenter conflation of subjective probabilities, or degrees of confidence, with long-run outcome frequencies. [21] Cosmides and Tooby similarly claim that different probabilistic assumptions can be more or less normative and rational in different types of situations, and that there is no general-purpose statistical toolkit for making inferences across all informational domains. In a review of several experiments they conclude, in support of Gigerenzer, [21] that previous heuristics and biases experiments did not represent problems in an ecologically valid way, and that re-representing problems in terms of frequencies rather than single-event probabilities can make cognitive illusions largely vanish. [13]

Tversky and Kahneman rejected this claim, arguing that making illusions disappear by manipulating them, whether they are cognitive or visual, does not undermine the initially discovered illusion. They also note that Gigerenzer ignores cognitive illusions resulting from frequency data, e.g., illusory correlations such as the hot hand in basketball. [25] This, they note, is an example of an illusory positive autocorrelation that cannot be corrected by converting data to natural frequencies. [31]

For adaptationists, EMT can be applied to inference in any informational domain where risk or uncertainty is present, such as predator avoidance, agency detection, or foraging. Researchers advocating this adaptive rationality view argue that evolutionary theory casts heuristics and biases in a new light, namely, as computationally efficient and ecologically rational shortcuts, or instances of adaptive error management. [32]

Base rate neglect

People often neglect base rates, or true actuarial facts about the probability or rate of a phenomenon, and instead give inappropriate amounts of weight to specific observations. [33] [34] In a Bayesian model of inference, this would amount to an underweighting of the prior probability, [6] which has been cited as evidence against the appropriateness of a normative Bayesian framework for modeling cognition. [1] [21] Frequency representations can resolve base rate neglect, and some consider the phenomenon an experimental artifact, i.e., a result of probabilities or rates being represented as mathematical abstractions, which are difficult to think about intuitively. [13] Gigerenzer offers an ecological explanation for this, noting that individuals learn frequencies through successive trials in nature. [35] Tversky and Kahneman rejected Gigerenzer's claim, pointing to experiments where subjects predicted a disease based on the presence vs. absence of pre-specified symptoms across 250 trials, with feedback after each trial. [36] They note that base rate neglect was still found, despite the frequency formulation of subject trials in the experiment. [31]

Conjunction fallacy

Another popular example of a supposed cognitive illusion is the conjunction fallacy, described in an experiment by Tversky and Kahneman known as the "Linda problem." In this experiment, participants are presented with a short description of a person called Linda, who is 31 years old, single, intelligent, outspoken, majored in philosophy at university, was concerned about discrimination and social justice, and participated in anti-nuclear protests. When participants were asked whether it is more probable that Linda is (1) a bank teller, or (2) a bank teller and a feminist, 85% responded with option 2, even though option 1 cannot be less probable than option 2. They concluded that this was a product of a representativeness heuristic, or a tendency to draw probabilistic inferences based on property similarities between instances of a concept, rather than a statistically structured inference. [24]

Gigerenzer argued that the conjunction fallacy is based on a single-event probability, and would dissolve under a frequentist approach. He and other researchers demonstrate that conclusions from the conjunction fallacy result from ambiguous language, rather than robust statistical errors or cognitive illusions. [37] In an alternative version of the Linda problem, participants are told that 100 people fit Linda's description and are asked how many are (1) bank tellers and (2) bank tellers and feminists. Experimentally, this version of the task appears to eliminate or mitigate the conjunction fallacy. [21] [37]
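The logic of the frequency re-formulation can be made concrete with hypothetical counts (the 100-person split below is invented for illustration; only the inequality is guaranteed):

```python
# Hypothetical frequency version of the Linda problem: of 100 people
# fitting the description, count the bank tellers and the subset who
# are also feminists. The conjunction count can never exceed either
# conjunct's count.
people = [{"teller": t, "feminist": f}
          for t, f in [(True, True)] * 5 + [(True, False)] * 3
                      + [(False, True)] * 80 + [(False, False)] * 12]

tellers = sum(p["teller"] for p in people)
teller_feminists = sum(p["teller"] and p["feminist"] for p in people)
print(tellers, teller_feminists)  # 8 5
assert teller_feminists <= tellers  # holds for any data set
```

However the 100 cases are distributed, the "teller and feminist" count is a subset of the "teller" count, which is why the frequency framing makes the conjunction rule transparent in a way that single-event probability phrasing may not.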

Computational models

There has been some question about how concept structuring and generalization can be understood in terms of brain architecture and processes. This question is shaped by a neighboring debate among theorists about the nature of thought, specifically between connectionist and language of thought models. Concept generalization and classification have been modeled in a variety of connectionist models, or neural networks, specifically in domains like language learning and categorization. [38] [39] Some emphasize the limitations of pure connectionist models when they are expected to generalize to future instances after training on previous instances. Gary Marcus, for example, asserts that training data would have to be completely exhaustive for generalizations to occur in existing connectionist models, and that as a result, they do not handle novel observations well. He further advocates an integrationist perspective combining a language of thought, consisting of symbol representations and operations, with connectionist models that retain the distributed processing that is likely used by neural networks in the brain. [40]

Evidence in humans

In practice, humans routinely make conceptual, linguistic, and probabilistic generalizations from small amounts of data. [4] [41] [8] [42] There is some debate about the utility of various tools of statistical inference in understanding the mind, but it is commonly accepted that the human mind is somehow an exceptionally apt prediction machine, and that action-oriented processes underlying this phenomenon, whatever they might entail, are at the core of cognition. [15] [43] Probabilistic inference and generalization play central roles in concept and category formation and in language learning, [44] and infant studies are commonly used to understand the developmental trajectory of humans' intuitive statistical toolkit(s).

Infant studies

Developmental psychologists such as Jean Piaget have traditionally argued that children do not develop the general cognitive capacities for probabilistic inference and hypothesis testing until concrete operational (age 7–11 years) and formal operational (age 12 years-adulthood) stages of development, respectively. [45] [46]

This is sometimes contrasted with a growing preponderance of empirical evidence suggesting that humans are capable generalizers in infancy. For example, looking-time experiments using expected outcomes of red and white ping pong ball proportions found that 8-month-old infants appear to make inferences about the characteristics of the population from which a sample came, and vice versa when given population-level data. [47] Other experiments have similarly supported a capacity for probabilistic inference in 6- and 11-month-old infants, [48] but not in 4.5-month-olds. [49]

The colored ball paradigm in these experiments did not distinguish inferences based on quantity from inferences based on proportion. This was addressed in follow-up research in which 12-month-old infants seemed to understand proportions, basing probabilistic judgments (motivated by preferences for the more probable outcomes) on initial evidence of the proportions in their available options. [50] Critics of the effectiveness of looking-time tasks instead allowed infants to search for preferred objects in single-sample probability tasks; these results supported the notion that infants can infer probabilities of single events when given a small or large initial sample size. [51] The researchers involved in these findings have argued that humans possess some statistically structured, inferential system during preverbal stages of development and prior to formal education. [47] [50]

It is less clear, however, how and why generalization is observed in infants: It might extend directly from detection and storage of similarities and differences in incoming data, or frequency representations. Conversely, it might be produced by something like general-purpose Bayesian inference, starting with a knowledge base that is iteratively conditioned on data to update subjective probabilities, or beliefs. [52] [53] This ties together questions about the statistical toolkit(s) that might be involved in learning, and how they apply to infant and childhood learning specifically.

Gopnik advocates the hypothesis that infant and childhood learning are examples of inductive inference, a general-purpose mechanism for generalization, acting upon specialized information structures ("theories") in the brain. [54] On this view, infants and children are essentially proto-scientists because they regularly use a kind of scientific method, developing hypotheses, performing experiments via play, and updating models about the world based on their results. [55] For Gopnik, this use of scientific thinking and categorization in development and everyday life can be formalized as models of Bayesian inference. [56] An application of this view is the "sampling hypothesis," or the view that individual variation in children's causal and probabilistic inferences is an artifact of random sampling from a diverse set of hypotheses, and flexible generalizations based on sampling behavior and context. [57] [58] These views, particularly those advocating general Bayesian updating from specialized theories, are considered successors to Piaget's theory rather than wholesale refutations because they maintain its domain-generality, viewing children as randomly and unsystematically considering a range of models before selecting a probable conclusion. [59]

In contrast to the general-purpose mechanistic view, some researchers advocate both domain-specific information structures and similarly specialized inferential mechanisms. [60] [61] For example, while humans do not usually excel at explicit conditional probability calculations, such calculations are central to parsing speech sounds into comprehensible syllables, a relatively straightforward and intuitive skill that emerges as early as 8 months. [62] Infants also appear to be good at tracking not only the spatiotemporal states of objects, but also their properties, and these cognitive systems appear to be developmentally distinct. This has been interpreted as evidence of domain-specific toolkits of inference, each of which corresponds to a separate type of information and has applications to concept learning. [60] [63]

Concept formation

Infants use similarities and differences in form to develop concepts relating to objects, and this relies on multiple trials with multiple patterns that exhibit some kind of common property between trials. [64] Infants appear to become proficient at this ability by 12 months, [65] but different concepts and properties draw on different relevant principles of Gestalt psychology, many of which might emerge at different stages of development. [66] Specifically, infant categorization as early as 4.5 months involves iterative and interdependent processes by which exemplars (data) and their similarities and differences are crucial for drawing boundaries around categories. [67] These abstract rules are statistical by nature, because they can entail common co-occurrences of certain perceived properties in past instances and facilitate inferences about their structure in future instances. [68] [69] This idea has been extrapolated by Douglas Hofstadter and Emmanuel Sander, who argue that because analogy is a process of inference relying on similarities and differences between concept properties, analogy and categorization are fundamentally the same process used for organizing concepts from incoming data. [4]

Language learning

Infants and small children are not only capable generalizers of trait quantity and proportion, but of abstract rule-based systems such as language and music. [70] [71] These rules can be referred to as “algebraic rules” of abstract informational structure, and are representations of rule systems, or grammars. [72] For language, creating generalizations with Bayesian inference and similarity detection has been advocated by researchers as a special case of concept formation. [73] [74] Infants appear to be proficient in inferring abstract and structural rules from streams of linguistic sounds produced in their developmental environments, [75] and to generate wider predictions based on those rules. [76]

For example, 9-month-old infants are capable of more quickly and dramatically updating their expectations when repeated syllable strings contain surprising features, such as rare phonemes. [77] In general, preverbal infants appear to be capable of discriminating between grammars on which they have been trained and novel grammars. [78] [79] In 7-month-old infant looking-time tasks, infants seemed to pay more attention to unfamiliar grammatical structures than to familiar ones, [72] and in a separate study using 3-syllable strings, infants appeared similarly to generalize expectations based on abstract syllabic structure previously presented, suggesting that they used surface occurrences, or data, to infer deeper abstract structure. This was taken by the researchers involved to support the “multiple hypotheses [or models]” view. [80] [81]

Evidence in non-human animals

Grey parrots

Multiple studies by Irene Pepperberg and her colleagues suggested that Grey parrots (Psittacus erithacus) have some capacity for recognizing numbers or number-like concepts, appearing to understand ordinality and cardinality of numerals. [82] [83] [84] Recent experiments also indicated that, given some language training and capacity for referencing recognized objects, they also have some ability to make inferences about probabilities and hidden object type ratios. [85]

Non-human primates

Experiments found that when reasoning about preferred vs. non-preferred food proportions, capuchin monkeys were able to make inferences about proportions from sequentially sampled data. [86] Rhesus monkeys were similarly capable of using probabilistic and sequentially sampled data to make inferences about rewarding outcomes, and neural activity in the parietal cortex appeared to be involved in the decision-making process when they made inferences. [87] In a series of seven experiments using a variety of relative frequency differences between banana pellets and carrots, orangutans, chimpanzees, and gorillas also appeared to guide their decisions based on the ratios favoring the banana pellets after banana pellets were established as their preferred food item. [88]

Applications

Reasoning in medicine

Research on reasoning in medicine, or clinical reasoning, usually focuses on cognitive processes and/or decision-making outcomes among physicians and patients. Considerations include assessments of risk, patient preferences, and evidence-based medical knowledge. [89] On a cognitive level, clinical inference relies heavily on interplay between abstraction, abduction, deduction, and induction. [90] Intuitive "theories," or knowledge in medicine, can be understood as prototypes in concept spaces, or alternatively, as semantic networks. [91] [92] Such models serve as a starting point for intuitive generalizations to be made from a small number of cues, resulting in the physician's tradeoff between the "art and science" of medical judgment. [93] This tradeoff was captured in an artificially intelligent (AI) program called MYCIN, which outperformed medical students, but not experienced physicians with extensive practice in symptom recognition. [93] [94] [95] [89] Some researchers argue that despite this, physicians are prone to systematic biases, or cognitive illusions, in their judgment (e.g., satisficing to make premature diagnoses, confirmation bias when diagnoses are suspected a priori). [89]

Communication of patient risk

Statistical literacy and risk judgments have been described as problematic for physician-patient communication. [96] For example, physicians frequently inflate the perceived risk of non-treatment, [97] alter patients' risk perceptions by positively or negatively framing single statistics (e.g., 97% survival rate vs. 3% death rate), and/or fail to sufficiently communicate "reference classes" of probability statements to patients. [98] The reference class is the object of a probability statement: If a psychiatrist says, for example, “this medication can lead to a 30-50% chance of a sexual problem,” it is ambiguous whether this means that 30-50% of patients will develop a sexual problem at some point, or if all patients will have problems in 30-50% of their sexual encounters. [99]

Base rates in clinical judgment

In studies of base rate neglect, the problems given to participants often use base rates of disease prevalence. In these experiments, physicians and non-physicians are similarly susceptible to base rate neglect, or errors in calculating conditional probability. Here is an example from an empirical survey problem given to experienced physicians: Suppose that a hypothetical cancer had a prevalence of 0.3% in the population, and the true positive rate of a screening test was 50% with a false positive rate of 3%. Given a patient with a positive test result, what is the probability that the patient has cancer? When asked this question, physicians with an average of 14 years' experience in medical practice ranged in their answers from 1% to 99%, with most answers being 47% or 50%. (The correct answer is 5%.) [98] This observation of clinical base rate neglect and conditional probability error has been replicated in multiple empirical studies. [96] [100] Physicians' judgments in similar problems, however, improved substantially when the rates were re-formulated as natural frequencies. [101]
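The arithmetic of this survey problem, both as a direct application of Bayes' theorem and in the natural-frequency form that improved physicians' judgments, can be checked directly:

```python
prevalence = 0.003      # 0.3% of the population has the cancer
sensitivity = 0.50      # true positive rate of the screening test
false_positive = 0.03   # false positive rate among the healthy

# Direct Bayes' theorem: P(cancer | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(round(p_cancer_given_positive, 3))  # 0.048, i.e. about 5%

# Natural-frequency version, per 10,000 people: 30 have the cancer and
# 15 of them test positive; of the 9,970 healthy, about 299 test
# positive, so only 15 of the ~314 positives actually have the cancer.
print(round(15 / (15 + 299), 3))  # 0.048
```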


Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.

Heuristics are the processes by which humans use mental shortcuts to arrive at decisions. They are simple strategies that humans, animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. In that sense they can differ from answers given by logic and probability.
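One well-studied example of such a shortcut is the "take-the-best" heuristic from the fast-and-frugal tradition: check cues in order of validity and stop at the first one that discriminates. The cues and cue values below are hypothetical, chosen only to illustrate the stopping rule:

```python
# A sketch of the "take-the-best" heuristic: decide which of two cities is
# larger by checking cues in order of validity, stopping at the first cue
# that discriminates. Cue values here are hypothetical.
cues_in_validity_order = ["has_team", "is_capital", "has_university"]

city_a = {"has_team": True, "is_capital": False, "has_university": True}
city_b = {"has_team": True, "is_capital": True, "has_university": True}

def take_the_best(a, b, cues):
    for cue in cues:
        if a[cue] != b[cue]:          # first discriminating cue decides
            return "a" if a[cue] else "b"
    return "guess"                    # no cue discriminates: guess

print(take_the_best(city_a, city_b, cues_in_validity_order))  # -> "b"
```

The heuristic ignores all information after the first discriminating cue, which is exactly the "ignoring some information" that makes it fast and frugal.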

Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind, as it does not rely on the hypothesis that there is anything micro-physically quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm (also called the generalized quantum paradigm or quantum structure paradigm): the claim that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.
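A toy numerical sketch (not from this article) of the core formal idea: when judgments are modeled as non-commuting projectors on a belief state, the probability of a sequence of answers depends on their order, which classical probability cannot produce.

```python
# Toy illustration of quantum-probability order effects. The state and
# projectors are hypothetical; they just demonstrate non-commutativity.
import numpy as np

psi = np.array([1.0, 0.0])                 # unit-norm "belief state"
P_a = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector: answer "yes" to question A
P_b = np.full((2, 2), 0.5)                 # projector: answer "yes" to question B
                                           # (P_b does not commute with P_a)

# Probability of "yes to A, then yes to B" vs the reverse order.
p_ab = np.linalg.norm(P_b @ P_a @ psi) ** 2
p_ba = np.linalg.norm(P_a @ P_b @ psi) ** 2

print(p_ab, p_ba)  # the two orders give different probabilities
```

Here p_ab = 0.5 while p_ba = 0.25: asking A before B yields a different joint probability than the reverse, the quantum-probability signature used to model question-order effects in judgment.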

The frequency format hypothesis is the idea that the brain understands and processes information better when it is presented in frequency formats rather than in numerical or probability formats. According to the hypothesis, presenting information as "1 in 5 people" rather than "20%" leads to better comprehension. The idea was proposed by the German psychologist Gerd Gigerenzer, after compilation and comparison of data collected between 1976 and 1997.
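Converting a probability-format problem into natural frequencies is mechanical. The diagnostic-test numbers below are illustrative (a common textbook setup, not from this article); the point is only the restatement in counts:

```python
# Restating diagnostic information in a natural-frequency format, as the
# hypothesis recommends. Numbers are hypothetical and illustrative.
population = 1000
prevalence = 0.01      # "1% have the condition"
sensitivity = 0.80     # "80% of those with it test positive"
false_positive = 0.096 # "9.6% of those without it test positive"

sick = round(population * prevalence)                    # 10 of 1000
true_pos = round(sick * sensitivity)                     # 8 of those 10
false_pos = round((population - sick) * false_positive)  # 95 of the healthy 990

print(f"Of {population} people, {true_pos + false_pos} test positive; "
      f"only {true_pos} of them actually have the condition.")
```

Stated as counts ("8 out of 103 positives are truly sick"), the answer that is hard to extract from percentages becomes nearly transparent.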

Social heuristics are simple decision making strategies that guide people's behavior and decisions in the social environment when time, information, or cognitive resources are scarce. Social environments tend to be characterised by complexity and uncertainty, and in order to simplify the decision-making process, people may use heuristics, which are decision making strategies that involve ignoring some information or relying on simple rules of thumb.

Ecological rationality is a particular account of practical rationality, which in turn specifies the norms of rational action – what one ought to do in order to act rationally. The presently dominant account of practical rationality in the social and behavioral sciences such as economics and psychology, rational choice theory, maintains that practical rationality consists in making decisions in accordance with some fixed rules, irrespective of context. Ecological rationality, in contrast, claims that the rationality of a decision depends on the circumstances in which it takes place, so as to achieve one's goals in this particular context. What is considered rational under the rational choice account thus might not always be considered rational under the ecological rationality account. Overall, rational choice theory puts a premium on internal logical consistency whereas ecological rationality targets external performance in the world. The term ecologically rational is only etymologically similar to the biological science of ecology.

Ralph Hertwig is a German psychologist whose work focuses on the psychology of human judgment and decision making. Hertwig is Director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin, Germany. He grew up with his brothers Steffen Hertwig and Michael Hertwig in Talheim, Heilbronn.

Fei Xu is an American developmental psychologist and cognitive scientist who is currently a professor of psychology and the director of the Berkeley Early Learning Lab at UC Berkeley. Her research focuses on cognitive and language development, from infancy to middle childhood.

References

  1. Gigerenzer, Gerd; Murray, David J. (2015). Cognition as intuitive statistics. London: Psychology Press. ISBN 9781138950221. OCLC 918941179.
  2. Gopnik, Alison; Tenenbaum, Joshua B. (May 2007). "Bayesian networks, Bayesian learning and cognitive development". Developmental Science. 10 (3): 281–287. doi:10.1111/j.1467-7687.2007.00584.x. ISSN 1363-755X. PMID 17444969.
  3. Gopnik, Alison (August 2011). "The Theory Theory 2.0: Probabilistic Models and Cognitive Development". Child Development Perspectives. 5 (3): 161–163. doi:10.1111/j.1750-8606.2011.00179.x. ISSN   1750-8592.
  4. Hofstadter, Douglas R.; Sander, Emmanuel (2012). Surfaces and essences: Analogy as the fuel and fire of thinking. New York: Basic Books. ISBN 9780465018475. OCLC 841172513.
  5. Goodman, N.D.; Tenenbaum, J.B.; Gerstenberg, T. (February 15, 2014). "Concepts in a Probabilistic Language of Thought" (PDF). Archived (PDF) from the original on 2017-04-27. Retrieved 2018-07-24.
  6. Nickerson, Raymond S. (2004). Cognition and chance: the psychology of probabilistic reasoning. Mahwah, N.J.: Lawrence Erlbaum Associates. ISBN 9780805848991. OCLC 56115142.
  7. Neisser, Ulric, ed. (1987). "Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization". Concepts and conceptual development : ecological and intellectual factors in categorization. First Emory Cognition Project Conference, Oct. 11-12, 1984, Emory University. Cambridge: Cambridge University Press. ISBN   0521322197. OCLC   14905937.
  8. Goodman, N.D.; Tenenbaum, J.B. (2016). Probabilistic Models of Cognition (2nd ed.). Archived from the original on 2018-05-11. Retrieved 2018-04-24 via probmods.org.
  9. Hume, David; Levine, Michael P. (2005). A treatise of human nature. New York, N.Y.: Barnes & Noble. ISBN   0760771723. OCLC   404025944.
  10. Hacking, Ian (2001). An introduction to probability and inductive logic. Cambridge, U.K.: Cambridge University Press. ISBN   0521772877. OCLC   44969631.
  11. Savage, Leonard J. (1954). The Foundations of Statistics . New York: Wiley.
  12. Von Mises, Richard (1981). Probability, statistics, and truth (2nd ed.). New York: Dover Publications. ISBN 0486242145. OCLC 8320292.
  13. Cosmides, L; Tooby, J (January 1996). "Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty". Cognition. 58 (1): 1–73. doi:10.1016/0010-0277(95)00664-8. S2CID 18631755.
  14. Pearl, Judea (2000). Causality : models, reasoning, and inference. Cambridge, U.K.: Cambridge University Press. ISBN   9781139649360. OCLC   834142635.
  15. Clark, Andy (June 2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science". Behavioral and Brain Sciences. 36 (3): 181–204. doi:10.1017/s0140525x12000477. ISSN 0140-525X. PMID 23663408.
  16. Edwards W (1968). "Conservatism in human information processing". In Kleinmuntz B (ed.). Formal representation of human judgment. Symposium on Cognition. New York: Wiley.
  17. Haselton, Martie G.; Buss, David M. (2000). "Error management theory: A new perspective on biases in cross-sex mind reading". Journal of Personality and Social Psychology. 78 (1): 81–91. doi:10.1037/0022-3514.78.1.81. PMID   10653507.
  18. Nesse, R. M. (May 2001). "The smoke detector principle. Natural selection and the regulation of defensive responses". Annals of the New York Academy of Sciences. 935: 75–85. doi:10.1111/j.1749-6632.2001.tb03472.x. hdl: 2027.42/75092 . PMID   11411177. S2CID   20128143.
  19. Barrett, Justin L.; Johnson, Amanda Hankes (August 2003). "The Role of Control in Attributing Intentional Agency to Inanimate Objects" (PDF). Journal of Cognition and Culture. 3 (3): 208–217. doi:10.1163/156853703322336634.
  20. Gray, Kurt; Wegner, Daniel M. (February 2010). "Blaming god for our pain: human suffering and the divine mind". Personality and Social Psychology Review. 14 (1): 7–16. doi:10.1177/1088868309350299. PMID   19926831. S2CID   18463294.
  21. Gigerenzer, G (1991). "How to make cognitive illusions disappear: Beyond "heuristics and biases"". European Review of Social Psychology. 2 (1): 83–115. doi:10.1080/14792779143000033. hdl:21.11116/0000-0000-BD7A-3.
  22. Gilovich, Thomas; Griffin, Dale W.; Kahneman, Daniel (2002). Heuristics and biases : the psychology of intuitive judgment. Cambridge, U.K.: Cambridge University Press. ISBN   9780521796798. OCLC   47364085.
  23. Kahneman, Daniel; Tversky, Amos (1979). "Prospect Theory: An Analysis of Decision under Risk". Econometrica. 47 (2): 263–291. doi:10.2307/1914185. JSTOR   1914185.
  24. Tversky, Amos; Kahneman, Daniel (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review. 90 (4): 293–315. doi:10.1037/0033-295x.90.4.293.
  25. Gilovich, Thomas; Vallone, Robert; Tversky, Amos (July 1985). "The hot hand in basketball: On the misperception of random sequences". Cognitive Psychology. 17 (3): 295–314. doi:10.1016/0010-0285(85)90010-6. ISSN 0010-0285. S2CID 317235.
  26. Nisbett, Richard E.; Ross, Lee (1980). Human inference : strategies and shortcomings of social judgment. Englewood Cliffs, N.J.: Prentice-Hall. ISBN   0134451309. OCLC   5411525.
  27. Simon, H.A. (March 1956). "Rational choice and the structure of the environment". Psychological Review. 63 (2): 129–138. doi:10.1037/h0042769. PMID   13310708. S2CID   8503301.
  28. Kahneman, Daniel; Tversky, Amos (1973). "On the psychology of prediction". Psychological Review. 80 (4): 237–251. doi:10.1037/h0034747.
  29. Ajzen, Icek (1977). "Intuitive theories of events and the effects of base-rate information on prediction". Journal of Personality and Social Psychology. 35 (5): 303–314. doi:10.1037/0022-3514.35.5.303.
  30. Hacking (1965). Logic of Statistical Inference. Cambridge University Press. ISBN   9780521051651. OCLC   704034945.
  31. Kahneman, Daniel; Tversky, Amos (1996). "On the reality of cognitive illusions". Psychological Review. 103 (3): 582–591. doi:10.1037/0033-295x.103.3.582. PMID 8759048.
  32. Haselton, Martie; Bryant, Gregory A.; Wilke, Andreas; Frederick, David; Galperin, Andrew; Frankenhuis, Willem E.; Moore, Tyler (2009). "Adaptive rationality: An evolutionary perspective on cognitive bias". Social Cognition. 27 (5): 733–763. doi:10.1521/soco.2009.27.5.733.
  33. Kahneman, Daniel; Tversky, Amos (1973). "On the psychology of prediction". Psychological Review. 80 (4): 237–251. doi:10.1037/h0034747.
  34. Bar-Hillel, Maya (May 1980). "The base-rate fallacy in probability judgments" (PDF). Acta Psychologica. 44 (3): 211–233. doi:10.1016/0001-6918(80)90046-3. ISSN   0001-6918.
  35. Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright & P. Ayton (Eds.), Subjective probability (pp. 129-161). Oxford, England: John Wiley & Sons.
  36. Gluck, M., & Bower, G. H. (1988). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General. 117. 227-247
  37. Fiedler, Klaus (September 1988). "The dependence of the conjunction fallacy on subtle linguistic factors". Psychological Research. 50 (2): 123–129. doi:10.1007/bf00309212. S2CID 144369350.
  38. Elman, Jeffrey L. (1996). Rethinking innateness : a connectionist perspective on development. Cambridge, Mass.: MIT Press. ISBN   0585020345. OCLC   42854159.
  39. O'Reilly, Randall C.; Munakata, Yuko (2000). Computational explorations in cognitive neuroscience : understanding the mind by simulating the brain. Cambridge, Mass.: MIT Press. ISBN   0262650541. OCLC   43083232.
  40. Marcus, Gary F. (2001). The algebraic mind : integrating connectionism and cognitive science. Cambridge, Mass.: MIT Press. ISBN   0262133792. OCLC   43954048.
  41. Lakoff, George (1987). Women, fire, and dangerous things : what categories reveal about the mind. Chicago: University of Chicago Press. ISBN   9780226468044. OCLC   14001013.
  42. Griffiths, Thomas L.; Tenenbaum, Joshua B. (September 2006). "Optimal predictions in everyday cognition". Psychological Science. 17 (9): 767–773. doi:10.1111/j.1467-9280.2006.01780.x. ISSN   0956-7976. PMID   16984293. S2CID   12834830.
  43. Clark, Andy (January 2015). Embodied Prediction. Frankfurt am Main, Germany: MIND Group. doi:10.15502/9783958570115. ISBN 9783958570115.
  44. Goodman, Noah; Tenenbaum, Joshua; Feldman, Jacob; Griffiths, Thomas (January 2008). "A Rational Analysis of Rule-Based Concept Learning". Cognitive Science. 32 (1): 108–154. doi: 10.1080/03640210701802071 . ISSN   0364-0213. PMID   21635333.
  45. Piaget, Jean; Inhelder, Bärbel (1976). Origin of the Idea of Chance in Children. New York: W.W. Norton & Co. ISBN   9780393008036. OCLC   753479728.
  46. Piaget, J. (1983). "Piaget's Theory". In P. Mussen (ed.). Handbook of Child Psychology. Vol. 1: History, Theory and Methods. W. Kessen (vol. ed.) (4th ed.). New York: John Wiley. OCLC   863228206.
  47. Xu, Fei; Garcia, Vashti (April 1, 2008). "Intuitive statistics by 8-month-old infants". Proceedings of the National Academy of Sciences. 105 (13): 5012–5015. doi:10.1073/pnas.0704450105. PMC 2278207. PMID 18378901.
  48. Denison, Stephanie; Xu, Fei (July 2010). "Integrating Physical Constraints in Statistical Inference by 11-Month-Old Infants". Cognitive Science. 34 (5): 885–908. doi: 10.1111/j.1551-6709.2010.01111.x . ISSN   1551-6709. PMID   21564238.
  49. Denison, Stephanie; Reed, Christie; Xu, Fei (February 2013). "The emergence of probabilistic reasoning in very young infants: Evidence from 4.5- and 6-month-olds". Developmental Psychology. 49 (2): 243–249. doi:10.1037/a0028278. PMID   22545837.
  50. Denison, Stephanie; Xu, Fei (March 2014). "The origins of probabilistic inference in human infants". Cognition. 130 (3): 335–347. doi:10.1016/j.cognition.2013.12.001. PMID 24384147. S2CID 14917596.
  51. Denison, Stephanie; Xu, Fei (September 2010). "Twelve- to 14-month-old infants can predict single-event probability with large set sizes". Developmental Science. 13 (5): 798–803. doi: 10.1111/j.1467-7687.2009.00943.x . PMID   20712746.
  52. Gerken, LouAnn (May 2010). "Infants use rational decision criteria for choosing among models of their input". Cognition. 115 (2): 362–366. doi:10.1016/j.cognition.2010.01.006. PMC   2835817 . PMID   20144828.
  53. Gopnik, A.; Meltzoff, A.N. (1997). Words, thoughts, and theories. Cambridge, MA: MIT Press. ISBN   9780262571265. OCLC   438680292.
  54. Gopnik, A. & Wellman, H. (1994). "The theory theory". In A. Hirschfeld & S. A. Gelman (eds.). Mapping the Mind: Domain Specificity in Cognition and Culture . New York: Cambridge University Press. pp.  257–293. ISBN   0521419662. OCLC   27937150.
  55. Gopnik, Alison; Meltzoff, Andrew N.; Kuhl, Patricia K. (2001). The scientist in the crib : what early learning tells us about the mind (1st ed.). New York: HarperPerennial. ISBN   9780688177881. OCLC   46453058.
  56. Gopnik, Alison (28 September 2012). "Scientific Thinking in Young Children: Theoretical Advances, Empirical Research, and Policy Implications". Science. 337 (6102): 1623–1627. Bibcode:2012Sci...337.1623G. doi:10.1126/science.1223416. PMID   23019643. S2CID   12773717.
  57. Denison, Stephanie; Bonawitz, Elizabeth; Gopnik, Alison; Griffiths, Thomas L. (February 2013). "Rational variability in children's causal inferences: The Sampling Hypothesis". Cognition. 126 (2): 285–300. doi:10.1016/j.cognition.2012.10.010. PMID   23200511. S2CID   10826483.
  58. Gweon, Hyowon; Tenenbaum, Joshua B.; Schulz, Laura E. (May 2010). "Infants consider both the sample and the sampling process in inductive generalization". Proceedings of the National Academy of Sciences. 107 (20): 9066–9071. Bibcode:2010PNAS..107.9066G. doi: 10.1073/pnas.1003095107 . PMC   2889113 . PMID   20435914.
  59. Siegler, Robert S. (January 2000). "The Rebirth of Children's Learning". Child Development. 71 (1): 26–35. doi:10.1111/1467-8624.00115. PMID   10836555.
  60. Barrett, H. Clark (2015). The shape of thought: how mental adaptations evolve. New York. ISBN 9780199348312. OCLC 881386175.
  61. Sperber, D. (1994). "The modularity of thought and the epidemiology of representations". In Hirschfeld, L.; Gelman, S. (eds.). Mapping the Mind. Cambridge University Press.
  62. Tataru, Paula; Hobolth, Asger (5 December 2011). "Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains". BMC Bioinformatics. 12: 465. doi: 10.1186/1471-2105-12-465 . PMC   3329461 . PMID   22142146.
  63. Xu, Fei (September 1999). "Object individuation and object identity in infancy: the role of spatiotemporal information, object property information, and language". Acta Psychologica. 102 (2–3): 113–136. doi:10.1016/s0001-6918(99)00029-3. PMID   10504878.
  64. Quinn, Paul C.; Bhatt, Ramesh S. (July 2005). "Learning perceptual organization in infancy". Psychological Science. 16 (7): 511–515. doi:10.1111/j.0956-7976.2005.01567.x. ISSN   0956-7976. PMID   16008781. S2CID   11886384.
  65. Gomez, Rebecca L.; Lakusta, Laura (November 2004). "A first step in form-based category abstraction by 12-month-old infants". Developmental Science. 7 (5): 567–580. doi:10.1111/j.1467-7687.2004.00381.x. PMID   15603290.
  66. Quinn, Paul C.; Bhatt, Ramesh S.; Brush, Diana; Grimes, Autumn; Sharpnack, Heather (July 2002). "Development of form similarity as a Gestalt grouping principle in infancy". Psychological Science. 13 (4): 320–328. doi:10.1111/1467-9280.00459. ISSN   0956-7976. PMID   12137134. S2CID   8068752.
  67. Needham, Amy; Dueker, Gwenden; Lockhead, Gregory (January 2005). "Infants' formation and use of categories to segregate objects". Cognition. 94 (3): 215–240. doi:10.1016/j.cognition.2004.02.002. PMID   15617672. S2CID   2674246.
  68. Gómez, Rebecca L. (September 2002). "Variability and detection of invariant structure". Psychological Science. 13 (5): 431–436. doi:10.1111/1467-9280.00476. ISSN   0956-7976. PMID   12219809. S2CID   9058661.
  69. Thiessen, Erik D. (5 January 2017). "What's statistical about learning? Insights from modelling statistical learning as a set of memory processes". Phil. Trans. R. Soc. B. 372 (1711): 20160056. doi:10.1098/rstb.2016.0056. PMC   5124081 . PMID   27872374. 20160056.
  70. Marcus, Gary F.; Fernandes, Keith J.; Johnson, Scott P. (May 2007). "Infant rule learning facilitated by speech". Psychological Science. 18 (5): 387–391. doi:10.1111/j.1467-9280.2007.01910.x. PMID   17576276. S2CID   8261527.
  71. Gerken, LouAnn; Wilson, Rachel; Lewis, William (May 2005). "Infants can use distributional cues to form syntactic categories". Journal of Child Language. 32 (2): 249–268. doi:10.1017/S0305000904006786. PMID   16045250. S2CID   15679161.
  72. Marcus, G.F.; Vijayan, S.; Rao, S. Bandi; Vishton, P.M. (1 January 1999). "Rule Learning by Seven-Month-Old Infants". Science. 283 (5398): 77–80. Bibcode:1999Sci...283...77M. doi:10.1126/science.283.5398.77. PMID 9872745. S2CID 6261323.
  73. Tenenbaum, J.B.; Griffiths, T.L. (August 2001). "Generalization, similarity, and Bayesian inference". The Behavioral and Brain Sciences. 24 (4): 629–640, discussion 652–791. doi:10.1017/s0140525x01000061. ISSN   0140-525X. PMID   12048947.
  74. Xu, Fei; Tenenbaum, Joshua B. (April 2007). "Word learning as Bayesian inference". Psychological Review. 114 (2): 245–272. doi:10.1037/0033-295X.114.2.245. PMID   17500627.
  75. Gerken, L. (October 2004). "Nine-month-olds extract structural principles required for natural language". Cognition. 93 (3): B89–B96. doi:10.1016/j.cognition.2003.11.005. ISSN   0010-0277. PMID   15178379. S2CID   5939461.
  76. Gerken, L.; Bollt, A. (2008). "Three Exemplars Allow at Least Some Linguistic Generalizations: Implications for Generalization Mechanisms and Constraints". Language Learning and Development. 4 (3): 228–248. doi:10.1080/15475440802143117. S2CID   8068917.
  77. Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Joshua (April 2014). "Surprise! Infants consider possible bases of generalization for a single input example". Developmental Science. 18 (1): 80–89. doi:10.1111/desc.12183. PMC   4188806 . PMID   24703007.
  78. Gomez, Rebecca L.; Gerken, LouAnn (March 1999). "Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge". Cognition. 70 (2): 109–135. doi:10.1016/s0010-0277(99)00003-7. PMID   10349760. S2CID   7447597.
  79. Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia (March 2003). "Infants learn phonotactic regularities from brief auditory experience". Cognition. 87 (2): B69–B77. doi:10.1016/s0010-0277(02)00233-0. ISSN   0010-0277. PMID   12590043. S2CID   15757428.
  80. Gerken, LouAnn (May 2010). "Infants use rational decision criteria for choosing among models of their input". Cognition. 115 (2): 362–366. doi:10.1016/j.cognition.2010.01.006. PMC   2835817 . PMID   20144828.
  81. Gerken, LouAnn (January 2006). "Decisions, decisions: infant language learning when multiple generalizations are possible". Cognition. 98 (3): B67–B74. doi:10.1016/j.cognition.2005.03.003. PMID   15992791. S2CID   14889642.
  82. Pepperberg, Irene M.; Carey, Susan (November 2012). "Grey parrot number acquisition: The inference of cardinal value from ordinal position on the numeral list". Cognition. 125 (2): 219–232. doi:10.1016/j.cognition.2012.07.003. PMC   3434310 . PMID   22878117.
  83. Pepperberg, Irene M. (February 2013). "Abstract concepts: Data from a Grey parrot". Behavioural Processes. 93: 82–90. doi:10.1016/j.beproc.2012.09.016. PMID   23089384. S2CID   33278680.
  84. Pepperberg, Irene M.; Gordon, Jesse D. (May 2005). "Number Comprehension by a Grey Parrot (Psittacus erithacus), Including a Zero-Like Concept". Journal of Comparative Psychology. 119 (2): 197–209. doi:10.1037/0735-7036.119.2.197. PMID   15982163.
  85. Clements, Katherine A.; Gray, Suzanne L.; Gross, Brya; Pepperberg, Irene M. (May 2018). "Initial evidence for probabilistic reasoning in a grey parrot (Psittacus erithacus)". Journal of Comparative Psychology. 132 (2): 166–177. doi:10.1037/com0000106. PMID   29528667. S2CID   42149969.
  86. Tecwyn, Emma C.; Denison, Stephanie; Messer, Emily J.E.; Buchsbaum, Daphna (March 2017). "Intuitive probabilistic inference in capuchin monkeys". Animal Cognition. 20 (2): 243–256. doi:10.1007/s10071-016-1043-9. PMID   27744528. S2CID   12347189.
  87. Yang, Tianming; Shadlen, Michael N. (28 June 2007). "Probabilistic reasoning by neurons". Nature. 447 (7148): 1075–1080. Bibcode:2007Natur.447.1075Y. doi:10.1038/nature05852. ISSN   0028-0836. PMID   17546027. S2CID   4343931.
  88. Rakoczy, Hannes; Clüver, Annette; Saucke, Liane; Stoffregen, Nicole; Gräbener, Alice; Migura, Judith; Call, Josep (April 2014). "Apes are intuitive statisticians". Cognition. 131 (1): 60–68. doi:10.1016/j.cognition.2013.12.011. ISSN   0010-0277. PMID   24440657. S2CID   10393347.
  89. ten Cate, Olle; Custers, Eugène J.F.M.; Durning, Steven J., eds. (2011). Principles and Practice of Case-based Clinical Reasoning Education: A Method for Preclinical Students. Cham: Springer International. ISBN 9783319648286. OCLC 1017833633.
  90. Holyoak, Keith James; Morrison, Robert G., eds. (2005). The Cambridge handbook of thinking and reasoning. New York: Cambridge University Press. ISBN   9780521824170. OCLC   56011371.
  91. Custers, Eugène J. F. M. (May 2015). "Thirty years of illness scripts: Theoretical origins and practical applications". Medical Teacher. 37 (5): 457–462. doi:10.3109/0142159X.2014.956052. PMID   25180878. S2CID   207432268.
  92. Custers, E. J.; Regehr, G.; Norman, G. R. (October 1996). "Mental representations of medical diagnostic knowledge: a review". Academic Medicine. 71 (10 Suppl): S55–561. doi: 10.1097/00001888-199610000-00044 . PMID   8940935.
  93. Shortliffe, Edward H.; Buchanan, Bruce G. (April 1975). "A model of inexact reasoning in medicine". Mathematical Biosciences. 23 (3–4): 351–379. doi:10.1016/0025-5564(75)90047-4. S2CID 118063112.
  94. Heckerman, David E.; Shortliffe, Edward H. (February 1992). "From certainty factors to belief networks". Artificial Intelligence in Medicine. 4 (1): 35–52. doi:10.1016/0933-3657(92)90036-o.
  95. Milofsky, Carl; Elstein, Arthur S.; Shulman, Lee S.; Sprafka, Sarah A. (November 1979). "Medical Problem Solving: An Analysis of Clinical Reasoning". American Journal of Sociology. 85 (3): 703–705. doi:10.1086/227074. ISSN   0002-9602.
  96. Gigerenzer, Gerd; Gray, J. A. Muir, eds. (2011). Better doctors, better patients, better decisions: envisioning health care 2020. Ernst Strüngmann Forum 2009: Frankfurt am Main, Germany. Cambridge, Mass.: MIT Press. ISBN 9780262298957. OCLC 733263173.
  97. Wegwarth, Odette; Wagner, Gert G.; Gigerenzer, Gerd (August 23, 2017). "Can facts trump unconditional trust? Evidence-based information halves the influence of physicians' non-evidence-based cancer screening recommendations". PLOS ONE. 12 (8): e0183024. Bibcode:2017PLoSO..1283024W. doi: 10.1371/journal.pone.0183024 . PMC   5568103 . PMID   28832633. e0183024.
  98. Gigerenzer, Gerd; Edwards, Adrian (September 2003). "Simple tools for understanding risks: from innumeracy to insight". BMJ. 327 (7417): 741–744. doi:10.1136/bmj.327.7417.741. ISSN 0959-8138. PMC 200816. PMID 14512488.
  99. Gigerenzer, Gerd (2008). Rationality for mortals : how people cope with uncertainty. Oxford; New York: Oxford University Press. ISBN   9780199747092. OCLC   156902311.
  100. Casscells, W.; Schoenberger, A.; Graboys, T. B. (November 1978). "Interpretation by physicians of clinical laboratory results". The New England Journal of Medicine. 299 (18): 999–1001. doi:10.1056/NEJM197811022991808. ISSN   0028-4793. PMID   692627.
  101. Hoffrage, U.; Gigerenzer, G. (May 1998). "Using natural frequencies to improve diagnostic inferences". Academic Medicine. 73 (5): 538–540. doi:10.1097/00001888-199805000-00024. hdl: 11858/00-001M-0000-0025-A092-2 . PMID   9609869. S2CID   15957015.