Moral dumbfounding is a phenomenon described by some philosophers and psychologists in which a person judges an action as morally wrong while being unable to provide a rationale or evidence for that position. [1]
Initially proposed by social psychologist Jonathan Haidt and colleagues in 2000, moral dumbfounding challenged previously held ideas about the nature of moral judgment, suggesting, for example, that moral judgment may be intuitive rather than rational. It has since been challenged, however, by numerous scholars, and remains contentious in the fields of philosophy and moral psychology.
The original experimental evidence was presented by Jonathan Haidt and colleagues in a 2000 scientific paper. The researchers defined "dumbfounding" as "the stubborn and puzzled maintenance of a judgment without supporting reasons." In the experiment, they presented participants with taboo scenarios that were designed to trigger a negative emotional reaction but were "carefully written to be harmless," [2] making it difficult for participants to justify their negative intuition. The most famous of these scenarios depicted consensual incest between two siblings, Julie and Mark:
Julie and Mark, who are brother and sister, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex? [2]
The key finding was that the majority of participants judged the scenario as morally wrong within a matter of seconds but subsequently struggled to provide valid reasons for their belief. When pressed to explain why the act was wrong, they often made arguments about the harmful consequences of incest that were explicitly ruled out by the story: Julie and Mark both consented, used contraceptives, enjoyed the experience, and told no one. When participants' reasons were challenged, they often resorted to "unsupported declarations" like "It's just wrong to do that!" or directly admitted to having no reasons. At the end of the conversation, many participants reported being confused but nonetheless confident in their original judgment that consensual incest is morally wrong. The researchers concluded that participants were dumbfounded because they "often directly state that they know or believe something, but cannot find reasons to support their belief." [2]
Other scenarios used to illustrate this phenomenon included a story about a man who purchases a dead chicken, has sexual intercourse with it, and subsequently cooks and eats it. Similar to the incest story, this action involves no harm to others or waste of resources, as the chicken is cleaned and consumed. Nevertheless, participants consistently judged the action as wrong based on feelings of disgust, despite the absence of rational harm-based arguments. [2]
The findings of the 2000 study laid the groundwork for social intuitionism, which states that moral judgment is driven primarily by automatic intuitions, while the subject's explanation of the intuitions is largely post-hoc justification for them. [3]
The discovery of moral dumbfounding was highly influential to moral psychology and gave rise to two primary claims about the nature of moral judgment. The first was that moral judgments are automatic, arising from effortless "moral intuitions" [3] rather than careful reasoning. [2] [3] [4] [5] The second claim was that the moral domain extends beyond concerns about harm, given that participants condemned an act that was explicitly harmless. [3] [4] [5] Both conclusions challenged dominant theories of moral judgment but also inspired their own sets of critics. [6] [7] [8] [9] [10] [11] [12] [13] [14] [15]
The moral dumbfounding theses challenged rationalism, the view that people arrive at moral conclusions by reasoning through possible outcomes and weighing pertinent principles. [2] [3] The authors of the original dumbfounding paper contended that "the existence of moral dumbfounding calls into question models in which moral judgment is produced by moral reasoning." [2]
Haidt formalized this critique of rationalism one year later in a paper titled "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment." [3] Whereas the rationalists viewed moral reasoning as an essential component of moral judgment, social intuitionism contended that people reason like "a lawyer trying to build a case rather than a judge searching for the truth." [3]
A second conclusion drawn from moral dumbfounding was that people have moral values that extend beyond preventing harm. [3] [5] [4] [16] [17] [18] [19] This challenged the longstanding view in moral philosophy and moral psychology that immoral acts are acts that harm the welfare of other people. [20] [21] [22] [23] [24] [12] The Julie and Mark scenario provided an example of a "harmless wrong," an act that was judged as morally wrong even though it harmed no one. [2] [3] [5] This research was interpreted by some as evidence that people consider values other than harm when making moral judgments. [3] [5] [19] Other research on harmless wrongs verified that people condemned a wide range of ostensibly victimless acts including eating a dead dog, [25] [26] [11] bizarre forms of masturbation, [27] [11] smearing a bible with feces, [11] and burning an American flag. [25] [11] Haidt and his collaborators developed this view into moral foundations theory, which proposed five foundational moral intuitions, only one of which referred to harm: care (i.e., harm), fairness, loyalty, authority, and purity. [18] [19] [5]
Although the discovery of moral dumbfounding was highly influential for moral psychology, it later became the subject of widespread scientific scrutiny. [11] [10] [9] [28] [8] [29] [30] [31] [13] [32] [14]
A recurring criticism of moral dumbfounding is that the foundational 2000 paper demonstrating the effect [2] —originally an undergraduate honors thesis by Haidt's student Scott Murphy—never passed through the formal peer-review process and remains an unpublished manuscript over 20 years later. [11] [15] The paper begins with a note from the authors stating that "in social psychology one cannot simply publish a description of an interesting phenomenon, which is what this report is." [2] Critics have pointed out that the study included a small sample of 30 undergraduate students at the University of Virginia, which raised questions about the validity and generalizability of the effect. [15]
Over the following decades, a wave of scholars has criticized the moral dumbfounding effect on both methodological and theoretical grounds.
One line of criticism comes from rationalist scholars who rejected Haidt's claim that "moral reasoning does not cause moral judgment" [3] by arguing that so-called "dumbfounded" participants actually had valid reasons for their moral views. [9] [10] [29] [8] [33]
Royzman, Kim and Leeman (2015) presented a major critique of the original dumbfounding study [2] in their paper "The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect." [9] In a modified replication of the Julie and Mark paradigm, they demonstrated that many participants explicitly rejected the researcher's stipulation that the scenario was harmless. Even though the scenario was "carefully written to be harmless," [2] many participants still believed that Julie and Mark's decision to have sex would have negative consequences for their relationship and broader life. Royzman and colleagues also pointed out that what Haidt had interpreted as "unsupported declarations" by participants, like "It's just wrong," [2] could also be interpreted as endorsements of a consequence-insensitive moral position. Supporting this, they found that when participants were asked to elaborate on these statements, they provided "logically coherent deontological claims." [9] In a final study, they attempted to replicate the original dumbfounding finding after first excluding participants who 1) rejected the stipulated harmlessness of the scenario or 2) endorsed a consequence-insensitive or norm-based moral stance. After excluding these participants, who in their view had valid reasons for condemning the scenario, they found that only one of their 53 participants could be truly classified as dumbfounded. [9]
Stanley, Yin, and Sinnott-Armstrong (2019) proposed a related critique in their paper "A Reason-Based Explanation for Moral Dumbfounding." [10] They argued that it is often rational to condemn acts that pose a high risk of harm even when no harm actually occurs, such as condemning a drunk driver even though the driver did not cause an accident. They conducted a study demonstrating that participants' moral condemnation of the Julie and Mark scenario and other harmless wrongs was predicted by their perception that harm could have occurred. In another experiment, they found that reminding participants about the many ways in which the Julie and Mark scenario could have caused harm led participants to condemn the act more strongly. [10] The authors concluded that "many participants who supposedly experienced dumbfounding in prior studies could actually have been making their judgments based on the perceived risk of causing harm." [10]
One of the most prominent criticisms of moral dumbfounding comes from Gray, Schein, and Ward's 2014 paper "The Myth of Harmless Wrongs in Moral Cognition." [11] Drawing on the theory of dyadic morality, [6] [12] [34] they argued that harmless wrongs like the Julie and Mark scenario are a "psychological impossibility" because perceptions of immorality are always accompanied by perceptions of harm. [11] In their view, perceptions of harm are not an artifact of post-hoc reasoning but rather an intuitive perception that drives the moral condemnation of even ostensibly victimless acts. [6] [11]
To demonstrate this, they had participants read a variety of harmless wrong scenarios and rate their perceptions of harm and immorality while under time pressure, preventing them from using conscious reasoning. [11] A second set of participants rated the same acts without time pressure. If perceptions of harm were mere "post hoc justifications," [3] then participants should only rate the acts as harmful when they had ample time to think. [11] [15] [6] However, participants rated the ostensibly harmless acts as being even more harmful when they were under time pressure, supporting the view that people automatically perceive victims in "objectively" harmless wrongs. [6] In another study, Gray and colleagues used an implicit-association test to demonstrate that participants automatically associated harmless wrongs with harm-related concepts, and this predicted the extent to which they morally condemned these acts. [11] These findings were taken as evidence that all moral judgments are rooted in intuitive perceptions of harm—even judgments about acts that researchers had "carefully written to be harmless." [2] [11]
Cillian McHugh and his collaborators defended the moral dumbfounding effect in a series of studies published between 2017 and 2023. [35] [36] [37] [38] They conducted a systematic replication of the original effect, finding high rates of moral dumbfounding in response to the Julie and Mark scenario and other taboo vignettes (e.g., cannibalizing an already dead body). [35] They criticized Royzman and colleagues' rationalist interpretation of dumbfounding, [9] arguing that simply showing that participants endorse harm-based or norm-based reasons consistent with their moral judgments does not prove that those reasons caused their moral judgments—one would also need to show that participants are able to articulate and apply these reasons consistently. [35] [36] In a 2020 paper, they contended that few participants who espoused reasons for their judgments could clearly articulate these reasons and apply them consistently across contexts. [36]
In 2023, McHugh and colleagues conducted a cross-cultural replication in which they found evidence for moral dumbfounding in samples of Chinese, Indian, North African, and Middle Eastern participants. [38] They suggested that moral dumbfounding is a widespread phenomenon that generalizes beyond Western contexts. They also identified cultural variability in the kinds of scenarios that elicited moral dumbfounding. For example, Western samples were most strongly dumbfounded by the incest scenario, whereas Chinese samples were more strongly dumbfounded by the cannibalism scenario. [38]
A quarter century after the original study on moral dumbfounding, moral psychologists and philosophers remain divided over whether the effect exists and what it implies about the nature of moral judgment. [39] According to McHugh and colleagues (2022), one way to resolve this debate is to recognize that dumbfounding is a broader psychological phenomenon not unique to moral judgment. [40] They argue that in nonmoral domains, people often lack insight into the causes of their own judgments [41] and overestimate their knowledge of even simple topics [42] (the illusion of explanatory depth). Moral dumbfounding might therefore be caused by participants' general inability to introspect on their own cognitive processes. [40]
According to psychologists and some philosophers, agents engage in moral dumbfounding when they confidently insist on the truth of their moral judgments but are unable to produce evidential considerations in their support. Some theorists have found the phenomenon interesting insofar as it seems to indicate that ordinary agents reach some of their moral judgments on the basis of something other than moral reasoning.