Joshua Greene | |
---|---|
Born | 1974 (age 49–50) |
Alma mater | Harvard University (BA); Princeton University (PhD) |
Known for | Dual process theory |
Scientific career | |
Fields | experimental psychology, moral psychology, neuroscience, social psychology, philosophy |
Institutions | Harvard University |
Thesis | The Terrible, Horrible, No Good, Very Bad Truth About Morality and What to Do About It (2002) |
Doctoral advisor | David Lewis, Gilbert Harman |
Website | www |
Joshua David Greene (born 1974) [1] is an American experimental psychologist, neuroscientist, and philosopher. He is a professor of psychology at Harvard University. Most of his research and writing has been concerned with moral judgment and decision-making. His recent research focuses on fundamental issues in cognitive science. [2] [3]
Greene attended high school in Fort Lauderdale, Broward County, Florida. [4] He briefly attended the Wharton School of the University of Pennsylvania before transferring to Harvard University. [5] He earned a bachelor's degree in philosophy from Harvard in 1997, [6] followed by a Ph.D. in philosophy at Princeton University under the supervision of David Lewis and Gilbert Harman. Peter Singer also served on his dissertation committee. His 2002 dissertation, The Terrible, Horrible, No Good, Very Bad Truth About Morality and What to Do About It, argues against moral-realist language and in defense of non-realist utilitarianism as a better framework for resolving disagreements. [7] Greene served as a postdoctoral fellow at Princeton in the Neuroscience of Cognitive Control Laboratory before returning to Harvard in 2006 as an assistant professor. In 2011, he became the John and Ruth Hazel Associate Professor of the Social Sciences. Since 2014, he has been a professor of psychology.
Greene and colleagues have advanced a dual process theory of moral judgment, suggesting that moral judgments are determined by both automatic, emotional responses and controlled, conscious reasoning. In particular, Greene argues that the "central tension" in ethics between deontology (rights- or duty-based moral theories) and consequentialism (outcome-based theories) reflects the competing influences of these two types of processes:
Characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control. [8]
In one of the first experiments to suggest a moral dual-process model, [4] Greene and colleagues showed that people making judgments about "personal" moral dilemmas (like whether to push one person in front of an oncoming trolley in order to save five others) engaged several brain regions associated with emotion that were not activated by more "impersonal" judgments (like whether to pull a switch to redirect a trolley from a track on which it would kill five people onto a track on which it would kill one person instead). [9] They also found that, for the "personal" dilemmas, subjects who made the intuitively unappealing choice had longer reaction times than those who made the intuitively appealing one.
A follow-up study compared "easy" personal moral questions, to which subjects had fast reaction times, against "hard" dilemmas (like the footbridge problem), to which they had slow reaction times. [10] When responding to the hard problems, subjects displayed increased activity in the anterior dorsolateral prefrontal cortex (DLPFC) and inferior parietal lobes (areas associated with cognitive processing), as well as in the anterior cingulate cortex (which has been implicated in detecting conflict between competing inputs, as in the Stroop task). This comparison demonstrated that harder problems engage different brain regions, but it did not show differential activity for the same moral problem depending on the answer given. The second part of the study addressed that question: for a given question, subjects who made utilitarian choices showed higher activity in the anterior DLPFC and the right inferior parietal lobe than subjects making non-utilitarian choices.
These two studies were correlational, but others have since suggested a causal impact of emotional vs. cognitive processing on deontological vs. utilitarian judgments. [11] [12] [13] A 2008 study [14] by Greene showed that cognitive load caused subjects to take longer to respond when they made a utilitarian moral judgment but had no effect on response time when they made a non-utilitarian judgment, suggesting that the utilitarian thought processes required extra cognitive effort.
Greene's 2008 article "The Secret Joke of Kant's Soul" [15] argues that Kantian/deontological ethics tends to be driven by emotional responses and is best understood as rationalization rather than rationalism—an attempt to justify intuitive moral judgments post hoc. Greene acknowledges, however, that his argument is speculative and not conclusive. Several philosophers have written critical responses. [16] [17] [18] [19] [20] [21]
Drawing on dual-process theory, as well as evolutionary psychology and other neuroscience work, Greene's book Moral Tribes (2013) explores how our ethical intuitions play out in the modern world. [22]
Greene posits that humans have an instinctive, automatic tendency to cooperate with others in their social group in tragedy-of-the-commons scenarios ("me versus us"). For example, in a cooperative investment game, people are more likely to do what is best for the group when they are under time pressure or when they are primed to "go with their gut"; conversely, cooperation can be inhibited by rational calculation. [23] However, on questions of inter-group harmony ("us versus them"), automatic intuitions run into a problem, which Greene calls the "tragedy of commonsense morality": the same ingroup loyalty that achieves cooperation within a community leads to hostility between communities. In response, Greene proposes a "metamorality" based on a "common currency" that all humans can agree upon, and suggests that utilitarianism—or, as he calls it, "deep pragmatism"—is up to the task. [24]
Moral Tribes received multiple positive reviews. [25] [26] [27] [28]
Thomas Nagel criticizes the book, suggesting that Greene is too quick to derive utilitarianism specifically from the general goal of constructing an impartial morality; Immanuel Kant and John Rawls, he notes, offer other impartial approaches to ethical questions. [24]
Robert Wright calls [29] Greene's proposal for global harmony ambitious, adding, "I like ambition!" But he also claims that people have a tendency to see facts in a way that serves their ingroup, even when there is no disagreement about the underlying moral principles governing a dispute. "If indeed we're wired for tribalism," Wright explains, "then maybe much of the problem has less to do with differing moral visions than with the simple fact that my tribe is my tribe and your tribe is your tribe." Both Greene and Paul Bloom cite studies in which people were randomly divided into two groups and immediately favored members of their own group in allocating resources, even when they knew the assignment was random. Instead, Wright proposes that "nourishing the seeds of enlightenment indigenous to the world's tribes is a better bet than trying to convert all the tribes to utilitarianism—both more likely to succeed, and more effective if it does."
Greene's metamorality of deep pragmatism has been criticized by Steven Kraaijeveld and Hanno Sauer as resting on conflicting arguments about moral truth. [30]
In Moral Tribes, Greene argues that reasoned thought is important in moral decision-making while also acknowledging the significant role that emotions play in the process, supporting this claim with evidence from neurobiological studies. His willingness to recognize the importance of emotion-based moral reasoning has been seen as a step toward bridging the gap between the continental and analytic schools of philosophy, as the latter tends to prioritize objective reasoning over subjective, emotional approaches. [31]
Greene received the 2012 Stanton Prize from the Society for Philosophy and Psychology. [32]
In 2013, Greene was awarded the Roslyn Abramson Award, given annually to Harvard faculty "in recognition of his or her excellence and sensitivity in teaching undergraduates". [6]