Moral Machine

[Image: Screenshot of a Moral Machine dilemma]

Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions people make between two destructive outcomes. [1] [2] The platform was conceived by Iyad Rahwan and social psychologists Azim Shariff and Jean-François Bonnefon [3] ahead of the publication of their article about the ethics of self-driving cars. [4] The key contributors to building the platform were MIT Media Lab graduate students Edmond Awad and Sohan Dsouza.


The presented scenarios are often variations of the trolley problem, and the information collected is used for further research on the decisions that machine intelligence must make in the future. [5] [6] [7] [8] [9] [10] For example, as artificial intelligence plays an increasingly significant role in autonomous driving technology, research projects like Moral Machine help to find solutions for the challenging life-and-death decisions that self-driving vehicles will face. [11]

Moral Machine's data collection was active from January 2016 to July 2020, and the platform remains available on its website for people to experience. [1] [7]

The experiment

The Moral Machine was an ambitious project: it was the first attempt to use this kind of experimental design to test a large number of people in over 200 countries worldwide. The study was approved by the Institutional Review Board (IRB) at the Massachusetts Institute of Technology (MIT). [12] [13]

Each scenario asks the viewer to make a single decision: a self-driving car is about to hit pedestrians, and the user chooses whether the car swerves to avoid hitting them or keeps going straight to preserve the lives of those it is transporting.

Participants can complete as many scenarios as they want, but the scenarios are generated in groups of thirteen. Within each group of thirteen, a single scenario is entirely random, while the other twelve are drawn from a scenario space of 26 million possibilities, chosen so that two dilemmas focus on each of six dimensions of moral preference: character gender, character age, character physical fitness, character social status, character species, and character number. [7] [13]
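The platform's actual generation code is not reproduced here, but the sampling scheme described above can be sketched in a few lines of Python. In this minimal sketch, `sample_dilemma` is a hypothetical placeholder for drawing a scenario from the 26-million-scenario space, not the platform's real implementation:

```python
import random

# The six dimensions of moral preference described above.
DIMENSIONS = [
    "gender", "age", "fitness", "social_status", "species", "number",
]

def sample_dilemma(dimension=None):
    """Hypothetical helper: draw one dilemma from the scenario space.
    If a dimension is given, the two outcomes contrast that dimension
    (e.g. young versus elderly characters); otherwise the dilemma is
    sampled at random from the full space."""
    return {"contrasts": dimension or "random"}  # placeholder only

def build_session():
    """Assemble one 13-scenario session: two dilemmas for each of the
    six dimensions, plus one entirely random scenario, shuffled."""
    session = [sample_dilemma(d) for d in DIMENSIONS for _ in range(2)]
    session.append(sample_dilemma())  # the single fully random scenario
    random.shuffle(session)
    return session

print([s["contrasts"] for s in build_session()])
```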

The experimental setup remains the same across scenarios, but each scenario tests a different set of factors. Most notably, the characters involved differ from scenario to scenario. Characters include: a stroller, a girl, a boy, a pregnant woman, a male doctor, a female doctor, a female athlete, a female executive, a male athlete, a male executive, a large woman, a large man, a homeless person, an old man, an old woman, a dog, a criminal, and a cat. [7]

Through these different characters, researchers were able to study how a wide variety of people judge scenarios depending on who is involved.

Analysis

Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries, and correlations between these preferences and various national metrics. [14] [13]

The data was synthesized by a conjoint analysis to compute the average marginal component effect (AMCE) of each attribute that the Moral Machine tested. These attributes covered nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status). Some characters possessed other attributes (such as pregnancy, being a doctor, or being a criminal) that did not fall into these tested factors. [13]
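The published analysis fits a full conjoint model; as a rough illustration of what an AMCE measures, the following Python sketch estimates it as a simple difference in the rates at which characters carrying contrasting attribute levels are spared. The tidy data layout and toy numbers are assumptions for illustration, not the study's actual data format:

```python
import pandas as pd

# Hypothetical tidy layout: one row per character group in a dilemma,
# with a binary "spared" outcome and the attribute level it carried.
responses = pd.DataFrame({
    "attribute": ["species", "species", "age",   "age"],
    "level":     ["human",   "pet",     "young", "elderly"],
    "spared":    [1,          0,         1,       0],
})

def amce(df, attribute, level, baseline):
    """Crude AMCE estimate: the difference in the rate at which
    characters are spared when they carry `level` rather than
    `baseline` of `attribute` (a difference in means, standing in
    for the paper's full conjoint regression)."""
    sub = df[df["attribute"] == attribute]
    p_level = sub.loc[sub["level"] == level, "spared"].mean()
    p_base = sub.loc[sub["level"] == baseline, "spared"].mean()
    return p_level - p_base

print(amce(responses, "species", "human", "pet"))  # 1.0 on this toy data
```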

Globally, participants favored human lives over the lives of animals such as dogs and cats. They preferred to spare more lives rather than fewer, and younger lives over older ones. Babies were spared most often and cats least often. Gender preferences varied by character: male doctors and old men were spared more often than female doctors and old women, while female athletes and large women were spared more often than male athletes and large men. All three cultural clusters described below shared a preference for sparing pedestrians over passengers and law-abiders over criminals.

Cultural clusters

Because the experiment was run on a global scale, researchers were able to further break down the data to see what separate cultures and regions value. To conduct this detailed analysis, researchers looked at the 130 countries with at least 100 respondents to the Moral Machine. [13]

Researchers were able to group similar findings into several regional groups, which they termed "cultural clusters".

The first cluster, which researchers dubbed the Western cluster, contains North America and the European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The second, the Eastern cluster, contains East Asian countries such as Japan and Taiwan as well as Islamic countries such as Indonesia, Pakistan, and Saudi Arabia. The third, the Southern cluster, consists of the Latin American countries of Central and South America together with countries marked by French influence, such as French overseas territories and countries that were at some point under French rule. [13]

The existence of these cultural clusters suggests that there are region- and culture-specific moral patterns, which may allow groups of territories to share a standard of ethics for machines. [13]
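One common way to derive clusters of this kind, and a plausible reading of the study's approach, is hierarchical clustering over per-country preference vectors. The following is a toy Python sketch of that technique; the countries' preference values are made up for illustration, not the published estimates:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-country preference vectors: one row per country,
# one column per estimated preference (e.g. sparing humans over pets,
# sparing the young over the elderly). Values are illustrative only.
countries = ["USA", "Germany", "Japan", "Saudi Arabia", "Brazil", "France"]
preferences = np.array([
    [0.80, 0.60],
    [0.70, 0.55],
    [0.60, 0.20],
    [0.62, 0.15],
    [0.90, 0.70],
    [0.85, 0.68],
])

# Agglomerative (hierarchical) clustering of countries by similarity
# of their preference profiles, cut into three clusters.
tree = linkage(preferences, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(f"{country}: cluster {label}")
```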

Clusters by region

Researchers found that the cultural clusters varied in several ways. The Eastern cluster, for example, showed a weaker preference for sparing younger humans than the other two clusters and a stronger preference for sparing law-abiding humans. The Western cluster showed a stronger preference for inaction on the part of the driver, and thus a weaker preference for sparing pedestrians, than the other clusters. The Southern cluster showed a much stronger preference for sparing humans over pets and other animals, along with stronger preferences for sparing the young and those of higher status; it also strongly preferred sparing females over males, and people deemed more "fit" over those deemed "unfit" (for example, athletes over "large" individuals).

Individualist vs. collectivist cultures

Participants from individualistic cultures showed a stronger preference to spare the greater number of people, possibly reflecting an individualistic society's emphasis on the value of each person. Respondents from more collectivist cultures, on the other hand, showed a stronger preference to spare older lives over younger ones. This is likely explained by collectivism's emphasis on group well-being over individual value, as well as collectivist cultures' tradition of valuing and respecting the elderly. For instance, China ranked far below the world average in preference for sparing the young over the elderly and for sparing more lives over fewer, while the average respondent from the US exhibited a much stronger tendency to save younger lives and larger groups.

Developed vs. developing countries

Participants from less wealthy countries with weaker institutions showed a higher tendency to spare pedestrians who crossed illegally than participants from wealthier, more developed countries. This is most likely due to their experience of living in societies where people deviate from rules more often because laws are enforced less stringently.

Economic inequality

The extent of economic inequality in a country is an accurate predictor of whether its respondents are more likely to prefer sparing those of high status over those of low status. Countries with a higher Gini coefficient (used by the World Bank to measure economic inequality) are more likely to spare higher-class individuals. In other words, a respondent from a country with greater economic inequality would be more likely to spare an executive over a homeless person. The same relationship holds for the preference for sparing wealthy lives over less wealthy ones: respondents from countries with higher economic inequality overwhelmingly prefer to save richer lives over poorer ones.
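The relationship described here amounts to a cross-country correlation between the Gini coefficient and the estimated status preference. A minimal sketch of such a check in Python, with made-up numbers in place of the published country-level estimates:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-country data: Gini coefficient and the estimated
# preference for sparing high-status over low-status characters.
gini = np.array([25.0, 32.0, 41.0, 48.0, 53.0])         # made-up values
status_pref = np.array([0.05, 0.09, 0.14, 0.20, 0.24])  # made-up AMCEs

# A positive correlation means that more unequal countries show a
# stronger preference for sparing higher-status characters.
r, p_value = pearsonr(gini, status_pref)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```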


Source data and code to reproduce the results of the analysis can be found on the existing Moral Machine site. [12] The data can be used by other researchers to draw further conclusions and analyses; researchers reusing it should check the source for licensing terms.

Applications of the data

The findings from the Moral Machine can help decision makers when designing self-driving automotive systems. Designers must ensure that these vehicles can resolve situations on the road in ways that align with the moral values of the humans around them. [13] [12]

This is a challenge because humans are complex and may make different decisions based on their personal values. However, by collecting a large number of decisions from people all over the world, researchers can begin to understand patterns in the context of a particular culture, community, and people.

Other features

The Moral Machine was deployed in June 2016. In October 2016, a feature was added that offered users the option to fill out a survey about their demographics, political views, and religious beliefs. Between November 2016 and March 2017, the website was progressively translated into nine languages in addition to English (Arabic, Chinese, French, German, Japanese, Korean, Portuguese, Russian, and Spanish). [12]

Overall, the Moral Machine offers four different modes, with the focus being on the data-gathering feature of the website, called Judge mode. [12]

In addition to providing its own scenarios for users to judge, the Moral Machine invites users to create their own scenarios, which, once submitted and approved, can be judged by other people. The data is also open-sourced and can be explored through an interactive map featured on the Moral Machine website.

References

  1. "Driverless cars face a moral dilemma: Who lives and who dies?". NBC News. Retrieved 2017-02-16.
  2. Brogan, Jacob (2016-08-11). "Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?". Slate. ISSN 1091-2339. Retrieved 2017-02-16.
  3. Awad, Edmond (2018-10-24). "Inside the Moral Machine". Behavioural and Social Sciences at Nature Research. Retrieved 2019-07-04.
  4. Bonnefon, Jean-François; Shariff, Azim; Rahwan, Iyad (2016-06-24). "The social dilemma of autonomous vehicles". Science. 352 (6293): 1573–1576. arXiv:1510.03346. Bibcode:2016Sci...352.1573B. doi:10.1126/science.aaf2654. ISSN 0036-8075. PMID 27339987. S2CID 35400794.
  5. "Moral Machine | MIT Media Lab". www.media.mit.edu. Archived from the original on 2016-11-30. Retrieved 2017-02-16.
  6. "MIT Seeks 'Moral' to the Story of Self-Driving Cars". VOA. Retrieved 2017-02-16.
  7. "Moral Machine". Moral Machine. Retrieved 2017-02-16.
  8. Clark, Bryan (2017-01-16). "MIT's 'Moral Machine' wants you to decide who dies in a self-driving car accident". The Next Web. Retrieved 2017-02-16.
  9. "MIT Game Asks Who Driverless Cars Should Kill". Popular Science. Retrieved 2017-02-16.
  10. Constine, Josh (2016-10-04). "Play this killer self-driving car ethics game". TechCrunch. Retrieved 2017-02-16.
  11. Chopra, Ajay. "What's Taking So Long for Driverless Cars to Go Mainstream?". Fortune. Retrieved 2017-08-01.
  12. "Moral Machine". Moral Machine. Retrieved 2022-04-13.
  13. Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad (2018-10-24). "The Moral Machine experiment". Nature. 563 (7729): 59–64. Bibcode:2018Natur.563...59A. doi:10.1038/s41586-018-0637-6. hdl:10871/39187. PMID 30356211. S2CID 256770099.
  14. Karlsson, Carl-Johan (2021-07-07). "What Sweden's Covid failure tells us about ageism". Knowable Magazine. doi:10.1146/knowable-070621-1. Retrieved 2021-12-09.