Moral Machine

Screenshot of a Moral Machine dilemma

Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects data on the decisions people make between two destructive outcomes. [1] [2] The platform was conceived by Rahwan and social psychologists Azim Shariff and Jean-François Bonnefon [3] ahead of the publication of their article on the ethics of self-driving cars. [4] The key contributors to building the platform were MIT Media Lab graduate students Edmond Awad and Sohan Dsouza.


The scenarios presented are often variations of the trolley problem, and the information collected is used for further research on the decisions that machine intelligence must make in the future. [5] [6] [7] [8] [9] [10] For example, as artificial intelligence plays an increasingly significant role in autonomous driving technology, research projects like Moral Machine help to find solutions for the challenging life-and-death decisions that self-driving vehicles will face. [11]

Moral Machine was active from January 2016 to July 2020, and it remains available on its website for people to experience. [1] [7]

The experiment

The Moral Machine was an ambitious project; it was the first attempt to use such an experimental design to test a large number of people in over 200 countries worldwide. The study was approved by the Institutional Review Board (IRB) at the Massachusetts Institute of Technology (MIT). [12] [13]

Each scenario asks the viewer to decide what a self-driving car that is about to hit pedestrians should do: the car can either swerve to avoid hitting the pedestrians or keep going straight to preserve the lives of those it is transporting.

Participants can complete as many scenarios as they want, but the scenarios themselves are generated in groups of thirteen. Within each group of thirteen, one scenario is entirely random, while the other twelve are sampled from a space of 26 million possible scenarios, with two dilemmas focused on each of six dimensions of moral preference: character gender, character age, character physical fitness, character social status, character species, and character number. [7] [13]
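This sampling scheme lends itself to a short sketch, shown below. It is a minimal illustration under assumed data structures, not the platform's actual implementation: the scenario_space list and its focus field are hypothetical names standing in for the 26-million-scenario database.

```python
import random

# The six dimensions of moral preference tested by the Moral Machine.
DIMENSIONS = ["gender", "age", "fitness", "social_status", "species", "number"]

def generate_session(scenario_space):
    """Assemble one thirteen-scenario session: one fully random dilemma
    plus two dilemmas focused on each of the six dimensions."""
    session = [random.choice(scenario_space)]  # the single fully random scenario
    for dim in DIMENSIONS:
        # Draw two dilemmas whose contrast lies on this dimension.
        focused = [s for s in scenario_space if s["focus"] == dim]
        session.extend(random.sample(focused, 2))
    random.shuffle(session)  # presentation order assumed to be randomized
    return session
```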

The experimental setup remains the same across scenarios, but each scenario tests a different set of factors. Most notably, the characters involved differ from one scenario to the next. Characters include a stroller, a girl, a boy, a pregnant woman, a male doctor, a female doctor, a female athlete, a male athlete, a female executive, a male executive, a large woman, a large man, a homeless person, an elderly man, an elderly woman, a criminal, a dog, and a cat. [7]

These character variations allowed researchers to understand how a wide variety of people judge scenarios depending on who is involved.

Analysis

The Moral Machine collected 40 million moral decisions from 4 million participants in 233 countries and territories, [14] [15] [16] analysis of which revealed trends within individual countries and within humanity as a whole. It tested nine factors: preference for sparing humans versus pets, passengers versus pedestrians, men versus women, the young versus the elderly, the fit versus the overweight, higher versus lower social status, jaywalkers versus law-abiders, larger versus smaller groups, and inaction (i.e. staying on course) versus swerving. [13]
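In the simplest case, such preferences can be summarized from the raw decisions as the share of dilemmas in which one side of a factor was spared. The sketch below is a deliberately simplified stand-in for the conjoint analysis used in the published study; the record layout, with hypothetical factor and spared_protected fields, is an assumption.

```python
from collections import defaultdict

def preference_shares(decisions):
    """For each tested factor, compute the fraction of dilemmas in which
    the 'protected' side (e.g. humans over pets, the young over the
    elderly) was spared."""
    spared = defaultdict(int)
    total = defaultdict(int)
    for d in decisions:  # each d records one participant choice
        total[d["factor"]] += 1
        if d["spared_protected"]:
            spared[d["factor"]] += 1
    return {factor: spared[factor] / total[factor] for factor in total}

# A share of 0.5 for "species" would mean humans were spared over pets
# in half of the dilemmas testing that factor.
decisions = [
    {"factor": "species", "spared_protected": True},
    {"factor": "species", "spared_protected": False},
    {"factor": "age", "spared_protected": True},
]
print(preference_shares(decisions))  # {'species': 0.5, 'age': 1.0}
```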

Globally, participants favored human lives over the lives of animals such as dogs and cats. They preferred to spare more lives rather than fewer, and younger lives over older ones. [16] Babies were the most often spared and cats the least. In terms of gender, participants tended to spare men over women among doctors and the elderly. All countries generally shared the preference to spare pedestrians over passengers and law-abiders over criminals.

Participants from less wealthy countries showed a higher tendency to spare pedestrians who crossed illegally than participants from wealthier, more developed countries, most likely reflecting experience of societies in which individuals deviate from rules more often because enforcement is less stringent. Participants from countries with higher economic inequality overwhelmingly preferred to spare wealthier individuals over poorer ones. [13]

Cultural differences

Researchers subdivided 130 countries with similar results into three 'cultural clusters'. North American and European countries with significant Christian populations showed a higher preference for inaction on the part of the driver, and thus a weaker preference for sparing pedestrians, than the other clusters. East Asian and Islamic countries, together constituting the second cluster, showed less preference for sparing younger humans than the other two clusters and a higher preference for sparing law-abiding humans. Latin American and Francophone countries showed a higher preference for sparing women, the young, the fit, and those of higher status, but a lower preference for sparing humans over pets or other animals. [13] [16]

Individualistic cultures tended to spare larger groups, while collectivist cultures had a stronger preference for sparing the lives of older people. For instance, China ranked far below the world average in its preference for sparing the young over the elderly, while the average respondent from the US exhibited a much higher tendency to save younger lives and larger groups. [13]

Applications of the data

The findings from the Moral Machine can help decision-makers when designing self-driving automotive systems. Designers must make sure that these vehicles are able to resolve problems on the road in ways that align with the moral values of the humans around them. [12] [13]

This is challenging because of the complexity of humans, who may each make different decisions based on their personal values. However, by collecting a large number of decisions from people all over the world, researchers can begin to understand patterns in the context of a particular culture, community, and people.

Other features

The Moral Machine was deployed in June 2016. In October 2016, a feature was added that offered users the option to fill out a survey about their demographics, political views, and religious beliefs. Between November 2016 and March 2017, the website was progressively translated into nine languages in addition to English (Arabic, Chinese, French, German, Japanese, Korean, Portuguese, Russian, and Spanish). [12]

Overall, the Moral Machine offers four different modes, the focus being the website's data-gathering feature, called Judge mode. [12]

In addition to providing its own scenarios for users to judge, the Moral Machine invites users to create scenarios which, once submitted and approved, other people may judge as well. The data is also openly available for anyone to explore via an interactive map featured on the Moral Machine website.

In the literature

Studies and research on the Moral Machine have taken a wide variety of approaches. Theological examinations of the topic, however, remain scarce: two bodies of work that take such a perspective currently exist, one Buddhist [17] and the other Christian. [18]


References

  1. 1 2 "Driverless cars face a moral dilemma: Who lives and who dies?". NBC News. Retrieved 2017-02-16.
  2. Brogan, Jacob (2016-08-11). "Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?". Slate. ISSN   1091-2339 . Retrieved 2017-02-16.
  3. Awad, Edmond (2018-10-24). "Inside the Moral Machine". Behavioural and Social Sciences at Nature Research. Retrieved 2019-07-04.
  4. Bonnefon, Jean-François; Shariff, Azim; Rahwan, Iyad (2016-06-24). "The social dilemma of autonomous vehicles". Science. 352 (6293): 1573–1576. arXiv: 1510.03346 . Bibcode:2016Sci...352.1573B. doi:10.1126/science.aaf2654. ISSN   0036-8075. PMID   27339987. S2CID   35400794.
  5. "Moral Machine | MIT Media Lab". www.media.mit.edu. Archived from the original on 2016-11-30. Retrieved 2017-02-16.
  6. "MIT Seeks 'Moral' to the Story of Self-Driving Cars". VOA. Retrieved 2017-02-16.
  7. 1 2 3 4 "Moral Machine". Moral Machine. Retrieved 2017-02-16.
  8. Clark, Bryan (2017-01-16). "MIT's 'Moral Machine' wants you to decide who dies in a self-driving car accident". The Next Web. Retrieved 2017-02-16.
  9. "MIT Game Asks Who Driverless Cars Should Kill". Popular Science. Retrieved 2017-02-16.
  10. Constine, Josh (4 October 2016). "Play this killer self-driving car ethics game". TechCrunch. Retrieved 2017-02-16.
  11. Chopra, Ajay. "What's Taking So Long for Driverless Cars to Go Mainstream?". Fortune. Retrieved 2017-08-01.
  12. 1 2 3 4 "Moral Machine". Moral Machine. Retrieved 2022-04-13.
  13. 1 2 3 4 5 6 7 Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad (24 October 2018). "The Moral Machine experiment". Nature. 563 (7729): 59–64. Bibcode:2018Natur.563...59A. doi:10.1038/s41586-018-0637-6. hdl: 10871/39187 . PMID   30356211. S2CID   256770099.
  14. Vincent, James (24 October 2018). "Global preferences for who to save in self-driving car crashes revealed". The Verge. Vox Media. Retrieved 3 August 2024.
  15. Karlsson, Carl-Johan (7 July 2021). "What Sweden's Covid failure tells us about ageism". Knowable Magazine. doi: 10.1146/knowable-070621-1 . Retrieved 9 December 2021.
  16. 1 2 3 Smith, Oliver. "A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable". Forbes. Retrieved 3 August 2024.
  17. Hongladarom, Soraj (2020). The ethics of AI and robotics: A buddhist viewpoint. Lexington Books. ISBN   978-1498597296.{{cite book}}: CS1 maint: date and year (link)
  18. Crook, Nigel (2022). Rise of the Moral Machine: Exploring Virtue Through a Robot's Eyes. Nigel T. Crook. ISBN   978-1739133900.{{cite book}}: CS1 maint: date and year (link)