Moral outsourcing

Moral outsourcing refers to placing responsibility for ethical decision-making onto external entities, often algorithms. The term is most often used in discussions of computer science and algorithmic fairness,[1] but it can apply to any situation in which people appeal to outside agents to absolve themselves of responsibility for their own actions. In this context, moral outsourcing specifically refers to the tendency of society to blame technology, rather than its creators or users, for any harm it may cause.[2]

Definition

The term "moral outsourcing" was first coined by Dr. Rumman Chowdhury, a data scientist concerned with the overlap between artificial intelligence and social issues. [1] Chowdhury used the term to describe looming fears of a so-called “Fourth Industrial Revolution” following the rise of artificial intelligence.

Technologists often engage in moral outsourcing to distance themselves from their part in building harmful products. In her TED Talk, Chowdhury gives the example of a creator who excuses their work by saying they were simply doing their job.[1] This is a case of moral outsourcing: refusing to take ownership of the consequences of what one creates.

Applied to AI, moral outsourcing allows creators to decide when the machine counts as a human and when it counts as a computer, shifting the blame and responsibility for moral failings off of the technologists and onto the technology. Conversations about AI, bias, and their impacts require accountability in order to bring change; it is difficult to address biased systems if their creators use moral outsourcing to avoid taking any responsibility for the issue.[2]

One example of moral outsourcing is the anger directed at machines for “taking jobs away from humans”, rather than at the companies that deploy the technology and jeopardize those jobs in the first place.[1]

The term "moral outsourcing" refers to the concept of outsourcing, or enlisting an external operation to complete specific work for another organization. In the case of moral outsourcing, the work of resolving moral dilemmas or making choices according to an ethical code is supposed to be conducted by another entity. [3]

Real-world applications

In the medical field, AI is increasingly involved in deciding which patients to treat and how to treat them.[4] The doctor's responsibility to make informed decisions about what is best for their patients is outsourced to an algorithm. Sympathy is also noted to be an important part of medical practice, an aspect that artificial intelligence conspicuously lacks.[5] This form of moral outsourcing is a major concern in the medical community.

Another field of technology in which moral outsourcing is frequently brought up is autonomous vehicles.[6] California Polytechnic State University professor Keith Abney proposed an example scenario: "Suppose we have some [troublemaking] teenagers, and they see an autonomous vehicle, they drive right at it. They know the autonomous vehicle will swerve off the road and go off a cliff, but should it?"[6] The decision of whether to sacrifice the autonomous vehicle (and any passengers inside) or the vehicle coming at it will be written into the algorithms defining the car's behavior. Under moral outsourcing, the responsibility for any damage caused by an accident may be attributed to the autonomous vehicle itself, rather than to the creators who wrote the protocol the vehicle uses to "decide" what to do.[1]
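How such a judgment can hide inside ordinary engineering is easy to see in a toy sketch. The following Python fragment is purely illustrative; every maneuver name, probability, and weight is an invented assumption, not any real vehicle's logic:

```python
# Purely illustrative sketch of how a life-or-death tradeoff like Abney's
# scenario could reduce to a mundane cost comparison. All names, probabilities,
# and weights below are invented assumptions.

W_OCCUPANTS = 1.0   # how much the designers weight harm to the car's passengers
W_OTHERS = 1.0      # how much they weight harm to everyone else

def expected_harm(maneuver):
    """Weighted expected harm of a candidate maneuver."""
    return (maneuver["p_harm_occupants"] * W_OCCUPANTS
            + maneuver["p_harm_others"] * W_OTHERS)

candidates = [
    # Holding course risks a collision with the oncoming car.
    {"name": "stay on road", "p_harm_occupants": 0.2, "p_harm_others": 0.9},
    # Swerving off the cliff shifts nearly all risk onto the car's own passengers.
    {"name": "swerve off road", "p_harm_occupants": 0.95, "p_harm_others": 0.0},
]

# With equal weights the car "chooses" to sacrifice its own passengers;
# raising W_OCCUPANTS above 1.2 flips the choice.
print(min(candidates, key=expected_harm)["name"])
```

The point of the sketch is that the constants, set by engineers long before any crash, embody the moral decision that the vehicle is later said to have "made".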

Moral outsourcing is also used to delegate the consequences of predictive policing algorithms to the technology, rather than to its creators or to the police. There are many ethical concerns with predictive policing because it results in the over-policing of low-income and minority communities.[7] In the context of moral outsourcing, the positive feedback loop that sends disproportionate police forces into minority communities is attributed to the algorithm and the data being fed into it, rather than to the users and creators of the predictive policing technology.
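The feedback loop itself can be made concrete with a toy simulation (the district names, rates, and allocation rule below are invented for illustration): patrols are allocated in proportion to past recorded incidents, but incidents are only recorded where patrols go, so a bias in the seed data reproduces itself even when the underlying rates are identical.

```python
# Toy model of a predictive-policing feedback loop. District names, rates,
# and the allocation rule are invented assumptions for illustration only.

underlying_rate = {"A": 0.5, "B": 0.5}      # identical true incident rates per patrol
records = {"A": 60.0, "B": 40.0}            # biased historical incident data
TOTAL_PATROLS = 100

for year in range(5):
    total_records = sum(records.values())
    for district in records:
        # Patrols are allocated in proportion to past recorded incidents.
        patrols = TOTAL_PATROLS * records[district] / total_records
        # Incidents are only recorded where patrols are present, so the
        # biased allocation reproduces the biased data it was trained on.
        records[district] += patrols * underlying_rate[district]
    print(f"year {year}: A={records['A']:.0f}, B={records['B']:.0f}")
```

Run for five years, district A keeps 60% of the patrols and the gap in recorded incidents widens each year, even though both districts have the same underlying rate. Attributing that outcome to "the data" rather than to the allocation rule is the outsourcing move described above.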

Outside of technology

Religion

Moral outsourcing is also commonly seen in appeals to religion to justify discrimination or harm. In his book What It Means to Be Moral, sociologist Phil Zuckerman disputes the popular religious notion that morality comes from God. Religion is often cited as the foundation for a moral stance even when there is no tangible relation between the religious beliefs and the stance itself.[8] In these cases, religious individuals "outsource" their personal beliefs and opinions by claiming that they are a result of their religious identification. This is seen where religion is cited as a factor in political beliefs,[9] in medical beliefs,[10] and, in extreme cases, as an excuse for violence.[11]

Manufacturing

Moral outsourcing can also be seen in the business world, in the manufacturing of goods and the avoidance of environmental responsibility. Some companies in the United States move their production processes to foreign countries with more relaxed environmental policies in order to avoid the pollution laws that exist in the US. A study published in the Harvard Business Review found that "in countries with tight environmental regulation, companies have 29% lower domestic emissions on average. On the other hand, such a tightening in regulation results in 43% higher emissions abroad."[12] The consequences of the resulting pollution are then attributed to the loose regulations in those countries, rather than to the companies that deliberately moved there to avoid stricter pollution policy.

Rumman Chowdhury

Chowdhury is a prominent voice in discussions about the intersection of ethics and AI. Her ideas have appeared in The Atlantic,[13] Forbes,[3] the MIT Technology Review,[14] and the Harvard Business Review.[15]

References

  1. Chowdhury, Rumman (2018-06-08), Moral Outsourcing: Humanity in the age of AI, retrieved 2023-11-28.
  2. "'I do not think ethical surveillance can exist': Rumman Chowdhury on accountability in AI". The Guardian. 2023-05-29. ISSN 0261-3077. Retrieved 2023-10-14.
  3. Chowdhury, Rumman. "Moral Outsourcing: Finding the Humanity in Artificial Intelligence". Forbes. Retrieved 2023-10-14.
  4. Shaikh, Sonia Jawaid (2020). "Artificial Intelligence and Resource Allocation in Health Care: The Process-Outcome Divide in Perspectives on Moral Decision-Making" (PDF). Symposium on AI for Social Good.
  5. Farhud, Dariush D.; Zokaei, Shaghayegh (2021-10-27). "Ethical Issues of Artificial Intelligence in Medicine and Healthcare". Iranian Journal of Public Health. doi:10.18502/ijph.v50i11.7600. ISSN 2251-6093. PMC 8826344. PMID 35223619.
  6. "How will driverless cars make life-or-death decisions?". PBS NewsHour. 2016-05-28. Retrieved 2023-11-28.
  7. VCE (2022-02-17). "Pitfalls of Predictive Policing: An Ethical Analysis". Viterbi Conversations in Ethics. Retrieved 2023-11-30.
  8. McKay, Ryan; Whitehouse, Harvey (March 2015). "Religion and morality". Psychological Bulletin. 141 (2): 447–473. doi:10.1037/a0038455. ISSN 1939-1455. PMC 4345965. PMID 25528346.
  9. Graham, Jesse; Haidt, Jonathan (February 2010). "Beyond Beliefs: Religions Bind Individuals Into Moral Communities". Personality and Social Psychology Review. 14 (1): 140–150. doi:10.1177/1088868309353415. ISSN 1088-8683. PMID 20089848. S2CID 264630346.
  10. Pelčić, Gordana; Karačić, Silvana; Mikirtichan, Galina L.; Kubar, Olga I.; Leavitt, Frank J.; Cheng-tek Tai, Michael; Morishita, Naoki; Vuletić, Suzana; Tomašević, Luka (October 2016). "Religious exception for vaccination or religious excuses for avoiding vaccination". Croatian Medical Journal. 57 (5): 516–521. doi:10.3325/cmj.2016.57.516. ISSN 0353-9504. PMC 5141457. PMID 27815943.
  11. Gonçalves, Juliane Piasseschi de Bernardin; Lucchetti, Giancarlo; Maraldi, Everton de Oliveira; Fernandez, Paulo Eduardo Lahoz; Menezes, Paulo Rossi; Vallada, Homero (2023). "The role of religiosity and spirituality in interpersonal violence: a systematic review and meta-analysis". Brazilian Journal of Psychiatry. 45 (2): 1–63. doi:10.47626/1516-4446-2022-2832. PMC 10154014. PMID 36331229.
  12. Ben-David, Itzhak (Zahi); Kleimeier, Stefanie; Viehs, Michael (2019-02-04). "Research: When Environmental Regulations Are Tighter at Home, Companies Emit More Abroad". Harvard Business Review. ISSN 0017-8012. Retrieved 2023-11-30.
  13. Chowdhury, Rumman (2023-02-17). "I Watched Elon Musk Kill Twitter's Culture From the Inside". The Atlantic. Retrieved 2023-10-14.
  14. "What is an "algorithm"? It depends whom you ask". MIT Technology Review. Retrieved 2023-10-14.
  15. "How to Practice Responsible AI". Harvard Business Review. 2021-06-16. ISSN 0017-8012. Retrieved 2023-10-14.