Moral patienthood

Moral patienthood [1] (also called moral patience, [2] moral patiency, [3] and moral status [4] [5] ) is the state of being eligible for moral consideration by a moral agent. [4] In other words, the morality of an action can depend on how it affects or relates to moral patients.

The question of whether non-human animals [6] [7] and artificial entities [8] [9] hold moral patienthood has been explored academically. In 2021, Open Philanthropy recommended a grant of $315,500 to "support research related to moral patienthood and moral weight." [10]

Definition

Most authors define moral patients as "beings that are appropriate objects of direct moral concern". [4] This category may, and usually does, include moral agents. For instance, Charles Taliaferro says: "A moral agent is someone who can bring about events in ways that are praiseworthy or subject to blame. A moral patient is someone who can be morally mistreated. All moral agents are moral patients, but not all moral patients (human babies, some nonhuman animals) are moral agents." [11]

Narrow usage

Some authors use the term in a narrower sense, according to which moral patients are "beings who are appropriate objects of direct moral concern but are not (also) moral agents". [4] Tom Regan's The Case for Animal Rights used the term in this narrow sense. [12] This usage was shared by other authors who cited Regan, such as Nicholas Bunnin and Jiyuan Yu's Blackwell Dictionary of Western Philosophy, [12] Dinesh Wadiwel's The War Against Animals, [13] and the Encyclopedia of Population. [14] These authors did not deny that moral agents are eligible for moral consideration; they simply defined "moral patient" differently.

Relationship with moral agency

In their paper On the Morality of Artificial Agents, Luciano Floridi and J.W. Sanders define moral agents as "all entities that can in principle qualify as sources of moral action" and, in accordance with the common usage, define moral patients as "all entities that can in principle qualify as receivers of moral action". [15] However, they note that besides the inclusion of agents within patients, other relationships between moral patienthood and moral agency are possible. Marian Quigley's Encyclopedia of Information Ethics and Security summarizes the possibilities they gave:

How can we characterize the relationship between ethical agents and patients? According to Floridi and Sanders (2004), there are five logical relationships between the class of ethical agents and the class of patients: (1) agents and patients are disjoint, (2) patients can be a proper subset of agents, (3) agents and patients can intersect, (4) agents and patients can be equal, or (5) agents can be a proper subset of patients. Medical ethics, bioethics, and environmental ethics “typify” agents and patients when the patient is specified as any form of life. Animals, for example, can be moral patients but not moral agents. Also, there are ethics that typify moral agenthood to include legal entities (especially human-based entities) such as companies, agencies, and artificial agents, in addition to humans. [16]

Mireille Hildebrandt notes that Floridi and Sanders, in their paper, spoke of "damage" instead of "harm", and that in doing so, they "avoid the usual assumption that an entity must be sentient to count as a patient." [17]

Related Research Articles

Axiology is the philosophical study of value. It includes questions about the nature and classification of values and about what kinds of things have value. It is intimately connected with various other philosophical fields that crucially depend on the notion of value, like ethics, aesthetics or philosophy of religion. It is also closely related to value theory and meta-ethics. The term was first used by Eduard von Hartmann in 1887 and by Paul Lapie in 1902.

Consequentialism

In ethical philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value. Consequentialists hold in general that an act is right if and only if the act will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Different consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good".

Ethics

Ethics or moral philosophy is the philosophical study of moral phenomena. It investigates normative questions about what people ought to do or which behavior is morally right. It is usually divided into three major fields: normative ethics, applied ethics, and metaethics.

A mental event is any event that happens within the mind of a conscious individual. Examples include thoughts, feelings, decisions, dreams, and realizations. These events often make up the conscious life that is associated with cognitive function.

In developmental psychology and moral, political, and bioethical philosophy, autonomy is the capacity to make an informed, uncoerced decision. Autonomous organizations or institutions are independent or self-governing. Autonomy can also be defined from a human resources perspective, where it denotes a level of discretion granted to an employee in his or her work. In such cases, autonomy is known to generally increase job satisfaction. Self-actualized individuals are thought to operate autonomously of external expectations. In a medical context, respect for a patient's personal autonomy is considered one of many fundamental ethical principles in medicine.

Virtue ethics

Virtue ethics is an approach that treats virtue and character as the primary subjects of ethics, in contrast to other ethical systems that put consequences of voluntary acts, principles or rules of conduct, or obedience to divine authority in the primary role.

In moral philosophy, deontological ethics or deontology is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules and principles, rather than on the consequences of the action. It is sometimes described as duty-, obligation-, or rule-based ethics. Deontological ethics is commonly contrasted with consequentialism, utilitarianism, virtue ethics, and pragmatic ethics. In this terminology, the action itself is more important than its consequences.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

Moral agency is an individual's ability to make moral choices based on some notion of right and wrong and to be held accountable for these actions. A moral agent is "a being who is capable of acting with reference to right and wrong."

In philosophy, moral responsibility is the status of morally deserving praise, blame, reward, or punishment for an act or omission in accordance with one's moral obligations. Deciding what counts as "morally obligatory" is a principal concern of ethics.

Information ethics has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society". It examines the morality that comes from information as a resource, a product, or a target. It provides a critical framework for considering moral issues concerning informational privacy, moral agency, new environmental issues, and problems arising from the life-cycle of information. It is vital that librarians, archivists, information professionals, and others understand the importance of disseminating accurate information and of acting responsibly when handling it.

Jiyuan Yu was a Chinese moral philosopher noted for his work on virtue ethics. Yu was a long-time and highly admired Professor of Philosophy at the State University of New York at Buffalo, in Buffalo, New York, starting in 1997. Prior to his professorship, Yu completed a three-year post as a research fellow at the University of Oxford, England (1994-1997). He received his education in China at both Shandong University and Renmin University, in Italy at Scuola Normale Superiore di Pisa, and in Canada at the University of Guelph. His primary areas of research and teaching included Ancient Greek Philosophy, and Ancient Chinese Philosophy.

Kantian ethics

Kantian ethics refers to a deontological ethical theory developed by German philosopher Immanuel Kant that is based on the notion that "I ought never to act except in such a way that I could also will that my maxim should become a universal law." It is also associated with the idea that "[i]t is impossible to think of anything at all in the world, or indeed even beyond it, that could be considered good without limitation except a good will." The theory was developed in the context of Enlightenment rationalism. It states that an action can only be moral if it is motivated by a sense of duty and its maxim may rationally be willed as a universal, objective law.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Pragmatic ethics

Pragmatic ethics is a theory of normative philosophical ethics and meta-ethics. Ethical pragmatists such as John Dewey believe that some societies have progressed morally in much the way they have attained progress in science. Scientists can pursue inquiry into the truth of a hypothesis and accept the hypothesis, in the sense that they act as though the hypothesis were true; nonetheless, they think that future generations can advance science, and thus future generations can refine or replace their accepted hypotheses. Similarly, ethical pragmatists think that norms, principles, and moral criteria are likely to be improved as a result of inquiry.

"Ought implies can" is an ethical formula ascribed to Immanuel Kant that claims an agent, if morally obliged to perform a certain action, must logically be able to perform it:

For if the moral law commands that we ought to be better human beings now, it inescapably follows that we must be capable of being better human beings.

The action to which the "ought" applies must indeed be possible under natural conditions.

Virginia Held

Virginia Potter Held is an American moral, social/political and feminist philosopher whose work on the ethics of care sparked significant research into the ethical dimensions of providing care for others and critiques of the traditional roles of women in society.

The Machine Question

The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel that discusses how theories of human ethical responsibility toward non-human things have evolved, to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities, and what legitimate claims to moral consideration they can hold. The book won the 2012 Best Single Authored Book award from the Communication Ethics Division of the National Communication Association.

Kenneth Einar Himma is an American philosopher, author, lawyer, academic and lecturer.

References

  1. Haji, Ishtiyaque; Bernstein, Mark H. (November 2001). "On Moral Considerability: An Essay on Who Morally Matters". Philosophy and Phenomenological Research. 63 (3): 730. doi:10.2307/3071172. JSTOR 3071172.
  2. Zhou, Xinyue; Guo, Siyuan; Huang, Rong; Ye, Weiling (2020). "Think versus Feel: Two Dimensions of Brand Anthropomorphism: An Abstract". In Wu, Shuang; Pantoja, Felipe; Krey, Nina (eds.). Marketing Opportunities and Challenges in a Changing Global Marketplace. Cham: Springer International Publishing. pp. 351–352. doi:10.1007/978-3-030-39165-2_138. ISBN 978-3-030-39164-5. Retrieved 2024-04-16.
  3. Danaher, John (March 2019). "The rise of the robots and the crisis of moral patiency". AI & Society. 34 (1): 129–136. doi:10.1007/s00146-017-0773-9. ISSN 0951-5666.
  4. Audi, Robert, ed. (2015). The Cambridge Dictionary of Philosophy (3rd ed.). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139057509. ISBN 978-1-139-05750-9.
  5. Jaworska, Agnieszka; Tannenbaum, Julie (2023). "The Grounds of Moral Status". In Zalta, Edward N.; Nodelman, Uri (eds.). The Stanford Encyclopedia of Philosophy (Spring 2023 ed.). Metaphysics Research Lab, Stanford University. Retrieved 2024-04-16.
  6. Lan, T.; Sinhababu, N.; Carrasco, L.R. (2022). "Recognition of intrinsic values of sentient beings explains the sense of moral duty towards global nature conservation". PLoS ONE. 17 (10): e0276614. doi:10.1371/journal.pone.0276614.
  7. Müller, N.D. (2022). "Kantian Moral Concern, Love, and Respect". Kantianism for Animals. The Palgrave Macmillan Animal Ethics Series. Cham: Palgrave Macmillan. doi:10.1007/978-3-031-01930-2_2.
  8. Balle, S.N. (2022). "Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup". AI & Society. 37: 535–548. doi:10.1007/s00146-021-01211-2.
  9. Harris, J.; Anthis, J.R. (2021). "The Moral Consideration of Artificial Entities: A Literature Review". Science and Engineering Ethics. 27: 53. doi:10.1007/s11948-021-00331-8.
  10. Open Philanthropy (March 2021). "Rethink Priorities — Moral Patienthood and Moral Weight Research". Retrieved December 1, 2023.
  11. Taliaferro, Charles; Marty, Elsa J., eds. (2018). A Dictionary of Philosophy of Religion (2nd ed.). New York: Bloomsbury Academic. ISBN 978-1-5013-2523-6.
  12. Bunnin, Nicholas; Yu, Jiyuan (2004). The Blackwell Dictionary of Western Philosophy. Malden, MA: Blackwell Publishing. ISBN 978-1-4051-0679-5.
  13. Wadiwel, Dinesh Joseph (2015). The War Against Animals. Critical Animal Studies. Leiden; Boston: Brill. ISBN 978-90-04-30041-5.
  14. "Animal Rights". Encyclopedia.com. Retrieved 2024-04-16.
  15. Floridi, Luciano; Sanders, J.W. (August 2004). "On the Morality of Artificial Agents". Minds and Machines. 14 (3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d. hdl:2299/1822. ISSN 0924-6495.
  16. Quigley, Marian, ed. (2008). Encyclopedia of Information Ethics and Security. Hershey: Information Science Reference. p. 516. ISBN 978-1-59140-987-8. OCLC 85444168.
  17. Duff, Antony; Green, Stuart P., eds. (2011). Philosophical Foundations of Criminal Law. Oxford; New York: Oxford University Press. p. 523. ISBN 978-0-19-955915-2.