Moral patienthood [1] (also called moral patience, [2] moral patiency, [3] and moral status [4][5]) is the state of being eligible for moral consideration by a moral agent. [4] In other words, the morality of an action can depend on how it affects or relates to moral patients.
Whether moral patienthood is held by non-human animals [6] [7] and artificial entities [8] [9] has been explored academically. In 2021, Open Philanthropy recommended a grant of $315,500 to "support research related to moral patienthood and moral weight." [10]
Most authors define moral patients as "beings that are appropriate objects of direct moral concern". [4] This category may include moral agents, and usually does. For instance, Charles Taliaferro says: "A moral agent is someone who can bring about events in ways that are praiseworthy or subject to blame. A moral patient is someone who can be morally mistreated. All moral agents are moral patients, but not all moral patients (human babies, some nonhuman animals) are moral agents." [11]
Some authors use the term in a narrower sense, according to which moral patients are "beings who are appropriate objects of direct moral concern but are not (also) moral agents". [4] Tom Regan's The Case for Animal Rights used the term in this narrow sense. [12] This usage was shared by other authors who cited Regan, such as Nicholas Bunnin and Jiyuan Yu's Blackwell Dictionary of Western Philosophy, [12] Dinesh Wadiwel's The War Against Animals, [13] and the Encyclopedia of Population. [14] These authors did not deny that moral agents are eligible for moral consideration; they simply defined "moral patient" differently.
In their paper On the Morality of Artificial Agents, Luciano Floridi and J. W. Sanders define moral agents as "all entities that can in principle qualify as sources of moral action" and, in accordance with the common usage, define moral patients as "all entities that can in principle qualify as receivers of moral action". [15] However, they note that besides the inclusion of agents within patients, other relationships between moral patienthood and moral agency are possible. Marian Quigley's Encyclopedia of Information Ethics and Security summarizes the possibilities they gave:
How can we characterize the relationship between ethical agents and patients? According to Floridi and Sanders (2004), there are five logical relationships between the class of ethical agents and the class of patients: (1) agents and patients are disjoint, (2) patients can be a proper subset of agents, (3) agents and patients can intersect, (4) agents and patients can be equal, or (5) agents can be a proper subset of patients. Medical ethics, bioethics, and environmental ethics “typify” agents and patients when the patient is specified as any form of life. Animals, for example, can be moral patients but not moral agents. Also, there are ethics that typify moral agenthood to include legal entities (especially human-based entities) such as companies, agencies, and artificial agents, in addition to humans. [16]
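The five relationships Floridi and Sanders enumerate are ordinary set-theoretic relations between two classes. A minimal sketch in Python makes the case analysis explicit; the member names are invented placeholders for illustration, not claims about which entities belong to either class:

```python
# Model the classes of moral agents and moral patients as sets and
# classify their relationship per the five cases: disjoint, patients
# a proper subset of agents, agents a proper subset of patients,
# equal, or merely intersecting.

def classify(agents: set, patients: set) -> str:
    if agents.isdisjoint(patients):
        return "disjoint"
    if patients < agents:  # proper subset
        return "patients are a proper subset of agents"
    if agents < patients:
        return "agents are a proper subset of patients"
    if agents == patients:
        return "equal"
    return "intersecting"

# The common usage places every agent among the patients (case 5).
# These example members are hypothetical.
agents = {"adult human"}
patients = {"adult human", "infant", "animal"}
print(classify(agents, patients))  # agents are a proper subset of patients
```

Under the narrow Regan-style usage, by contrast, the two classes would be disjoint by definition, which corresponds to the first case.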
Mireille Hildebrandt notes that Floridi and Sanders, in their paper, spoke of "damage" instead of "harm", and that in doing so, they "avoid the usual assumption that an entity must be sentient to count as a patient." [17]
Axiology is the philosophical study of value. It includes questions about the nature and classification of values and about what kinds of things have value. It is intimately connected with various other philosophical fields that crucially depend on the notion of value, like ethics, aesthetics or philosophy of religion. It is also closely related to value theory and meta-ethics. The term was first used by Eduard von Hartmann in 1887 and by Paul Lapie in 1902.
In ethical philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value. Consequentialists hold in general that an act is right if and only if the act will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Different consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good".
Ethics or moral philosophy is the philosophical study of moral phenomena. It investigates normative questions about what people ought to do or which behavior is morally right. It is usually divided into three major fields: normative ethics, applied ethics, and metaethics.
A mental event is any event that happens within the mind of a conscious individual. Examples include thoughts, feelings, decisions, dreams, and realizations. These events often make up the conscious life associated with cognitive function.
In developmental psychology and moral, political, and bioethical philosophy, autonomy is the capacity to make an informed, uncoerced decision. Autonomous organizations or institutions are independent or self-governing. Autonomy can also be defined from a human resources perspective, where it denotes a level of discretion granted to an employee in his or her work. In such cases, autonomy is known to generally increase job satisfaction. Self-actualized individuals are thought to operate autonomously of external expectations. In a medical context, respect for a patient's personal autonomy is considered one of many fundamental ethical principles in medicine.
Virtue ethics is an approach that treats virtue and character as the primary subjects of ethics, in contrast to other ethical systems that put consequences of voluntary acts, principles or rules of conduct, or obedience to divine authority in the primary role.
In moral philosophy, deontological ethics or deontology is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules and principles, rather than on the consequences of the action. It is sometimes described as duty-, obligation-, or rule-based ethics. Deontological ethics is commonly contrasted with consequentialism, utilitarianism, virtue ethics, and pragmatic ethics. On this view, the action itself matters more than its consequences.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
Moral agency is an individual's ability to make moral choices based on some notion of right and wrong and to be held accountable for these actions. A moral agent is "a being who is capable of acting with reference to right and wrong."
In philosophy, moral responsibility is the status of morally deserving praise, blame, reward, or punishment for an act or omission in accordance with one's moral obligations. Deciding what counts as "morally obligatory" is a principal concern of ethics.
Information ethics has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society". It examines the morality that arises from information as a resource, a product, or a target. It provides a critical framework for considering moral issues concerning informational privacy, moral agency, new environmental issues, and problems arising from the life-cycle of information. It is vital that librarians, archivists, and other information professionals understand the importance of disseminating accurate information and of acting responsibly when handling it.
Jiyuan Yu was a Chinese moral philosopher noted for his work on virtue ethics. Yu was a long-time and highly admired Professor of Philosophy at the State University of New York at Buffalo, in Buffalo, New York, starting in 1997. Prior to his professorship, Yu completed a three-year post as a research fellow at the University of Oxford, England (1994-1997). He received his education in China at both Shandong University and Renmin University, in Italy at Scuola Normale Superiore di Pisa, and in Canada at the University of Guelph. His primary areas of research and teaching included Ancient Greek Philosophy, and Ancient Chinese Philosophy.
Kantian ethics refers to a deontological ethical theory developed by German philosopher Immanuel Kant that is based on the notion that "I ought never to act except in such a way that I could also will that my maxim should become a universal law." It is also associated with the idea that "[i]t is impossible to think of anything at all in the world, or indeed even beyond it, that could be considered good without limitation except a good will." The theory was developed in the context of Enlightenment rationalism. It states that an action can only be moral if it is motivated by a sense of duty and its maxim can be rationally willed as a universal, objective law.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
Pragmatic ethics is a theory of normative philosophical ethics and meta-ethics. Ethical pragmatists such as John Dewey believe that some societies have progressed morally in much the way they have attained progress in science. Scientists can pursue inquiry into the truth of a hypothesis and accept the hypothesis, in the sense that they act as though the hypothesis were true; nonetheless, they think that future generations can advance science, and thus future generations can refine or replace their accepted hypotheses. Similarly, ethical pragmatists think that norms, principles, and moral criteria are likely to be improved as a result of inquiry.
"Ought implies can" is an ethical formula ascribed to Immanuel Kant that claims an agent, if morally obliged to perform a certain action, must logically be able to perform it:
For if the moral law commands that we ought to be better human beings now, it inescapably follows that we must be capable of being better human beings.
The action to which the "ought" applies must indeed be possible under natural conditions.
Virginia Potter Held is an American moral, social/political and feminist philosopher whose work on the ethics of care sparked significant research into the ethical dimensions of providing care for others and critiques of the traditional roles of women in society.
The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel that traces the evolution of thinking about human ethical responsibilities toward non-human things, asking to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities and what legitimate claims to moral consideration they can hold. The book won the 2012 Best Single Authored Book award from the Communication Ethics Division of the National Communication Association.
Kenneth Einar Himma is an American philosopher, author, lawyer, academic and lecturer.