Reversal test

The reversal test is a heuristic designed to detect and eliminate status quo bias, an emotional bias that irrationally favours the current state of affairs. The test applies to the evaluation of any decision involving a potential deviation from the status quo along some continuous dimension. It was introduced in the context of the bioethics of human enhancement by Nick Bostrom and Toby Ord. [1]

Reversal test

Bostrom and Ord introduced the reversal test to address the question of how one can, given that humans may suffer from irrational status quo bias, distinguish valid criticisms of a proposed increase in some human trait from criticisms motivated merely by resistance to change. [1] The reversal test does this by asking whether it would also be a bad thing if the trait were decreased. For example, if someone objects that an increase in intelligence would be a bad thing because more dangerous weapons could be made, the reversal test asks in reply: "Shouldn't we decrease intelligence then?"

"Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias." (p. 664) [1]

Ideally, the test reveals whether status quo bias is an important causal factor in the initial judgement.
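The logic of the test can be rendered as a small decision procedure. This is an illustrative sketch only; the function and its argument names are not from Bostrom and Ord's paper:

```python
def reversal_test(increase_judged_bad: bool,
                  decrease_judged_bad: bool,
                  status_quo_shown_optimal: bool) -> str:
    """Schematic rendering of the reversal test.

    If changes in *both* directions along a parameter are judged bad,
    the burden of proof shifts to the judge to show that the current
    value is (at least locally) optimal; absent such an argument,
    status quo bias is suspected.
    """
    if not increase_judged_bad:
        # No objection to the proposed increase, so the test is not triggered.
        return "no objection to the increase; test does not apply"
    if not decrease_judged_bad:
        # Endorsing a decrease is at least consistent with the objection.
        return "objection may be consistent: a decrease is endorsed"
    if status_quo_shown_optimal:
        # The judge has met the burden of proof.
        return "objection survives: the status quo is argued to be optimal"
    return "status quo bias suspected"
```

An objector who rejects both raising and lowering intelligence, without arguing that the current level is optimal, lands in the final branch.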

A similar thought experiment, concerning the dampening of traumatic memories, was described by Adam J. Kolber, who imagines aliens naturally resistant to traumatic memories and asks whether they should adopt traumatic "memory enhancement". [2] The "trip to reality" rebuttal to Nozick's experience machine thought experiment (in which one's entire current life is revealed to be a simulation and one is offered a return to reality) can also be seen as a form of reversal test. [3]

Double reversal test

Bostrom and Ord also propose an elaboration of the reversal test, the double reversal test: [1]

"Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor." (p. 673)

As an example, take the parameter to be life expectancy, pushed downward by a sudden natural disease. We might intervene by investing in better health infrastructure to preserve the current life expectancy. If the disease is then cured, the double reversal test asks: should we reverse the intervention and defund the health infrastructure we created, now that the disease is gone? If not, there is a prima facie case for investing in health infrastructure even if no disease had ever appeared.

In this case status quo bias is turned against itself, greatly reducing its influence on the reasoning. The double reversal test is also claimed to withstand objections, based on evolutionary adaptation, transition costs, risk, and societal ethics, that can be raised against the simple reversal test.
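The same schematic treatment can be given to the double reversal test. Again, this is purely illustrative, and the predicate names are assumptions rather than terms from the source:

```python
def double_reversal_test(counterbalancing_is_good: bool,
                         later_reversal_is_good: bool) -> bool:
    """Schematic rendering of the double reversal test.

    If it is good to intervene to offset a natural push on the
    parameter, but not good to undo that intervention once the
    natural push vanishes, there is a prima facie case for the
    intervention even in the absence of the natural factor.
    """
    return counterbalancing_is_good and not later_reversal_is_good
```

In the life-expectancy example, preserving life expectancy against the disease is judged good and defunding the infrastructure after the cure is judged bad, so the function returns True: a prima facie case for the investment in its own right.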

Criticism

Alfred Nordmann argues that the reversal test merely erects a straw man in favour of enhancement. He claims that the tests are limited to consequentialist and deontological approaches, and adds that humans cannot be viewed as sets of parameters to be optimized separately or without regard to their history. [4]

Christian Weidemann argues that the double reversal test can muddy the waters: weighing transition costs against benefits may be the practically relevant ethical question for much of the analysis of human enhancement. [5]

References

  1. Bostrom, Nick; Ord, Toby (July 2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656–679. doi:10.1086/505233. ISSN 0014-1704. PMID 17039628. S2CID 12861892.
  2. Kolber, Adam (2006-10-01). "Therapeutic Forgetting: The Legal and Ethical Implications of Memory Dampening". Vanderbilt Law Review. 59 (5): 1559.
  3. Weijers, Dan (Summer–Autumn 2011). "Intuitive Biases in Judgments about Thought Experiments: The Experience Machine Revisited" (PDF). Philosophical Writings. 50 & 51. Archived from the original (PDF) on 2016-03-03. Retrieved 2012-01-31.
  4. Nordmann, Alfred (2007-03-01). "If and Then: A Critique of Speculative NanoEthics". NanoEthics. 1 (1): 31–46. doi:10.1007/s11569-007-0007-6. ISSN 1871-4765. S2CID 62796380.
  5. Weidemann, Christian (2009). "Towards a Heuristic for Nanoethics: The Onus of Proof in Applied Ethics. Uncovering Status Quo and Other Biases". In Ach, Johann S.; Weidemann, Christian (eds.). Size Matters: Ethical, Legal and Social Aspects of Nanobiotechnology and Nanomedicine. LIT Verlag Münster. pp. 126–127.